Kinect for Windows v2 First Look

Dec 04, 2013 in Kinect, Lab, Technology

I’ve had a little less than a week to play with the new Kinect for Windows v2, thanks to the developer preview program and the Kinect MVP program. So far it is everything Kinect developers and designers have been hoping for: full HD through the color camera, a much improved depth camera, and USB 3.0 data throughput.

Additionally, much of the processing now occurs on the GPU rather than on an onboard chip or your computer’s CPU. While amazing things were possible with the first Kinect for Windows sensor, most developers found themselves pushing the performance envelope and wishing they could get just a little more resolution or just a little more throughput. Now they will have both.

The programming model has changed a bit between Kinect for Windows v1 and Kinect for Windows v2. Knowing the original SDK will definitely give you a leg up, but some work will still be needed to port Kinect v1 apps to the new Kinect v2 SDK when it is eventually released.
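To make that concrete, here is a rough sketch in C# of how sensor setup compares between the two SDKs. Treat it as illustrative only: the v2 half reflects the preview bits, and names may well change before the SDK ships.

    using Microsoft.Kinect;

    // Kinect for Windows v1: streams hang directly off the sensor.
    void InitializeV1()
    {
        KinectSensor sensor = KinectSensor.KinectSensors[0];
        sensor.SkeletonStream.Enable();
        sensor.SkeletonFrameReady += (s, e) => { /* process skeletons */ };
        sensor.Start();
    }

    // Kinect for Windows v2 (preview): each data source exposes its own reader.
    void InitializeV2()
    {
        KinectSensor sensor = KinectSensor.GetDefault();
        sensor.Open();
        BodyFrameReader reader = sensor.BodyFrameSource.OpenReader();
        reader.FrameArrived += (s, e) => { /* process bodies */ };
    }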

What’s different between the new Kinect for Xbox One and the Kinect for Windows v2? It turns out not a lot. The Kinect for Xbox One has a special connector that carries both power and USB 3.0 data to and from the console. Because the connector is non-standard, it can’t be plugged straight into a PC (unlike the original Kinect, which had a standard USB 2.0 plug).

Making the new Kinect work with a PC therefore requires a special breakout board. This board serves as an adapter with three ports: one for the Kinect, one for a power source, and one for a standard USB 3.0 cable.

We can probably expect the firmware on the two versions of the new sensor to diverge over time, as happened with the original Kinect.

Skeleton detection is greatly improved with the new Kinect. Not only are more joints detected, but many of the jitters developers became used to working around are now gone. The new SDK recognizes up to six skeletons rather than just two. Finally, because an improved time-of-flight depth camera replaces the PrimeSense technology used in the previous hardware, skeleton detection is much more accurate and includes excellent hand detection. Grip recognition as well as Lasso recognition (two fingers used to draw) are available out of the box – even in this early alpha version of the SDK.
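Reading hand states is then a one-liner per body. A minimal sketch against the preview API, assuming the reader opened in the setup sketch above; again, names may shift before release:

    // Assumes 'reader' is the BodyFrameReader from the setup sketch above.
    Body[] bodies;

    void OnFrameArrived(object sender, BodyFrameArrivedEventArgs e)
    {
        using (BodyFrame frame = e.FrameReference.AcquireFrame())
        {
            if (frame == null) return;
            if (bodies == null) bodies = new Body[frame.BodyCount];
            frame.GetAndRefreshBodyData(bodies);

            foreach (Body body in bodies)
            {
                if (!body.IsTracked) continue;  // up to six bodies tracked at once

                switch (body.HandRightState)
                {
                    case HandState.Closed: /* grip: grab or drag */ break;
                    case HandState.Lasso:  /* two-finger draw */    break;
                    case HandState.Open:   /* release */            break;
                }
            }
        }
    }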

I won’t hesitate to say – even this early in the game – that the new hardware is amazing and is leaps and bounds better than the original sensor. The big question, though, is whether it will take off the way the original hardware did.

If you recall, when Microsoft released the first Kinect sensor they didn’t have immediate plans to use it for anything other than a game controller – no SDK, no motor controller, not a single luxury. Instead, creative developers, artists, researchers and hackers figured out ways to read the raw USB data and started manipulating it to create amazingly original applications that took advantage of the depth sensor – and they posted them to the Internet.

Will this happen the second time around? Microsoft is endeavoring to do better this time by getting an SDK out much earlier. As I mentioned above, the alpha SDK for Kinect v2 is already available to people in the developer preview program. The trick will be attracting the kinds of creative people who were drawn to the Kinect two years ago – the creative technologists Microsoft has always had trouble drawing to other products like Windows Phone and Windows tablets.

My colleagues and I at Razorfish Emerging Experiences are currently working on combining the new Kinect with other technologies such as Oculus Rift, Google Glass, Unity 3D, Cinder, Leap Motion and 4K video. Like a modern-day scrying device (or simply a mad scientist’s experiment), we’re hoping that by mixing all these gadgets together we’ll get a glimpse of what the future looks like and, perhaps, even help to create that future.


Taking a look at Chromecast

Aug 17, 2013 in Technology

Chromecast packaging

I was curious about the Google Chromecast, a tiny media-streaming accessory for one’s TV. At $35 USD it’s inexpensive compared to other streaming devices and, at that price, I knew it wouldn’t bring on a case of buyer’s regret if I didn’t like it. I ordered one and a few weeks later it showed up at my door. I already have several devices in my house used for streaming media, which set my frame of reference for what’s involved in setting up a new streaming device. The Chromecast’s setup process bypassed several of the steps I had been expecting; in fact, it was so simple that I’d be comfortable letting my parents set up the device on their own. My first impressions were delightful.

The inner cover of the Chromecast packaging lists the setup instructions in three steps.

Inner cover of the Chromecast packaging, showing the instructions

From having set up my Apple TVs, Xboxes, Roku, and several other streaming devices, I expected Step 3 to expand into a multi-step process of entering account names and passwords, or visiting some authorization website to enter a code displayed on the screen. The device doesn’t have a remote. I knew it could be controlled with a mobile device, so I also expected a pairing process that would let the mobile device be used for text entry. This isn’t what the experience was like at all. Navigating to the step-3 URL on an Android tablet took me to the Google Play page for the Chromecast application. Once installed, the app discovered the Chromecast and gave me a yes/no prompt asking whether the code displayed on the TV matched the code displayed on my tablet. After I confirmed, the tablet prompted me to select a Wi-Fi access point and enter the encryption key. Setup was complete.

Initially I thought I had missed a step, or that something had gone wrong and I hadn’t been prompted for the other information I needed to enter. Streaming-device setup hasn’t been this easy on the other devices I’ve used. I started Netflix on my tablet and it displayed a new button that hadn’t been there before: an icon that looked like a screen combined with part of the Wi-Fi icon. I pressed the button and started to play a video. When my TV started displaying an episode of Arrested Development, I had my confirmation that setup had completed successfully. I started the music application on the tablet and the same new icon was present. I started to play a song, pressed the button, and the TV was playing the music. Cool! It worked! I picked up my sister’s phone to see whether anything needed to be done for it to work with the Chromecast. Nothing did; it was already displaying the casting button.

The Start Casting button that displays in applications that support the Chromecast

The Chromecast also allows mirroring the contents of the Chrome browser if the Chromecast extension has been installed from the Chrome Web Store. It renders at most 720 lines of resolution; if the browser window is larger than this, the view on the television is scaled down. There’s a slight delay between an update in the browser and that update being reflected on the television, but it’s small enough to ignore during general sharing. I tried to use the extension to share the contents of another site (Hulu) and found that this doesn’t work; trying to share a video this way crashed the extension.

Mirroring the contents of the Chrome browser to a TV with Chromecast

The initial experience was great, and that’s important. After playing with it some more, though, I did come across a few stumbling blocks. The device is made to stream content that comes from an online account, not media that one has on their own network. I already have most of my music uploaded to my Google account. But if a song was both in my online account and saved on my device (for those occasions when I have no network connection), the song wouldn’t play. To get songs to play reliably, I had to eject the memory card containing my local copies.

Even if a song is in your online account, it won’t play remotely if a copy is also saved on your device.

The Chromecast has its own place among the other streaming devices. The other streaming devices I use can stream from more sources than the Chromecast does (Hulu, Amazon Video, content shared on my home network), but the Chromecast concentrates on a smaller set of use cases that I think will fit the needs of most. Comparing the Chromecast to some of the other streaming devices is like comparing a multi-tool to a knife. Yes, one has more potential uses. But sometimes you only need a knife.

Multi-purpose tool vs. knife: go for more capabilities, or for fewer capabilities that better fit the common case?

At a third to one-tenth the cost of other streaming solutions, and with its small form factor, the device is a good way to add streaming capabilities to every TV in a house or to take a movie to a friend’s house to watch. I’ve ordered a few more as gifts for friends and family who don’t currently have any devices for streaming to the TV. It won’t replace my other streaming devices, since they offer access to some other services that I use. But the device is here to stay.


Getting Started with Arduino and 3D Printing

Aug 01, 2013 in 3D Scanning, Technology


I’ve been making the most of a must-have set of tools for anyone engineering their own solutions: the Arduino, the Netduino, and a 3D printer. This journey started a few months ago when an unexpected hardware failure occurred while I was away from home, after hours, and too late at night to purchase replacement hardware. Even worse, it was the night before I needed to have a device up and running for a client presentation. Someone handed me an Arduino, telling me I could resolve the problem with it. Within a few hours I had an understanding of how the Arduino worked and how to use it to build a solution.

The Arduino is a single-board microcontroller platform built around Atmel processors (an 8-bit AVR on most boards; the newer Due uses a 32-bit ARM). Its development environment is free and C++ based, and it has the support of a large community of hardware and software developers ranging from professionals to hobbyists. On the software side, there are a number of ready-made components included in the development software or available for download. On the hardware side, you’ll find a wide range of additions and accessories that can be added to a solution by plugging them into the Arduino. The Netduino follows the same concept but is based on the .NET Micro Framework, with C# as the primary development language, and works with many of the same accessories as the Arduino.
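To give a sense of the development model, here is the “hello world” of these boards, a blinking LED, written for the Netduino in C#; the Arduino equivalent is a near-identical handful of lines of C++. A minimal sketch, assuming the standard Netduino SDK pin definitions:

    using System.Threading;
    using Microsoft.SPOT.Hardware;
    using SecretLabs.NETMF.Hardware.Netduino;

    public class Program
    {
        public static void Main()
        {
            // OutputPort drives a GPIO pin high or low; false = start low (off).
            OutputPort led = new OutputPort(Pins.ONBOARD_LED, false);

            while (true)
            {
                led.Write(true);   // LED on
                Thread.Sleep(500);
                led.Write(false);  // LED off
                Thread.Sleep(500);
            }
        }
    }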

A background in electronics isn’t needed to get started with the Arduino. Browsing through my local electronics store, I see accessories that include motor controllers, network adapters, a cellular modem, relays for controlling high-voltage appliances, and more. In many cases, making use of these accessories involves nothing more than plugging them in and downloading their code libraries. Having some knowledge of electronics does bring the advantage of being able to interact with devices for which ready-made solutions might not be available.

After creating a few projects with these devices, my way of viewing problems has changed. A simple real-world problem I encountered involves a family member who is usually on a different level of the house than the doorbell. She would, consequently, often not hear when someone was at the door. Previously my solution would have begun and ended with adding a second doorbell on the other level of the house. I went a step further. Knowing that this family member is attentive to incoming messages on her phone, I connected a Netduino Plus to the doorbell so that when someone rings the bell, an SMS is also sent to her phone. Less than a dollar in additional parts was needed to do this.
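Since the post doesn’t detail the circuit or the messaging service, here is a hypothetical sketch of the idea: the doorbell line is wired to a digital input on the Netduino Plus, and each press fires an HTTP request to an SMS gateway. The pin assignment and gateway URL below are placeholder assumptions, not the actual build:

    using System;
    using System.Net;
    using System.Threading;
    using Microsoft.SPOT.Hardware;
    using SecretLabs.NETMF.Hardware.Netduino;

    public class Program
    {
        static DateTime _lastRing = DateTime.MinValue;

        public static void Main()
        {
            // Doorbell button wired to pin D0; the interrupt fires on the falling edge.
            InterruptPort bell = new InterruptPort(Pins.GPIO_PIN_D0, true,
                Port.ResistorMode.PullUp, Port.InterruptMode.InterruptEdgeLow);
            bell.OnInterrupt += OnBellPressed;

            Thread.Sleep(Timeout.Infinite);  // all the work happens in the interrupt
        }

        static void OnBellPressed(uint pin, uint state, DateTime time)
        {
            // Debounce: ignore presses within five seconds of the last one.
            if ((time - _lastRing).Ticks < TimeSpan.TicksPerSecond * 5) return;
            _lastRing = time;

            // Placeholder HTTP-to-SMS gateway; substitute whatever service you use.
            WebRequest request = WebRequest.Create(
                "http://sms-gateway.example.com/send?to=5555550100&msg=doorbell");
            using (WebResponse response = request.GetResponse()) { }
        }
    }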

In developing another solution based on the Netduino and Arduino, I needed to add some electro-mechanical parts, such as servos and stepper motors, that would drive gears and belts. I ordered most of the parts I needed, but some of the gears and pulleys were out of stock, with expected availability about a week out. On the same day that I placed the order, our 3D printer arrived. Problem solved! Rather than wait for the parts to come back in stock, I could print them on the 3D printer!

In talking to friends and family about 3D printing, I’ve found it’s a topic that sometimes demands an explanation.

3D printers produce solid objects from digital designs. There are several techniques a 3D printer can use to do this; the one our printer uses is called “extrusion depositing” (more commonly known as fused deposition modeling). Layer by layer, the printer draws slices of a 3D object on top of each other, stacking them until the object is complete. Contrary to expectation, it is possible to print objects with moving parts. It’s possible to go directly from designing an object to manufacturing it.

Since I knew the specifications for the parts I needed, I was able to design them in 3D modeling software and print them. Rather than wait a week for the parts to come back in stock and then a few more days for shipping and handling, the time involved was merely what it took to draw out the parts and then go off to do something else while they printed. Much like with the Arduino, there is a large community of people producing designs available for download. Thingiverse.com and other sites allow members of the community to make their designs available for others to download and print for free. These designs include solutions to problems, toys, art, phone cases, and more.

My first experiences with the 3D printer and Arduino occurred because the other solution that I normally would have gone with — ordering the parts I needed — was initially not available to me. After having a positive experience with the 3D printer, the way in which I’ll try to find solutions for future needs may change from immediately trying to purchase a solution to looking to see if I can download or create a 3D design. As 3D printers become more affordable in the future I can see this having an impact on how people go about finding the things that they need.


Leap Motion Unboxing

Jul 29, 2013 in Lab, Technology

No editorializing; just showing our initial experience with the gestural device that fits in the palm of your hand.


KinectiChord: Touch Technology Like Never Before

Jun 18, 2013 in Experience Design, Kinect, Microsoft Kinect, Technology

This week at the Cannes Lions International Festival of Creativity, we debuted our latest creation—KinectiChord: a multiuser, multisensory experience that blends physical and digital in an unexpected and delightful way. On display in the Microsoft Advertising Beach Club, this experience allows multiple users to see, hear and feel technology like never before.


Creativity on the Vine

Jun 07, 2013 in News

At our recent Razorfish Client Summit in Las Vegas, Emerging Experiences asked our fellow fish to submit 6 seconds of creativity for our clients and partners to vote on within “The Lab”. Using the Vine application, our employees around the world created 6-second shorts that expressed their creative side. We then fed specially tagged Vine videos (#razorfishcreativity) into a unique multitouch experience for our clients and partners to vote on. Vine doesn’t have an official API, so we had to get a bit “creative” on the technology side to pull this off.

What were the results? Well, let’s just say the entries were varied… From stop-motion Rubik’s Cubes to creative mashups of Bob Lord over lunch, each entry showcased individuality and our collective creative spirit. Overall, our clients loved the experience and the ability to vote on their favorites. So which fish won?

(insert drumroll here)

1st Place: Moritz Bosselmann
View Entry

2nd Place: Matisse Miller
View Entry

3rd Place: Tomoko Fushimi-Haack
View Entry

Who said it doesn’t pay to be creative? Thank you to all the fish that entered and to our clients and partners that participated in the experience.


Adweek Feature Story: Emerging Experiences goes coast-to-coast with a new Lab in San Francisco.

Jun 04, 2013 in Lab, News


Razorfish Emerging Experiences has opened a new lab in our San Francisco Razorfish office across from Pier 39 and the renowned Fisherman’s Wharf. Equal parts workspace and client demonstration area, the lab is invaluable for our team to design, build and test some of the most engaging and transformational experiences in the marketplace. Leveraging the success of our Atlanta Lab and its evolution over the past 5 years, the San Francisco Lab is our newest digital sandbox.

In two related articles referencing the work in our new Lab, Christopher Heine writes in Adweek that “employing the latest technology at point of sale is nothing new—for years businesses from car rental companies to Nordstrom department stores have unhooked from the wires. But the trend has gone from merely ringing up sales via mobile devices to a deeply immersive in-store experience—fully digitized but crucially featuring that face-to-face element…”

The Lab showcases 360-degree video content across multiple displays and projection surfaces, and features emerging technologies such as transparent displays and multi-touch and gesture-based sensors powered by our proprietary Razorfish 5D Platform. Watch an Audi configured in precise detail through the application that powers Audi City, or sit back as data is visualized through one of our latest projects.

Physical meets digital and the customer’s journey will never be the same. Innovating Tomorrow, Today. For appointments please contact Wade Forst (wade.forst@razorfish.com), our Director of Emerging Experiences in San Francisco.


From The Razorfish Client Summit: An Interview with Jonathan Hull and Jeremy Lockhorn

May 14, 2013 in News


Emerging Experiences Lab at Converge: Razorfish Client Summit 2013

May 01, 2013 in Advertising, Experience Design, Lab, News, Technology

Take a look behind the scenes of the Lab at the ARIA in Las Vegas. A true manifestation of what we do in the Emerging Experiences group, the Lab set-up brings to life the ideas behind this year’s Client Summit theme, Convergence. To learn more about the ideas that drive our passions, read what Razorfish’s Global CEO, Bob Lord, and Global CTO, Ray Velez, have to say in their new book.


Razorfish Emerging Experiences on Display at Publicis Groupe’s Investor Day

Apr 23, 2013 in News

At the London Investor Day, Publicis Groupe’s top executives showcased market-defining tools developed to enable clients to communicate more effectively with their consumers. Among them was the Razorfish Emerging Experiences group and its 5D platform. Attendees were introduced to the platform during a presentation of Audi City, a next-generation dealership built with 5D.