Tag: Kinect

Kinect for Windows v2 First Look

Dec 04, 2013 in Kinect, Lab, Technology

I’ve had a little less than a week to play with the new Kinect for Windows v2, thanks to the developer preview program and the Kinect MVP program. So far it is everything Kinect developers and designers have been hoping for: full HD through the color camera, a much improved depth camera, and USB 3.0 data throughput.

Additionally, much of the processing is now occurring on the GPU rather than the onboard chip or your computer’s CPU. While amazing things were possible with the first Kinect for Windows sensor, most developers found themselves pushing the performance envelope at times and wishing they could get just a little more resolution or just a little more data speed.  Now they will have both.

At this point the programming model has changed a bit between Kinect for Windows v1 and Kinect for Windows v2. While knowing the original SDK will definitely give you a leg up, a bit of work will still need to be done to port Kinect v1 apps to the new Kinect v2 SDK when it is eventually released.

What’s different between the new Kinect for Xbox One and the Kinect for Windows v2? It turns out not a lot. The Kinect for Xbox One has a special USB 3.0-based connector that carries both power and data from the console. Because it is a non-standard connector, it can’t be plugged straight into a PC (unlike the original Kinect, which had a standard USB 2.0 plug).

To make the new Kinect work with a PC, then, requires a special breakout board. This board serves as an adapter with three ports: one for the Kinect, one for a power source, and one for a standard USB 3.0 cable.

We can also expect the firmware on the two versions of the new sensor to diverge over time, as happened with the original Kinect.

Skeleton detection is greatly improved with the new Kinect. Not only are more joints now detected, but many of the jitters developers became used to working around are now gone. The new SDK recognizes up to six skeletons rather than just two. Finally, because of the improved time-of-flight depth camera, which replaces the PrimeSense technology used in the previous hardware, the accuracy of the skeleton detection is much better and includes excellent hand detection. Grip recognition as well as Lasso recognition (two fingers used to draw) are now available out of the box – even in this early alpha version of the SDK.
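For the curious, here is roughly what that looks like in code. This is a minimal sketch against the preview’s managed API – names like KinectSensor.GetDefault, BodyFrameReader and HandState are taken from the alpha SDK and may well change before release:

```csharp
// Minimal sketch against the alpha Kinect v2 SDK; API names may change
// before the SDK ships.
using System;
using Microsoft.Kinect;

class HandStateSketch
{
    static void Main()
    {
        KinectSensor sensor = KinectSensor.GetDefault();
        sensor.Open();

        // The v2 sensor tracks up to six bodies at once.
        Body[] bodies = new Body[sensor.BodyFrameSource.BodyCount];
        BodyFrameReader reader = sensor.BodyFrameSource.OpenReader();

        reader.FrameArrived += (s, e) =>
        {
            using (BodyFrame frame = e.FrameReference.AcquireFrame())
            {
                if (frame == null) return;
                frame.GetAndRefreshBodyData(bodies);

                foreach (Body body in bodies)
                {
                    if (!body.IsTracked) continue;

                    // Grip and Lasso come straight from the SDK – no
                    // hand-rolled finger tracking required.
                    if (body.HandRightState == HandState.Closed)
                        Console.WriteLine("Right hand: grip");
                    else if (body.HandRightState == HandState.Lasso)
                        Console.WriteLine("Right hand: lasso");
                }
            }
        };

        Console.ReadLine();
        sensor.Close();
    }
}
```

With the original sensor, grip detection like this required the separate interaction toolkit or custom depth processing; here it is simply a property on each tracked body.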

I won’t hesitate to say – even this early in the game – that the new hardware is amazing and is leaps and bounds better than the original sensor. The big question, though, is whether it will take off the way the original hardware did.

If you recall, when Microsoft released the first Kinect sensor they didn’t have immediate plans to use it for anything other than a game controller – no SDK, no motor controller, not a single luxury. Instead, creative developers, artists, researchers and hackers figured out ways to read the raw USB data and started manipulating it to create amazingly original applications that took advantage of the depth sensor – and they posted them to the Internet.

Will this happen the second time around? Microsoft is endeavoring to do better this time by getting an SDK out much earlier. As I mentioned above, the alpha SDK for Kinect v2 is already available to people in the developer preview program. The trick will be in attracting the types of creative people who were drawn to the Kinect two years ago – the kind of creative technologists Microsoft has always had trouble attracting to other products like Windows Phone and Windows tablets.

My colleagues and I at Razorfish Emerging Experiences are currently working on combining the new Kinect with other technologies such as Oculus Rift, Google Glass, Unity 3D, Cinder, Leap Motion and 4K video. Like a modern-day scrying device (or simply a mad scientist’s experiment), we’re hoping that by mixing all these gadgets together we’ll get a glimpse of what the future looks like and, perhaps, even help to create that future.


The Presence of Technology

Nov 07, 2012 in Microsoft Kinect, News, Technology

 

At the same time the //Build/ conference was going down in Redmond, Washington, I was next door in Seattle for the Seattle Interactive Conference (SIC://). Besides a fondness for forward slashes, these two conferences shared a common interest in the future of technology. //Build approached this topic from the software side while SIC:// did it from the design and agency side. The Kinect for Windows technology, interestingly, was present at both events.

I was invited to SIC:// in order to represent EE on a panel about Natural User Interfaces. It was an amazing panel that included David Kung from Oblong, Matt von Trott from Assembly Ltd, Scott Snibbe from Snibbe Interactive and John Gaeta of FLOAT Hybrid. Our conversation about what NUI means today was preceded by an amazing fifteen minute talk by Oscar Murillo that showed off many K4W techniques in a holodeck-like demo. You can read more about the panel here and here. It was expertly moderated by Steve Clayton of Microsoft.

What made the event fascinating for me was the time I got to spend with the other panelists before and after our talk. There was a clear trajectory in our backgrounds. John is involved in the motion picture industry and helped design many of the futuristic movies (like The Matrix) that have inspired the rest of us to work with bleeding-edge interface technology. Dave’s company brought advanced academic research forward to actually realize Minority Report (one of Oblong’s founders helped design the gestural interface Tom Cruise uses in the movie). Microsoft turned gestural interfaces into a consumer technology. Matt, Scott and I are using it for retail and marketing, which will help fund and expand the proliferation of gestural sensors. Our collective goal is to create technology that anticipates and responds to our desires rather than simply frustrating us on most days.

We want to use technology, when it comes down to it, to hide the presence of technology in our everyday lives.


5D at Oracle OpenWorld

Oct 30, 2012 in 5D, Microsoft Kinect, Microsoft Surface, Touchscreen


In early October, the Emerging Experiences practice’s San Francisco office brought our Razorfish 5D retail platform to Oracle OpenWorld. Within this global event was the first-ever Customer Experience Summit, which gathered industry leaders together to discuss strategies for driving customer-centric initiatives while interacting with some of the most future-forward experiences and minds.

Emerging Experiences set up our Razorfish 5D retail experience in beautiful Union Square park. We demonstrated how a seamless customer journey can cross over touch tables, gestural sensors, digital screens, tablets and mobile apps to transform the retail experience.

The 5D installation for Oracle CX showed how each element of the contemporary brick-and-mortar store can be enhanced and streamlined. Digital displays, smartphones and HD touch tables communicated with each other to provide infinite shelves as well as an immersive experience to tell the stories behind the store brands.

Tablet software provided store associates with the opportunity to not only help shoppers select items, but even interact with their customers’ smartphones. The 5D retail experience also demonstrated how virtual dressing rooms with augmented reality can enhance the retail experience. Each of these touch-points in turn generates massive amounts of data about the sales process.

Sharing our retail story with the attendees at the Oracle Customer Experience Summit was both extremely rewarding and entertaining. We look forward to returning next year.


The Science of the Perfect Fit

Jan 10, 2012 in Experience Design, Microsoft Kinect, Portfolio, Technology

We recently partnered with the London-based technology company Bodymetrics to develop a means for online shoppers to buy clothes from the comfort of their couch. Whattya mean, big deal? Well, did we mention that the clothes are guaranteed to fit?

Yup, thanks to Bodymetrics’ 3D body-scanning technology, which is based on the same PrimeSense scanners and camera tech as the Microsoft Kinect, shoppers are able to have their body dimensions scanned in and saved to an online profile. Just think of it like the transporter room in Star Trek … if Scotty had a bit of an online shopping problem.

Once users have created their profile and saved their body data, they can virtually try on a wide range of clothing types such as jeans, dresses, skirts and tops from tons of partner retailers. As each piece of clothing is mapped to the on-screen avatar’s body, the user is able to see the exact fit of the item thanks to a visual overlay that depicts the tight spots of the garment. No more guessing games when you buy that pair of jeans online – you get the perfect fit, every time.

The icing on the cake – retailers get to benefit from a drastic drop in their store return rates since their customers can finally purchase with confidence. That, coupled with the growing momentum and increased basket size of eCommerce purchases, means great things for apparel companies. Plus, you don’t have to listen to some phony sales associate squawking about how fabulous you look in those jeans – just take a look for yourself!


Getting Up Close to the Kinect

Nov 03, 2011 in Microsoft Kinect, Technology

As we approach the one year anniversary of the Kinect launch, Microsoft has announced that the Kinect for PC Commercial SDK will be released in early 2012 (http://majornelson.com/2011/10/31/xbox-360-celebrates-one-year-anniversary-of-the-kinect-effect/). More than 200 businesses worldwide, including Toyota, Houghton Mifflin Harcourt and Razorfish, are involved in a pilot program to explore the commercial possibilities of the Kinect.

Until now, most companies working with the Kinect have been working within the constraints of a research license for the Kinect SDK. Consequently, the applications that corporations have been working on have been restricted to tightly held private projects or, at most, proof-of-concept projects visible only as demo reels on the Internet. While most people are at least aware of the Kinect technology, the terms of the research license have relegated it to being an afterthought or something only understood at a distance – a nice-to-have.

The recent announcement of the timeline for the commercial license implicitly green-lights these projects to make preparations for releasing Kinect-enabled applications for everyday use. Over the next year we can expect to see the Kinect become a ubiquitous part of our daily environments, just as prevalent as interactive kiosks are today. The spread of the Kinect beyond the living room may be as dramatic as the proliferation of smartphones or tablets – one day no one knew what they were and, the next, everyone seemed to have one. In boardrooms across America, the question will no longer be whether to have a Kinect strategy but what that strategy is.

As the Kinect becomes more prevalent in our daily lives, its possibilities and limitations will undergo much closer scrutiny. The potential offered by a mass-produced device that provides a video camera, an infrared depth camera and a four-microphone array with beamforming capabilities is vast. The technology can be taken in multiple directions: computer vision for robotics, 3D modeling with multiple linked devices, inexpensive augmented reality, hands-free interactive experiences, speech-recognition-based in-store assistance and innovative computer-assisted learning.

That Microsoft’s visionary strategy in designing the Kinect has revolved around off-loading processing to the operating system, rather than building it solely into the hardware, means that complex scenarios not currently supported by the Xbox can be made viable through improved software and the processing power of computers and video cards, the prices of which are constantly falling. Microsoft’s Kinect technology is scalable: it depends not on improving the Kinect hardware itself but simply on improving the software that processes the data streamed by the Kinect.

This all leads to the inevitable question – what is the future of the Kinect? After a year, what will second-generation Kinect applications look like? The answer depends on where Microsoft takes Kinect software going forward. The current research version of the Kinect SDK beta shows its roots in gaming. The visual processing, depth processing and even the acoustical models are tied to the limitations and optimizations required for the Xbox 360 gaming system. They all work best in a space about the size of your living room and even begin to have trouble in small apartments. The microphone array seems to work well in standard rooms, for which it has been painstakingly optimized to deal with surround-sound speakers and audio reflections off furniture, but appears to have trouble in large spaces.

Strikingly, even though the depth camera is capable of 640 x 480 resolution, the current SDK only provides access to 320 x 240 image streams. The Kinect SDK, likewise, does not provide depth data for objects within 800 mm (about 2 ½ feet) of the Kinect sensor even though the camera does capture this information.
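To make those limits concrete, here is a minimal sketch using the managed API naming from the Kinect for Windows SDK (identifiers such as DepthImageFormat.Resolution320x240Fps30 have shifted between SDK drops, so treat the names as illustrative):

```csharp
// Minimal sketch of the depth limits discussed above. Identifiers follow
// the managed Kinect for Windows SDK and may differ between SDK drops.
using System;
using System.Linq;
using Microsoft.Kinect;

class DepthLimitsSketch
{
    static void Main()
    {
        KinectSensor sensor = KinectSensor.KinectSensors
            .FirstOrDefault(k => k.Status == KinectStatus.Connected);
        if (sensor == null) return;

        // 320 x 240 is the ceiling the SDK exposes, although the camera
        // itself captures 640 x 480.
        sensor.DepthStream.Enable(DepthImageFormat.Resolution320x240Fps30);

        sensor.DepthFrameReady += (s, e) =>
        {
            using (DepthImageFrame frame = e.OpenDepthImageFrame())
            {
                if (frame == null) return;

                short[] raw = new short[frame.PixelDataLength];
                frame.CopyPixelDataTo(raw);

                // The low 3 bits of each value carry a player index; the
                // rest is depth in millimeters. Anything inside the
                // ~800 mm dead zone comes back as 0 (unknown) rather than
                // as a usable distance.
                int unknown = raw.Count(v =>
                    (v >> DepthImageFrame.PlayerIndexBitmaskWidth) == 0);
                Console.WriteLine("{0} pixels with no usable depth", unknown);
            }
        };

        sensor.Start();
        Console.ReadLine();
        sensor.Stop();
    }
}
```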

There are clearly performance reasons for setting these limitations. However, part of the problem also appears to be that the Kinect’s USB connection is a bottleneck, throttled to match the particular USB controller configuration of the Xbox. As the Kinect moves out of the living room and into the real world, it makes sense to leave behind the restrictions imposed by tying the Kinect SDK to the Xbox. If we can use improved software running on improved hardware to boost the capabilities of Kinect for PC applications, it would be a shame to have a gaming infrastructure be the main showstopper.

Nowhere is this more clear than when we consider using the Kinect in the office. As a Kinect developer, I have to slide my chair back and away from my monitor whenever I want to debug a piece of code. Fortunately I don’t work in a cubicle and have some open space behind me. I am also fortunate that my chair has wheels and I have the code – slide – code routine down pat. However I don’t see anyone wanting to use a Kinect-enabled business application in this way. Unlike the living room, which is the natural space of our home lives, the office environment of our work lives is generally cramped and close to the screen with just enough room for a keyboard between us and our monitor. We are always within two and a half feet of the objects we work with.

Yet the workspace is one of the chief places we want to see our Kinects working. And instead of large arm movements, we would like to wave our hands or snap our fingers in order to make things happen on our screens. We want Minority Report writ small. In order to achieve this, in turn, we need to move beyond skeletal tracking and start enabling fine finger tracking.

Along the same lines, for larger movements, the skeletal tracking capabilities of the Kinect only work with the full body. At the office, sitting in our office chairs, the sensor typically never sees anything below the waist. Even skeletal tracking, then, needs to be modified to take this into account and to support partial skeleton tracking at the software level.
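Purely as a hypothetical sketch of what we would like to see – no such flag exists in the current SDK beta, and SkeletonTrackingMode is an invented name here – partial tracking could be as simple as a mode switch on the skeleton stream:

```csharp
// Hypothetical API sketch – SkeletonTrackingMode does not exist in the
// current beta; this is the shape of the partial tracking we would like.
sensor.SkeletonStream.Enable();
sensor.SkeletonStream.TrackingMode = SkeletonTrackingMode.Seated; // track upper-body joints only
sensor.SkeletonFrameReady += OnSkeletonFrame; // frames would carry only the joints the sensor can see
```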

As the Kinect is being allowed to travel beyond our living rooms with the upcoming release of the commercial Kinect SDK, the software that allows developers to build applications for the Kinect needs to cut its strong dependence on gaming scenarios. This is the natural future for a technology that is maturing. This is where the Kinect is headed – not only out into the world but also up in our faces. We want and need to get closer to the Kinect.


The Razorfish Emerging Experiences team takes on ReMIX South

Aug 07, 2011 in Kinect, Mobile, News, Technology


The Razorfish Emerging Experiences team showed up in force for the ReMIX South conference. Luke Hamilton presented “The Interface Revolution,” a discussion about emerging tablet technologies and what they mean for consumers; he also provided best practices for creating tablet experiences and key insights on how to bring these interfaces across multiple devices. Jarrett Webb presented “An Introduction to Kinect Development,” providing insight on how to get started building experiences for the Kinect hardware. Steve Dawson and Alex Nichols presented “Kinecting Technologies,” which recreated scenes from famous sci-fi movies using the Kinect combined with other advanced technologies.

While not presenting at the event, the team enjoyed presentations by Albert Shum, Arturo Toledo, Rick Barraza, Josh Blake and many other experts in the fields of Kinect, Tablet/Mobile development and UX/Design.

For those who are interested, we encourage you to download the code for the Kinecting Technologies presentation. In order to run the samples, you’ll need:

Additionally, the voice-control home automation sample requires the X10 ActiveHome Pro Hardware and the X10 ActiveHome Pro SDK.

Thanks go out to the organizers of ReMIX South for putting together a wonderful event. We’ll see you next year!

Watch the session videos here.


DaVinci Kinect featured in WIRED magazine

Jul 23, 2011 in Kinect, News

Last month, WIRED magazine published an article entitled “A Thousand Points of Infrared Light” which highlights the effect the Kinect has had in transforming the way we interact with digital experiences. Timed around the announcement of the official Kinect SDK, the article focuses initially on how this revolutionary device allowed researchers in robotics to take their experiments out of the lab at little cost (the Microsoft Kinect is an add-on peripheral that costs $150).

It’s a great read on how a company can embrace the “hacker” community, thereby supporting innovation well beyond a product’s intended use. Microsoft’s stance on Kinect hacks was uncertain for the first couple of weeks, but the company soon embraced the community.

WIRED researched the thousands of Kinect hacks in the wild and invited a handful of researchers, visual artists and technicians to be included in the magazine. We were excited to be included. Check out the iPad version of the magazine to see video of our DaVinci Kinect experience.

Thanks again to WIRED for the props and we look forward to sharing more experiences with everyone in the near future.


DaVinci Kinect Painting the Town at E3

Jun 03, 2011 in Microsoft Kinect, News

Back in November 2010, we posted a video of a little Microsoft Kinect app we called “DaVinci Kinect.” It’s a prototype we originally built for Microsoft Surface that blurs the lines between the physical and virtual world.

But as soon as we got our hands on the Kinect hardware, we updated the app to take advantage of the new platform and interactions – as well as extended the technology to recognize hand/finger gestures. With our latest iteration, hand gestures are used to create objects and control the physics of the environment. The user’s hands appear in the interface, which allows one to literally grab objects out of thin air and move them around the environment. Additional gestures allow folks to affect gravity, magnetism and attraction.

After the blog was posted, we received a ton of attention from the likes of Gizmodo and Engadget. And now, we have an opportunity to demo the app at E3!  We’ve been working on a version for the Microsoft Surface v2 as well, so we’ve integrated the new graphics, interactions and a fun little homage to Mr. Lucas.

We’ll post footage of the event next week. Hope to see you there!


Thoughts on MIX 11: Looking Beyond the Web

Apr 20, 2011 in Experience Design, News, Technology

This year, Razorfish sent several of our people to MIX 11, the annual Microsoft-sponsored conference in Las Vegas for developers and designers.

So much happened during our week at MIX that it is difficult to summarize it all thematically. There were announcements and sessions on several major topics: IE9, HTML5, ASP.NET MVC 3, Silverlight 5, the Windows Phone Mango release, and the Kinect SDK. In addition, there were also appearances from MS Surface v2, Windows Azure, OData and SharePoint, as well as a remarkable set of UX presentations.

[Image: MIX 11 keynote sketch]

The word on everyone’s lips seemed to be fragmentation, whether in reference to the expected HTML5 compatibility issues between future browsers (which the emphasis on the IE9 “native” browser experience only exacerbated) or to the greater array of Microsoft development technologies fighting for developers’ attentions.

What the four Razorfish attendees at MIX saw, on the contrary, were patterns of evolution. The much-ballyhooed struggles between the Windows team and the development team inside Microsoft over the future of HTML5 and Silverlight indicate to us that Microsoft can still respond to a rapidly changing worldwide technology ecosystem. When a product is struggling in a niche where it was doing fine a year ago, it can be refitted to survive in a new niche. Such is the case with Silverlight, originally intended as a Flash-killer. Silverlight developers never truly adopted the original Flash-killer strategy and instead used Silverlight to develop more sophisticated and interesting line-of-business applications. The problem is that LOB applications do not really belong on the web; they belong behind firewalls. The lack of casual games written in Silverlight likely limited Silverlight’s ability to drive plugin downloads and gain browser share. So instead, the strengths of Silverlight are being moved to the desktop as well as specialized platforms such as Windows Phone, the XBOX (?), and possibly Windows vNext.

WPF, which was once the pre-eminent desktop development platform, is in turn becoming a specialized tool for NUI development for multi-touch, Surface and Kinect.  The announcement of the Kinect SDK itself demonstrates Microsoft’s continuing ability to innovate and surprise.  It is, in the best sense of the term, a fortuitous mutation.

This all leaves HTML5 as the preferred technology for the web.  We of course see the early signs of browser compatibility issues. At the same time, though, we have each been through this before and survived. The extra gyrations developers will have to go through will, in the end, provide the illusion consumers desire – that the same application can run similarly on any operating system and any device.  As one MIX speaker put it, “The technology you use impresses no one.  The experience you create with it is everything.”

Windows Phone 7

Speaking of devices, we are excited to see that the WP7 team is not only going for parity with other smartphones but is firing warning shots across their bows with the much-touted Mango release. Features we’re used to, like multitasking, are being expanded beyond current implementations with updating live tiles and “Live Agents,” which allow for more full-featured multitasking.

There was naturally some complaining about the placement of various keynotes and sessions.  With the multiple announcements and cross-blocking sessions, isn’t there a danger that individual messages will get drowned out in the general cacophony?  We find that the panoply of conflicting viewpoints is one of the chief charms of MIX. Microsoft is not Apple.  To borrow from Isaiah Berlin’s famous title, Apple is the hedgehog that does one thing well; Microsoft is the fox that explores all avenues and experiences.  The great strength of Microsoft is its ability to challenge developers and create new harmonies out of these encounters. Should MIX ever be split up into different web, Silverlight, Windows Phone and UX conferences, we would all be poorer for it since all we would ever get would be our own opinions reflected back on ourselves – an echo chamber effect that will only serve to make us all deaf.

The overall quality of the sessions and boot camps was extremely high this year. In the past, we have been happy with a 60% success rate on talks. This year roughly 85% of the talks rang our internal bells. Certain sessions deserve a special shout-out, however.

While all the UX lightning talks were extraordinary,  August de los Reyes’s 21st Century Design (10’ 45”) talk took it to a different level.  In the live session, the slide deck itself was the star with the brilliant August narrating it much as Peter Jones was the voice of the book in the old Hitchhiker’s Guide to the Galaxy television series.

Despite its inauspicious title, Ivan Tashev’s talk Audio for Kinect revealed what a truly remarkable device the Kinect really is. We honestly didn’t understand half of the technical stuff and we became queasy when formulas started flying across the screen. What we learned, though, was that only a fragment of the Kinect’s full audio capability is currently being used.  Dr. Tashev demonstrated the ability of the Kinect’s audio algorithms to pick out two separate speakers, one reading Chinese and the other reading Korean, and separate them into different channels.  All of this cool functionality will, moreover, be handed over to developers when the Kinect SDK beta is released at the end of spring.

Finally, we cannot say enough good things about Luis Cabrera and his willingness to demonstrate the Surface 2 at work in A Whole NUI World. Razorfish, of course, has a special affinity for anything Surface. What was outstanding in this presentation was not only the beauty and power of the new Surface devices but also the amount of thought that has gone into the tooling. Kudos to the Surface team – they’re reaching for a goal that is more than just a new technology: a new way for people to interact with computers and each other.

By the end of MIX, we were all quite exhausted mentally and physically. It may take us a full year – until the next MIX – to finish digesting everything that we learned and experienced at MIX 11.

So long, Microsoft, and thanks for all the Kinects.


DaVinci Goes Touchless With XBox Kinect

Dec 02, 2010 in Microsoft Kinect

The launch of Xbox Kinect has caused much excitement in the open source community. In the last few weeks, developers have managed to tap into the hardware with impressive results. We’ve seen applications ranging from gesture-based experiences to 3D imaging.

We’ve taken this exciting opportunity to port our popular DaVinci experience to the Kinect platform. Gestures are used to create objects and control the physics of the environment. Your hands appear in the interface which allows you to literally grab objects out of thin air and move them in the environment. Additional gestures allow you to affect the gravity, magnetism and “planetary attraction”.

To date, many of the experiments in gestural interface development have not taken the hands into account. Unfortunately, the result is an experience that isn’t precise – users have no context for where they are interacting in the virtual space, and 1-to-1 manipulation of objects in a scene proves difficult. By using a clenched hand to signify “grabbing” an object and an open hand to signify “releasing” an object, we are able to create experiences with a higher level of precision which can mimic a touch-based experience. In fact, we’ve created a Kinect plugin to enable our entire suite of touch-based experiences to work with gestures – more videos to come!
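As a rough illustration of the underlying idea – hypothetical code, not our production plugin; HandPhase, the openness value and the thresholds are all invented for this sketch – grab and release reduce to a small state machine driven by how open the tracked hand is:

```csharp
// Hypothetical sketch of grab/release logic. The names and thresholds
// are invented for illustration; the real plugin is driven by the
// Kinect's hand-tracking data.
enum HandPhase { Open, Grabbing }

class GrabStateMachine
{
    HandPhase phase = HandPhase.Open;

    // openness: 0.0 = fully clenched fist, 1.0 = fully open hand.
    // Two different thresholds (hysteresis) keep a half-open hand from
    // flickering between grab and release on every frame.
    public void Update(double openness, double x, double y)
    {
        if (phase == HandPhase.Open && openness < 0.3)
        {
            phase = HandPhase.Grabbing;
            OnGrab(x, y);        // pick up the object under the hand
        }
        else if (phase == HandPhase.Grabbing && openness > 0.7)
        {
            phase = HandPhase.Open;
            OnRelease(x, y);     // drop it where the hand is now
        }
        else if (phase == HandPhase.Grabbing)
        {
            OnDrag(x, y);        // 1-to-1 manipulation while clenched
        }
    }

    void OnGrab(double x, double y) { /* attach nearest object */ }
    void OnDrag(double x, double y) { /* move attached object */ }
    void OnRelease(double x, double y) { /* detach object */ }
}
```

The hysteresis between the two thresholds is what makes the interaction feel precise: the hand must be decisively closed to grab and decisively open to release.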

Gesture-based interaction is great when touch isn’t practical. For instance, on a large projected display like the one shown in the video above, it is difficult or physically impossible to control the entire area using touch. Using a technology like Kinect, we can create a virtual canvas in mid-air in front of the user. Interactions within this virtual canvas space are projected into the experience, as shown in the DaVinci example.

To be honest, we had a blast playing with this experience. It definitely fulfilled all of our Star Wars fantasies of controlling objects with your mind. We’ll be adding more features in the coming weeks including the Darth Vader death grip. Stay tuned!

“Control, control, you must learn control.” – Yoda