Get your hands on the 5D experience by embarking on a unique shopping journey that uses a variety of platforms and technologies, including a first-of-its-kind, seamlessly synchronized transparent interactive display wall. It’s located in the Microsoft booth (1005) on Level 3. And to see more of 5D in action, head on over to emergingexperiences.com/5D.
At the same time the //Build/ conference was going down in Redmond, Washington, I was next door in Seattle for the Seattle Interactive Conference (SIC://). Besides a fondness for forward slashes, these two conferences shared a common interest in the future of technology. //Build approached this topic from the software side while SIC:// did it from the design and agency side. The Kinect for Windows technology, interestingly, was present at both events.
I was invited to SIC:// to represent EE on a panel about Natural User Interfaces. It was an amazing panel that included David Kung from Oblong, Matt von Trott from Assembly Ltd, Scott Snibbe from Snibbe Interactive and John Gaeta of FLOAT Hybrid. Our conversation about what NUI means today was preceded by a fifteen-minute talk by Oscar Murillo that showed off many K4W techniques in a holodeck-like demo. The panel itself was expertly moderated by Steve Clayton of Microsoft; you can read more about it here and here.
What made the event fascinating for me was the time I got to spend with the other panelists before and after our talk. There was a clear trajectory in our backgrounds. John is involved in the motion picture industry and helped design many of the futuristic movies (like The Matrix) that have inspired the rest of us to work with bleeding-edge interface technology. Dave’s company brought advanced academic research forward to actually realize Minority Report (one of Oblong’s founders helped design the gestural interface Tom Cruise uses in the movie). Microsoft turned gestural interfaces into a consumer technology. Matt, Scott and I are using that technology for retail and marketing, which will help fund and expand the proliferation of gestural sensors. Our collective goal is to create technology that anticipates and responds to our desires rather than simply frustrating us, as it does on most days.
We want to use technology, when it comes down to it, to hide the presence of technology in our everyday lives.
In early October, the Emerging Experiences practice’s San Francisco office brought our Razorfish 5D retail platform to Oracle OpenWorld. Within this global event was the first-ever Customer Experience Summit, which gathered industry leaders to discuss strategies for driving customer-centric initiatives while interacting with some of the most forward-looking experiences and minds in the industry.
Emerging Experiences set up our Razorfish 5D retail experience in beautiful Union Square Park. We demonstrated how a seamless customer journey can flow across touch tables, gestural sensors, digital screens, tablets and mobile apps to transform the retail experience.
The 5D installation for Oracle CX showed how each element of the contemporary brick-and-mortar store can be enhanced and streamlined. Digital displays, smartphones and HD touch tables communicated with each other to provide infinite shelves as well as an immersive experience to tell the stories behind the store brands.
Tablet software gave store associates the opportunity not only to help shoppers select items, but also to interact with their customers’ smartphones. The 5D retail experience also demonstrated how virtual dressing rooms with augmented reality can enhance in-store shopping. Each of these touch-points in turn generates massive amounts of data about the sales process.
Sharing our retail story with the attendees at the Oracle Customer Experience Summit was both extremely rewarding and entertaining. We look forward to returning next year.
We’re excited by the launch of a revolutionary showroom experience for a premier automotive brand. After a year of collaboration between Audi and a wide range of partners, Audi City has launched near Piccadilly Circus in London, ahead of the 2012 Olympics.
Audi City London is a groundbreaking dealership experience delivered by one of the most technologically advanced retail environments ever created. The digital environment features multi-touch displays for configuring your Audi vehicle from millions of possible combinations. Your personalized car is visualized in photorealistic 3D using real-time render technology, making the Audi City vehicle configurator the most advanced in the world. After personalizing your Audi, you can toss your vehicle onto one of the floor-to-ceiling digital “powerwalls” to visualize your car configuration in life-size scale. From here, you can use gestures to interact with your personalized vehicle, exploring every angle and detail in high resolution using Kinect technology.
A purely digital showroom can’t deliver the tactile experience of buying a car, so a store associate can save your configuration on an RFID-enabled USB stick and guide you into a personal consultation area that features a variety of tactile objects. These objects help customers get hands-on with the materials of the vehicle, including exterior color and finish options and interior upholstery options. Each of these tangible objects is digitally tagged with RFID technology. You can bring any of these physical objects over to the configurator experience, and the corresponding exterior paint finishes and interior options will automatically update your vehicle configuration.
When purchasing a car, the customer journey occurs across multiple channels. In order to integrate and simplify the car buying process, we’ve allowed customers to retrieve their online car configurations in the showroom environment. In addition, any car configuration made in the showroom is synchronized to your personal USB stick. Simply pop in the USB stick at home and the web-based configurator is automatically launched with the exact car configuration you created in the showroom. This allows Audi to deliver a “start anywhere, end anywhere” buying cycle for the customer, which has proven elusive for retailers.
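Audi has not published how this configuration hand-off works under the hood, but conceptually it only requires a configuration object keyed by a shared identifier that both the showroom software and the web configurator understand. Here is a minimal sketch, with entirely hypothetical names, of how a showroom configuration could be written to and read back from the stick:

```csharp
// Purely illustrative sketch (not Audi's actual implementation): a showroom
// configuration is written to the customer's USB stick as a small text file
// and read back later by a web or home configurator using the same identifier.
using System;
using System.Collections.Generic;
using System.IO;

public class CarConfiguration
{
    public Guid Id = Guid.NewGuid();   // shared key across showroom and web
    public string Model;
    public string ExteriorFinish;      // e.g. updated when an RFID-tagged paint sample is scanned
    public string Upholstery;

    public void SaveToStick(string stickRoot)
    {
        File.WriteAllLines(Path.Combine(stickRoot, "configuration.txt"), new[]
        {
            "Id=" + Id,
            "Model=" + Model,
            "ExteriorFinish=" + ExteriorFinish,
            "Upholstery=" + Upholstery
        });
    }

    public static CarConfiguration LoadFromStick(string stickRoot)
    {
        var values = new Dictionary<string, string>();
        foreach (var line in File.ReadAllLines(Path.Combine(stickRoot, "configuration.txt")))
        {
            var parts = line.Split(new[] { '=' }, 2);
            values[parts[0]] = parts[1];
        }

        return new CarConfiguration
        {
            Id = Guid.Parse(values["Id"]),
            Model = values["Model"],
            ExteriorFinish = values["ExteriorFinish"],
            Upholstery = values["Upholstery"]
        };
    }
}
```

The “start anywhere, end anywhere” promise then reduces to making that same identifier resolvable wherever the customer picks the journey back up.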
Audi City is not only a premier showroom environment; the dealership concept also represents a fundamental shift in retail strategy for the brand. This new small-footprint retail format brings Audi closer to its customers, not only geographically but also emotionally. The concept will launch in metropolitan environments and reach a younger, urban, digitally enabled demographic. After hours, the environment will serve as a cultural center for the larger community by playing host to readings, round-table discussions and art exhibitions.
“Audi City combines the best of two worlds – digital product presentation and personal contact with the dealer,” says Peter Schwarzenbauer, Member of the Board of Management at Audi. “People are placing greater emphasis than ever before on a direct and personal bond of trust with their vehicle brand – especially in respect of the increasing variety of products and available information. Thus, with Audi City we are creating a one-stop-shop for experiencing our brand. It is right in the midst of our customers’ lives, yet seamlessly connected to the online range offered by the four rings.”
Audi announced at the London launch that 20 showrooms in other major international cities will follow by 2015.
Anticipation has been building for years.
The expectation has always been that our lives will be transformed by new technologies. Everything from travel to sports and entertainment would be made new again…redefined.
And now, thanks to Delta and Madison Square Garden in partnership with Razorfish, that time has finally arrived.
Delta Air Lines’ Touch the Future of Travel has arrived at a newly refreshed yet still iconic Madison Square Garden.
In addition to the 11,000-square-foot lounge, which features select menus, multi-screen event coverage, and a clear view of professional athletes entering the arena through a glass hallway, we’ve created a unique experience for VIPs.
It’s a personalized, curated way for travelers to discover new destinations, collect content from around the globe and enjoy fantastic vistas that transport them into the magic of destination travel and discovery.
Delta’s Touch the Future of Travel is about unique inspiration, easier access to what you want, when you want it, and sharing travel ideas with friends…and Razorfish, together with Delta, is making it all happen.
After months in the making, Beginning Kinect Programming with the Microsoft Kinect SDK, published by Apress and written by Emerging Experiences team members Jarrett Webb and James Ashley, is now in print. The book provides an introductory guide to building Kinect applications using Microsoft’s Kinect for Windows SDK v1.0. Based on pre-orders alone, it spent the past several weeks on Amazon’s hot technical releases list, and it then managed to sell out on its first day of availability. The inventory, we have been told, will be restocked by this Monday, March 5th, 2012.
Click Here to Purchase/Reserve Your Copy!
Emerging Experiences has been approached before about writing books, but Kinect was the first topic we felt excited enough about to actually want to carry through with such an endeavor. We have never seen the Kinect sensor as merely a gaming device. Instead, we view it as a radical evolution in human-computer interfaces. In the same way that adding touch capabilities to a phone makes it “smart”, putting Kinects in the world is the first step in making our environments “smart”. Rather than a mere novelty, we view the Kinect as a doorway to the future. Beginning Kinect Programming with the Microsoft Kinect SDK is intended to show developers how to walk through that door.
The authors began work on Beginning Kinect Programming with several goals in mind. The primary objective was to share our knowledge of the Kinect as well as many of the techniques we have learned for building Kinect experiences. In this regard, it is one of the rare books on the Kinect that addresses developers rather than artists and designers. While it is an introductory book, it is written for experienced developers. The code examples are in C# and leverage WPF, the most powerful and feature-rich UI platform for this kind of work. The book provides enough information for other developers to build the sorts of Kinect experiences we build every day on the Emerging Experiences team. We wanted to share our secrets so others can help us push the Kinect technology to its limits. After months of writing and rewriting to keep up with the constantly changing Kinect for Windows SDK, we feel we have met these goals. It is, if nothing else, the sort of book we wish we had when we started our first primitive experiments with the Kinect over a year ago.
Features of the book include:
- Quickly start building applications within the first 15 pages
- Complete coverage of the Kinect for Windows SDK v1.0 API
- A complete history of the Kinect
- Teaches how to manipulate Kinect images using common image processing techniques and tools
- Demonstrates unique ways to use depth data
- Teaches how to take snapshots of users
- Illustrates how to turn a user’s hands into cursors (a minimal sketch appears after this list)
- Details a framework for capturing poses
- Provides an introduction to gesture detection techniques, including code demonstrations of the Wave, Swipe, Button Push and more
- Presents an extensive set of fully functional games and applications as well as useful tools
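To give a flavor of the techniques listed above, here is a minimal hand-as-cursor sketch against the Kinect for Windows SDK v1.0 skeleton stream. The scaling math and the HandCursor element are our own illustration rather than code lifted from the book, which develops far more robust versions of these ideas:

```csharp
// Minimal WPF hand-as-cursor sketch using the Kinect for Windows SDK v1.0
// skeleton stream. HandCursor is assumed to be an element declared in XAML
// inside a Canvas; the scaling math is deliberately simplistic.
using System;
using System.Linq;
using System.Windows;
using System.Windows.Controls;
using Microsoft.Kinect;

public partial class MainWindow : Window
{
    KinectSensor _sensor;
    Skeleton[] _skeletons;

    public MainWindow()
    {
        InitializeComponent();
        _sensor = KinectSensor.KinectSensors
            .FirstOrDefault(s => s.Status == KinectStatus.Connected);
        if (_sensor == null) return;

        _sensor.SkeletonStream.Enable();
        _skeletons = new Skeleton[_sensor.SkeletonStream.FrameSkeletonArrayLength];
        _sensor.SkeletonFrameReady += OnSkeletonFrameReady;
        _sensor.Start();
    }

    void OnSkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
    {
        using (SkeletonFrame frame = e.OpenSkeletonFrame())
        {
            if (frame == null) return;
            frame.CopySkeletonDataTo(_skeletons);
        }

        Skeleton user = _skeletons
            .FirstOrDefault(s => s.TrackingState == SkeletonTrackingState.Tracked);
        if (user == null) return;

        // Map the right hand's skeleton-space position (meters, roughly -1..1
        // around the sensor) onto the window and move the cursor element.
        SkeletonPoint hand = user.Joints[JointType.HandRight].Position;
        double x = (hand.X + 1.0) / 2.0 * ActualWidth;
        double y = (1.0 - (hand.Y + 1.0) / 2.0) * ActualHeight;
        Canvas.SetLeft(HandCursor, x);
        Canvas.SetTop(HandCursor, y);
    }
}
```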
Fresh out of R&D from the Razorfish Emerging Experiences team is a product code-named “5D”. 5D started out as an idea to reinvent personal shopping. Our goal was to create a retail experience platform for both consumers and sales associates that enables multi-channel sales through immersive and connected digital devices in retail environments. And the only way to do it is to seamlessly integrate five key components – devices, content, experiences, analytics and CRM – with a touch of digital magic!
The team announced 5D at the 2012 NRF Convention & Expo in New York City in partnership with NEC and Microsoft. Leveraging Windows Embedded, Microsoft Surface, MS Tag, Windows Phone and Kinect for Windows, we created a prototype around a fictitious brand, “Razorfashion,” that demonstrates how various touch points along the customer journey can attract consumers into the store, drive product engagement and arm store associates with more contextualized digital tools.
You can read the full press release here.
We recently partnered with the London-based technology company Bodymetrics to develop a means for online shoppers to buy clothes from the comfort of their couch. Whattya mean, big deal? Well, did we mention that the clothes are guaranteed to fit?
Yup, thanks to Bodymetrics’ 3D body-scanning technology, which is based on the same PrimeSense scanners and camera tech as the Microsoft Kinect, shoppers are able to have their body dimensions scanned in and saved to an online profile. Just think of it like the transporter room in Star Trek … if Scotty had a bit of an online shopping problem.
Once users have created their profile and saved their body data, they can virtually try on a wide range of clothing types such as jeans, dresses, skirts and tops from tons of partner retailers. As each piece of clothing is mapped to the on-screen avatar’s body, the user is able to see the exact fit of the item thanks to a visual overlay that depicts the tight spots of the garment. No more guessing games when you buy that pair of jeans online – you get the perfect fit, every time.
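Bodymetrics has not published its fitting model, but the core idea of a tight-spot overlay can be sketched as a simple comparison between garment and body measurements at a few zones. Everything below (zone names, thresholds, classifications) is invented purely for illustration:

```csharp
// Purely illustrative sketch of a fit overlay: compare a garment's
// measurements against the shopper's scanned body measurements at a few
// zones and classify the result. Bodymetrics' actual fitting model is not
// public; the thresholds here are made up for illustration only.
using System.Collections.Generic;

public enum Fit { Tight, Fitted, Loose }

public static class FitOverlay
{
    // Both dictionaries map a zone name ("waist", "hip", "thigh", ...)
    // to a circumference in centimeters.
    public static Dictionary<string, Fit> Evaluate(
        Dictionary<string, double> bodyCm,
        Dictionary<string, double> garmentCm)
    {
        var result = new Dictionary<string, Fit>();
        foreach (var zone in garmentCm)
        {
            double body;
            if (!bodyCm.TryGetValue(zone.Key, out body)) continue;

            double ease = zone.Value - body;                  // room the garment leaves
            if (ease < 1.0) result[zone.Key] = Fit.Tight;     // highlight as a tight spot
            else if (ease < 6.0) result[zone.Key] = Fit.Fitted;
            else result[zone.Key] = Fit.Loose;
        }
        return result;
    }
}
```

A renderer could then shade each zone of the on-screen avatar according to the Fit value it receives.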
The icing on the cake: retailers get to benefit from a drastic drop in their store return rates, since their customers can finally purchase with confidence. That, coupled with the exponential momentum and increased basket size of eCommerce purchases, means great things for apparel companies. Plus, you don’t have to listen to some phony sales associate squawking about how fabulous you look in those jeans – just take a look for yourself!
As we approach the one year anniversary of the Kinect launch, Microsoft has announced that the Kinect for PC Commercial SDK will be released in early 2012 (http://majornelson.com/2011/10/31/xbox-360-celebrates-one-year-anniversary-of-the-kinect-effect/). More than 200 businesses worldwide, including Toyota, Houghton Mifflin Harcourt and Razorfish, are involved in a pilot program to explore the commercial possibilities of the Kinect.
Until now, most companies working with the Kinect have been operating within the constraints of a research license for the Kinect SDK. Consequently, the applications that corporations have been working on have been restricted to tightly held private projects or, at most, proof-of-concept projects visible only as demo reels on the Internet. While most people are at least aware of the Kinect technology, the terms of the research license have relegated it to being an afterthought or something only understood at a distance – a nice-to-have.
The recent announcement of the timeline for the commercial license implicitly green-lights these projects to make preparations for releasing Kinect-enabled applications for everyday use. Over the next year we can expect to see the Kinect become a ubiquitous part of our daily environments, just as prevalent as interactive kiosks are today. The spread of the Kinect beyond the living room may be as dramatic as the proliferation of smartphones or tablets – one day no one knew what they were and, the next, everyone seemed to have one. In boardrooms across America, the question will no longer be whether to have a Kinect strategy but what that strategy is.
As the Kinect becomes more prevalent in our daily lives, its possibilities and limitations will come under much closer scrutiny. The potential offered by a mass-produced device that provides a video camera, an infrared depth camera and a four-microphone array with beamforming capabilities is vast. The technology can be taken in multiple directions, including computer vision for robotics, 3D modeling with multiple linked devices, inexpensive augmented reality, hands-free interactive experiences, speech-recognition-based in-store assistance and innovative computer-assisted learning.
Microsoft’s visionary strategy in designing the Kinect has revolved around off-loading processing to the operating system rather than building it solely into the hardware. This means that complex scenarios not currently supported by the Xbox can be made viable through improved software and the processing power of computers and video cards, whose prices are constantly falling. Microsoft’s Kinect technology is, in other words, scalable: it depends not on improving the Kinect hardware itself but simply on improving the software that processes the data streamed by the Kinect.
This all leads to the inevitable question – what is the future of the Kinect? After a year, what will second-generation Kinect applications look like? The answer depends on where Microsoft takes Kinect software going forward. The current research version of the Kinect SDK beta shows its roots in gaming. The visual processing, depth processing and even the acoustic models are tied to the limitations and optimizations required for the Xbox 360 gaming system. They all work best in a room about the size of your living room and begin to have trouble in small apartments. The microphone array seems to work well in standard rooms, for which it has been painstakingly optimized to deal with surround-sound speakers and audio reflections off furniture, but appears to have trouble in large spaces.
Strikingly, even though the depth camera is capable of 640 x 480 resolution, the current SDK only provides access to 320 x 240 depth streams. The SDK, likewise, does not provide depth data for objects within 800 mm (about 2 ½ feet) of the Kinect sensor, even though the camera does capture this information.
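To make that near-range limit concrete, here is a rough sketch of reading raw depth values, written against the v1.0-style API for brevity (the beta SDK discussed here uses slightly different type names). Pixels closer than the supported range simply do not come back as usable distances:

```csharp
// Rough sketch of reading raw depth with the Kinect for Windows SDK v1.0.
// Each 16-bit value packs a player index (low 3 bits) and a distance in
// millimeters (high 13 bits). Objects closer than the supported near limit
// (~800 mm in default mode) do not produce usable distances.
using System;
using System.Linq;
using Microsoft.Kinect;

class DepthDump
{
    static void Main()
    {
        KinectSensor sensor = KinectSensor.KinectSensors
            .FirstOrDefault(s => s.Status == KinectStatus.Connected);
        if (sensor == null) return;

        sensor.DepthStream.Enable(DepthImageFormat.Resolution320x240Fps30);
        short[] pixels = new short[sensor.DepthStream.FramePixelDataLength];

        sensor.DepthFrameReady += (s, e) =>
        {
            using (DepthImageFrame frame = e.OpenDepthImageFrame())
            {
                if (frame == null) return;
                frame.CopyPixelDataTo(pixels);
            }

            // Shift off the player-index bits to get distance in millimeters.
            int[] depthsMm = pixels
                .Select(p => p >> DepthImageFrame.PlayerIndexBitmaskWidth)
                .ToArray();

            int usable = depthsMm.Count(d => d >= 800);   // ~near limit in default mode
            Console.WriteLine("Pixels with a usable distance: {0}/{1}",
                              usable, depthsMm.Length);
        };

        sensor.Start();
        Console.ReadLine();
        sensor.Stop();
    }
}
```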
There are clearly performance reasons for setting these limitations. However, part of the problem also appears to be that the USB connection for the Kinect is a bottleneck, throttled for the particular USB controller configuration of the Xbox. As the Kinect moves out of the living room and into the real world, it makes sense to leave behind the restrictions imposed by tying the Kinect SDK to the Xbox. If we can use improved software running on improved hardware to boost the capabilities of Kinect for PC applications, it would be a shame to have a gaming infrastructure be the main showstopper.
Nowhere is this clearer than when we consider using the Kinect in the office. As a Kinect developer, I have to slide my chair back and away from my monitor whenever I want to debug a piece of code. Fortunately, I don’t work in a cubicle and have some open space behind me. I am also fortunate that my chair has wheels, and I have the code-slide-code routine down pat. However, I don’t see anyone wanting to use a Kinect-enabled business application in this way. Unlike the living room, which is the natural space of our home lives, the office environment of our work lives is generally cramped and close to the screen, with just enough room for a keyboard between us and our monitor. We are always within two and a half feet of the objects we work with.
Yet the workspace is one of the chief places we want to see our Kinects working. Instead of large arm movements, we would like to wave our hands or snap our fingers to make things happen on our screens. We want Minority Report writ small. To achieve this, in turn, we need to move beyond skeletal tracking and start enabling fine finger tracking.
Along the same lines, for larger movements, the skeletal tracking capabilities of the Kinect only work with the full body. At the office, sitting in our chairs, we typically never see anything below the waist. Even skeletal tracking, then, needs to be modified to take this into account and to support partial skeleton tracking at the software level.
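Until that kind of partial tracking arrives in the SDK, about the best a developer can do is treat the skeleton as upper-body-only by hand. Here is a small sketch of that workaround, again using v1.0-style type names; the particular joint list is our own choice:

```csharp
// Illustrative workaround while partial skeleton tracking is missing from
// the SDK: consider only upper-body joints, and only when they are actually
// tracked, so a seated user behind a desk doesn't confuse the application.
using System.Collections.Generic;
using System.Linq;
using Microsoft.Kinect;

static class SeatedSkeleton
{
    static readonly JointType[] UpperBody =
    {
        JointType.Head, JointType.ShoulderCenter,
        JointType.ShoulderLeft, JointType.ElbowLeft, JointType.WristLeft, JointType.HandLeft,
        JointType.ShoulderRight, JointType.ElbowRight, JointType.WristRight, JointType.HandRight
    };

    // Returns the upper-body joints we can trust for a given skeleton.
    public static IEnumerable<Joint> TrustedUpperBodyJoints(Skeleton skeleton)
    {
        return UpperBody
            .Select(type => skeleton.Joints[type])
            .Where(joint => joint.TrackingState == JointTrackingState.Tracked);
    }
}
```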
As the Kinect is being allowed to travel beyond our living rooms with the upcoming release of the commercial Kinect SDK, the software that allows developers to build applications for the Kinect needs to cut its strong dependence on gaming scenarios. This is the natural future for a technology that is maturing. This is where the Kinect is headed – not only out into the world but also up in our faces. We want and need to get closer to the Kinect.
In 2010, Microsoft released Kinect – a controller-free gaming and entertainment experience for the Xbox 360. Your body is the controller: joysticks and buttons are replaced with your movements and gestures. It turns out Kinect has many uses beyond games and entertainment. Razorfish’s Emerging Experiences team created KinectShop to demonstrate the use of the Kinect platform as a retail or at-home augmented reality shopping experience.
KinectShop allows shoppers to cycle through an assortment of products, in this case purses, and visualize the products as part of their outfit, thereby better informing the purchase decision. The natural interaction offered by the Kinect platform allows shoppers to quickly develop a 1-to-1 connection with the product through the use of augmented reality. In augmented reality, shelf space is infinite, so while this concept experience is limited to purses, it could host entire catalogs of products, such as clothing, hats, sunglasses, shoes, jewelry, makeup and more.
As an in-store experience, the retailer can bring catalog and online inventory into the store without actually having the inventory on hand. Further, it allows a shopper to still try on inventory not available in the store or out of stock, capturing a sale that might otherwise be lost. Ultimately, the shopper can decide that they like the product and add it to their shopping cart or wishlist.
Because the experience is virtual, it can become portable and even extend beyond the store. With an experience like KinectShop, a shopper can easily scan a QR code or swipe their NFC smartphone to take the experience with them and use wayfinding tools to locate the product in-store. Additionally, shoppers could later retrieve their wishlist at home using the company’s web site, a tablet or mobile experience, or even an Xbox or PC version of the experience in their living room.
Shopping is inherently a social activity, and the experience could not only support multiple users simultaneously but also leverage social tools naturally. For instance, pictures taken with the virtual products can be shared through Facebook and Twitter to help solicit feedback from friends.
We plan on leveraging the Kinect platform to enhance the experience further in future versions. For example, user recognition could help record and save preferences intuitively to your profile. Microphones can be used for voice commands – for example, saying “I love it” automatically adds the item to the wishlist. The experience will one day even offer recommendations of coordinating items based on the colors in the clothes that shoppers are wearing.
As you can see, devices like Kinect clearly have uses beyond the gaming console. We are just scratching the surface of the types of experiences this technology will enable in the future.
Read more about it over at Fast Company.