Author Archive

Audi City: Inventing the Dealership of the Future

Jul 19, 2012 in 5D, Experience Design, Kinect, Microsoft Kinect, Mobile, Multi-touch, News, Portfolio, Retail, Technology, Touchscreen

We’re excited by the launch of a revolutionary showroom experience for a premier automotive brand. After a year of collaboration between Audi and a wide range of partners, Audi City has launched near Piccadilly Circus in London, ahead of the 2012 Olympics.

Piccadilly Circus in London

Audi City London is a groundbreaking dealership experience delivered by one of the most technologically advanced retail environments ever created. The digital environment features multi-touch displays for configuring your Audi vehicle from millions of possible combinations. Your personalized car is visualized in photorealistic 3D using real-time render technology, making the Audi City vehicle configurator the most advanced in the world. After personalizing your Audi, you can toss your vehicle onto one of the floor-to-ceiling digital “powerwalls” to visualize your configuration at life-size scale. From there, you can use gestures to interact with your personalized vehicle, exploring every angle and detail in high resolution using Kinect technology.

credit: Audi

A purely digital showroom can’t deliver on the tactile experience of buying a car. Therefore, a store associate can save your configuration on an RFID-enabled USB stick and guide you into a personal consultation area that features a variety of tactile objects. These objects let customers get hands-on with the vehicle’s materials, including exterior color and finish options and interior upholstery options. Each of these tangible objects is digitally tagged using RFID technology. You can bring any of these physical objects over to the configurator experience, and the corresponding exterior paint finish or interior option will automatically update your vehicle configuration.

credit: Audi
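For the curious, here’s a minimal sketch of how tagged objects can drive a configurator. Everything here – the tag IDs, the option names, the reader callback – is illustrative rather than the production code, but the underlying pattern is a simple lookup from physical tag to configuration option:

```csharp
using System;
using System.Collections.Generic;

// Illustrative sketch: map physical RFID tags to configurator options.
// Tag IDs, option names and the reader callback are all hypothetical.
class ConfiguratorBridge
{
    static readonly Dictionary<string, string> TagToOption =
        new Dictionary<string, string>
        {
            { "TAG-0001", "ExteriorPaint:DaytonaGrey" },
            { "TAG-0002", "ExteriorPaint:MisanoRed" },
            { "TAG-0003", "Upholstery:FineNappaLeather" },
        };

    public event Action<string> OptionSelected;

    // Called by the (hypothetical) RFID reader whenever a tagged sample
    // is placed near the configurator.
    public void OnTagRead(string tagId)
    {
        string option;
        if (TagToOption.TryGetValue(tagId, out option) && OptionSelected != null)
            OptionSelected(option); // the configurator updates the 3D render
    }
}
```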

When purchasing a car, the customer journey occurs across multiple channels. In order to integrate and simplify the car buying process, we’ve allowed customers to retrieve their online car configurations in the showroom environment. In addition, any car configuration made in the showroom is synchronized to your personal USB stick. Simply pop in the USB stick at home and the web-based configurator is automatically launched with the exact car configuration you created in the showroom. This allows Audi to deliver a “start anywhere, end anywhere” buying cycle for the customer, which has proven elusive for retailers.
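Under the hood, a hand-off like this comes down to serializing the configuration to the stick in a format both the showroom and the web configurator understand. Here’s a hedged sketch, with an illustrative class shape and file name standing in for the production format:

```csharp
using System.IO;
using System.Xml.Serialization;

// Hedged sketch of round-tripping a configuration through a file on the
// customer's USB stick. The class shape and file name are assumptions.
public class CarConfiguration
{
    public string Model;
    public string ExteriorPaint;
    public string Upholstery;
}

public static class ConfigurationStore
{
    static readonly XmlSerializer Serializer =
        new XmlSerializer(typeof(CarConfiguration));

    // Written by the showroom configurator when the associate hands over the stick.
    public static void Save(CarConfiguration config, string driveRoot)
    {
        using (var stream = File.Create(Path.Combine(driveRoot, "configuration.xml")))
            Serializer.Serialize(stream, config);
    }

    // Read by the web configurator when the stick is plugged in at home.
    public static CarConfiguration Load(string driveRoot)
    {
        using (var stream = File.OpenRead(Path.Combine(driveRoot, "configuration.xml")))
            return (CarConfiguration)Serializer.Deserialize(stream);
    }
}
```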

Not only is Audi City a premier showroom environment; the dealership concept also represents a fundamental shift in retail strategy for the brand. This new small-footprint retail format brings Audi closer to their customers, not only geographically but also emotionally. The small-footprint concept will launch in metropolitan environments and reach a younger, urban, digitally-enabled demographic. After hours, the environment will serve as a cultural center in the larger community by playing host to readings, round-table discussions and art exhibitions.

credit: Audi

“Audi City combines the best of two worlds – digital product presentation and personal contact with the dealer” says Peter Schwarzenbauer, Member of the Board of Management at Audi. “People are placing greater emphasis than ever before on a direct and personal bond of trust with their vehicle brand – especially in respect of the increasing variety of products and available information. Thus, with Audi City we are creating a one-stop-shop for experiencing our brand. It is right in the midst of our customers’ lives, yet seamlessly connected to the online range offered by the four rings.”

Audi announced at the London launch that 20 showrooms in other major international cities will follow by 2015.


The First Official Microsoft Kinect SDK Book is Finally Here!

Mar 02, 2012 in Microsoft Kinect, News

After months in the making, Beginning Kinect Programming with the Microsoft Kinect SDK, published by Apress and written by Emerging Experiences team members Jarrett Webb and James Ashley, is now in print. The book provides an introductory guide to building Kinect applications using Microsoft’s Kinect for Windows SDK v1.0. Based on pre-orders alone, it has been on Amazon’s hot technical releases list for the past several weeks, and it managed to sell out on its first day of availability. The inventory, we have been told, will be restocked by this Monday, March 5th, 2012.

Click Here to Purchase/Reserve Your Copy!

Emerging Experiences has been approached before about writing books, but Kinect was the first topic we felt excited enough about to actually want to carry through with such an endeavor. We have never seen the Kinect sensor as merely a gaming device. Instead, we view it as a radical evolution in human-computer interfaces. In the same way that adding touch capabilities to a phone makes it “smart”, putting Kinects in the world is the first step in making our environments “smart”. Rather than a mere novelty, we view the Kinect as a doorway to the future. Beginning Kinect Programming with the Microsoft Kinect SDK is intended to show developers how to walk through that door.

The authors began work on Beginning Kinect Programming with several goals in mind. The primary objective was to share our knowledge of the Kinect as well as many of the techniques we have learned while building Kinect experiences. In this regard, it is one of the rare books on Kinect that addresses developers rather than artists and designers. While it is an introductory book, it is written for experienced developers. The code examples are in C# and leverage WPF because it is the most powerful and rich UI platform. This book provides enough information for other developers to build the sorts of Kinect experiences we build every day on the Emerging Experiences team. We wanted to share our secrets so others can help us push the Kinect technology to its limits. After months of writing and constant rewriting to keep up with the constantly changing Kinect for Windows SDK, we feel we have met these goals. It is, if nothing else, the sort of book we wish we had when we started our first primitive experiments with the Kinect over a year ago.

Features of the book include:

  • Quickly start building applications within the first 15 pages
  • Complete coverage of the Kinect for Windows SDK v1.0 API
  • A complete history of the Kinect
  • Teaches how to manipulate Kinect images using common image processing techniques and tools
  • Demonstrates unique ways to use depth data
  • Teaches how to take snapshots of users
  • Illustrates how to turn a user’s hands into cursors
  • Details a framework for capturing poses
  • Provides an introduction to gesture detection techniques, including code demonstrations of the Wave, Swipe, Button Push and more (see the wave detector sketched after this list)
  • Presents an extensive set of fully functional games and applications as well as useful tools
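To give a taste of the gesture material, here’s a minimal wave detector in the spirit of the book’s C# examples (a sketch, not the book’s actual code). A wave reads as repeated left/right swings of the hand around the elbow, so the detector simply counts direction reversals of the hand’s skeleton-space X position:

```csharp
using System;

// Sketch: detect a wave by counting left/right reversals of the hand
// relative to the elbow. Thresholds are illustrative.
class WaveDetector
{
    const double Threshold = 0.05;     // meters the hand must travel past the elbow
    const int RequiredReversals = 3;   // swings needed before we call it a wave

    int reversals;
    int lastDirection; // -1 = left of elbow, +1 = right, 0 = unknown

    // Feed skeleton-space X positions (meters) every frame.
    public bool Update(double handX, double elbowX)
    {
        int direction = 0;
        if (handX > elbowX + Threshold) direction = 1;
        else if (handX < elbowX - Threshold) direction = -1;

        if (direction != 0 && lastDirection != 0 && direction != lastDirection)
            reversals++;
        if (direction != 0) lastDirection = direction;

        if (reversals >= RequiredReversals)
        {
            reversals = 0;
            lastDirection = 0;
            return true; // wave detected
        }
        return false;
    }
}
```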

The Razorfish Emerging Experiences team takes on ReMIX South

Aug 07, 2011 in Kinect, Mobile, News, Technology


The Razorfish Emerging Experiences team showed up in force for the ReMIX South conference. Luke Hamilton presented “The Interface Revolution”, a discussion of emerging tablet technologies and what they mean for consumers. He also provided best practices for creating tablet experiences and key insights on how to bring these interfaces across multiple devices. Jarrett Webb presented “An Introduction to Kinect Development”, providing insight on how to get started building experiences for the Kinect hardware. Steve Dawson and Alex Nichols presented “Kinecting Technologies”, which recreated scenes from famous sci-fi movies using the Kinect combined with other advanced technologies.

While not presenting at the event, the team enjoyed presentations by Albert Shum, Arturo Toledo, Rick Barraza, Josh Blake and many other experts in the fields of Kinect, Tablet/Mobile development and UX/Design.

For those who are interested, we encourage you to download the code for the Kinecting Technologies presentation. In order to run the samples, you’ll need a Kinect sensor and the Kinect for Windows SDK.

Additionally, the voice-control home automation sample requires the X10 ActiveHome Pro Hardware and the X10 ActiveHome Pro SDK.
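For flavor, here’s a hedged sketch of the voice-command half of that sample using the standard .NET System.Speech API. The X10Controller class below is a hypothetical stand-in – the real sample drives the hardware through the ActiveHome Pro SDK:

```csharp
using System.Speech.Recognition; // standard .NET speech API (System.Speech assembly)

// Sketch: recognize a couple of fixed voice commands and forward them as
// X10 power-line commands. X10Controller is a hypothetical stand-in for
// the ActiveHome Pro SDK integration.
class VoiceHomeAutomation
{
    static void Main()
    {
        var recognizer = new SpeechRecognitionEngine();
        var commands = new Choices("lights on", "lights off");
        recognizer.LoadGrammar(new Grammar(new GrammarBuilder(commands)));
        recognizer.SetInputToDefaultAudioDevice();

        recognizer.SpeechRecognized += (s, e) =>
        {
            // Forward the recognized phrase as an X10 command such as "A1 On".
            X10Controller.Send(e.Result.Text == "lights on" ? "A1 On" : "A1 Off");
        };
        recognizer.RecognizeAsync(RecognizeMode.Multiple);
        System.Console.ReadLine();
    }
}

static class X10Controller
{
    public static void Send(string command)
    {
        System.Console.WriteLine("X10 command: " + command); // stub for the sketch
    }
}
```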

Thanks go out to the organizers of ReMIX South for putting together a wonderful event. We’ll see you next year!

Watch the session videos here.


CES 2011 Recap

Jan 13, 2011 in News, Technology

The Consumer Electronics Show was back for 2011 and our team was on the ground in Las Vegas. We had a number of initiatives going on at CES this year.

First, our team was involved in the Microsoft Surface 2.0 launch. We’ve been working with the Surface team for a few months on the next generation of Surface, porting our applications to run on the latest version. We can proudly announce that we are Surface 2.0 ready, and we look forward to supporting the new platform and bringing the solution to our clients. The Microsoft Surface announcement caught the media by surprise – it’s been over 3 years since the original Surface was announced. The new device is faster and leaner, and costs less than the previous version. We’ll have an in-depth analysis of Microsoft Surface 2.0 posted on the blog shortly.

Second, we were involved in the launch of another experience for one of our clients. We created a solution that will be experienced by millions of consumers in the market. Unfortunately our involvement must remain confidential so we can’t go into too many details. Let’s just say it was definitely one of those opportunities that we could not pass up!

We took the opportunity to explore the trade show floor in an effort to educate ourselves on the latest technology offerings. We hope to bring some of these technologies to our clients in 2011. Here are some of the technologies that we’ve got our eye on.

Tablets

2011 has been declared the “year of the tablet”, and there was certainly no shortage of tablets at CES. In fact, about 80 new tablet form-factor devices were announced at the show this year.

From a hardware perspective, tablets are getting thinner, lighter and more powerful thanks to innovation around chip technology from companies like Intel, ARM, nVidia and Qualcomm. There are a variety of new form-factors hitting the market. The Eee Pad Transformer tablet can be docked in a base which transforms the device into a traditional laptop form-factor. The Dell Inspiron Duo tablet features a reversible screen to accomplish the same thing.

There were a variety of different screen sizes available. One of the interesting debates amongst members of our team was around the usefulness of the small-screen tablets. These “tweener” devices feature screens between the size of a typical phone and an iPad. The smaller size means they are more portable than an iPad, yet they still can’t fit in your pocket and they can’t make phone calls.

One of the most impressive devices was the BlackBerry PlayBook. The device features a brilliant user interface which makes use of NUI design principles – direct interaction with content through the use of gestures. In addition, the performance of the device was exceptional. We can’t wait to start developing for this platform.

For the first time, we had the opportunity to see the new Android Honeycomb tablet OS. The experience is decidedly Android, retaining much of the same design language. Improvements have been made to the user interface to take advantage of the additional tablet screen real estate. In all honesty, we were slightly disappointed with the user interface. We were hoping for something game-changing from Google and instead, they delivered an experience that was transitional, not transformational.

One of the major disappointments was the lack of direction from Microsoft on tablet devices. We were crossing our fingers for an announcement of a tablet operating system that was lightweight and provided an exceptional user experience similar to what is being delivered on the Windows Phone 7 platform. And we wanted this platform soon.

Microsoft did acknowledge they are behind in the space. Right now, their story is positioning Windows 8 as the solution for tablets by supporting system-on-a-chip architecture. By supporting this hardware platform, Microsoft will be able to deliver Windows experiences on tablet devices while taking into account battery life and OS performance.

Unfortunately, no announcements were made around the Windows 8 user interface. Delivering an exceptional tablet UI will be essential to their strategy. It is likely Microsoft will adopt the “Metro” design language currently being used for Windows Phone 7 and Microsoft Surface 2.0.

Gesture Control and Natural Interaction

With the release and success of Xbox Kinect, the gesture control market is heating up. Much like the original iPhone brought touch interaction into the mainstream by putting millions of devices in the hands of consumers, Xbox Kinect will do the same for gesture control. The way we interact with computers is fundamentally changing and we are getting in on the ground floor.

We’ve taken the opportunity to develop for the Kinect platform; however, we were looking for a commercial-grade solution to bring to our clients. Enter PrimeSense.

PrimeSense licenses their technology to Microsoft for use in the Xbox Kinect, so they seemed like the perfect partner to deliver the hardware and software needed to support commercialized gesture control solutions. We are actively working with PrimeSense to develop for their platform. Their OpenNI initiative aims to create a standardized framework for natural interface development across devices.

We see gesture control technology being used in an in-home setting and also in retail environments. This technology can be utilized to create at-home shopping experiences which combine natural interaction and augmented reality. Imagine being able to virtually try on clothes from the comfort of your own home. Or order a pizza with a flick of the wrist from the comfort of your couch.

We have been champions of the use of interactive experiences in the retail environment and we have the statistics to prove it. To date, the majority of our experiences have utilized touch. This technology provides a new user interaction paradigm and offers an entirely new world of possibilities in the retail space.

Touch Screens

Touch screen technology is evolving rapidly. Devices are becoming larger, cheaper and more reliable. Exciting new form-factors and multi-touch hardware will help us deliver new experiences to our clients in 2011.

3M Touch Systems has exciting new hardware and form-factors hitting the market which utilize their massively multi-touch projected capacitive technology. This technology provides extremely stable multi-touch that supports a large number of touch points. 3M is bringing 23” and 32” screen sizes to the market. In addition, the screens can be integrated into a multi-device array to build large-size touch wall and table experiences.

We also had some hands-on time with systems from PQ Labs and Multitouch.fi. Both vendors offer touch solutions that are unique and exceptional. We look forward to working with these companies in the future.

Display Technology

Displays are getting thinner, lighter and more energy efficient. 3D technology is also evolving quickly. Much like last year, 3D display technology was everywhere. The most impressive innovation in the 3D TV space comes from LG. They demonstrated how their 3D technology has been standardized – every TV on display in their booth could utilize the same pair of glasses to deliver an exceptional 3D experience. They also demonstrated flicker-less 3D which produced a better 3D picture than we’ve seen on any other consumer device.

The glasses-less 3D technology was a disappointment. There isn’t enough discernible depth with the current iteration of the technology. Certainly this will change over time, but the promise of ditching the glasses has yet to be fulfilled. We wouldn’t be surprised if that changes in 2011.

In-Car Technology

Ford had the major innovations in the automotive space. The Ford Focus Electric vehicle was announced along with an update to the MyFord Touch interface. The interface features a number of enhancements including the ability to visualize your destination and alert the driver if there isn’t adequate charge in the vehicle’s battery. In addition, an efficiency coach monitors your driving habits to advise changes to your driving style and an “Emotive Display” visualizes butterflies when you are driving in a way that adds range to your vehicle.

MyFord Mobile was also announced. The app allows you to locate charging stations, unlock doors and find the location of the vehicle. In addition, the app goes social with driving behavior monitoring – achievements are awarded once certain milestones are met. These achievements can be shared on Facebook.


DaVinci Goes Touchless With XBox Kinect

Dec 02, 2010 in Microsoft Kinect

The launch of Xbox Kinect has caused much excitement in the open source community. In the last few weeks, developers have managed to tap into the hardware with impressive results. We’ve seen applications ranging from gesture-based experiences to 3D imaging.

We’ve taken this exciting opportunity to port our popular DaVinci experience to the Kinect platform. Gestures are used to create objects and control the physics of the environment. Your hands appear in the interface, which allows you to literally grab objects out of thin air and move them around the environment. Additional gestures allow you to adjust gravity, magnetism and “planetary attraction”.

To date, many of the experiments in gestural interface development have not taken the hands into account. Unfortunately, the result is an experience that isn’t precise – users have no context of where they are interacting in the virtual space, and 1-to-1 manipulation of objects in a scene proves difficult. By using a clenched hand to signify “grabbing” an object and an open hand to signify “releasing” an object, we are able to create experiences with a higher level of precision that can mimic a touch-based experience. In fact, we’ve created a Kinect plugin to enable our entire suite of touch-based experiences to work with gestures – more videos to come!
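In code terms, the grab/release logic is a tiny state machine. Here’s a simplified sketch – the Scene stub and the handClosed flag stand in for the physics engine and the depth-based hand analysis, which are the genuinely hard parts:

```csharp
// Sketch: object manipulation only happens between a grab (clenched hand)
// and a release (open hand). Scene and the handClosed flag are stubs.
class GrabTracker
{
    bool holding;
    object heldObject;

    public void Update(double cursorX, double cursorY, bool handClosed)
    {
        if (handClosed && !holding)
        {
            // Clenched hand over an object: grab it.
            heldObject = Scene.HitTest(cursorX, cursorY);
            holding = heldObject != null;
        }
        else if (!handClosed && holding)
        {
            // Open hand: release.
            holding = false;
            heldObject = null;
        }

        if (holding)
            Scene.MoveTo(heldObject, cursorX, cursorY); // 1-to-1 manipulation
    }
}

static class Scene
{
    public static object HitTest(double x, double y) { return null; } // stub
    public static void MoveTo(object body, double x, double y) { }    // stub
}
```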

Gesture-based interaction is great when touch isn’t practical. For instance, on a large projected display like the one shown in the video above, it is difficult or physically impossible to control the entire area using touch. Using a technology like Kinect, we can create a virtual canvas in mid-air in front of the user. Interactions within this virtual canvas space are projected into the experience, as shown in the DaVinci example.
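The virtual canvas itself is just a coordinate mapping: pick a comfortable physical region in front of the user and scale hand positions inside it onto the display. A minimal sketch, with illustrative canvas dimensions:

```csharp
using System;

// Sketch: scale hand positions inside a fixed physical region onto the
// display. The 60cm x 40cm region centered between the shoulders is an
// illustrative choice, not a measured value.
static class VirtualCanvas
{
    const double Width = 0.6, Height = 0.4; // meters

    public static void MapHand(
        double handX, double handY,     // hand position, meters
        double centerX, double centerY, // canvas center (e.g. shoulder center)
        double screenW, double screenH,
        out double screenX, out double screenY)
    {
        // Normalize the hand position within the canvas to 0..1.
        double nx = (handX - centerX) / Width + 0.5;
        double ny = 0.5 - (handY - centerY) / Height; // screen Y grows downward

        // Clamp so the cursor never leaves the display.
        screenX = Math.Max(0, Math.Min(1, nx)) * screenW;
        screenY = Math.Max(0, Math.Min(1, ny)) * screenH;
    }
}
```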

To be honest, we had a blast playing with this experience. It definitely fulfilled all of our Star Wars fantasies of controlling objects with our minds. We’ll be adding more features in the coming weeks, including the Darth Vader death grip. Stay tuned!

“Control, control, you must learn control.” – Yoda


RockstAR on Tour: Web 2.0 Expo San Francisco

May 09, 2010 in Augmented Reality, Mobile, Multi-touch, Technology, Touchscreen

We took the show on the road for the Web 2.0 Expo in San Francisco. We worked with the Microsoft Tag team to bring the RockstAR augmented reality experience to the event.


Since we were running the experience in the Microsoft booth, we decided to add some new characters – the most popular of which was Steve Ballmer.


We used the experience as a way to engage with conference attendees and demonstrate an innovative use of Microsoft Tag technology. As conference attendees had their RockstAR snapshot taken, we’d ask them to download the tag reader application to their mobile device. Afterwards, they could take a snapshot of the Microsoft Tag and retrieve their photo. We took over 300 photos at the event.


The RockstAR experience is another example of how tag technology can extend an interactive in-store experience to a customer’s mobile device. Wishlists, shopping carts, mobile content delivery, product ratings & reviews and wayfinding are just a few examples of how tag technology can change the way people shop in retail.

Check out our pictures from the event.


The Technology Behind RockstAR

Apr 13, 2010 in Augmented Reality, Lab, Multi-touch, Technology

We recently had the opportunity to debut the RockstAR experience at SXSW – check out video of the experience in action. We like to think of it as the classic photo booth taken to the next level with augmented reality, multi-touch and social integration. Let’s go behind-the-scenes and take a look at both the software and hardware that brings this experience to life.

RockstAR

First, let’s talk software. The application was built on the recently announced Razorfish Vision Framework. The framework provides a platform to power augmented reality, gestural and other vision-based experiences. For the RockstAR experience, we analyze each frame coming from an infrared camera to determine if faces are found in the crowd. Once a face is detected, it is assigned a unique ID and tracked. Once we have a lock on the face, we can pass position and size information to the experience, where we augment animations and graphics on top of the color camera feed. This technology has a variety of uses. For instance, face tracking can be used to track impressions on static or interactive digital experiences in the retail environment. Here is a screenshot taken from the debug mode of the experience which shows the face tracking engine at work using the infrared camera.

face tracking
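While the detection itself comes from the vision framework, the ID-assignment step is worth sketching: each detected face rectangle is matched to the nearest face from the previous frame, so an ID sticks to a person as they move. Here’s a simplified version (the 100-pixel matching gate is illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Drawing;

// Simplified face tracker: detection happens per frame elsewhere; this class
// matches each detected rectangle to the nearest face from the previous
// frame so IDs persist across frames. The 100px gate is illustrative.
class FaceTracker
{
    readonly Dictionary<int, Rectangle> tracked = new Dictionary<int, Rectangle>();
    int nextId;

    public void Update(IEnumerable<Rectangle> detections)
    {
        var updated = new Dictionary<int, Rectangle>();
        foreach (Rectangle face in detections)
        {
            int bestId = -1;
            double bestDistance = 100; // max pixels a face may move per frame
            foreach (var kv in tracked)
            {
                double d = Distance(Center(kv.Value), Center(face));
                if (d < bestDistance && !updated.ContainsKey(kv.Key))
                {
                    bestDistance = d;
                    bestId = kv.Key;
                }
            }
            // Matched faces keep their ID; unmatched ones get a fresh ID.
            updated[bestId >= 0 ? bestId : nextId++] = face;
        }
        tracked.Clear();
        foreach (var kv in updated) tracked[kv.Key] = kv.Value; // lost faces drop out
    }

    static PointF Center(Rectangle r)
    {
        return new PointF(r.X + r.Width / 2f, r.Y + r.Height / 2f);
    }

    static double Distance(PointF a, PointF b)
    {
        double dx = a.X - b.X, dy = a.Y - b.Y;
        return Math.Sqrt(dx * dx + dy * dy);
    }
}
```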

In addition to the vision-based technology, the experience was fully multi-touch enabled – users can gesture on a virtual joystick to swap out bands and snap pictures.

joystick

Because the classic photo booth experience is a social activity, we took it to the next level with Twitter and Flickr integration. As pictures were snapped, we’d immediately make them available online. A QR code was rendered with each picture to quickly allow users to navigate to their RockstAR photo on their mobile device. Once the experience is extended to mobile, users can email the pictures to their friends, set them as wallpaper, re-tweet them to their Twitter followers, and so on.

RockstAR twitter and flickr
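Rendering the code itself is the easy part. As a sketch, here’s how you might generate one per photo with the open-source ZXing.Net library (just an illustration – any QR encoder will do):

```csharp
using System.Drawing;
using ZXing;        // open-source barcode library, used here as an example encoder
using ZXing.Common;

class PhotoQr
{
    // Render a QR code that points at the freshly uploaded photo so visitors
    // can pull it up on their phones.
    public static Bitmap ForPhoto(string photoUrl)
    {
        var writer = new BarcodeWriter
        {
            Format = BarcodeFormat.QR_CODE,
            Options = new EncodingOptions { Width = 300, Height = 300 }
        };
        return writer.Write(photoUrl);
    }
}
```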

Let’s move on to hardware. Unfortunately, you can’t purchase infrared AR-ready cameras at your local Walmart… at least not until Project Natal comes out later this year. Therefore, we needed to build a dual-camera system that would support the face tracking in infrared and the color video feed for display on the screen. We decided to go with 2 commercial-grade Firefly MV cameras with custom lenses.

camera

We modified one of the cameras to see only infrared light by replacing its IR-blocking filter with an IR band-pass filter. This allows only a narrow range of infrared light to reach the camera’s CCD.

infrared filter

We also purchased and tested a variety of infrared illuminators. These are used to illuminate the environment with invisible infrared light allowing the infrared camera to accurately track faces in low-light conditions.

infrared illuminator

Sparks were flying as we fused the color and infrared cameras together — just another day at the office.

We created a portable rig for the camera and infrared illuminators. Adjustable camera mounts and industrial strength velcro provide flexibility and portability across a variety of installations.


We used a presentation remote clicker as an alternative way to drive the experience. We primarily used it as a remote camera trigger which allowed us to quickly snap pictures of unsuspecting people from a distance.

clicker

The experience was powered by a 55″ multi-touch screen and a CPU provided by DFI Technologies. We’ve been working with DFI to build PCs that will power the next-generation of interactive experiences. These PCs are small form factor and can be mounted behind the multi-touch screen.

dfi

Last but not least, we bring you the pink rug. We can’t reveal too much information about this technology… we need to keep some things secret. Just know that it is critical to the overall experience.

rug


Windows Phone 7 Series Launch – Day 3

Feb 18, 2010 in Mobile, Multi-touch, News, Portfolio

Before we left for the evening, we recorded a quick walkthrough of the Windows Phone booth and EMC (Executive Meeting Center) locations where we have touch experiences deployed to support the Windows Phone 7 Series launch event.

Members of the press and blogging community have been recording video of the experience throughout the conference, and these videos have begun appearing online.


Windows Phone 7 Series Launch – Day 2

Feb 17, 2010 in Mobile, Multi-touch, News, Portfolio


After a long night of celebrating the successful launch of Windows Phone 7 Series in Barcelona, we are back at the Windows Phone booth at Mobile World Congress. The crowds are still huge and the experiences are running great. Each experience is collecting touch and interaction information in the background – we are going to begin processing this information to determine how many sessions we are seeing, average session time, the most popular areas of the experience, etc. We will use this information as a guide to optimize the experience for the next event.
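As a simplified example of that processing, the touch log can be folded into sessions wherever the gap between consecutive touches exceeds an idle threshold (30 seconds here, purely as an illustration), then the session lengths averaged:

```csharp
using System;
using System.Collections.Generic;

// Sketch: fold a time-ordered log of touch timestamps into sessions, where a
// gap longer than the idle threshold starts a new session. The 30-second
// threshold is illustrative, not a logged value.
static class TouchAnalytics
{
    static readonly TimeSpan SessionGap = TimeSpan.FromSeconds(30);

    public static void Summarize(IEnumerable<DateTime> touchTimes)
    {
        var durations = new List<TimeSpan>();
        DateTime? sessionStart = null, lastTouch = null;

        foreach (DateTime t in touchTimes)
        {
            if (lastTouch == null || t - lastTouch.Value > SessionGap)
            {
                // Close out the previous session and start a new one.
                if (sessionStart != null)
                    durations.Add(lastTouch.Value - sessionStart.Value);
                sessionStart = t;
            }
            lastTouch = t;
        }
        if (sessionStart != null)
            durations.Add(lastTouch.Value - sessionStart.Value);

        double averageSeconds = 0;
        foreach (TimeSpan d in durations) averageSeconds += d.TotalSeconds;
        if (durations.Count > 0) averageSeconds /= durations.Count;

        Console.WriteLine("{0} sessions, average {1:F1}s", durations.Count, averageSeconds);
    }
}
```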


The Windows Phone team is showing live projected demonstrations of the device in the theatre area – these demonstrations are attracting huge crowds.


Windows Phone 7 Series Launch – Day 1

Feb 16, 2010 in Mobile, Multi-touch, News, Portfolio


Members of the press camped out at the Windows Phone press lounge located across the plaza from Mobile World Congress. Because of the huge turnout for the announcement, much of the press watched the launch event live from the downstairs press lounge. After the show, we launched 6 experiences at this location, allowing members of the press to touch and interact with Windows Phone 7 Series for the first time.


Members of the press who weren’t able to watch the event in the theatre or the press lounge huddled around screens outside in the reception area. We went live with 2 experiences at this location.


Conference attendees watching the event live at the Windows Phone booth at Mobile World Congress. We had an additional 2 experiences running at this location.


Cameras were out as the interface was unveiled for the first time. The phone interface design was kept a secret up until launch day. Preventing pictures and other leaks of information from making it to the press turned out to be a huge undertaking. The Windows Phone team went to great lengths to prevent leaks – in fact, many of the Microsoft employees working on the team never had the opportunity to see the interface until launch day. We based our experience on some hands-on time in Redmond and videos of the experience. Our team was able to reverse-engineer the design, animation and interaction of the user interface. Accuracy was extremely important, and we had to ensure the design and motion in our experience were a perfect re-creation of the experience on the actual device. We built the experience on top of the Razorfish Touch Framework, which allowed us to rapidly develop the application from scratch in under four weeks.

The product launch was a huge success and the Windows Phone team has been celebrating in Barcelona. The reaction from the press and blog community has been overwhelmingly positive. The conference is far from over but so far we are off to a great start!