At a recent 3D Vision & Kinect Hacking Meetup in San Francisco, I had the opportunity to learn about a particularly exciting advancement in orientation sensor technology. A demonstration by Michael Kosic of XYZ Interactive focused on low-cost 3D positioning sensors that not only detect the x, y and z position of an object, but also roll, pitch and yaw. What’s particularly exciting about their technology is not what it can do to enable interactive experiences, but how its low cost could make these experiences ubiquitous. With positioning sensors that cost as little as a cup of coffee, it’s easy to imagine this affordability opening the door to a wide array of experiences, including interactive signage, in-vehicle controls, gaming and augmented reality applications. Simple yet precise recognition of the position of a hand in 3D space could be used for swipes, pushes and pulls—the kind of navigation we expect from a tablet or track pad. And the roll, pitch and yaw recognition opens the door to completely new, yet easily adoptable user interfaces. It’s technology we’re already experimenting with in our labs, so stay tuned.
Today Christopher Heine of AdWeek published “Razorfish’s Atlanta Lab Focuses on In-Store Digital” highlighting the Emerging Experiences Lab as a multi-faceted innovative space equipped to continue tackling the changing retail landscape.
Regarding a recent report, Heine concludes:
Bottom line, retailers need to do more than simply slap digital elements into their locations… they need to create seriously-planned interactive customer experiences.
Razorfish’s Emerging Experiences lab is a mind-blowing candy store stocked with seamlessly connected technologies that facilitate the creation of magic moments for guests. It provides an immersive physical space that clients can leverage to strategize, prototype, implement, and deploy these interactive experiences for their customers.
From concept to completion, the Emerging Experience Practice is a one-stop shop for clients looking to collaborate with a team of committed, enthusiastic specialists to ultimately create custom solutions that are grounded in the reality of business. The Lab is a unifying space not only for emerging technologies, but also for designers, developers, strategists, and stakeholders too.
In the Lab, all of the walls come down. Traditional barriers between agency and client as well as client and customer are removed. Technology recedes in and out of view through the cycle of creation as it integrates with thoughtful experience touch points.
The results of this one-of-a-kind mix? Solutions that are sustainable and occur as a natural result of discoveries during the envisioning process.
It’s always so exciting when a client visits the Lab for the first time. By experiencing the possibilities in a physical space, the client is inspired by this type of thinking and how it relates to their business. Subconsciously, authentic consumer experiences begin to occur.
The sensory nature of the Lab helps foster the most compelling and innovative ideas possible. It is something that cannot be achieved by observing a focus group or relying on evolving data.
It’s brainstorming at its finest. And prototyping at its fastest.
Clients can experience their customers’ point of view in a way that was once never possible.
Razorfish is committed. Our team members are committed. All of the chips are in and the Lab is situated as a crucial space to help our clients realize and understand the needs of today’s customers.
There are few things sadder than a pile of old technical books. They live on dusty bookshelves and in torn cardboard boxes as testament to the many things we never accomplished in our lives. Some cover fads that came and went before we even had time to peruse their contents. Others cover supposedly essential topics we turned out to be able to program perfectly well without – topics like algebra, geometry and software methodology.
The saddest thing about old technical books is that by “old” what we really mean is anything published more than three years ago. We no longer burn books in civilized countries so these 3+ year old books simply take up space. We can’t throw them out. We can’t sell them on eBay. We can’t even give them away.
Take for instance the New Masters of Flash series. These are first of all beautifully designed books. They are written by a slew of masters of the technology who are each given a few pages to discuss their inspirations, provide a cool concept and then show how they approached the solution. Cool concepts include animating a 3D chessboard, animated typography vis-à-vis The Matrix, creating a pointillism artistic mask for text and images, and taking a simple shadow effect to its logical extremes. The highlight of the book is probably Irene Chan’s introductory essay on feminism, art and the role of websites. It’s not something one would expect to find in a technical book and speaks to the amazing community that developed around Flash.
All of this is simply a way of observing, once again, that plus ça change, plus c’est la même chose, even in software where we often pretend that we are in constant Kurzweilian motion and slouching toward the Singularity. It is also a recognition of the essential role Flash has played in interactive media. Flash has shown us what can be done and, in many cases, we have yet to surpass what it accomplished all those years ago. Flash is dead. Long live Flash.
In mid-March, tens of thousands of music lovers, film fanatics and tech junkies descended on Austin, Texas for the annual SXSW festival. This year, we were honored to be invited to participate on a panel discussing technology and the future of the in-store experience (official panel info). It was an exciting opportunity that we hope to repeat in future years of this prestigious festival.
It’s been amazing to watch the festival’s success and attendance skyrocket during the last decade, and the expansion into the interactive industry has been a huge factor in that growth. To say attendance was high is almost laughable – the city was brimming with people, all ravenously seeking out and consuming inspiration for their passions in the forms of discussions, installations and shows. It was really a highlight in our history to be part of that momentum.
There were a lot of very engaging discussions – from Foursquare CEO Dennis Crowley’s keynote discussion on how their platform continues to evolve and stay relevant, to the “new buzz” around passive-location app rookies such as Highlight, and even some really amazing (and fairly alarming) thoughts from Ray Kurzweil on the democratization of technology…and our imminent replacement by cyborgs. (YAY future!)
The speakers I had the pleasure of joining on the panel were Carrie Chitsey, Founder & CEO of 3Seventy, Tim Austin, CCO of TPN, and Chris Harrison, panel moderator and COO of DMX Inc. The panel focused primarily on the current landscape of retail – both in-store platforms and exterior experiences such as web and mobile/tablet. A lot of discussion was around the tech that is in the market today – QR, mobile, RFID, Augmented Reality, Multitouch – and what we saw on the horizon – NFC, 3D Video Projection, furthered AR and, most importantly, the convergence of these experiences into a connected, holistic platform.
We’ve seen amazing examples of Augmented Reality and Video Projection as jaw-dropping attraction mediums and fun, environmental experiences (think Nike’s Melo Event or RockStar), but how can we utilize this tech to drive purchasing decisions in-store or from a shopper’s living room? One of the larger advancements we saw at CES this year was in the Virtual Dressing room category and how augmented experiences like Body Metrics are impacting shoppers’ decisions while reducing return rates for online retailers at the same time.
However, while this solves ‘online’ shopping pain points for both retailers and consumers, it also creates potential potholes in the path to in-store traffic since the online experience is that much better. This then puts the heat back on brands (and us as marketers) to elevate the in-store component of our model to provide meaningful, inspiring experiences for shoppers so they actually visit the store in the first place. So what does this mean for the marketplace?
It means connecting with customers’ senses of individuality and personal connection with brands. It means empowering the sales staff with tools and theatrical platforms to engage in a higher level of customer service with shoppers. And most importantly, it means ensuring that these offerings weave together to form a cohesive story across all the touch points that form the overall journey from storefront to shopping cart. Our team recently developed a platform, code-named 5D, that connects shoppers with devices and one-of-a-kind experiences like never before.
Lastly, we also discussed the responsibilities we have as agencies, brand ambassadors and shepherds of our clients’ interests to make sure we are not just pushing tech for tech’s sake. There have been far too many failed retail experiences because they were simply off-target from the business goals of the retailer, inappropriate for the store’s customer, fledgling technology that needed to be incubated a bit longer, or all of the above. QR, for example, is so easy to implement that every able marketer over-saturated their materials with a QR extension, delivering a poor user-end experience once the consumer actually went through the hoops of snapping the code. This has really eroded the effectiveness of QR as a connection medium and left a sour taste in most people’s mouths when they think of QR. Now, at a time when QR’s potential is really peaking through its ability to quickly connect platforms and personal devices, we are finding ourselves having to resell the tech all over again since it wasn’t used appropriately by so many marketers the first time. As an agency, we must always envision our experiences with attention to core business strategies, while at the same time designing consumer services that support the shopper. It is definitely our job to disrupt the marketplace with ideas, but ideas that are tactful and meaningful for brands and shoppers alike.
At the end of the day, or the panel rather, we all agreed that the point is this: products support the experiences we create. Therefore, these experiences should always support our consumers’ lifestyles as well as the business goals of our clients. They must be meaningful and magical to impact a cluttered landscape that’s piled high with shallow executions and disparate messages. Emerging technology is a powerful medium to break through all of this noise and tell compelling stories, but only if it adds value on both sides of the fence. The consumer story is the brand story these days, period, and personal devices and emerging technology are at the center of it all. We must strive to utilize new opportunities with new technology to educate and inspire the people that fuel this trillion dollar industry, but not squander business dollars and consumer energy in the process.
Ever since Microsoft started leaking details about the upcoming version of its flagship product, Windows 8, there has been a firestorm of controversy among Microsoft’s faithful. Many Silverlight application developers and publishers feel like they have been willfully misled into investing in a technology that Microsoft is now apparently abandoning. Many IT Pros dislike and fear the retraining efforts they will have to make with the new Start Screen and other Windows shell changes. Finally, many ASP.NET web developers don’t see how Windows 8 relates to them despite the fact that Microsoft is adding “WinJS”, a runtime that allows Web developers to leverage their existing skills to build native applications. On the ground, it may seem like things are going badly for Windows 8, but with a little developer ingenuity and a lot more communication and documentation from Microsoft, Windows 8 could be the product that saves Microsoft from being a victim of its own success.
Take the Start Screen, for example: in order to finally enable OEMs to build devices that can truly be considered a “Tablet PC”, Microsoft has to provide a way for users to launch applications. One might be tempted to think that the Start Menu in Windows 7 could be adapted to serve this purpose, but fingers are just not good at tapping on small icons or icons that are densely packed. Making the Start Menu a full-screen experience is really the only way to get enough space to create a truly usable touch-optimized experience. We in the Emerging Experience group have known this for years, as practically every single touch-based application that we have built has been a full-screen app. On top of this, the Start Screen’s animations are extremely fluid and natural, so to us it seems like an obvious platform from which to launch our showcase applications.
To give a little background about ourselves and our applications, we are a technology-agile group, which means that we use the technology that creates the best experience for our customers. Many of our apps are built using WPF but we also have apps that are built using Flash. Obviously attempting to port our applications would not be a good strategy for the Flash apps, but even after a brief investigation, I quickly decided that attempting to port all of our WPF applications was a non-starter. The Metro APIs are far too different and who knows if, after porting the WPF applications, I would even end up with an app that worked? The solution, it was decided, was to leave the existing showcase applications as they were but to simply create live tiles for them so that they could be invoked.
The problem with this solution is that it is not possible to really take advantage of the Live Tile infrastructure from a Win32 app. In a Metro-style (WinRT) application you can supply different resolution images for the tiles by altering the AppX manifest, but Win32 applications don’t have AppX manifests. It might seem trivial to simply create a WinRT application that upon launch invokes one of our showcase applications, and to use the WinRT app’s AppX manifest to customize the Live Tile, but unfortunately the relationship between WinRT and Win32 is significantly more complex than that. First of all, WinRT applications can call some Win32 APIs, but they explicitly cannot create new processes—this is part of Microsoft’s security model for WinRT apps. On top of that, even though WinRT apps can call many Win32 APIs, many of those calls either fail outright or fail to have the desired result. Clearly this is an area where Microsoft can do a much better job in providing documentation.
To work around these limitations, I decided to create a WPF application that lives in the System Tray as a notification icon. The entire purpose of this WPF application is to listen for network calls and then launch and activate the requested application. At this point our WinRT “launcher” application was simply responsible for initiating the network call and then closing itself down.
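The shape of that tray listener can be sketched in a few lines. Here is a minimal illustration in Python (the actual listener was a WPF/C# application; the port number, the app whitelist and its paths are all hypothetical, and foreground activation is deliberately left out since that was the hard part):

```python
# Sketch of the tray-listener idea: a local TCP server that receives an
# app key from the WinRT launcher tile and starts the mapped showcase app.
# Illustrative only -- HOST, PORT and the APPS map are assumptions.
import socket
import subprocess

HOST, PORT = "127.0.0.1", 9123  # hypothetical local endpoint
APPS = {
    "gallery": r"C:\Showcase\Gallery.exe",  # hypothetical path
}

def serve_once(launch=subprocess.Popen):
    """Accept one connection, read an app key, and launch the mapped app."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            key = conn.recv(1024).decode("utf-8").strip()
            path = APPS.get(key)
            if path is None:
                conn.sendall(b"unknown")
            else:
                launch([path])  # start the process; activation is separate
                conn.sendall(b"ok")
```

The `launch` parameter is injectable purely so the launching behavior can be swapped out; the real listener also had to activate the launched window, which is where the trouble described below began.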
While this worked beautifully in the debugger, I was surprised to find that it did not work once the applications were freed from the debugger. Sure, the Launcher application still made a network call to the WPF application and the WPF application still launched the showcase application, but the showcase application was never displayed. The problem, it turns out, is that the Win32 function “SetForegroundWindow” on which my WPF application was indirectly relying behaves differently if the calling application is being debugged. Clearly the Windows shell makes use of a facility to show the desktop when the user clicks on the Desktop tile in the Start Screen, but when I asked Microsoft about this and SetForegroundWindow, I was essentially told that this was by design and that only the end user should control which window has focus. I understand the wisdom of this decision, but this answer didn’t get me any closer to being able to launch our showcase applications from nice looking Live Tiles.
While I wouldn’t propose that developers do this in production applications, Windows 8 isn’t a production OS itself—and I still hold out hope that Microsoft will make this whole endeavor moot by the time they release Windows 8. With the disclaimer in effect, the way that I solved this problem was to create a third Windows Forms application whose sole responsibility is to run CDB, the command-line debugger, and automate it to launch and attach to the WPF application. Because the WPF application has a debugger attached, it is now able to use the SetForegroundWindow API and the entire system works as expected. In fact, by not creating a Window in the Windows Forms application and launching CDB without a console window, the entire hack is invisible to the user and everything works transparently.
As we approach the one year anniversary of the Kinect launch, Microsoft has announced that the Kinect for PC Commercial SDK will be released in early 2012 (http://majornelson.com/2011/10/31/xbox-360-celebrates-one-year-anniversary-of-the-kinect-effect/). More than 200 businesses worldwide, including Toyota, Houghton Mifflin Harcourt and Razorfish, are involved in a pilot program to explore the commercial possibilities of the Kinect.
Until now, most companies working with the Kinect have been working within the constraints of a research license for the Kinect SDK. Consequently the applications that corporations have been working on have been restricted to tightly held private projects or, at most, proof-of-concept projects visible only as demo reels on the Internet. While most people are at least aware of the Kinect technology, the terms of the research license have relegated it to being an afterthought or something only understood at a distance – a nice-to-have.
The recent announcement of the timeline for the commercial license implicitly green-lights these projects to make preparations for releasing Kinect-enabled applications for everyday use. Over the next year we can expect to see the Kinect as a ubiquitous part of our daily environments and something just as prevalent as interactive kiosks are today. The spread of the Kinect beyond the living room may be as dramatic as the proliferation of smart phones or tablets – one day no one knew what they were and, the next, everyone seemed to have one. In boardrooms across America, the question will no longer be one of whether to have a Kinect strategy but instead what that strategy is.
As the Kinect becomes more prevalent in our daily lives, the possibilities and limitations of the Kinect will undergo much closer scrutiny. The potential offered by a mass-produced device that provides a video camera, an infrared depth camera and a four-microphone array with beamforming capabilities is vast. The technology can be taken in multiple directions including computer vision in robotics, 3D modeling with multiple linked devices, inexpensive augmented reality, hands-free interactive experiences, speech recognition based in-store assistance and innovative computer assisted learning.
Because Microsoft’s visionary strategy in designing the Kinect has revolved around off-loading processing to the operating system rather than building it solely into the hardware, complex scenarios not currently supported by the Xbox can be made viable through improved software and the processing power of computers and video cards, the price of which is constantly falling. Microsoft’s Kinect technology is genuinely scalable: improving it does not require new Kinect hardware but simply better software to process the data the Kinect streams.
This all leads to the inevitable question – what is the future of the Kinect? After a year, what are second generation Kinect applications going to look like? The answer depends on where Microsoft takes Kinect software going forward. The current research version of the Kinect SDK beta shows its roots in gaming. The visual processing, depth processing and even acoustical models are tied to the limitations and optimizations required for the Xbox 360 gaming system. They all work best in a room about the size of your living room and even begin to have troubles in small apartments. The microphone array seems to work well in standard rooms, for which it has painstakingly been optimized to deal with surround sound speakers and audio reflections off of furniture, but appears to have trouble in large spaces.
Strikingly, even though the depth camera is capable of 640 x 480 resolution, the current SDK only provides access to 320 x 240 image streams. The Kinect SDK, likewise, does not provide depth data for objects within 800 mm (about 2 ½ feet) of the Kinect sensor even though the camera does capture this information.
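The practical effect of that near-range cutoff is that any pixel closer than 800 mm simply comes back as “no data,” and applications have to treat it as invalid. A hedged sketch of the idea in plain Python (this is illustrative, not Kinect SDK code; the far limit of 4000 mm is an assumption, as the SDK’s actual far-range behavior varies):

```python
# Illustrative sketch, not Kinect SDK code: mark depth readings outside
# the usable range as invalid (0). NEAR_MM matches the 800 mm cutoff
# described above; FAR_MM is an assumed far limit for the default range.
NEAR_MM, FAR_MM = 800, 4000

def mask_depth(frame_mm):
    """Return a copy of a flat depth frame (values in millimeters)
    with out-of-range pixels zeroed out."""
    return [d if NEAR_MM <= d <= FAR_MM else 0 for d in frame_mm]
```

Anything the sensor reports inside the dead zone is lost to the application, which is exactly why close-range office scenarios are so awkward today.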
There are clearly performance reasons for setting these limitations. However part of the problem also appears to be related to the fact that the USB connector for the Kinect is a bottleneck and has been throttled for the particular USB controller configuration requirements of the Xbox. As the Kinect moves out of the living room and into the real world, it makes sense to leave the restrictions imposed by tying the Kinect SDK to the Xbox behind. If we can use improved software running on improved hardware to boost the capabilities of Kinect for PC applications, it would be a shame to have a gaming infrastructure be the main showstopper.
Nowhere is this more clear than when we consider using the Kinect in the office. As a Kinect developer, I have to slide my chair back and away from my monitor whenever I want to debug a piece of code. Fortunately I don’t work in a cubicle and have some open space behind me. I am also fortunate that my chair has wheels and I have the code – slide – code routine down pat. However I don’t see anyone wanting to use a Kinect-enabled business application in this way. Unlike the living room, which is the natural space of our home lives, the office environment of our work lives is generally cramped and close to the screen with just enough room for a keyboard between us and our monitor. We are always within two and a half feet of the objects we work with.
Yet the workspace is one of the chief places we want to see our Kinects working. And instead of large arm movements, we would like to wave our hands or snap our fingers in order to make things happen on our screens. We want The Minority Report writ small. In order to achieve this, in turn, we need to move beyond skeletal tracking and start enabling fine finger tracking.
Along the same lines, for larger movements, the skeletal tracking capabilities of the Kinect only work with the full body. At the office, sitting in our office chairs, we typically never see anything below the waist. Even skeletal tracking, then, needs to be modified to take this into account and to support partial skeleton tracking at the software level.
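Until the SDK supports this natively, one software-level stopgap is simply to discard the lower-body joints before doing anything with the skeleton. A hypothetical sketch (the joint names here are illustrative, not the SDK’s actual identifiers):

```python
# Hypothetical sketch: reduce a tracked skeleton to upper-body joints for
# a seated, at-the-desk scenario. Joint names are illustrative only.
UPPER_BODY = {
    "head", "shoulder_center", "shoulder_left", "shoulder_right",
    "elbow_left", "elbow_right", "wrist_left", "wrist_right",
    "hand_left", "hand_right", "spine",
}

def upper_body_only(skeleton):
    """Keep only upper-body joints from a {joint_name: (x, y, z)} dict."""
    return {name: pos for name, pos in skeleton.items() if name in UPPER_BODY}
```

This is only a filter on top of full-body tracking, of course; true seated-mode support would need the tracking model itself to stop expecting legs.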
As the Kinect is being allowed to travel beyond our living rooms with the upcoming release of the commercial Kinect SDK, the software that allows developers to build applications for the Kinect needs to cut its strong dependence on gaming scenarios. This is the natural future for a technology that is maturing. This is where the Kinect is headed – not only out into the world but also up in our faces. We want and need to get closer to the Kinect.
Before the BUILD conference, the one thing we all knew was that Microsoft needed a multitouch tablet strategy to compete with Google and Apple and in order to maintain the future viability of the Windows operating system. What we were not sure of was how Microsoft would achieve this goal while preserving backwards compatibility for all of our previous Windows applications in the office as well as in the home. The challenge at first blush seemed insurmountable: provide something completely new to the Windows world while preserving everything that went before.
At BUILD, Microsoft revealed that they have actually accomplished this goal by providing what is basically two side-by-side operating systems. They have also signaled that the primary challenge for application creators going forward will not be technical but rather design-focused. Microsoft, which in the past has tended to side-line user experience, now puts design front and center with “Windows 8.”
One of the two Win8 interfaces is a slightly souped-up version of Windows 7 that looks familiar and runs just about anything I could think to try installing on it: Zune, Dropbox, iTunes, Kindle for PC, and even the software for the Kinect SDK. “Windows 8” ran each of them without complaint. The desktop shell works best with a mouse and keyboard, though it has also been redesigned to support multitouch.
The other is a Metro-inspired immersive experience that works best using touch. Instead of an Explorer-based file system with icons, the Metro shell is designed around interactive tiles, familiar from Windows Phone 7, that launch discrete apps. The Metro shell revolves around a new Windows Store (the equivalent of the iPad’s App Store and WP7’s Marketplace) that allows consumers to download games and apps.
One could easily think of this as two solutions in one: a consumer platform designed for the tablet and a desktop platform designed for the PC. What is unusual about these side-by-side solutions is that, with the flick of a finger, the tablet user can bring up the desktop UI and the desktop user can bring up the Metro UI. The two operating systems are not something one configures through the control panel the way one might configure a background theme. Instead, both UIs are effectively always alive and always immediately accessible.
Microsoft generously provided each attendee with a new Samsung tablet installed with a pre-Beta build of “Windows 8” and accessorized with a wireless keyboard, a stylus and a dock. The dock is by far the most intriguing – and least discussed – piece of hardware provided as it offers an indication of how Microsoft envisions “Windows 8” being used in the future. A tablet may be inserted into the docking station with a monitor and mouse in an office setting, at which point the desktop UI can be brought up and the user has an experience for the most part indistinguishable from what he is currently used to. The tablet can then be undocked and switched to the Metro style with all the previously running applications still running.
Initially the expectation is that the .NET tools of the past ten years will be used to write business, productivity and data-entry intensive applications while the new tools will be used for games, social apps and everything else one might expect to find on a smartphone or an iPad.
In a mixed-OS experience like the one described above using a dock, a more likely setup would be a full .NET style business app for the desktop with a lighter-weight Metro style version of the same app on the Metro UI. This allows users to quickly switch back and forth between a tablet and a desktop scenario using the same device. The test of this will likely come when Microsoft reveals its plans for Microsoft Office. We would expect Microsoft to provide both a classic and a Metro version of their Office suite. How well they implement this will in turn provide a roadmap for how other vendors will cater to the enterprise in their software solutions. In other words, will “Windows 8” enterprise applications target the desktop only, or both the desktop and the Metro UI?
There is a third possibility also. It may be possible to build full enterprise applications targeting Metro only. The WinRT platform combined with Microsoft’s Azure offering supports this.
The challenge in creating sophisticated apps for Metro is not primarily a technical challenge. It is primarily a User Experience challenge. Can we create multitouch enabled data grids? Can we come up with new navigation patterns to replace the standard enterprise application with hundreds of unique windows? Can we find ways to create great experiences that combine both multitouch and keyboard interaction?
While Microsoft has been tagged with a reputation for not understanding UX over the past decade, this seems to have changed. At BUILD, the speakers were all aware of the importance of UX, while speakers like Jensen Harris demonstrated that Microsoft not only knew that UX problems were important but also had the chops to solve them. In this context, BUILD has been a watershed event. If Microsoft has tended to admire and promote smart programming in the past, after BUILD it will become more important to be savvy about design. The days when design could be dismissed as merely prettying up an application are over. After this week, design on Windows is front and center. This is good news for agencies like Razorfish which are strong in both design and technology. It will be a challenge for software consultancies that have only been paying lip service to UX until now as they attempt to establish themselves as Metro experts.
On the technology front, as mentioned above, Microsoft is supporting three platforms: one targeted at C++ developers, one at XAML developers (Silverlight and WPF), and one targeted at web developers. The tack of using web technologies for building native Metro apps for the “Windows 8” tablet currently makes the most sense. A common path for developing apps for multiple platforms like the iPad and iPhone, Android and Windows is to first create a web application that can run on all these platforms, then after looking at web analytics data and determining which platforms use the web app most, building native apps for each of the top platforms. In the case of the Metro UI, it will be easiest to port code from web apps to the native web development tools on Windows 8 rather than attempt to build a brand new project in either C++ or XAML. Again, this type of development is already familiar to digital agencies but likely to be a challenge for other organizations.
BUILD also quietly announced improvements and fixes to WPF in the new .NET 4.5 framework being released with “Windows 8.” This is exciting for the Razorfish Emerging Experiences group as WPF is our main development platform for Surface applications and multitouch experiences.
The story for Silverlight is a bit more ambiguous. Currently “Windows 8” offers two different versions of IE 10 – one for the desktop UI and one for the Metro UI. The Metro UI version does not support plugins. Consequently neither Flash nor Silverlight applications will run in Metro IE. This is a difficult position since it means that Silverlight does not work as a multi-platform solution even on “Windows 8,” i.e. it supports only one of the two Win8 platforms. Silverlight out-of-browser is still viable on the desktop UI. It must compete, however, with both WPF – which is more feature rich – and WinForms – which has a significantly larger developer base. It has been suggested that the main benefit of Silverlight as a desktop technology solution will be that, since it, like XAML for WinRT, is only a subset of WPF, this will make things easier when it comes time to port an application over to the Metro UI. In porting from either Silverlight or WPF, however, some rewriting will have to occur as XAML for WinRT actually introduces interesting new XAML features – such as markup for localization – currently missing from both Silverlight and WPF.
There are still several open questions remaining with regard to Windows 8. Two have already been mentioned:
1. Is the Metro UI for the enterprise or for consumers only?
2. What are Microsoft’s plans for Microsoft Office?
A third open question is: what are Microsoft’s plans for Windows Phone? While the Samsung tablets given to attendees at BUILD are Intel-based, Microsoft’s ultimate goal is to provide an ARM-based version of “Windows 8.” The great advantage of an ARM architecture is that it allows “Windows 8” to be placed on a variety of hardware platforms including smart phones. Currently, however, Microsoft has dropped no hint that it plans to release “Windows 8” phones, and the concern would be that such an announcement would damage sales of Windows Phone 7.5, which will be released sometime in 2011. On the other hand, it doesn’t seem to make sense to have completely different operating systems for the Microsoft tablet and Microsoft’s phone. Apple has benefited greatly by having one OS for both form factors and Microsoft strategists, no doubt, are well aware of this.
The arrival of NFC technology promises to usher in a variety of new types of multi-channel customer experiences. While NFC technology is still in its infancy, our team has focused its research and development efforts on experiences that this emerging platform can enable. One of the many uses of NFC is enabling mobile payment.
The Razorfish Digital Wallet is a mobile application we developed to demonstrate how customers can send and receive mobile payments over NFC. In the future, this type of consumer-to-consumer payment will become commonplace. For instance, you’ll pay your babysitter or settle a bet with a friend by simply tapping your mobile devices.
In the above video, we showcase the consumer-to-consumer payment scenario along with a variety of others. NFC has arrived, and we’re excited to integrate this technology into our experiences.
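The Digital Wallet’s internals aren’t covered here, but NFC devices typically exchange data as NDEF (NFC Data Exchange Format) messages. As a rough conceptual sketch – the MIME type and payment fields below are invented purely for illustration – a minimal short-form NDEF record carrying a hypothetical payment payload could be assembled like this:

```python
import json

def ndef_mime_record(mime_type: str, payload: bytes) -> bytes:
    """Build a single short-form NDEF record.

    Header flags: MB (message begin), ME (message end), SR (short record),
    with TNF = 0x02 (MIME media type).
    """
    type_bytes = mime_type.encode("ascii")
    if len(payload) > 255:
        raise ValueError("short-record form limits the payload to 255 bytes")
    header = 0x80 | 0x40 | 0x10 | 0x02  # MB | ME | SR | TNF_MIME
    return bytes([header, len(type_bytes), len(payload)]) + type_bytes + payload

# Hypothetical payment payload; the field names are illustrative only.
payment = json.dumps(
    {"to": "babysitter", "amount": "20.00", "currency": "USD"}
).encode("utf-8")
record = ndef_mime_record("application/vnd.example.payment", payment)
```

On a real device, a record like this would be wrapped in an NDEF message and pushed over the peer-to-peer link when the two phones tap.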
Check back soon as we will be posting a behind-the-scenes walkthrough of the application.
The Razorfish Emerging Experiences team showed up in force for the ReMIX South conference. Luke Hamilton presented “The Interface Revolution,” a discussion about emerging tablet technologies and what they mean for consumers. He also provided best practices for creating tablet experiences and key insights on how to bring these interfaces across multiple devices. Jarrett Webb presented “An Introduction to Kinect Development,” providing insight on how to get started building experiences for the Kinect hardware. Steve Dawson and Alex Nichols presented “Kinecting Technologies,” which recreated scenes from famous sci-fi movies by combining the Kinect with other advanced technologies.
While not presenting at the event, the team enjoyed presentations by Albert Shum, Arturo Toledo, Rick Barraza, Josh Blake and many other experts in the fields of Kinect, Tablet/Mobile development and UX/Design.
For those who are interested, we encourage you to download the code for the Kinecting Technologies presentation. In order to run the samples, you’ll need:
- Kinect for Windows SDK
- Microsoft Speech Runtime (x86 version)
- Microsoft Speech SDK
- Kinect Language Pack
Thanks go out to the organizers of ReMIX South for putting together a wonderful event. We’ll see you next year!
Watch the session videos here.
This year, Razorfish sent several of our people to MIX 11, the annual Microsoft-sponsored conference in Las Vegas for developers and designers.
So much happened during our week at MIX that it is difficult to summarize it all thematically. There were announcements and sessions on several major topics: IE9, HTML5, ASP.NET MVC 3, Silverlight 5, the Windows Phone Mango release, and the Kinect SDK. In addition, there were also appearances from MS Surface v2, Windows Azure, OData and SharePoint, as well as a remarkable set of UX presentations.
The word on everyone’s lips seemed to be fragmentation, whether in reference to the expected HTML5 compatibility issues between future browsers (which the emphasis on the IE9 “native” browser experience only exacerbated) or to the greater array of Microsoft development technologies fighting for developers’ attentions.
What the four Razorfish attendees at MIX saw, on the contrary, were patterns of evolution. The much ballyhooed struggles between the Windows Team and the Development Team inside Microsoft over the future of HTML5 and Silverlight indicate to us that Microsoft can still respond to a rapidly changing worldwide technology ecosystem. When a product struggles in a niche where it thrived a year ago, it can be refitted to survive in a new one. Such is the case with Silverlight, originally intended as a Flash killer. Silverlight developers never truly adopted that strategy, instead using Silverlight to develop more sophisticated and interesting line-of-business applications. The problem is that LOB applications do not really belong on the web; they belong behind firewalls. The lack of casual games written in Silverlight likely hampered its ability to drive plugin downloads and gain browser share. So instead, the strengths of Silverlight are being moved to the desktop as well as to specialized platforms such as Windows Phone, the Xbox (perhaps), and possibly Windows vNext.
WPF, which was once the pre-eminent desktop development platform, is in turn becoming a specialized tool for NUI development for multi-touch, Surface and Kinect. The announcement of the Kinect SDK itself demonstrates Microsoft’s continuing ability to innovate and surprise. It is, in the best sense of the term, a fortuitous mutation.
This all leaves HTML5 as the preferred technology for the web. We of course see the early signs of browser compatibility issues. At the same time, though, we have each been through this before and survived. The extra gyrations developers will have to go through will, in the end, provide the illusion consumers desire – that the same application can run similarly on any operating system and any device. As one MIX speaker put it, “The technology you use impresses no one. The experience you create with it is everything.”
Speaking of devices, we are excited to see that the WP7 team is not only going for parity with other smart phones but is firing warning shots across their bows with the much touted Mango release. Familiar features such as multitasking are being expanded beyond current implementations with updating live tiles and “Live Agents,” which allow for more full-featured background processing.
There was naturally some complaining about the placement of various keynotes and sessions. With the multiple announcements and cross-blocking sessions, isn’t there a danger that individual messages will get drowned out in the general cacophony? We find that the panoply of conflicting viewpoints is one of the chief charms of MIX. Microsoft is not Apple. To borrow from Isaiah Berlin’s famous title, Apple is the hedgehog that does one thing well; Microsoft is the fox that explores all avenues and experiences. The great strength of Microsoft is its ability to challenge developers and create new harmonies out of these encounters. Should MIX ever be split up into different web, Silverlight, Windows Phone and UX conferences, we would all be poorer for it since all we would ever get would be our own opinions reflected back on ourselves – an echo chamber effect that will only serve to make us all deaf.
The overall quality of the sessions and boot camps was extremely high this year. In the past, we have been happy with a 60% success rate on talks; this year roughly 85% of the talks rang our internal bells. Certain sessions deserve a special shout-out, however.
While all the UX lightning talks were extraordinary, August de los Reyes’s 21st Century Design (10’ 45”) talk took it to a different level. In the live session, the slide deck itself was the star with the brilliant August narrating it much as Peter Jones was the voice of the book in the old Hitchhiker’s Guide to the Galaxy television series.
Despite its inauspicious title, Ivan Tashev’s talk Audio for Kinect revealed what a remarkable device the Kinect really is. We honestly didn’t understand half of the technical material, and we became queasy when formulas started flying across the screen. What we learned, though, was that only a fragment of the Kinect’s full audio capability is currently being used. Dr. Tashev demonstrated the ability of the Kinect’s audio algorithms to pick out two separate speakers, one reading Chinese and the other reading Korean, and separate them into different channels. All of this functionality will, moreover, be handed over to developers when the Kinect SDK beta is released at the end of spring.
Finally, we cannot say enough good things about Luis Cabrera and his willingness to demonstrate the Surface 2 at work in A Whole NUI World. Razorfish, of course, has a special affinity for anything Surface. What was outstanding in this presentation was not only the beauty and power of the new Surface devices but also the amount of thought that has gone into the tooling. Kudos to the Surface team: they’re reaching for a goal that is more than just a new technology; it is a new way for people to interact with computers and with each other.
By the end of MIX, we were all quite exhausted, mentally and physically. It may take us a full year – until the next MIX – to finish digesting everything that we learned and experienced at MIX11.
So long, Microsoft, and thanks for all the Kinects.