Gesture control and the QWERTY effect

[Image: MYO controller, via getmyo.com]
How we interact with devices is evolving. We used the keyboard for a century before we added the mouse. On mobile devices, it was the keypad; so much so that I remember a tech exec playing futurologist to me around the turn of the century and claiming that kids want devices they can operate with a single thumb. Of course, we now use touch and its associated gestures, and thumbs are important but not critical.
There is, of course, also speech. Advances have been made here. They are impressive too. But something is holding us back from really talking to our devices. There is a feeling that we should be doing this but somehow we don’t quite know what to say. The equivalent of intuitive gestures for speech on devices just isn’t there.
On gaming devices, the controller held sway until things evolved (mostly in a complementary way), first to the Wii Remote with its risk of ‘flinging damage’ and then to the Kinect, which finally nailed the idea that your devices can watch and interpret you. Recently, that idea looks set to go further with LeapMotion, a small prism that sits in front of your computer with Kinect-like functionality; so much so that you can pretend your computer has a touch screen without ever touching it. The ultimate image, eventually, is of Tom Cruise in Minority Report sorting through evidence with an elaborate, gesture-filled dance. However, what is often forgotten is that Cruise had to put on a seemingly personalised glove to make the whole thing work.
This past week has seen two alternatives emerge. One is Google Glass. The long-anticipated wearable eyewear received additional attention as those outside of Google got to play with and review it. This is a new way of interacting with the mobile Internet, but also with cameras. The other, called MYO, comes from Thalmic, a Y-Combinator-backed start-up and Toronto Creative Destruction Lab member. Unlike Google Glass, it is a wearable technology designed to allow gesture control of devices. But like Google Glass, it is worn and personalised somewhat to the wearer.
MYO is an armband that, when you put it on, senses the muscle movements in your arm, which are themselves a consequence of movements in your hand and fingers. If you watch the video you will see some of its capabilities. It can mimic touch screen gestures, but it can also give you ‘Force-like’ control over objects, including a quadcopter. It also allows more natural interaction with console games; in particular, firing a weapon, which has always been fairly unintuitive using a controller. But think also in terms of how it could improve your golf swing and train muscle memory. There is a great deal of potential here.
But what interests me is the potential battle between device-specific interfaces (like LeapMotion) that sit with a device and person-specific interfaces (like MYO) that sit with a person. Each has its own pros and cons. Device-specific interfaces can be calibrated with gestures that operate on that device, much like a keyboard still does. That means that, for the interface to be powerful, different people using the device have to know what to do. This is perhaps why the QWERTY keyboard still dominates. If you only ever used one computer and no one else used it, you could tailor the keyboard to any configuration you wanted. But the need to use the computers of others, and for them to use yours, locks in the old standard.
The same is true of gestures for any device-specific interface, including touch screens. Standards evolve so that anyone can pick up that device and know how to use it. On touch screens these gestures are becoming commonplace. It is amazing to think that ‘pinch to zoom’ or ‘swipe to flip’ did not exist more than five years ago, but that in a century they will likely still be exactly the same, even if (and it is hard to imagine) some better approach emerges in the future.
For person-specific interfaces, this is different. Both Google Glass and MYO will operate best when calibrated to the individual. The same is a little true of Wii Remotes and the Kinect now, although they have worked hard to be more device-specific. That said, each game usually requires new training, and even intuitive control of a television remains a dream. For person-specific interfaces, the individual can teach the interface how they like to interact with devices. Maybe when playing a shooting game you like to use a trigger on a gun, or maybe you prefer a button like the one you would find on a Star Trek phaser. The point is that your personal interface will know this, and so gestures can be specific to you.
This has important implications for the entrepreneurial strategies of these new start-ups in the interface space. For LeapMotion, the goal is to capture devices one market at a time and become pre-installed. For that, they need to ensure that QWERTY-like intuitive gestures and standards evolve. Their SDK for developers is a way of generating that.
For MYO, on the other hand, there is no necessary goal of pre-installation. But their challenge is harder and more akin to the problems facing voice control. For voice control, there are no proper standards. We know how to speak to people, but Siri annoys us when it understands some things but not others. The same will be true of MYO. If I am going to wear a device all the time, I want to be assured of communication ubiquity.
But there is a specific place where MYO may gain significant ground: console gaming. If we forget MYO’s other potential and just think of it as a better Kinect and Wii Remote that you only wear when you play, then its strategy becomes obvious: get adopted as the new controller for a console game (or maybe more than one). That will build the market until such time as a wearable set of standards emerges. Ironically, moving from the device to the person can mean that strategy moves away from people and back to those who sell the devices.