Yes, my Leap Motion dev unit has finally arrived and was simple to set up and use. There's a major difference between reading about the device and actually seeing it in action. I've mostly been playing with the visual tools today and taking notes on how it reacts to my hands. The shot above shows only one hand, but it can track two of them well.
Today I've been twisting, turning and doing my best impersonations. Yep, I look like a crazy person at work, but it has been very fun and educational. I've browsed the official forums and have been reading the provided documentation for the API. I believe I'm simultaneously more limited than I originally expected and much more free.
For example, Leap Motion rightly claims 8 (2x2x2) cubic feet of interaction space and accuracy many times greater than a Kinect's. Both of those claims hold up (based on what I know of the Kinect and my experience with the Leap so far), though the second one has a bit of subjectivity to it. I actually feel that the device captures a larger area than it claims; I'm going to have to buy a ruler or measuring tape to confirm this. The site does tell developers that the consumer units will differ from what I have now, with about 20% greater range than what I'm able to experience. That's awesome.
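As a quick sanity check on those numbers, here is the arithmetic, assuming (my reading, not a confirmed spec) that the roughly 20% range increase applies to each linear dimension of the 2x2x2 foot cube:

```python
# Rough sanity check on the interaction-volume numbers.
# Assumption (mine, not an official spec): the quoted ~20% range
# increase applies to each linear dimension of the 2x2x2 ft cube.

dev_side_ft = 2.0
dev_volume = dev_side_ft ** 3          # 8 cubic feet, as claimed

consumer_side_ft = dev_side_ft * 1.2   # 20% more linear range
consumer_volume = consumer_side_ft ** 3

print(dev_volume)                      # 8.0
print(round(consumer_volume, 2))       # 13.82
```

If that reading is right, a 20% linear bump works out to roughly 1.73x the dev unit's volume, which is a bigger win than the percentage alone suggests.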
The 8 cubic feet of range sounds limiting, but it isn't. Sure, the Kinect can grab your whole living room, and that is an awesome feat. The Kinect is designed as an input method for 10-foot user interfaces for games and software. The Leap Motion focuses on a smaller, much more defined area and is very precise in its tracking. I wouldn't expect to see this on an Xbox, but maybe on an Xbox controller if it were ever built into a console. I expect someone to combine this little Leap with the equally tiny Raspberry Pi and some wireless display tech to make an armrest remote control or something similar this year.
The expected use cases seem like they will be a more personal, intimate set of experiences for a single user. That would explain why the company is more focused on getting the device bundled with laptops than desktops. Not that it can't work at a desktop; that's how I'm using it now. I don't know about you, but I perform weird gestures without thinking while I surf the web. Sometimes I'll raise my hand a bit and motion in the air quickly with my index finger as if I'm moving the web page up or down. Sometimes I slap pages left and right.
Of course, it all does nothing. I somehow had this expectation of eventually being able to perform real gestures, since I've become accustomed to the mouse gestures of Opera. That's now possible, very possible. Developers are already working on mouse-emulation code, and some of them have released quickie projects publicly. Real gesture recognition while web browsing will probably become the most mundane (though no less useful) application for the device by the end of the year.
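The SDK hands you per-frame positions for hands and fingers, so a page-slap detector boils down to watching how far the palm travels. Here's a minimal sketch of that idea over a stream of palm x-positions in millimeters (the units the Leap reports); the threshold value and the raw-position approach are my assumptions, not anything from the official gesture API:

```python
# Minimal sketch of swipe ("page slap") detection over a window of
# palm x-positions in millimeters. The 80 mm travel threshold is a
# made-up tuning value, not an SDK constant.

def detect_swipe(x_positions, min_travel_mm=80.0):
    """Return 'left', 'right', or None for one window of samples."""
    if len(x_positions) < 2:
        return None
    travel = x_positions[-1] - x_positions[0]
    if travel >= min_travel_mm:
        return "right"
    if travel <= -min_travel_mm:
        return "left"
    return None

# A quick right-to-left slap across the sensor:
samples = [60, 30, -10, -45, -90]
print(detect_swipe(samples))   # left
```

A real implementation would also gate on palm velocity and reset the window between gestures, but the core loop is just this: read a frame, append a position, check the window.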
The data that the device picks up really will allow your hands to become a complete input mechanism. The screenshot at the very top of the post shows it accurately tracking my fingers, with a sizable span represented by the sphere. I won't dive deeply into it in this post, but I believe there are places for the Leap to be used naturally in our computing lives that people will value, if developers can present the proper software to users.
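For hands to drive a cursor, that tracking data has to be mapped from 3D sensor space onto the 2D screen. A sketch of that mapping, assuming a sensor-centered x axis and y measured upward from the device; the interaction-box bounds here are illustrative placeholders, not measured Leap specs:

```python
# Sketch: map a 3D fingertip position (mm; x centered on the sensor,
# y measured up from the device) onto a 2D screen. The box bounds
# are illustrative placeholders, not measured Leap specs.

def to_screen(tip, screen_w=1920, screen_h=1080,
              half_width_mm=150.0, y_min_mm=100.0, y_max_mm=400.0):
    x, y, _z = tip
    # Normalize x from [-half_width, half_width] to [0, 1].
    nx = (x + half_width_mm) / (2 * half_width_mm)
    # Normalize y so the top of the box maps to the top of the screen.
    ny = (y_max_mm - y) / (y_max_mm - y_min_mm)
    # Clamp to the box and scale to pixel coordinates.
    clamp = lambda v: min(1.0, max(0.0, v))
    return int(clamp(nx) * (screen_w - 1)), int(clamp(ny) * (screen_h - 1))

print(to_screen((0.0, 250.0, 0.0)))   # near screen center: (959, 539)
```

Ignoring z keeps this cursor-flat; the depth axis is what's left over for clicks, depth-based zoom, or whatever else software wants to make of it.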