Coding with a LEAP Motion — Greylock Hackfest 2014

Over the weekend I attended Greylock Hackfest, and built LeapPad with my teammates Robert Ying, Aditya Majumdar, and Kevin Heh.

Disclaimer: This post reflects my views only.

About Greylock:

Greylock Hackfest was an amazingly well-run hackathon, hosted at Medium, a Greylock portfolio company with an extremely nice office. The organizers didn’t feel compelled to crowd the space, so there was plenty of room to stretch out and I never felt cramped. There is something to be said for the atmosphere of a 1000+ hacker event, but I think Greylock struck an excellent size and feel with only around 160 hackers in attendance (after rejecting two-thirds of the applicants).

I was particularly excited to see DJ Patil (@dpatil) in person, since I had been following his Twitter musings for a while. He was as amusing as expected, serving as MC for the top-ten presentations and the award ceremony. The top ten hacks were all amazing, and I think the top three were very well-deserved, which is a testament to great selection and judging. I would definitely encourage anyone eligible to apply for Greylock Hackfest next year, and to go if selected.

About LEAP:

In this post I hope to document a few of the issues we ran into while developing for LEAP, with the hope that someday I can look back and appreciate just how much hands-free technology will have improved.

We built LeapPad, a Mac OS X interaction suite driven by the Leap Motion. The GUI was designed in PyQt4, with the Mac OS X interactions enabled via automac and BetterTouchTool. We used the LEAP SDK’s Python bindings to interact with the device, developing against the beta 2.0 skeletal tracking APIs, which come with a neat set of bone abstractions that I will mention later.

LEAP is an extremely compelling device. It is almost magical to wave your fingers in front of a computer screen and have the motion tracked; the underlying technology is nothing short of amazing. The API provides nice abstractions for the developer: fingertip joints, all manner of gestures (circles, swipes), and an easy way of reasoning about the interaction zone above the device, which is abstracted into a box that maps positions into a coordinate system ranging from 0 to 1 on each axis. Velocity is provided in many cases as well, which eases detection, and smoothing features are attached to many of the location APIs, such as those for hands and fingers. With so many tracking options available, the LEAP API is a powerful tool, capable of enabling amazing visual demos such as the visualizer that ships with the device as a debugging tool.
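The interaction-box mapping is easy to picture with a sketch. This is not SDK code, just the concept: a position in millimetres inside the zone scales to 0–1 per axis, clamped at the edges (box_center and box_size are illustrative names, not the API’s).

```python
# Conceptual sketch of the LEAP interaction-box mapping (not SDK code).
# A position in millimetres inside the box maps to [0, 1] on each axis;
# box_center and box_size are illustrative names for the box geometry.
def normalize_point(position, box_center, box_size, clamp=True):
    normalized = []
    for p, center, size in zip(position, box_center, box_size):
        n = (p - center) / size + 0.5
        if clamp:
            # Points outside the box are clamped onto its surface.
            n = min(1.0, max(0.0, n))
        normalized.append(n)
    return tuple(normalized)

# A point at the center of the box maps to (0.5, 0.5, 0.5):
print(normalize_point((0, 100, 0), (0, 100, 0), (220, 220, 150)))
```

This gives every app a device-independent coordinate system to work in, which is what makes quick demos so easy to write.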

The first problem we ran into was that the Python sample application didn’t run on my computer (OS X 10.9.2). After a good deal of debugging, we noticed a flag that controls whether applications receive LEAP events while not in focus. Since I was running the sample from a console that was also running tmux, my application was, in fact, NOT in focus, and therefore no events from the LEAP were being delivered to it.
Adding this line to the initialization code fixed my issue:
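As best I recall from the v2 Python bindings, it looked like this (a sketch of the initialization, not a verbatim copy of our code):

```python
import Leap  # Leap Motion v2 SDK Python bindings

controller = Leap.Controller()
# Keep receiving frames even when the application is not in focus
# (e.g. when launched from a terminal session running tmux):
controller.set_policy_flags(Leap.Controller.POLICY_BACKGROUND_FRAMES)
```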


After enabling the correct policy flag, the LEAP finally began sending frame data to the sample application. I don’t think this problem would have arisen if I had used the standard Mac OS X terminal, but it was quite surprising to find a non-functional demo, and I spent about thirty minutes debugging it.

The next problem was more application-specific. Our intent was to create, in spirit, a virtual QWERTY keyboard, which meant dealing with finger tracking. LEAP delivers data to the consumer in frames: the controller fires a frame event when it has tracked a new frame’s worth of actions, and you then have one frame’s worth of processing time to commit actions before the next frame arrives. Each frame contains hand data, finger data, and their relevant information.

LEAP captures fingertip data quite impressively, so we initially designed our keyboard around tracking fingertips. However, due to biomechanical restrictions on how we move our fingers, it turns out that tracking fingertips for key presses is a suboptimal plan. The problem lies in deciding what counts as a key press, and in debouncing multiple presses. A problem that still plagues our application is that multiple key presses fire if the LEAP enters an undefined state. In our case, that happens when it loses track of your fingers, which is easy to do: the region the LEAP can actually see is unmarked (it is virtual space), so you can easily move in a way that takes your fingers out of view.

We attempted to resolve this with a multi-phase key press tracker. First, determine whether a key press occurred in the frame. Second, determine which finger did the pressing. Finally, set a flag on that finger indicating it has pressed a key, and reset the flag on the first frame in which the finger is no longer the one considered ‘pressed’. The trade-off is that continuous presses with a repeat rate are not supported; a repeat-rate timer would be a possible mitigation, but we did not implement one. For the first determination, we used an empirically-determined threshold on the fingertip velocity, helpfully provided by the tip_velocity field:
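The check itself was little more than a threshold comparison. A minimal sketch, assuming a duck-typed finger object carrying the SDK’s tip_velocity vector (KEYPRESS_SPEED is an illustrative name for our empirically tuned constant; the LEAP’s y axis points up, so a press shows up as a strongly negative y velocity):

```python
KEYPRESS_SPEED = -300.0  # mm/s; illustrative, ours was tuned by hand

def is_press(finger):
    # A fingertip moving downward faster than the threshold counts
    # as a key press for this frame.
    return finger.tip_velocity.y < KEYPRESS_SPEED
```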


The determination of which key was pressed was done by comparing finger locations and taking the lowest finger as the one doing the pressing. The trade-off here was that we lost thumb functionality and had to map the thumb to the spacebar, since detection would often default to the thumb when other key presses were intended.

Another problem with the keyboard is that the tip location indicates where the user would like to click, if we draw the indicator markers according to tip location. However, the tip moves sharply when a tap occurs, so the location data in the frame where a key press is detected is unreliable. One solution would be to cache prior frames, but we felt our mitigation was superior.

We mitigated the problem by tracking the joint between the metacarpal and proximal bones instead of the fingertip location, because we felt this joint was more likely to remain stable during a keypress. Luckily, LEAP exposes the following function:
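That function is Finger.bone(), which returns the requested bone along with its prev_joint and next_joint positions. A sketch of how we used it, with TYPE_METACARPAL inlined as 0 (its value in the v2 SDK, as best I recall) so the sketch stands alone:

```python
TYPE_METACARPAL = 0  # Leap.Bone.TYPE_METACARPAL in the v2 SDK

def knuckle_position(finger):
    """Joint between the metacarpal and proximal bones (the knuckle),
    which stays far more stable during a keypress than the fingertip."""
    return finger.bone(TYPE_METACARPAL).next_joint
```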


This mitigation allowed for more stable finger position tracking at the cost of not accurately tracking finger separation, which is a fairly major UX hit: it isn’t clear to the user what is being tracked, and indicator dots that don’t fully reflect the user’s movement can be frustrating. We made this tradeoff because stable, intended keypresses won out over a somewhat chaotic user experience.

These are still relatively unsolved problems, since we only managed a mostly functional demonstration, but I think a stable keyboard is definitely possible, and I am very excited to see one come out in the near future.

Thank you to PJ Loury, who unknowingly (kinda) lent out his LEAP for hacking this weekend. I’ll get it back to you soon, I promise.
