Testing out the Leap Motion

Recently I got the opportunity to try out a Leap Motion and play around with the SDK, which is currently only available to select developers. I am borrowing the device from a friend who is one of the lucky people who made it into the developer program. It is an impressive piece of hardware: extremely small and lightweight, with a very high frame rate and incredible accuracy. Before I met someone willing to lend me theirs, I had a lot of questions about how it worked and what developing for it was like, so I thought I would share what I’ve learned.

The Leap Motion. The three reddish lights are the infrared LEDs.

Overview of the Technology

The basic idea behind the Leap Motion is a very well implemented version of stereo vision. Stereo vision sensors use two cameras, much like two eyes. The same object appears at slightly different positions in the two images; that offset is called the disparity. Given the disparity and the distance and orientation between the two cameras, one can determine how far away the object is. Doing this robustly and quickly is actually very difficult, which is why this type of depth sensor received little attention until recently. Leap must have made a few breakthroughs in this area to get it to be so fast and accurate.
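
For context, the textbook depth-from-disparity relationship for a rectified stereo pair looks like this. This is generic stereo vision, not Leap’s proprietary algorithm, and the numbers below are made up purely for illustration.

```cpp
#include <iostream>

// Classic pinhole stereo relationship for a rectified camera pair:
// the closer an object is, the larger its disparity between the two
// images. This is textbook stereo vision, not Leap's algorithm.
float depthFromDisparity(float focalLengthPx, float baselineMm, float disparityPx) {
    // depth = f * B / d, valid when the disparity is non-zero
    return focalLengthPx * baselineMm / disparityPx;
}

int main() {
    // Hypothetical numbers purely for illustration
    float f = 700.0f;  // focal length in pixels
    float B = 40.0f;   // distance between the two cameras in mm
    float d = 56.0f;   // measured disparity in pixels
    std::cout << "Estimated depth: " << depthFromDisparity(f, B, d)
              << " mm" << std::endl;  // 700 * 40 / 56 = 500 mm
    return 0;
}
```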

The big difference between a standard stereo vision setup and the Leap Motion is the use of infrared cameras and, I assume, an infrared filter. When the device is running you can see three red lights inside the unit, which illuminate the area in infrared up to a short distance from the unit. This is beneficial because the background (or at least anything more than a couple of feet away) is effectively already subtracted from the Leap Motion’s view, so far less processing needs to be done. It only has to determine depths for your hands or any tools in view, rather than computing depth for every point in the image and then running an additional algorithm to segment out the foreground. Beyond this improvement it is tough to say exactly how their algorithm manages to run so fast; they are doing a good job of keeping that part under wraps.
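
To sketch why that matters (and this is purely my speculation about their approach, not anything Leap has published): with short-range IR illumination, a single brightness threshold can do most of the foreground segmentation before any stereo matching happens.

```cpp
#include <cstdint>
#include <vector>

// Speculative sketch: with short-range IR illumination, anything far
// away reflects little light back, so a simple brightness threshold
// can separate hands from background before any stereo matching.
std::vector<bool> segmentForeground(const std::vector<uint8_t>& irImage,
                                    uint8_t threshold) {
    std::vector<bool> mask(irImage.size());
    for (size_t i = 0; i < irImage.size(); ++i) {
        // Bright pixels are close to the LEDs; dim pixels are background.
        mask[i] = irImage[i] > threshold;
    }
    return mask;
}
```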

Basic Viewer, Speed Measurements, and Gestures

When I opened the basic viewer I was getting frame rates around 90 FPS and latency around 4 ms on a 2010 MacBook Pro with an i7, with processor usage around 50% of one core. The latency tends to climb a bit as the scene gets busier, but I could never get it above 10 ms. I don’t have a good way to test whether that measurement is accurate, but it certainly feels right: I don’t notice any lag while using it. These speeds are over the USB 2.0 connection; with USB 3.0 the frame rate jumps to around 200 FPS.
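
For a rough sanity check on numbers like these, you can time the gap between consecutive frame callbacks yourself. Here is a minimal sketch using the SDK’s onFrame callback (the Listener interface is described in the SDK section below); the timing logic is plain C++11, not part of the Leap API.

```cpp
#include <chrono>
#include <iostream>
#include "Leap.h"

// Rough sanity check on the viewer's numbers: time the interval
// between consecutive onFrame callbacks and report the implied FPS.
class FpsListener : public Leap::Listener {
public:
    virtual void onFrame(const Leap::Controller&) {
        auto now = std::chrono::steady_clock::now();
        if (!first_) {
            double ms = std::chrono::duration<double, std::milli>(now - last_).count();
            std::cout << ms << " ms between frames (~"
                      << 1000.0 / ms << " FPS)" << std::endl;
        }
        first_ = false;
        last_ = now;
    }

private:
    std::chrono::steady_clock::time_point last_;
    bool first_ = true;
};
```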

The current gestures and visualizations built into the Leap Motion are as follows (a sketch of how an app might read them appears after the list):

  • Swipe (a quick swipe in a certain direction)
  • Making a circular motion
  • Poke and tap (depending on whether the motion is a quick push forward or up and down)
  • A visualization showing a virtual ball whose size roughly corresponds to how open your hand is
  • A disk representing your hand, showing which side of your hand is the top or bottom. It appears to make this decision based on your fingers; when I held my thumb inside my fist rather than outside, it sometimes got confused
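
Assuming the gesture API matches the current SDK documentation (the enum names below are my reading of it and may change between beta releases), reading these gestures from C++ looks roughly like this:

```cpp
#include <iostream>
#include "Leap.h"

// Sketch of reading gestures from a frame. Each gesture type must be
// enabled once on the controller before it will be reported, e.g.:
//   controller.enableGesture(Leap::Gesture::TYPE_SWIPE);
void handleGestures(const Leap::Controller& controller) {
    const Leap::Frame frame = controller.frame();
    const Leap::GestureList gestures = frame.gestures();
    for (int i = 0; i < gestures.count(); ++i) {
        const Leap::Gesture gesture = gestures[i];
        switch (gesture.type()) {
            case Leap::Gesture::TYPE_SWIPE:
                std::cout << "Swipe" << std::endl;
                break;
            case Leap::Gesture::TYPE_CIRCLE:
                std::cout << "Circle" << std::endl;
                break;
            case Leap::Gesture::TYPE_KEY_TAP:
                std::cout << "Tap (up and down)" << std::endl;
                break;
            case Leap::Gesture::TYPE_SCREEN_TAP:
                std::cout << "Poke (forwards)" << std::endl;
                break;
            default:
                break;
        }
    }
}
```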

Unfortunately the SDK does not currently support point clouds, but Leap Motion is promising developers that this functionality will be available in the future. Instead, your hand is represented in a way that is much simpler to work with from an application development standpoint: each finger is a point and a vector, as shown in the image below. The point is the tip of your finger, while the vector is the direction your finger is pointing.

The simple visualizer that comes with the Leap Motion SDK
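
Pulling that point-and-vector representation out of a frame takes only a few lines. This sketch polls the controller for its most recent frame rather than using the callbacks described below:

```cpp
#include <iostream>
#include "Leap.h"

// Print each finger's tip position (a point, in mm relative to the
// device) and its pointing direction (a unit vector).
void printFingers(const Leap::Controller& controller) {
    const Leap::Frame frame = controller.frame();  // most recent frame
    const Leap::FingerList fingers = frame.fingers();
    for (int i = 0; i < fingers.count(); ++i) {
        const Leap::Finger finger = fingers[i];
        const Leap::Vector tip = finger.tipPosition();
        const Leap::Vector dir = finger.direction();
        std::cout << "Finger " << i
                  << "  tip (" << tip.x << ", " << tip.y << ", " << tip.z << ")"
                  << "  direction (" << dir.x << ", " << dir.y << ", " << dir.z << ")"
                  << std::endl;
    }
}
```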

Leap Motion SDK

Getting the SDK set up was really easy, and installing the software went without any issues on either Mac or Windows. My Mac seemed to reject the USB cable that shipped with the unit, but when I tried a different cable it worked fine. Lately Leap has been releasing updates to developers every couple of weeks, so the SDK will likely be dramatically different by the time it ships in July. The current version contains several examples, which I will get into later, and is available for C++, C#, Objective-C, Java, Python, and JavaScript.

So far I have only used the C++ version of the SDK. The API is specified in a single, very well documented header and requires only one dynamically linked library. Much effort has clearly gone into making the information exposed to developers clear: the header is about 3,000 lines long, and the vast majority of that is extremely detailed explanations of every function. After looking at the header file for only a few minutes I had a very good understanding of how to use it. The library works by having your app override several virtual functions that serve as callbacks for events such as a new frame, the Leap Motion connecting, and the Leap Motion disconnecting. The frame gives you access to the important information such as hands, tools, and gestures; from there, how you use that information is up to you.
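
Here is a minimal skeleton of that callback structure, based on my reading of the header; print statements stand in for real application logic:

```cpp
#include <iostream>
#include "Leap.h"

// Minimal skeleton of the callback interface: subclass Leap::Listener,
// override the events you care about, and register with a Controller.
class MyListener : public Leap::Listener {
public:
    virtual void onConnect(const Leap::Controller&) {
        std::cout << "Leap Motion connected" << std::endl;
    }

    virtual void onDisconnect(const Leap::Controller&) {
        std::cout << "Leap Motion disconnected" << std::endl;
    }

    virtual void onFrame(const Leap::Controller& controller) {
        const Leap::Frame frame = controller.frame();
        std::cout << "Frame with " << frame.hands().count() << " hands and "
                  << frame.fingers().count() << " fingers" << std::endl;
    }
};

int main() {
    MyListener listener;
    Leap::Controller controller;
    controller.addListener(listener);  // callbacks arrive on a separate thread

    std::cin.get();  // keep the process alive until a key is pressed
    controller.removeListener(listener);
    return 0;
}
```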

Final Thoughts

After using the Leap Motion for a while, I am really curious to see what other developers come up with. I am currently on a team building a music creation app for launch that should be pretty cool, but at the same time, I am having a hard time seeing a “killer app” that will make the Leap Motion an indispensable new form of human-computer interaction. Hopefully I will be proven wrong, as it is a really cool piece of hardware. Some code examples and demos should be coming soon!