For roughly the last year and a half I have been working at Tarsier, Inc. (www.TarsierInc.com) developing a new technology called MoveEye. MoveEye uses sensors built into 3D glasses to track gestures from the user’s perspective. This is a unique approach that I think will play an important role in shaping future human-computer interaction.
Currently most major gesture recognition technologies revolve around a sensor that has a fixed field of view. For example, the Kinect or the Leap Motion are usually set up to observe a certain area and watch for various gestures within this space. This allows for powerful interaction, but it is missing a key piece of information that greatly increases the expressive power of the gestures: point of view.
When the user’s perspective is known, the system can determine exactly what the user is trying to interact with, no matter where they are standing. One need only look at the screen they wish to interact with. Relative gestures (similar to the Kinect or Leap Motion paradigm) can still be used, while non-relative gestures can be used to interact directly with objects on the screen. This key difference allows anyone to interact with a screen in an intuitive way, as long as they can see the screen. In my experience, the interaction feels similar to using a touchpad.
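To make the idea concrete, here is a minimal geometric sketch (not Tarsier's actual implementation; the function name, coordinate frame, and values are hypothetical). Once the eye and fingertip positions are known in a common coordinate frame, a pointing gesture reduces to casting a ray from the eye through the fingertip and intersecting it with the screen plane, assumed here to be z = 0:

```python
# Hypothetical sketch of perspective-based pointing: cast a ray from the
# user's eye through their fingertip and find where it hits the screen
# plane (assumed to be z = 0 in a shared coordinate frame, units in meters).

def screen_intersection(eye, fingertip):
    """Return the (x, y) point where the eye->fingertip ray crosses z = 0,
    or None if the ray is parallel to the plane or points away from it."""
    ex, ey, ez = eye
    fx, fy, fz = fingertip
    dx, dy, dz = fx - ex, fy - ey, fz - ez  # ray direction
    if dz == 0:
        return None  # ray never reaches the screen plane
    t = -ez / dz  # solve ez + t * dz = 0 for the ray parameter t
    if t <= 0:
        return None  # screen plane is behind the user
    return (ex + t * dx, ey + t * dy)

# Example: eye 60 cm from the screen, fingertip held 30 cm out,
# slightly right of and above the line of sight.
target = screen_intersection((0.0, 0.0, 0.6), (0.05, 0.03, 0.3))
print(target)
```

Because the ray originates at the eye rather than at a fixed sensor, the computed target tracks where the user is looking from, which is what makes the pointing feel direct regardless of where they stand.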
This new paradigm has proven difficult to explain without showing it in action. To help resolve this, we put together a demo video showcasing MoveEye. The video is only a small taste of what we know is possible with this paradigm, but I think it does a good job of showing the foundation.