Tuesday, February 7, 2012

The Universal Touch

That's right. It's another post about Kinect research. Today's topic? Turning any monitor into a touch screen. Because the Kinect can determine Z-axis locations, not just X and Y, it can theoretically detect when an intersection with another object has occurred. Apply this to any monitor and you suddenly have a makeshift touch screen. However, having only one camera, combined with that camera's low resolution, still creates many challenges for an accurate implementation. Other considerations include distinguishing the screen from the rest of the environment and accurately detecting the touch location of the finger itself.

To start off, the researchers decided to filter out all unnecessary information. That means they have the camera record the depths of all objects, then remove everything that is behind the screen. This way, any sudden changes in the background are ignored rather than registered by the system. During this phase, though, reflective monitors can give an inaccurate depth reading, so it is suggested that you cover the monitor with a non-reflective material such as a piece of paper. After all the non-essential objects are filtered out, there is a calibration phase with the user's fingers to ensure there is no preexisting touch offset. When that is finished, detecting a touch is simply a matter of determining when the depth of the finger equals the depth of the screen.
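To make that concrete for myself, here's a minimal sketch of the idea in Python with numpy. The function name, the tolerance band, and the idea of passing depth frames in as arrays are all my own assumptions for illustration, not anything taken from the paper:

```python
import numpy as np

def detect_touch_mask(frame_depth, screen_depth, finger_offset=0.0, tolerance=5.0):
    """Boolean mask of pixels considered to be touching the screen.

    frame_depth   -- current Kinect depth frame in millimeters (2-D array)
    screen_depth  -- calibrated depth of the screen surface (array or scalar)
    finger_offset -- per-user offset measured during the finger calibration (mm)
    tolerance     -- band (mm) within which "finger depth == screen depth";
                     the value here is a guess, not from the paper
    """
    # Step 1: filter out everything at or behind the screen, so changes
    # in the background behind the monitor never register as input.
    in_front = frame_depth <= screen_depth

    # Step 2: a touch is a foreground pixel whose offset-corrected depth
    # matches the screen's depth within the tolerance band.
    corrected = frame_depth + finger_offset
    near_screen = np.abs(corrected - screen_depth) <= tolerance

    return in_front & near_screen
```

The pixel coordinates of any touch in that mask would then still need to be mapped onto actual screen coordinates, which is where identifying the screen within the camera's view comes back in.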

Unfortunately, this system has many limitations. To get an accurate reading, the finger must be parallel to the vertical edge of the monitor. Also, the finger can't be at too steep a Z/Y-axis angle; otherwise the hand begins to occlude the finger from the camera's view. These are going to be the two main issues for any motion-tracking system that uses only one camera.

The thing I took out of this as useful for my project is the filtering technique. Rather than trying to work around a changing background environment, it'd be better to just have the system ignore anything beyond a certain depth, as sketched below. That way there is less clutter to sort through, giving a clearer picture of the motions being read.
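For my own setup, the filter would just be a depth cutoff applied to each frame before any gesture processing. The cutoff value below is purely illustrative:

```python
import numpy as np

def clip_background(depth_frame, max_depth_mm=1500):
    """Zero out everything farther than max_depth_mm so later gesture
    processing only sees objects inside the interaction volume.
    The 1.5 m cutoff is just an example value for my own setup."""
    filtered = depth_frame.copy()
    filtered[filtered > max_depth_mm] = 0  # 0 is the Kinect's "no data" depth value
    return filtered
```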

- Kao Pyro of the Azure Flame

Source:

Dippon, A., & Klinker, G. (2011). KinectTouch: accuracy test for a very low-cost 2.5D multitouch tracking system. ITS '11: Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces, 49-52.
http://dl.acm.org/citation.cfm?id=2076354.2076363&coll=DL&dl=ACM
