Thursday, March 22, 2012

Kinect making 3D video

I discovered a paper that artificially makes a 3D video using the Kinect sensor to record it. The proposed algorithm is a preprocessing stage: the raw depth map coming off the Kinect is relative and full of holes, since the depth data is measured from infrared light emitted by the sensor and reflected back. To compensate, the paper proposes using the RGB frames to help clean the depth data up. The algorithm has five steps to create an accurate depth map for the 3D video. The first step computes a series of motion estimations, using the frames before the current frame and estimating the motion vectors of the frames after it. The second step computes a confidence metric for the motion vectors of the future frames, to assess their quality. The third step applies those motion vectors to the future frames for "motion compensation," improving the accuracy of the depth in each frame. The fourth step performs basic depth map filtering. The final step fills any holes with data from neighboring pixels.
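To make the last few steps concrete, here is a minimal sketch of the temporal-filtering and hole-filling ideas, not the authors' implementation. The motion estimation and compensation steps (which they compute from the RGB frames) are omitted; `temporal_filter` simply assumes the frames are already aligned and uses a made-up confidence weight per frame, and `fill_holes` averages valid 8-neighbors, which is only one possible neighbor-fill rule.

```python
import numpy as np

def temporal_filter(depth_frames, confidences):
    # Confidence-weighted average across (already motion-compensated)
    # depth frames -- a stand-in for steps 2-4 of the paper.
    stack = np.stack(depth_frames).astype(float)
    w = np.asarray(confidences, dtype=float).reshape(-1, 1, 1)
    return (stack * w).sum(axis=0) / w.sum()

def fill_holes(depth, hole_value=0):
    # Step 5: replace hole pixels with the mean of their valid neighbors.
    filled = depth.astype(float).copy()
    h, w = depth.shape
    for y in range(h):
        for x in range(w):
            if depth[y, x] != hole_value:
                continue
            neighbors = []
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if (dy or dx) and 0 <= ny < h and 0 <= nx < w \
                            and depth[ny, nx] != hole_value:
                        neighbors.append(depth[ny, nx])
            if neighbors:
                filled[y, x] = np.mean(neighbors)
    return filled
```

For example, averaging two aligned frames with equal confidence smooths sensor noise between them, and a single dead pixel surrounded by valid depth gets replaced by its neighbors' average.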

The result of this algorithm is a video conversion rate of 1.4 frames per second. Keep in mind this is the processing rate, not the viewing rate. The algorithm fixes problems with the original depth map and also makes it smoother and more stable.

- Kao Pyro of the Azure Flame

Source:
Matyunin, S., Vatolin, D., & Berdnikov, Y. (2011). Temporal filtering for depth maps generated by Kinect depth camera. 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON), 1-4.
http://ieeexplore.ieee.org/search/srchabstract.jsp?tp=&arnumber=5877202&openedRefinements%3D*%26filter%3DAND%28NOT%284283010803%29%29%26searchField%3DSearch+All%26queryText%3DKinect
