Thursday, February 23, 2012

The Digital Smile

I'm not going to lie. The paper I read was short. So short, in fact, that I nearly passed it up until I started thinking about it. Short as it was, it still informed me; it gave me facts and ideas I'd only be able to attribute to this paper. For that reason I'm writing about it. The paper describes a demo that uses the Kinect to map facial gestures to an onscreen avatar. That's right: when you smile, your character smiles. When you look like you utterly want to destroy the enemy who stole all your online glory... well, the same thing happens... kind of. The paper explains that with their framework you don't have to manage lighting or put intrusive sensors on the subject. However, because the signal from the Kinect has a lot of "noise," you can't map your face 1-to-1 onto your character's face either. To solve this problem they use a technique very similar to the usual gesture recognition techniques used with the Kinect: they have a pre-loaded database of facial animations and do a sort of blending between what the camera sees and those preset animations. The whole process can be summed up this way: the Kinect determines which preset expression your current expression matches, and then activates that expression's animation on the character model.
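To make that last step concrete, here's a minimal Python sketch of the matching idea, under my own assumptions: a toy three-number feature vector, made-up preset values, and a hypothetical trigger_animation() call standing in for the game engine. The paper's actual pipeline is more sophisticated than a plain nearest-neighbor comparison, so treat this as an illustration, not their method.

import numpy as np

# Preset expression database: label -> reference feature vector
# (e.g. normalized distances between tracked facial landmarks).
PRESETS = {
    "neutral": np.array([0.50, 0.50, 0.50]),
    "smile":   np.array([0.80, 0.30, 0.55]),
    "angry":   np.array([0.20, 0.70, 0.40]),
}

def classify_expression(features, presets=PRESETS):
    """Return the label of the preset closest to the noisy input features."""
    return min(presets, key=lambda label: np.linalg.norm(features - presets[label]))

def trigger_animation(expression):
    # Stand-in for whatever the engine actually does with the preset animation.
    print(f"playing '{expression}' animation on the avatar")

def update_avatar(features):
    # Rather than mapping the noisy face data 1-to-1 onto the model,
    # activate the pre-authored animation that best matches it.
    trigger_animation(classify_expression(features))

# Example: a noisy frame that should land near "smile".
update_avatar(np.array([0.78, 0.33, 0.52]))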

I was actually surprised I didn't think of this before. After all, all of the papers I have read say as much: pre-load a database with preset gestures, then use machine learning to match the input to the closest preset. Using this same idea, putting emote detection into our project really isn't that hard. Depending on the availability of time and resources, we could likely add emoticon support within a couple of weeks.
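As a rough sketch of how that could plug into our project, the classified expression label could simply be looked up in an emoticon table and pushed through whatever chat or display channel we already have. The labels and mapping below are my own assumptions, not something from the paper.

# Hypothetical mapping from detected expression labels to emoticons.
EMOTICONS = {"smile": ":)", "angry": ">:(", "neutral": ":|"}

def emote_for(expression):
    # Fall back to an empty string when no emoticon is defined for the label.
    return EMOTICONS.get(expression, "")

print(emote_for("smile"))  # -> :)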

-Kao Pyro of the Azure Flame

Source:
Weise, Thibaut, et al. "Kinect-based facial animation." SA '11: SIGGRAPH Asia 2011 Emerging Technologies (2011).
http://dl.acm.org/citation.cfm?id=2073370.2073371&coll=DL&dl=ACM
