Wednesday, March 31, 2010

Device agnostic 3D gesture recognition using hidden Markov models

Authors:
Anthony Whitehead
Kaitlyn Fox

Summary:
Goal: test the accuracy of 3D gesture recognition by dividing the 3D space into subspaces.

The 3D space is divided into 27, 64, and 125 cubes. The collected data is then used to train HMMs, which are tested on a gesture set. The number of training examples needed and the recognition speed are measured. The 27-state HMM achieved 800 recognitions per second, while a single recognition took seconds for the 125-state HMM. The 27-state HMM also needed fewer training examples; more states in the HMM meant more training data was required.
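As I understand the quantization idea, each 3D sample gets mapped to the index of the subcube it falls in, and a gesture becomes a sequence of these discrete symbols that an HMM can consume. A minimal sketch of that mapping (my own function names and the unit-cube assumption, not the paper's code):

```python
# Hypothetical sketch: divide the unit cube [0, 1)^3 into n x n x n subcubes
# and map each 3D point to a discrete cube index, usable as an HMM
# observation symbol. n = 3, 4, 5 gives the paper's 27, 64, 125 cubes.

def cube_index(point, n):
    """Map a 3D point in [0, 1)^3 to a cube index in [0, n^3)."""
    # Clamp to n - 1 so points exactly on the upper boundary stay in range.
    ix = min(int(point[0] * n), n - 1)
    iy = min(int(point[1] * n), n - 1)
    iz = min(int(point[2] * n), n - 1)
    return ix + n * (iy + n * iz)

# A gesture trace becomes a sequence of cube indices:
gesture = [(0.1, 0.2, 0.3), (0.5, 0.5, 0.5), (0.9, 0.9, 0.9)]
print([cube_index(p, 3) for p in gesture])  # -> [0, 13, 26]
```

This is also why the scheme is device agnostic: any sensor that produces 3D positions can feed the same symbol sequence into the same HMMs.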

Discussion:
I am confused between the states and the segmentation of the space. By states, do they mean the actual number of states in the HMM or the number of segments? The interesting part, though, is dividing the 3D space into cubes.

2 comments:

  1. This paper was difficult to understand. Maybe it will make more sense once I learn more about HMMs.
  2. I too found the HMM stuff a bit heavy going, but I liked the idea of having an input-sensor-independent system.