Monday, March 1, 2010

Office Activity Recognition using Hand Posture Cues

Authors:
Brandon Paulson
Tracy Hammond

Summary:
Goal: Activity recognition based on hand postures, using hand posture to determine which object the user is interacting with, and examining user dependency in interaction style.
Activity recognition can help establish the context of user interaction. Activity theory holds that activities have objectives and are accomplished using tools and objects. Therefore, by identifying the object the user is interacting with, information about the activity can be extracted.
Previous work: recognizing movement-related activities via vision-based systems and wearable accelerometers;
object interaction via RFID tags on objects with a tag reader in the hand;
grasp types via vision-based systems and glove input data.
Implementation: A CyberGlove 2 with 22 sensors is used in the system, and a 1NN classifier distinguishes between 12 different activities. A user study with 8 users was conducted. User-independent testing produced low recognition accuracy (62% on average). User-dependent testing with 2 training and 3 testing samples produced 78% accuracy, while 4 training samples produced 94% accuracy. User-independent gestures showed a lot of variation. User-dependent gestures were recognized better, but confusion occurred between typing on the keyboard and on the phone, and among circular grips on objects such as a mug, drawer, telephone, and stapler.
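The 1NN approach above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the activity labels and 3-element vectors (standing in for the glove's 22 joint-angle sensors) are made up for the example, and the paper does not specify the distance metric, so Euclidean distance is assumed here.

```python
import math

def euclidean(a, b):
    # Euclidean distance between two equal-length sensor vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify_1nn(sample, training_data):
    # training_data: list of (sensor_vector, activity_label) pairs.
    # Returns the label of the single nearest training example (1NN).
    return min(training_data, key=lambda item: euclidean(sample, item[0]))[1]

# Toy example: short vectors standing in for the 22 glove sensor readings
training = [
    ([0.9, 0.8, 0.1], "type_on_keyboard"),
    ([0.2, 0.3, 0.9], "grip_mug"),
]
print(classify_1nn([0.85, 0.75, 0.2], training))  # prints "type_on_keyboard"
```

With only a handful of training samples per user, 1NN is a natural choice since it needs no training phase, which is consistent with the reported jump in accuracy from 2 to 4 training samples per activity.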

Discussion:
An interesting method to identify activity. It was not clear whether each activity was captured separately or performed as part of a sequence. It would be interesting to see how the system scales to various other activities. I would also like to see an example application where the context information is used; I am a little confused about how this data can be used.

Comments: Paul, Drew

3 comments:

  1. Yes, I was also wondering how this data can be used. The paper mentions incorporating this recognition into a continuous workflow, and it also discusses the segmentation problem. We can ask Dr. Hammond about these things in class.

  2. I got the impression that each activity was acquired separately and could be repeated several times. How, and for what, the data would be used puzzles me too.

  3. I think the gestures were captured separately. I think this would be more helpful if they weren't using a glove to track the movement. If they want to spy on their employees and see what they were doing, asking them to wear a glove would raise suspicion.
