Thursday, May 6, 2010

Real-time robust body part tracking for augmented reality interface

Authors:
Jinki Jung
Kyusung Cho
Hyun S. Yang
KAIST - Interaction Lab, Korea

Summary:

Goal: to model the 3D body from detected 2D body parts, a step towards more intuitive interaction in an AR environment (in the spirit of Project Natal).
- The system deletes the background to get a single body blob.
- Uses skin texture to detect the face and the two hands (see the sketch after this list).
- Uses the lower-body contour to detect the legs and feet.
- Uses particle tracking to follow the head across frames.
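As a rough illustration (not the authors' code), the first two steps could look like the following OpenCV sketch: background subtraction isolates the body blob, and a skin-color threshold applied inside that blob yields face/hand candidates. The skin range and subtractor settings are assumed values.

```python
# Minimal sketch of the detection pipeline, assuming an OpenCV reimplementation:
# background subtraction for the body blob, then YCrCb skin thresholding for
# face/hand candidates.
import cv2
import numpy as np

bg_subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)

# Rough YCrCb skin range; the paper's adaptive color model would tune this per frame.
SKIN_LOW = np.array([0, 135, 85], dtype=np.uint8)
SKIN_HIGH = np.array([255, 180, 135], dtype=np.uint8)

def detect_body_parts(frame):
    """Return the body blob mask and skin-colored candidate regions (face/hands)."""
    body_mask = bg_subtractor.apply(frame)             # single foreground blob
    ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
    skin_mask = cv2.inRange(ycrcb, SKIN_LOW, SKIN_HIGH)
    skin_mask = cv2.bitwise_and(skin_mask, body_mask)   # keep skin inside the body only
    contours, _ = cv2.findContours(skin_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Keep the three largest skin blobs: assumed to be the face and the two hands.
    candidates = sorted(contours, key=cv2.contourArea, reverse=True)[:3]
    return body_mask, candidates
```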

The 3D pose is then estimated from the recognized body parts. Adaptive color detection makes the body-part detection independent of clothing and illumination. The 3D location is calculated from the camera's center point and its distance to the head and the two feet.
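The paper does not spell out the exact geometry, but the distance-to-camera idea can be illustrated with a standard pinhole back-projection; the focal length and principal point below are assumed calibration values, not figures from the paper.

```python
# Not the paper's exact formulation: a minimal pinhole-model sketch of how a 2D
# body-part detection plus an estimated distance yields a 3D location.
import numpy as np

FX, FY = 600.0, 600.0    # assumed focal lengths in pixels
CX, CY = 320.0, 240.0    # assumed principal point (image center)

def backproject(u, v, distance):
    """Lift image point (u, v) to a 3D point at the given distance from the camera."""
    ray = np.array([(u - CX) / FX, (v - CY) / FY, 1.0])
    ray /= np.linalg.norm(ray)      # unit viewing ray through the pixel
    return ray * distance           # scale by the estimated distance

head_3d = backproject(330, 120, 2.5)   # e.g. head detected roughly 2.5 m away
```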

Discussion:
There is another researcher at the MIT Media Lab with an unusual name doing the same kind of research. The skin-texture detection independent of clothing color is cool.

Comments: Franck, Murat

That one there! Pointing to establish device identity

Authors:
Colin Swindells
Kori M. Inkpen
John C. Dill
Melanie Tory


Summary:
Goal: facilitate human-computer identification with a pointing device.

As the number of devices increases, users have to go through the cumbersome process of remembering wireless settings and other parameters to connect to devices in the environment. The same scenario applies when a user wants to share documents with other users on a wireless network. This paper presents a microcontroller-based device with IR receivers and transmitters that can be used to point at a device and connect to it easily.
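As a hypothetical illustration of the point-to-identify idea (the message format and settings table here are invented; the paper's actual protocol may differ): the handheld unit beams a query over IR, the pointed-at device replies with its ID, and the host maps that ID to stored connection settings.

```python
# Invented sketch of mapping an IR-received device ID to connection settings.
KNOWN_DEVICES = {
    "printer-42": {"protocol": "ipp", "address": "192.168.1.42"},
    "display-07": {"protocol": "rdp", "address": "192.168.1.7"},
}

def identify(ir_reply: str):
    """Map the device ID decoded from the IR reply to its stored connection settings."""
    device_id = ir_reply.strip()
    settings = KNOWN_DEVICES.get(device_id)
    if settings is None:
        raise KeyError(f"unknown device: {device_id}")
    return device_id, settings

# e.g. the IR receiver decoded the reply "printer-42"
print(identify("printer-42"))
```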
A user study was conducted with 4 users in each of 2 phases. The first phase required users to select mobile devices; the second involved selecting an item from a given list. Pre- and post-questionnaire data were collected. The results were then compared to users' performance with a graphical list from which they could select a device of choice.

Discussion:
The user study and the data analysis were pretty thorough. There does not seem to be much preference for the device over the graphical list (UI).

Webcam Mouse Using Face and Eye Tracking in Various Illumination Environments

Summary:
Goal: face detection from a webcam video stream that is independent of the environment lighting.

The red and blue subspace is used to reduce illumination noise, since the skin texture of the face depends on the illumination conditions. The illumination condition is identified first, and then the face texture pattern is modeled using 10 examples for each illumination condition. A motion detection technique is used to eliminate background color regions similar to the face.
The iris stands out in luminance and is detected via the sharp change in the Y component.
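A rough sketch of the two-stage classification (not the paper's implementation; scikit-learn, k = 3, and the red/blue chrominance features are my assumptions): one k-NN model per illumination condition, fit on that condition's handful of examples, then used to label pixels once the condition has been identified.

```python
# Sketch: per-illumination-condition k-NN skin/face classification over
# red/blue chrominance values. Library and feature choice are assumptions.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

models = {}   # one classifier per illumination condition

def train_condition(name, crcb_samples, labels):
    """crcb_samples: (N, 2) chrominance values; labels: face = 1, background = 0."""
    knn = KNeighborsClassifier(n_neighbors=3)
    knn.fit(crcb_samples, labels)
    models[name] = knn

def classify_pixels(condition, crcb_pixels):
    """Label pixels under the previously identified illumination condition."""
    return models[condition].predict(crcb_pixels)

# Example: a 'daylight' model fit on a few hand-labelled samples.
train_condition("daylight",
                np.array([[150, 110], [145, 115], [90, 140], [95, 150]]),
                np.array([1, 1, 0, 0]))
print(classify_pixels("daylight", np.array([[148, 112]])))   # -> [1] (face-like)
```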

Discussion:
Statistics on accuracy are not given. The KNN classifier would be drastically slower with more examples, and each illumination setting requires its own set of examples. How do we enumerate all the illumination settings? I do not think the recognition algorithm would scale to settings that have no examples.

Comments: Franck, Murat

Wednesday, May 5, 2010

XWand: UI for intelligent spaces

Authors:
Andrew Wilson
Stephen Shafer
Microsoft Research


Summary:

Goal: to build an interaction device that can be used to point at and interact with multiple devices around the user (including voice as a medium).
- Orientation - a combination of magnetometer and accelerometer readings; can be affected by metal in the environment (see the sketch below).
- Position - uses vision techniques to recover the 3D position from two 2D positions by tracking 2 IR LEDs.
An average error of 6' was found in pointing tasks (pointing accuracy of the device).
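The orientation step can be illustrated with the standard tilt-compensated compass computation (axis conventions and variable names here are my assumptions, not taken from the paper): pitch and roll come from the accelerometer's gravity vector, and the magnetometer reading is rotated into the horizontal plane before taking the heading.

```python
# Sketch of accelerometer + magnetometer orientation (tilt-compensated compass).
import math

def orientation(ax, ay, az, mx, my, mz):
    """Return (pitch, roll, yaw) in radians from raw accelerometer/magnetometer readings."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    # Rotate the magnetic field into the horizontal plane, then take the heading.
    xh = (mx * math.cos(pitch)
          + my * math.sin(pitch) * math.sin(roll)
          + mz * math.sin(pitch) * math.cos(roll))
    yh = my * math.cos(roll) - mz * math.sin(roll)
    yaw = math.atan2(-yh, xh)
    return pitch, roll, yaw
```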

A dynamic Bayes network is used to fuse the events from the wand (gestures / button events) with the speech recognition events. Speech recognition allows multimodal interaction with devices in the user's environment and also provides multiple ways to perform one operation. Ignoring speech recognition results based on the pointing context improves speech recognition: "Volume up" is ignored while pointing at the lights.
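The paper's fusion is a dynamic Bayes network; the sketch below collapses it to a simple lookup purely to illustrate the context-filtering effect (device names and command sets are invented):

```python
# Simplified stand-in for the DBN fusion: a recognized phrase is dropped when it
# is not valid for the device currently being pointed at.
VALID_COMMANDS = {
    "lamp":   {"turn on", "turn off", "dim"},
    "stereo": {"turn on", "turn off", "volume up", "volume down"},
}

def fuse(pointed_device, recognized_phrase):
    """Accept the speech result only if it makes sense for the current pointing target."""
    if recognized_phrase in VALID_COMMANDS.get(pointed_device, set()):
        return pointed_device, recognized_phrase
    return None   # e.g. "volume up" while pointing at the lamp

print(fuse("lamp", "volume up"))     # None: ignored, as in the paper's example
print(fuse("stereo", "volume up"))   # ('stereo', 'volume up')
```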

A user study with 10 male users was performed. Variables - time taken to complete the task, pointing accuracy, and responses to a questionnaire. Scenarios - tracking vs. no tracking and audio feedback vs. no audio feedback were tested.
The users did not find the audio feedback very useful while tracking of the wand was enabled.

Discussion:
The wand is similar to the wand simulated with the Wii for Wiizards games, and I do not remember another such wand pointing device (mounted on a mouse). Given the usage scenario, the device is novel.

Comments: Franck