Wednesday, April 28, 2010

Online, Interactive Learning of Gestures for Human/Robot Interfaces

Authors:
Christopher Lee
Yangsheng Xu

Summary:
Goal: Provide better interactivity and control over robots (improving the effectiveness of tele-operation).

A CyberGlove is used to record hand-gesture input from the users. A hidden Markov model (HMM) is used to recognize the gestures.
- Users perform a series of gestures.
- The HMM classifies each gesture; if it matches an existing gesture, the corresponding action is performed and the model is trained further on it.
- If the HMM cannot classify a gesture, the system asks the user to disambiguate and retrains, or the user adds it as a new gesture.
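The classify-or-ask loop above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the models are stand-in scoring functions (the paper uses HMM log-likelihoods over CyberGlove feature sequences), and the rejection threshold is an assumed parameter.

```python
def classify_gesture(sequence, models, reject_threshold):
    """Score an observation sequence under each gesture model and return
    the best label, or None when no model is confident enough."""
    scores = {label: score_fn(sequence) for label, score_fn in models.items()}
    best_label, best_score = max(scores.items(), key=lambda kv: kv[1])
    if best_score >= reject_threshold:
        return best_label            # confident classification
    return None                      # too ambiguous: fall back to the user


def handle_gesture(sequence, models, actions, ask_user, reject_threshold=-50.0):
    """One pass of the interactive loop: classify, or ask the user."""
    label = classify_gesture(sequence, models, reject_threshold)
    if label is None:
        # Ambiguous: the user either names an existing gesture (so it can
        # be retrained) or supplies a label for a brand-new gesture.
        label = ask_user(sequence)
    actions.get(label, lambda: None)()   # perform the mapped action, if any
    return label
```

The key design point is the rejection threshold: it is what turns a plain classifier into an interactive, trainable one.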

Discussion:

The preprocessing procedure is very interesting. It is applicable to the PowerGlove++ project by Drew & co.

Comments: Franck

Real-Time Hand-Tracking as a User Input Device

Author:
Robert Y. Wang

Summary:
Goal: Easy to use and inexpensive system for user input using hands.
Design: a glove with a color pattern. The optimal color pattern and the pose-recognition algorithms are explained in this paper.

A nearest-neighbor approach is used to recognize a hand pose: every pose gives a different image, so an image-lookup approach is used. The query image is normalized and down-sampled before the nearest neighbor is looked up.
As the database size increased, the RMS error to the nearest-neighbor image decreased. To increase the retrieval rate, the author compressed each image into a 128-bit binary sequence and used Hamming distance to compare images.
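The compressed lookup can be sketched like this. It is only an illustration of the idea: the thresholding scheme used to produce the 128-bit code is an assumption (the paper derives its codes from the normalized, down-sampled glove image), but the Hamming-distance comparison is as described.

```python
def binarize(pixels, threshold=128):
    """Pack a down-sampled grayscale image (e.g. 128 values) into one
    integer bit code by thresholding each pixel."""
    code = 0
    for p in pixels:
        code = (code << 1) | (1 if p >= threshold else 0)
    return code


def hamming(a, b):
    """Number of differing bits between two codes."""
    return bin(a ^ b).count("1")


def nearest_pose(query_code, database):
    """database: list of (pose_label, code) pairs; return the label of
    the entry with the smallest Hamming distance to the query."""
    return min(database, key=lambda entry: hamming(query_code, entry[1]))[0]
```

Comparing 128-bit integers with XOR and a popcount is why the compressed representation speeds up retrieval so much relative to per-pixel image comparison.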

Future work:
Experiments to determine the optimal color pattern, with three dimensions of change: color, the spatial frequency of the blobs, and the icons/shapes used.

Discussion:
It is an extremely cheap solution. I would like to read about the studies on pattern color and spatial frequency to understand their significance.

I think the separability/difference between the different poses is a catch. The system works well for poses whose images are distinct, but I do not know whether that restricts the number of possible poses.

Wednesday, April 21, 2010

Liquids, smoke, and soap bubbles: reflections on materials for ephemeral user interfaces

Authors:
Axel Sylvester
Tanja Döring
Albrecht Schmidt

Summary:
Goal: Tangible, ephemeral interface design with soap bubbles and smoke.
Setup: a round table, 20 inches in diameter, on whose surface bubbles can be blown; they stay there for minutes at a time. The movement of the bubbles is tracked with a camera beneath the table. The user moves a bubble with a moist hand or by blowing on it gently. In a first application, bubbles are used to influence the brightness and hue of the surrounding room light: the illumination gets brighter the bigger a recognized bubble is, and the hue is set according to the position of a recognized bubble, with the x and y axes bringing up blue and red tones.
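The size-to-brightness and position-to-hue mapping could look roughly like the following toy sketch. The table diameter comes from the paper; the maximum bubble radius and the linear blends are assumptions for illustration.

```python
def light_from_bubble(x, y, radius, table_diameter=20.0, max_radius=5.0):
    """Map a recognized bubble (position in inches, radius) to room light:
    bigger bubble -> brighter; x axis -> blue tone; y axis -> red tone."""
    brightness = min(radius / max_radius, 1.0)
    blue = min(max(x / table_diameter, 0.0), 1.0)
    red = min(max(y / table_diameter, 0.0), 1.0)
    return brightness, red, blue
```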

Discussion:
Interesting mode of interaction. I could see applications in fun-filled games for children.

Recent Developments and Applications of Haptic Devices

Authors:

Summary:

The paper surveys the present-day technologies available for haptic feedback. Factors to consider while designing a haptic device:
* weight
* frictional feedback in the virtual world
* parallel vs. serial mechanisms (serial mechanisms offer less stiffness)
* the device's ability to handle the pressure exerted by the user
* hydraulic actuators (large forces, but risk of hydraulic fluid leaks)

The paper discusses a wide range of devices, from mice to force-feedback skeleton arms, along with their costs, applications, and workings.

Discussion:

I didn't know about the abundance of such technologies. This paper introduces a lot of research being done in this field, especially research in neuromuscular stimulation.

Saturday, April 10, 2010

Hand and Brain - Neurophysiology and psychology of hand movements

Summary:

Optic ataxia (pp. 18-19)
- one type is caused by parietal damage
- a deficit in spatial vision: difficulty localizing an object in visual space
- affects gaze, reaching (misreaching), and the size of grasp
- patients can identify objects but cannot pick them up

Visual form agnosia
- the ability to reach for objects but not perceive them visually.

- There are different pathways for transforming visual information into actions and into perception.

Page 22: experiments to analyze the precision-grasp mechanism. Objects were chosen that would require contour analysis.

Thursday, April 8, 2010

Coming to grips with the objects we grasp: detecting interactions with efficient wrist-worn sensors

Authors:
Eugen Berlin
Jun Liu
Kristof van Laerhoven
Bernt Schiele

Summary:
The contributions of this paper are threefold. First, it describes the technical procedure for optimizing a wrist-worn RFID antenna. Second, a benchmark is presented that allows evaluation of different antenna configurations. Third, an approximation algorithm is demonstrated that makes recognition of short gestures possible.
- Experiments were conducted to find the best-performing RFID reader configuration for oval and circular antennas; the oval antenna performed better.
- A sliding-window technique is implemented to find the most probable window in which a gesture was performed.
- A series of studies was performed to evaluate the instrument: first, reading an object from a cluster of objects kept inside a box; then a study with a user performing different gardening tasks using 16 objects and 36 tags; and finally a long study with one user performing daily activities with 29 objects and 43 tags, observed for 3 consecutive days.
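The sliding-window idea can be sketched as below. The scoring function here (summed signal magnitude as an "activity" proxy) is an illustrative assumption; the paper scores windows differently, but the slide-and-keep-the-best structure is the same.

```python
def best_window(stream, window_size, score=lambda w: sum(abs(v) for v in w)):
    """Slide a fixed-size window over a continuous sensor stream and
    return (start_index, window) for the highest-scoring window."""
    best_start, best_score = 0, float("-inf")
    for start in range(len(stream) - window_size + 1):
        window = stream[start:start + window_size]
        s = score(window)
        if s > best_score:
            best_start, best_score = start, s
    return best_start, stream[best_start:best_start + window_size]
```

This is what lets the system pull a short gesture out of an unsegmented stream: no explicit start/stop marker is needed, only a window scorer.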

Discussion:
The sliding window is an interesting method for extracting the most probable gesture from the continuous stream. The evaluation study was quite extensive.

Tuesday, April 6, 2010

The Wiimote with multiple sensor bars: creating an affordable, virtual reality controller

Authors:
Torben Sko
Henry Gardner

Summary:
Goal: Using the Wiimote, Nunchuk, and sensor bars for interaction in a multiple-screen environment.

The IR camera in the Wiimote can detect up to 4 IR LED sources, so it can detect 2 sensor bars at a time. Software that processes the data makes intelligent predictions about which bars are being detected. A dynamic model is built from the IR data received from the Wiimote; a static model is read from a configuration file and describes the mapping of IR sources to pixel positions on the displays behind them. The configuration needs to comply with the following conditions: at least two IR sources should be visible at most times (justifying why each sensor bar has two IR sources), two sensor bars are visible when moving from one bar to another, and no more than 4 IR sources are visible at any one time. A sensor-bar/environment switching algorithm is defined by the authors in the paper.
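One piece of this pipeline, matching the detected IR sources to configured sensor bars, could be sketched as follows. This is only a guess at the static-model side: the bar centers, the distance threshold, and the "two nearby sources means the bar is visible" rule are assumptions, and the paper's actual switching algorithm also uses the dynamic model.

```python
import math

def visible_bars(ir_points, bar_config, max_dist=100.0):
    """ir_points: list of (x, y) camera points (up to 4 from the Wiimote).
    bar_config: {bar_id: (x, y)} expected bar centers from the static model.
    A bar counts as visible when both of its two IR sources are nearby."""
    visible = []
    for bar_id, center in bar_config.items():
        near = [p for p in ir_points if math.dist(p, center) <= max_dist]
        if len(near) >= 2:
            visible.append(bar_id)
    return visible
```

With such a matcher, "two sensor bars visible during a transition" simply shows up as two bar IDs in the returned list.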

First user test: 12 participants, 1 female, 8 with prior experience using a Wiimote. "Involvement/Control" and "Interface Quality" were rated at 5.7 and 5.3. The physical presence of the sensor bars was noted as distracting.

Second user test: 8 participants, 1 female, 6 with prior Wiimote experience, and everyone had gaming experience. The users had to play Half-Life 2 (an FPS). Everybody stated that they enjoyed the interface, but most expressed frustration with the control system, and some participants reported physical discomfort due to the rate of rotation on the screen.

Third user test: 4 users from the second test. Refinements were made to the seating position, movement rate (slower), vertical shooting, and reset rate. Positive feedback was given on the usability of the system.

Discussion:
One of the few research papers where follow-up studies have been conducted; the authors ran a good user study. I would like to know the orientation of the sensor bars that helped them keep the number of visible IR sources on screen to at most 4. The dynamic modelling system would also be interesting to learn about.

The peppermill: a human-powered user interface device

Authors:
Nicolas Villar
Steve Hodges

Summary:
GOAL: the use of a simple circuit to enable interaction-powered devices that support rotary input as the primary interaction technique, and a design for a generic wireless interface device based on this circuit. Gestural interaction devices like the Wiimote let users perform a large set of gestures whose movements carry mechanical energy, and there are lots of opportunities to harness this energy as a power source for the device.

When the user turns the knob, the microcontroller powers up and samples the inputs from the supply circuit as well as the state of the three additional buttons. It encodes and transmits the speed of turn, direction of turn, and the state of the buttons (pressed/released) as a single wireless packet. As long as enough power is being generated, the microcontroller continually samples and transmits packets at 5 ms intervals. An interesting extension provides force feedback to the user by shorting the DC motor: this brakes the motor and resists the user's turning of the knob. By periodically and momentarily braking rotation in this manner, a variety of interesting haptic effects can be generated dynamically without supplying additional power.
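The packet described above, speed, direction, and three button states, could be packed into a couple of bytes like this. The field sizes and bit layout are assumptions for illustration; the paper does not specify a packet format.

```python
import struct

def encode_packet(speed, direction_cw, buttons):
    """speed: 0-255; direction_cw: bool; buttons: list of 3 bools.
    Packs everything into 2 bytes: one for speed, one for flag bits."""
    flags = ((1 if direction_cw else 0) << 3) \
          | ((1 if buttons[0] else 0) << 2) \
          | ((1 if buttons[1] else 0) << 1) \
          | (1 if buttons[2] else 0)
    return struct.pack("BB", speed & 0xFF, flags)

def decode_packet(packet):
    """Inverse of encode_packet: returns (speed, direction_cw, buttons)."""
    speed, flags = struct.unpack("BB", packet)
    return speed, bool(flags & 0b1000), \
        [bool(flags & 0b100), bool(flags & 0b10), bool(flags & 0b1)]
```

Keeping the packet this small matters for a harvested-power device: less data per transmission means less radio-on time per 5 ms cycle.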

Discussion: An interesting field of research. Tapping energy from gesture movements is awesome. The haptic-feedback extensions provided by the system are interesting.
