Thursday, February 25, 2010

Non-contact Method for Producing Tactile Sensation Using Airborne Ultrasound

Authors:
Takayuki Iwamoto
Mari Tatezono
Hiroyuki Shinoda

Summary:
Goal: to provide high-fidelity, non-contact tactile feedback using airborne ultrasound and the acoustic radiation pressure it produces.
An array of 91 transducers was used to form a tactile feedback surface. The transducers were driven to measure the radius of the focal point and the force/pressure the device exerts on the hand at a distance. The system was evaluated by comparing the calculated theoretical values with the practical measurements. The user study was not systematic and reports no quantitative data. Users felt the tactile sensation when the feedback was delivered as vibration rather than as constant pressure.
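The focal point is produced by driving each transducer with an appropriate phase delay so that the waves arrive in phase at one spot, where the acoustic radiation pressure is concentrated. As a rough illustration only (my own sketch with assumed values, not the authors' implementation), the per-transducer phases for focusing at a point in air could be computed like this:

```python
# Rough sketch, not the authors' implementation: compute per-transducer phase
# delays so that a planar array focuses at a single point in air.
# Assumed values: 40 kHz carrier, 340 m/s speed of sound; the paper's array is
# hexagonal with 91 elements, but a 10x10 square grid is used here for brevity.
import numpy as np

SPEED_OF_SOUND = 340.0          # m/s in air
FREQUENCY = 40e3                # Hz, a typical airborne ultrasound transducer
WAVELENGTH = SPEED_OF_SOUND / FREQUENCY

def focus_phases(transducer_xy, focal_point):
    """Return the phase (radians) to drive each transducer with so that
    all waves arrive in phase at focal_point."""
    positions = np.hstack([transducer_xy, np.zeros((len(transducer_xy), 1))])
    distances = np.linalg.norm(positions - np.asarray(focal_point), axis=1)
    # Delay each element by its extra path length relative to the farthest one,
    # expressed as a phase of the carrier.
    extra_path = distances.max() - distances
    return 2 * np.pi * extra_path / WAVELENGTH

# Example: a 10x10 grid with 10 mm pitch, focused 200 mm above its centre.
grid = np.array([(x * 0.01, y * 0.01) for x in range(10) for y in range(10)])
grid -= grid.mean(axis=0)
phases = focus_phases(grid, focal_point=(0.0, 0.0, 0.2))
```

Vibratory feedback can then be produced by modulating the output amplitude at a low frequency on top of this carrier.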

Discussion:
This mode of feedback enriches the user experience. The device could change the mode of interaction for blind people, whose interaction medium today is predominantly voice based. A visual display could be replaced by this device to provide better interaction for the blind.


Computer Vision-Based Gesture Recognition for an Augmented Reality Interface

Authors:
Moritz Störring
Thomas B. Moeslund
Yong Liu
Erik Granum

Summary:
The paper proposes a computer vision based algorithm for recognizing gestures in an augmented reality system. Hand gestures (counting 1-5) are captured with a head-mounted camera (HMC). The problem comes down to identifying each finger and counting the number of fingers in an image. The hand image is converted with a polar transformation around the hand's centre.
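For intuition, here is a rough sketch of counting fingers with a polar transformation. It is my own simplification, assuming a binary hand mask is already available (e.g. from skin-colour segmentation), and is not the paper's exact algorithm:

```python
# Rough sketch (an assumption-laden simplification, not the paper's exact
# method): count extended fingers by sampling a binary hand mask along rays
# from its centroid (a polar transformation) and counting runs of long radii.
import numpy as np

def count_fingers(mask, n_angles=360, threshold_ratio=0.7):
    """mask: 2D boolean array, True where the hand is."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()                      # hand centroid
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    max_r = int(np.hypot(*mask.shape))
    radii = np.zeros(n_angles)
    for i, a in enumerate(angles):                     # farthest hand pixel per ray
        for r in range(max_r, 0, -1):
            y, x = int(cy + r * np.sin(a)), int(cx + r * np.cos(a))
            if 0 <= y < mask.shape[0] and 0 <= x < mask.shape[1] and mask[y, x]:
                radii[i] = r
                break
    # Angles where the silhouette extends well beyond its typical radius are fingers.
    extended = radii > threshold_ratio * radii.max()
    rising_edges = (np.diff(extended.astype(int)) == 1).sum()
    wraps = int(extended[0] and not extended[-1])      # handle the circular wrap-around
    return int(rising_edges + wraps)
```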
User study: qualitative data from the user study showed the system was robust and that users adapted to it easily.

Discussion:
The goal of the paper was to provide a gesture recognition system, but the accuracy of the system is not reported. Not providing quantitative data raises questions about the algorithm. Why is the robustness of the AR interface, rather than of the recognizer, analyzed in the user study?

Comments: Drew, Franck

Tuesday, February 16, 2010

FreeDrawer: a free-form sketching system on the responsive workbench

Authors:
Gerold Wesche
Hans-Peter Seidel

Summary:
Goal - To provide a simple 3D computer-aided tool for the direct transfer of design intent into a corresponding computer representation.

Guidelines for immersive control - hide the mathematical complexity of object representations; direct, real-time interaction; full-scale modeling; a large working volume; and an intuitive, easy-to-learn interface.

Related work - "3-Draw" uses an HMD, a tablet, and two-handed interaction; the system keeps track of the two hands, with one hand controlling the tablet and the other drawing on it. There are also various 2D and 3D drawing tools.

Features - drawing, creating a curve network, adding a new curve to the network, filling surfaces, curve smoothing, curve dragging, and surface sculpting. UI - the design tools fan out as virtual pointers at the end of the pen, which allows smoother selection than grabbing.
Force feedback might help improve the system by giving feedback about constraints.
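One of the listed features, curve smoothing, can be pictured with a small sketch. This is only an illustration of one common approach (Laplacian smoothing of a curve's control points); whether FreeDrawer's B-spline smoothing works this way is an assumption on my part:

```python
# Illustration only: one common way to implement a "curve smoothing" tool is
# Laplacian smoothing of the curve's control points. Whether FreeDrawer's
# smoothing works exactly this way is an assumption.
import numpy as np

def laplacian_smooth(points, strength=0.5, iterations=5):
    """points: (N, 3) array of control points; the endpoints are kept fixed."""
    pts = np.asarray(points, dtype=float).copy()
    for _ in range(iterations):
        neighbour_avg = 0.5 * (pts[:-2] + pts[2:])     # average of previous and next point
        pts[1:-1] += strength * (neighbour_avg - pts[1:-1])
    return pts

# Example: smooth a noisy stroke captured from a tracked pen.
stroke = np.cumsum(np.random.randn(50, 3) * 0.01, axis=0)
smoothed = laplacian_smooth(stroke)
```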

Discussion:
A user study to understand the user experience would have helped. The tool selection widget is interesting and different. Including force feedback might bring some advantages, but it must not be invasive or hinder the user's activities. How the force feedback is implemented would make a big difference in the user experience.

Comments: Franck

Wednesday, February 3, 2010

Lab day 1

Eye Tracker:
It was a tiring experience. It took a fair amount of time to realize that only one of my eyes was being tracked. I went through the calibration procedure twice, though I did not understand how it worked. I tried doing some simple tasks with the device. One task was to use it as a mouse. I tried navigating around the screen and clicking on an icon, but I could not navigate as I intended and could not select any icon on the screen. It was difficult to avoid jitter and concentrate on the same area.
I found it difficult to hold the pointer on one area. It was interesting to see the software recognize the eye, and I missed noticing a lot of the parameters it calculated.
I think the device can be used to infer the context, or roughly the area, the user is gazing at.

Head mounted display:
The device reminded me of the night-vision goggles I often see in movies. It was cool. It had displays in front of the eyes and a camera in front to capture the scene. A control box controlled the display input: it can relay the video captured by the camera or an enhanced version of it. I did not find it very difficult to use. Josh conducted a user study with the device. The tasks we were asked to do were interesting: arranging books while sitting and standing, writing, reading, and walking around the room. The tasks helped me adjust to the difference in depth perceived through the goggles. Maybe with more use, a user would adapt to this difference.

Motion Editing with Data Glove

Authors:
Wai-Chun Lam
Feng Zou
Taku Komura

Summary:
Goal - Mapping hand motion to the whole body to edit human motion, and controlling human figures in real-time environments such as games and virtual reality systems.

Previous work on motion puppetry focused on facial expressions. Using mouse and keyboard controls for motion editing is difficult because of the limited degrees of freedom these devices offer.

The tool defines a mapping between the hand and the body, and its effectiveness depends on the mapping function. The tool has two stages - a capture stage and a reproduction stage.
To match the trajectories and reduce the noise in the data, a Fourier series expansion is applied to the data. An ordinary walking motion was captured using the device and was used to produce a hopping motion with large strides and a running motion along a zigzag path.
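As a hedged illustration of the Fourier step (my reconstruction, not the authors' code), a joint trajectory can be denoised and compactly represented by keeping only its first few Fourier terms:

```python
# Reconstruction for illustration, not the authors' code: approximate a noisy
# joint trajectory by keeping only the DC component and the first few harmonics,
# which denoises it and gives a compact representation for trajectory matching.
import numpy as np

def fourier_fit(signal, n_terms=5):
    """Keep the DC component and the first n_terms harmonics of a 1D signal."""
    coeffs = np.fft.rfft(signal)
    coeffs[n_terms + 1:] = 0                 # zero out the higher-frequency terms
    return np.fft.irfft(coeffs, n=len(signal))

# Example: a noisy periodic joint-angle trajectory sampled over two gait cycles.
t = np.linspace(0, 2, 120, endpoint=False)
noisy = np.sin(2 * np.pi * t) + 0.1 * np.random.randn(len(t))
smooth = fourier_fit(noisy, n_terms=3)
```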

Limitation: The system works effectively only if the original captured motion and the newly generated motion are related. Unrelated motions, such as jumping with both legs while the original is walking, do not work. Foot contact with the ground cannot be captured exactly.

Discussion:
I would say this tool is better than the mouse + keyboard, but it would still have constraints. The animation shown here is fairly simple (no human features), and there is no user study or proof that this tool works for complex animation. Also, mapping finger motion to walking is natural, but how does mapping other parts of the body to the hand help?

Comments: Sashi, Drew

EyePoint: Practical Pointing and Selection Using Gaze and Keyboard

Authors:
Manu Kumar
Andreas Paepcke
Terry Winograd

Summary:
Gaze technology problems -
accuracy, eye jitter, and distinguishing intentional dwelling from involuntary search/scan movements (the Midas touch problem).

Prior work - gaze-based hot spots, gaze-based context awareness, dwell-time activation, ...

EyePoint's solution for pointing and selection - look, press, look, release
- look at the point of interest to point to a rough area
- press a hotkey (click, right-click, double-click)
- a magnified image of the rough area appears
- with the hotkey held down, look at the particular point of interest within the magnified area
- release the hotkey to make the selection
- to cancel, press Esc or look away from the magnified area and release the hotkey
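A rough sketch of this look-press-look-release flow as a small state machine is below. The event and helper names (on_gaze, magnifier, clicker) are hypothetical and not EyePoint's actual API:

```python
# Rough sketch with hypothetical names, not EyePoint's actual code: the
# look-press-look-release interaction expressed as a two-state machine.
from enum import Enum, auto

class State(Enum):
    LOOKING = auto()        # user is looking around; no hotkey pressed
    MAGNIFIED = auto()      # hotkey held; magnified view of the rough gaze area shown

class EyePointController:
    def __init__(self, magnifier, clicker):
        self.state = State.LOOKING
        self.magnifier = magnifier      # shows/hides the magnified view (assumed helper)
        self.clicker = clicker          # issues the actual click (assumed helper)
        self.rough_gaze = None
        self.action = None

    def on_gaze(self, point):
        # Remember the most recent (smoothed) gaze point in either state.
        self.rough_gaze = point

    def on_hotkey_down(self, action):
        if self.state is State.LOOKING and self.rough_gaze is not None:
            self.action = action                        # click / right-click / double-click
            self.magnifier.show_around(self.rough_gaze)
            self.state = State.MAGNIFIED

    def on_hotkey_up(self):
        if self.state is State.MAGNIFIED:
            if self.magnifier.contains(self.rough_gaze):        # still looking inside?
                target = self.magnifier.to_screen(self.rough_gaze)
                self.clicker.click(target, self.action)
            # Looking away (or pressing Esc) before release cancels the selection.
            self.magnifier.hide()
            self.state = State.LOOKING
```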

User Study:
The pilot study showed that the gaze data was noisy: it was difficult to distinguish whether the user was looking at the target as a whole or focusing on a point. Focus points were added to the magnified view to improve the accuracy of the system. A gaze marker was also included as visual feedback for the user, but it proved to be a distraction in the pilot study.

Twenty users who were not used to eye-tracking technology took part. The three variations of the study were EyePoint with focus grids, EyePoint without focus grids, and EyePoint with the gaze marker. Both qualitative and quantitative data were collected. The tasks in the user study were a real-world browsing task, a synthetic pointing task, and a mixed typing-and-pointing task. The users were first calibrated on the eye tracker and given 5-10 minutes of training.

Web study - A single link on a page was enabled and all other links were disabled. The users were instructed to click the enabled link, which was highlighted in orange. The time between the page appearing and the link being selected was measured. The error rate for EyePoint was larger than that for the mouse. Users felt the mouse was more accurate, but 3/4 of them were still inclined to use EyePoint. Users also felt the focus points were useful.

Balloon study - Balloons of different sizes (22, 30, 40) were placed on the screen and the time taken to click them was measured. The error rate and time were significantly affected by the size of the balloon. EyePoint was slower than the mouse by 100 ms. 3/4 of the users felt inclined to use EyePoint, and subjects also felt fatigued by using the mouse.

Mixed study - The mixed typing-and-pointing task involved selecting a constant-size balloon shown on the screen and then typing the displayed word into a text box. This allowed the researchers to measure the time taken to move the hand from the mouse to the keyboard and back to the mouse for the next selection. EyePoint performed better here but lacked accuracy.

The error rate was higher for EyePoint than for the mouse for various reasons:
- dependence on the individual, the calibration, and the posture of the subject
- subject errors, such as pressing the hotkey before looking at the balloon
- seeing the target with peripheral vision before fixating on it with foveal vision
- astigmatism, squint, or glasses that reduce the accuracy of the eye tracker

Discussion:
I liked the user study. They gave significance to both qualitative and quantitative data, and the qualitative data showed that users liked using EyePoint. I think the fatigue is not emphasized enough: though selection by gaze tracking is natural, concentrating on an area is very tiring work. It is interesting that users went through a one-hour study; I could not use the eye tracker for more than 10 minutes in the lab.

Comments: Drew, Murat