Wednesday, January 27, 2010

HoloSketch: a virtual reality sketching/animation tool

Authors:
Michael F. Deering


Summary:
Goal: build an application that applies virtual reality technology to a simple 3D drawing and animation tool.

Previous technologies: the 2D mouse provides only 2 DOF in an interaction space that requires 6 DOF movement, and HMDs at the time the system was built had low visual resolution. The novelty of HoloSketch is the introduction of a 6 DOF hand input device. HoloSketch uses a 20-inch CRT display, head tracking with field-sequential shutter glasses, and a wand as the 6 DOF virtual-world manipulation tool.

Calibration: a high-accuracy position and orientation tracker follows head movements. Corrections are made for the distortions due to the curvature of the CRT as well as its index of refraction. The individual's interocular distance is used as a calibration parameter, since the system uses it to correct for changes due to rotation of the viewer's eyes. HoloSketch's interaction techniques rely upon this high accuracy.
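To make the role of the interocular calibration concrete, here is a minimal sketch (my own, not HoloSketch's code) of how a head-tracked stereo display might use the tracked head position and a per-user interocular distance to project a virtual point separately for each eye. The screen plane at z = 0, the coordinate frame, and the 0.065 m default are assumptions.

import numpy as np

def eye_positions(head_pos, gaze_dir, interocular=0.065):
    # Estimate left/right eye positions from the tracked head position.
    # 'interocular' is the per-user calibration parameter discussed above;
    # 0.065 m is an assumed typical default.
    up = np.array([0.0, 1.0, 0.0])
    right = np.cross(gaze_dir, up)
    right /= np.linalg.norm(right)
    offset = right * (interocular / 2.0)
    return head_pos - offset, head_pos + offset

def project_to_screen(point, eye):
    # Project a virtual point onto the z = 0 screen plane as seen from 'eye'.
    t = eye[2] / (eye[2] - point[2])          # where the eye-to-point ray crosses z = 0
    return (eye + t * (point - eye))[:2]      # on-screen x, y

left_eye, right_eye = eye_positions(np.array([0.0, 0.3, 0.6]),
                                    np.array([0.0, 0.0, -1.0]))
p = np.array([0.05, 0.25, -0.1])              # a point "behind" the CRT glass
print(project_to_screen(p, left_eye), project_to_screen(p, right_eye))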
Menu design is based on the 3D consequences of Fitts' law, the cost of screen real estate, and the position of the menu (so that it does not interfere with the displayed object). A fade-up pie menu displays the set of menu items: the workspace fades out in the background, the right wand button is pressed to pop up the pie menu, and it is released over the desired item to make the selection.
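As an illustration of the pop-up-and-release interaction, here is a minimal sketch (assumed, not from the paper) that maps the wand position at button release to one of N equal pie slices around the point where the menu appeared; the dead-zone radius and the slice layout are my assumptions.

import math

def pie_menu_selection(release_xy, center_xy, item_count, dead_zone=0.02):
    # Return the index of the selected slice, or None if released inside the
    # central dead zone (interpreted as "cancel").
    dx = release_xy[0] - center_xy[0]
    dy = release_xy[1] - center_xy[1]
    if math.hypot(dx, dy) < dead_zone:
        return None
    angle = math.atan2(dy, dx) % (2 * math.pi)    # 0..2*pi, counter-clockwise from +x
    slice_width = 2 * math.pi / item_count
    return int(angle // slice_width)

print(pie_menu_selection((0.10, 0.02), (0.0, 0.0), 8))   # -> slice 0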
Drawing Features - These include selecting and drawing primitive 3D shapes, drawing lines by pressing the left wand button and sweeping through 3D space (leaving a "toothpaste" line of varying thickness), 3D text typed on the keyboard, and imported geometric shapes. Drawing attributes of a shape can be changed, the most important being color. The application provides an RGB color cube to select the color from: the tip of the wand changes to a sphere showing the current color selection, and as the wand moves through the cube the sphere's color changes. If the wand moves out of the color cube, the cube disappears.
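A minimal sketch (my own interpretation, not HoloSketch's implementation) of the color-cube interaction described above: the wand tip's position inside an axis-aligned cube maps linearly to an RGB triple, and leaving the cube dismisses it. The cube origin and size are assumed parameters.

def color_from_wand(tip, cube_min, cube_size):
    # Map the wand tip position to an RGB triple in [0, 1], or None if the
    # tip is outside the cube (which would make the cube fade away).
    rgb = []
    for axis in range(3):
        t = (tip[axis] - cube_min[axis]) / cube_size
        if not 0.0 <= t <= 1.0:
            return None          # wand left the cube
        rgb.append(t)
    return tuple(rgb)

print(color_from_wand((0.15, 0.05, 0.10), cube_min=(0.0, 0.0, 0.0), cube_size=0.2))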

Selection - The last drawn object is selected by default. A continuous color change (blinking) indicates the selected object. A middle-button click selects an object; Shift + middle click selects multiple objects.

Editing - After an object is selected, it can be moved, scaled, and grouped, and various other attributes (such as the point size of text) can be set. The squeeze buttons on the side of the wand control the movement of the object. To prevent accidental triggering, key combinations are used: the Control key for positional change, Ctrl + Shift for orientation change, and the Shift key for position + orientation change. Property sheets (under the Attributes menu item) can be used to change an object's parameters.

Animation operations - Rotation, solid movie looping, color oscillation, scalers, shifters, and flight paths can be applied within the application.

Environment settings such as the light source can also be controlled in the application. The user's viewpoint changes by moving the head. To examine a particular object, the user can select it and rotate it with the wand. In control mode (selected from the fade-up menu), the entire virtual universe can be treated as a single object, and depressing the left wand button changes the scale of the universe. The user can mark particular settings with one command and return to them with another.

User Study:
A single user, a traditional artist, used the system for a month. Holding one's hand in the air for long periods was not hard, but making fine adjustments became difficult. A new two-handed drawing mode was added: the 2D mouse acts as a lever that dynamically varies the radius of the toothpaste line while the user simultaneously draws with the 3D wand.

Limitations - Geometry modeling and complex physics simulation, which would add complexity to the animation and the virtual world, are not supported. Optimal placement of menu buttons and the choice of button colors were treated as secondary concerns.

Discussion:
The user study analyzed the experience of a single user over a period of one month. It gives extensive and detailed data about the usability of the interface and the application, but the experience of a single user cannot be generalized to other users. The study was also conducted on a computer/traditional artist rather than a novice, although the author claims the system will be easy for a novice to use.
The choice of a big 3D pie menu is confusing. The stated reason for the careful menu design was to avoid interfering with the object being edited and to occupy little screen real estate, yet the pie is flooded with all the menu items and takes up the whole screen.
The interaction techniques introduced by the system are interesting.
Java 3D and the Virtual Portal are projects that extended the concepts of HoloSketch. The Virtual Portal, as shown in the picture below, consists of three perpendicular walls with rear projection and a head-tracked display; it was used for automobile simulation. The three-wall projection provides a more natural 3D interface than a single flat display. The CAVE, a related VR system, additionally uses a floor projector to provide an even more natural 3D interface.



Comment: Paul, Franck, Sashi

TIKL: Development of a Wearable Vibrotactile Feedback Suit for Improved Human Motor Learning

Authors:
Lieberman, J.
Breazeal, C.


Summary:
Tactile feedback presents the most direct form of information, and learning a motor skill requires real-time feedback. Unlike auditory or visual feedback, tactile feedback directly engages our motor learning system. The goal of this system is to be a low-latency, full-time, highly parallel robotic motor-skills teacher that can provide constant motor-system feedback to students as they attempt to learn new motor skills. Sensory saltation, a sequence of spatially and temporally spaced pulses, is used to communicate joint errors and rotational errors.
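To make the saltation idea concrete, here is a minimal sketch (assumed pulse timings and a stand-in actuator call, not the authors' driver code) that plays a few short pulses on each tactor in a row with a fixed inter-stimulus interval, producing the "hopping" sensation along the arm that can encode a rotation direction.

import time

def play_saltation(tactors, pulses_per_tactor=3, pulse_ms=20, interval_ms=50,
                   activate=lambda t: print(f"buzz tactor {t}")):
    # tactors: ordered list of tactor IDs along the limb, in the direction
    # the rotation error should be conveyed. 'activate' stands in for the
    # real actuator driver.
    for tactor in tactors:
        for _ in range(pulses_per_tactor):
            activate(tactor)
            time.sleep(pulse_ms / 1000.0)        # pulse duration
            time.sleep(interval_ms / 1000.0)     # inter-stimulus interval

play_saltation(["forearm_0", "forearm_1", "forearm_2"])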
The Vicon tracking system used for subject tracking consists of IR reflectors positioned at fixed points on the suit and a set of high-speed IR cameras that capture their positions. The marker placements and the calculated joint angles are used to derive the five observed joints: wrist flexion/extension, wrist abduction/adduction, forearm rotation, elbow flexion/extension, and upper arm rotation. A calibration procedure is provided to adjust the joint angles and offsets. The control software compares the expert's motion with the novice's to produce the tactile feedback. In the current system the error is determined from the joint angles; this needs to be improved to gauge the user's performance more accurately.
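A minimal sketch (my reading of the description above, not the paper's control software) of how expert and novice joint angles might be compared and turned into per-joint vibration intensities; the joint names, dead band, and error range are assumptions.

JOINTS = ["wrist_flex", "wrist_abduct", "forearm_rot", "elbow_flex", "upper_arm_rot"]

def feedback_intensities(expert_angles, novice_angles,
                         dead_band_deg=5.0, max_error_deg=45.0):
    # Return a 0..1 vibration intensity for each observed joint; errors inside
    # the dead band produce no feedback.
    intensities = {}
    for joint in JOINTS:
        error = abs(expert_angles[joint] - novice_angles[joint])
        if error <= dead_band_deg:
            intensities[joint] = 0.0
        else:
            intensities[joint] = min((error - dead_band_deg) /
                                     (max_error_deg - dead_band_deg), 1.0)
    return intensities

expert = dict.fromkeys(JOINTS, 90.0)
novice = {"wrist_flex": 80.0, "wrist_abduct": 92.0, "forearm_rot": 130.0,
          "elbow_flex": 90.0, "upper_arm_rot": 60.0}
print(feedback_intensities(expert, novice))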
The user study included 40 subjects divided into two groups: the first 20 subjects received only visual feedback, and the next 20 received both visual and tactile feedback. The first step lets the user get used to the system: images of correct arm positions are shown, and the user is asked to imitate each position as quickly as possible. Vibrotactile feedback is given to the user for the first time as the user moves away from the position shown in the picture, and the time taken to attain the position is measured. The second step shows videos of actions (from a simple 1 DOF motion to a complex 5 DOF one). After 20 minutes of the video phase, the users are given a questionnaire.
Results:
The tactile group felt the feedback did not significantly help improve performance, and they made a more conscious effort than the other group. Feedback for joint angle was perceived better than feedback for joint rotation, because the rotation feedback is given through saltation. The same pattern appears in the results of the repeated trials and in the evaluation of the amount learned.

Discussion:
Auditory and visual feedback are the natural ways to receive feedback; we have learned to process them while learning. Tactile feedback, on the other hand, is unnatural, and processing it is a learning process in itself. The user study supports this: the tactile group had difficulty understanding the meaning of the feedback.
The future work notes some interesting points: a better metric for comparing the expert and the novice has to be developed, and users should be allowed to correct one error dimension before moving on to the next (to reduce the discomfort of continuous tactile feedback).

Comments: Paul, Josh

3DM: a three dimensional modeler using a head-mounted display

Authors:
Jeff Butterworth
Andrew Davidson
Stephen Hench
Marc. T. Olano


Summary:
Goal - Build an application that makes 3D modeling simple. There are multiple ways to create a single object (triangles or triangle strips). The triangle creation tool creates new triangles, either from scratch or from existing points and edges. The extrusion tool can be used to draw a polyline, extend existing edges, and stretch them out from an existing surface. Editing tools include mark and move (for objects or collections of points), scaling, cut, copy, paste, delete, and an undo/redo stack. The ability to reach out and grab shapes makes the tool easy to work with, and the extrusion tool makes twisting and translating objects easy.
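A minimal sketch (assumed, not 3DM's code) of the geometric core of an extrusion tool: sweep a polyline by an offset vector and stitch the original and extruded points into triangle-strip vertex order.

import numpy as np

def extrude_polyline(polyline, offset):
    # polyline: (N, 3) array of points; offset: (3,) sweep vector.
    # Returns the triangle-strip vertex order [p0, p0', p1, p1', ...]
    # where primed points are the extruded copies.
    polyline = np.asarray(polyline, dtype=float)
    extruded = polyline + np.asarray(offset, dtype=float)
    strip = np.empty((2 * len(polyline), 3))
    strip[0::2] = polyline
    strip[1::2] = extruded
    return strip

strip = extrude_polyline([(0, 0, 0), (1, 0, 0), (1, 1, 0)], offset=(0, 0, 0.5))
print(strip)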

Discussion:
The user study provides neither qualitative nor quantitative data. The use of a mouse hurts the ease of manipulating the 3D objects.

Comments:

Sunday, January 24, 2010

Wearable EOG goggles: eye-based interaction in everyday environments

Authors:
Andreas Bulling
Daniel Roggen
Gerhard Troster

Summary:
The wearable EOG goggles were designed to fulfil the following requirements:
1. To achieve a convenient and unobtrusive implementation and minimise user distraction, the device needs to be wearable and lightweight.
2. To allow for autonomous long-term use in daily life, the device needs to be low-power.
3. The device needs to provide adaptive real-time signal processing capabilities to allow for context-aware interaction.
4. To compensate for EOG signal artefacts caused by physical activity and changes in ambient light, an accelerometer and a light sensor need to be added.
The system has two components: the goggles and the processing unit. Gesture-based interaction includes blink detection and removal, saccade detection, and eye gesture recognition.
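As a rough illustration of saccade detection (not the authors' algorithm), here is a minimal sketch that thresholds the rate of change of a sampled EOG channel, since saccades show up as large, brief jumps in the signal; the sampling rate and the velocity threshold are assumed values.

def detect_saccades(eog, sample_rate_hz, threshold_uv_per_s=2000.0):
    # eog: list of EOG samples in microvolts. Returns indices where a
    # saccade onset is detected (signal velocity exceeds the threshold).
    dt = 1.0 / sample_rate_hz
    onsets = []
    in_saccade = False
    for i in range(1, len(eog)):
        velocity = abs(eog[i] - eog[i - 1]) / dt
        if velocity > threshold_uv_per_s and not in_saccade:
            onsets.append(i)
            in_saccade = True
        elif velocity <= threshold_uv_per_s:
            in_saccade = False
    return onsets

signal = [0, 0, 5, 120, 240, 250, 248, 250, 130, 10, 0]   # synthetic horizontal EOG
print(detect_saccades(signal, sample_rate_hz=100))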

Discussion:
The paper reads like a device specification, with a user study to support that specification.

Comment: Josh

Thursday, January 21, 2010

Noise tolerant selection by gaze-controlled pan and zoom in 3D

Authors:
Dan Witzner Hansen
Henrik H. T. Skovsgaard
John Paulin Hansen
Emilie Mollenbach

Summary:
StarGazer - a circular keyboard with a gaze-typing process. Three concentric circles are used as a cursor showing the point of interest and the direction of gaze, with auditory feedback. A zoom-and-pan technique is used to navigate the keyboard. Saccadic movements - fast eye movements used to get an overview of the selectable objects. Zoom - gazing at a particular area makes it possible to disregard the areas of least interest. Pan - horizontal and vertical translation at the same zoom scale, allowing smooth-pursuit movements to seek the object of intent. Dwell-time activation - an object is selected if the user stares at it for a certain time. StarGazer was tested on small, medium, and large displays and evaluated on speed (words per minute), error rate (number of backspaces / number of characters), and remaining errors (number of manipulations required to produce the target string). WPM was significantly higher for the large and medium displays than for the small display. The remaining errors stayed low, which indicates that the navigation scheme was easy to understand. Users could cope with latency up to 200 ms, and users with 5 minutes of training could type faster.
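A minimal sketch (my own, with an assumed 500 ms dwell time rather than StarGazer's actual settings) of dwell-time activation: a target is selected once the gaze has rested on it continuously for the dwell duration.

class DwellSelector:
    def __init__(self, dwell_time_s=0.5):
        self.dwell_time_s = dwell_time_s
        self.current_target = None
        self.dwell_start = None

    def update(self, gazed_target, timestamp_s):
        # Call once per gaze sample. Returns the target to select, or None.
        if gazed_target != self.current_target:
            self.current_target = gazed_target    # gaze moved to a new object
            self.dwell_start = timestamp_s
            return None
        if gazed_target is not None and timestamp_s - self.dwell_start >= self.dwell_time_s:
            self.dwell_start = timestamp_s        # reset to avoid repeated selections
            return gazed_target
        return None

selector = DwellSelector(dwell_time_s=0.5)
for t, target in [(0.0, "A"), (0.2, "A"), (0.4, "B"), (0.7, "B"), (0.95, "B")]:
    hit = selector.update(target, t)
    if hit:
        print(f"selected {hit} at t={t}")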

Discussion:
Clustered objects on the screen slow down the speed of selection. It would be interesting to see whether restricting the number of letters on the keyboard based on a dictionary removes clutter and increases the selection speed.
Can there be fatigue while gazing? Would it affect the speed of selection?

Comments:

Wednesday, January 20, 2010

Distant freehand pointing and clicking on very large, high resolution displays

Authors:
Daniel Vogel
Ravin Balakrishnan

Summary:
With a large display, we cannot assume a fixed spatial relationship between the user and the screen. Desirable design characteristics include pointing accuracy, pointer acquisition speed, pointing and selection speed, comfortable use, and smooth transitions across interaction distances. Two clicking techniques were evaluated: the index-finger "AirTap" and the thumb "in and out" gesture. The AirTap technique is similar to the index-finger movement of a mouse click. Two challenges in AirTap are determining the up and down positions of the click and handling the different clicking styles of different users. The absolute position of the finger cannot be used to determine the start and end of click events, since the finger's start position is not fixed; so, in addition to the index finger's position, its velocity and acceleration are used to detect click events. A simple calibration distinguishes clicks from involuntary finger movements and reduces the effect of individual styles: 5 seconds of finger movement is recorded, the principal direction is calculated, and the distance moved within 200 ms is compared against threshold distances to detect click events. Visual and auditory feedback replace the lost kinesthetic feedback of a button click.
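A minimal sketch (my reading of the calibration and thresholding steps above, not the authors' implementation) of AirTap "down" detection: project recent finger positions onto the calibrated principal click direction and report a click when the finger advances more than a threshold distance along that axis within a 200 ms window; the threshold value is an assumption.

import numpy as np

def principal_click_axis(calibration_positions):
    # Estimate the dominant movement direction from the 5-second calibration
    # recording (sign may need flipping so it points in the tap direction).
    centered = np.asarray(calibration_positions, float) - np.mean(calibration_positions, axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0]

def detect_airtap(positions, timestamps, click_axis,
                  window_s=0.2, down_threshold_m=0.02):
    # positions: (N, 3) recent finger positions; timestamps: (N,) seconds;
    # click_axis: unit vector pointing in the tap ("down") direction.
    positions = np.asarray(positions, dtype=float)
    timestamps = np.asarray(timestamps, dtype=float)
    recent = timestamps >= timestamps[-1] - window_s
    along_axis = positions[recent] @ np.asarray(click_axis, dtype=float)
    return (along_axis[-1] - along_axis.min()) > down_threshold_m

click_axis = np.array([0.0, 0.0, -1.0])        # assumed: taps move toward -z
positions = [[0, 0, 0.00], [0, 0, -0.005], [0, 0, -0.025], [0, 0, -0.03]]
timestamps = [0.00, 0.05, 0.10, 0.15]
print(detect_airtap(positions, timestamps, click_axis))   # True: 3 cm advance in 150 ms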
ThumbTrigger has the advantage of kinesthetic feedback (a click happens when the thumb touches the side of the palm), but this proved difficult for users.
Pointing techniques - raycasting with absolute pointing, relative pointing with clutching, and a hybrid of the two. Finger raycasting points at objects on the screen along the index finger's pointing direction; its disadvantage is jittery cursor movement, with the jitter caused by fatigue. Using the palm as the pointer, Kalman filters, and dynamic recursive low-pass filters are some techniques used to reduce the jitter. In the relative pointing technique, the safe hand posture is used to point and the clutching gesture is used to select an object. In the hybrid technique, the index finger points at an approximate area (absolute pointing, circle cursor), then the palm moves the cursor relatively to the correct position (cross cursor), and the clutching gesture is then used to select objects.
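A minimal sketch (assumed parameters, not the paper's filter) of a dynamic recursive low-pass filter of the kind mentioned above: heavy smoothing when the hand is nearly still, to hide jitter, and light smoothing when it moves fast, to avoid lag.

class DynamicLowPass:
    def __init__(self, min_alpha=0.1, max_alpha=0.9, speed_for_max=1.0):
        self.min_alpha = min_alpha          # smoothing when nearly still
        self.max_alpha = max_alpha          # responsiveness when moving fast
        self.speed_for_max = speed_for_max  # speed (m/s) at which smoothing is lightest
        self.prev = None

    def filter(self, raw_xy, speed_m_s):
        if self.prev is None:
            self.prev = raw_xy
            return raw_xy
        t = min(speed_m_s / self.speed_for_max, 1.0)
        alpha = self.min_alpha + t * (self.max_alpha - self.min_alpha)
        smoothed = tuple(alpha * r + (1 - alpha) * p
                         for r, p in zip(raw_xy, self.prev))
        self.prev = smoothed
        return smoothed

lp = DynamicLowPass()
print(lp.filter((0.50, 0.50), speed_m_s=0.0))
print(lp.filter((0.51, 0.49), speed_m_s=0.02))   # slow: stays close to previous
print(lp.filter((0.70, 0.40), speed_m_s=0.80))   # fast: follows the hand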
Results:
A transitive task and a sequence task were used in the user study to analyze the pointing techniques. With raycasting, the small corrective movements needed to select small objects were difficult, so it took more time than the relative pointing techniques. Higher error rates were also found when selecting large targets with raycasting. Regarding recalibration time in the relative techniques: users performed short, quick recalibrations by clutching, independent of the distance to the object. With RayToRelative, users recalibrated much less often, since recalibration involved a large overhead; the longer recalibration time did not affect the overall selection time of the RayToRelative pointing technique.

Discussion:
Fatigue plays a significant role in this large-display interaction technique. Modeling the systems in which such large-display interactions occur would help in understanding the length of the interactions, and in turn the different actions performed with the interface.
I think the pointing gestures used in RayToRelative are counterintuitive: the index finger is usually used to point at precise regions, while a hand wave and clutch mark approximate regions in a large space. The interface assigns the opposite roles to these gestures.

Comments:

Introduction

2. email : dasarpjonam@gmail.com
3. Academic standing: 2nd-year PhD student in Computer Science.
4. From - India
5. I am taking this class to get an idea of the sight and touch technologies currently available, and to learn how research in this field has enhanced CHI.

6. In 10 years I expect to be working at a reputable research institute, or perhaps in the teaching profession.
7. Next technological advancement - Elimination of mouse.
9. I would like to meet Albert Einstein or Mahatma Gandhi. It would be an opportunity for me to learn the motivation behind their achievements.
10. Rope and Batman: The Dark Knight are my favorite movies.
11. Other areas of interest - philosophy, history, psychology and religion.
I like ants because they are workaholics.