Projects

The Effect of Tiled Display on Performance in Multi-Screen Immersive Virtual Environments

New immersive display systems are emerging, providing new platforms for presenting 3D data and virtual worlds. However, little effort has been spent evaluating these systems or deriving guiding design principles from a human factors point of view. The objective of the proposed work is to compare performance and user interaction across two immersive displays. The goal is to compare low-cost, multi-screen, spatially immersive visualization facilities to more expensive systems. The hypothesis is that low-cost systems present a perceptually equivalent visual experience, despite the image seams introduced by the connecting display screens. Psychophysical experiments will compare the two systems through performance-based human judgments. The knowledge gained from these experiments should make immersive visualization systems available in areas that currently cannot afford, or justify the expense of, such systems. This project is currently funded by the National Science Foundation under the Division of Information and Intelligent Systems (IIS) grants HCC-0917232 and MRI-0521110.




Generating Animal Avatar Animation with Specific Identifiable Traits Based Upon Viewer Perception of Real Animals

Creating believable gait patterns for a wide range of digital characters is a challenge that cannot be met through the application of current methods of key-frame animation or motion capture. A digital creature's performance can be thought of as a combination of specifically defined motion and form; a combination that allows the viewer to comprehend the creature's action and intent. Computer graphics offers a variety of methods for defining motion, including key-frame animation, data-driven action, and rule-based and physically based motion. However, all of these methods can be complex and time-consuming to implement. Essentially, most computer animation methods force the animator to think about motion at a low level of abstraction. To create animation tools that simplify the process of creating expressive motion, we need to allow animators to work at a high level of abstraction. We need to determine the minimal elements of form and motion that visually communicate a maximal amount of information about an actor's identity or intentions. This project is currently funded by the National Science Foundation under the Division of Information and Intelligent Systems (IIS) grant HCC-1016795.
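As a rough illustration of what "low level of abstraction" means here, the sketch below shows hand-authored key-frame interpolation, where every joint angle at every key must be specified explicitly. The pose representation, joint names, and function names are illustrative assumptions, not part of this project's tools.

```python
# Minimal sketch of key-frame interpolation -- the kind of low-level motion
# specification the project aims to move beyond.  Pose representation and
# names here are illustrative only.

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Keyframe:
    time: float                 # seconds
    pose: Dict[str, float]      # joint name -> rotation angle (degrees)


def interpolate_pose(keys: List[Keyframe], t: float) -> Dict[str, float]:
    """Linearly interpolate joint angles between the two surrounding keyframes."""
    keys = sorted(keys, key=lambda k: k.time)
    if t <= keys[0].time:
        return dict(keys[0].pose)
    if t >= keys[-1].time:
        return dict(keys[-1].pose)
    for a, b in zip(keys, keys[1:]):
        if a.time <= t <= b.time:
            w = (t - a.time) / (b.time - a.time)
            return {joint: (1 - w) * a.pose[joint] + w * b.pose[joint]
                    for joint in a.pose}


# Example: a two-key "leg swing" -- every joint angle at every key is authored
# by hand, which is what makes this a low level of abstraction.
keys = [
    Keyframe(0.0, {"hip": 0.0, "knee": 10.0}),
    Keyframe(0.5, {"hip": 35.0, "knee": 60.0}),
]
print(interpolate_pose(keys, 0.25))   # {'hip': 17.5, 'knee': 35.0}
```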




Subtle Gaze Direction

This project is focused on a novel technique that combines eye-tracking with subtle image-space modulation to direct a viewer's gaze about a digital image. We call this paradigm subtle gaze direction (SGD). SGD exploits the fact that our peripheral vision has very poor acuity compared to our foveal vision. By presenting brief, subtle modulations in the peripheral regions of the field of view, the technique draws the viewer's foveal vision to the modulated region. Additionally, by monitoring saccadic velocity and exploiting the visual phenomenon of saccadic masking, the modulation is automatically terminated before the viewer's foveal vision enters the modulated region. Hence, the viewer is never actually allowed to scrutinize the stimuli that attracted her gaze. The goal of SGD is to direct a viewer's gaze to certain regions of a scene without introducing noticeable changes in the image. Using a simple search task, we compared performance with no modulation, subtle modulation, and obvious modulation, and showed improved performance with subtle gaze direction without affecting the viewer's perception of the image. We then extended the experiment to evaluate performance in the presence of distractors, that is, extra modulations that do not correspond to a target in the image. Experimentation shows that, even in the presence of distractors, SGD yields more accurate results on a simple search task than no modulation at all. These results establish the potential of the method for a wide range of applications, including gaming, perceptually based rendering, navigation in virtual environments, and medical search tasks.
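The control loop behind the technique can be summarized in a short sketch. The eye-tracker and renderer interfaces (get_gaze_sample, modulate, stop_modulation) and the numeric thresholds below are hypothetical stand-ins rather than the project's actual implementation; only the overall logic mirrors the description above: modulate a target region while it remains in the periphery, and terminate the modulation as soon as a high-velocity (saccadic) eye movement is detected, relying on saccadic masking to hide the change.

```python
# Minimal sketch of a subtle-gaze-direction control loop (assumed interfaces).

import math
import time

SACCADE_VELOCITY_THRESHOLD = 130.0   # deg/s; assumed saccade-detection cutoff
PERIPHERY_THRESHOLD = 5.0            # deg of visual angle; beyond this = peripheral


class EyeTracker:
    """Stand-in for a real eye-tracker; returns gaze position in degrees of visual angle."""
    def get_gaze_sample(self):
        return (0.0, 0.0)


class Renderer:
    """Stand-in for the display side; toggles the image-space modulation of a region."""
    def modulate(self, region):
        pass

    def stop_modulation(self, region):
        pass


def angular_distance(a, b):
    """Distance between two gaze points, in degrees of visual angle (assumed units)."""
    return math.hypot(a[0] - b[0], a[1] - b[1])


def direct_gaze(tracker, renderer, target, duration=3.0, dt=1.0 / 120.0):
    """Modulate `target` only while it lies in the periphery and no saccade is under way."""
    prev = tracker.get_gaze_sample()
    elapsed = 0.0
    while elapsed < duration:
        time.sleep(dt)
        gaze = tracker.get_gaze_sample()
        velocity = angular_distance(gaze, prev) / dt   # deg/s

        if velocity > SACCADE_VELOCITY_THRESHOLD:
            # Saccade detected: terminate modulation before the eyes land,
            # relying on saccadic masking to hide the change.
            renderer.stop_modulation(target)
        elif angular_distance(gaze, target) > PERIPHERY_THRESHOLD:
            # Target is still peripheral: apply a brief, subtle modulation there.
            renderer.modulate(target)
        else:
            # Foveal vision has reached the target: never modulate it directly.
            renderer.stop_modulation(target)

        prev = gaze
        elapsed += dt


# Example (with the stand-ins): run a short modulation episode on a peripheral target.
direct_gaze(EyeTracker(), Renderer(), target=(10.0, 4.0), duration=0.05)
```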