759 research outputs found

    EyeRIS User's Manual

    Assisted Viewpoint Interaction for 3D Visualization

    Many three-dimensional visualizations are characterized by the use of a mobile viewpoint that offers multiple perspectives on a set of visual information. To control the viewpoint effectively, the viewer must simultaneously manage the cognitive tasks of understanding the layout of the environment and knowing where to look to find relevant information, along with mastering the physical interaction required to position the viewpoint in meaningful locations. Numerous systems attempt to address these problems by catering to two extremes: simplified controls or direct presentation. This research attempts to promote hybrid interfaces that offer a supportive, yet unscripted, exploration of a virtual environment. Attentive navigation is a specific technique designed to actively redirect viewers' attention while accommodating their independence. User evaluation shows that this technique effectively facilitates several visualization tasks, including landmark recognition, survey knowledge acquisition, and search sensitivity. Unfortunately, it also proves to be excessively intrusive, leading viewers to occasionally struggle for control of the viewpoint. Additional design iterations suggest that formalized coordination protocols between the viewer and the automation can mute the shortcomings and enhance the effectiveness of the initial attentive navigation design. The implications of this research generalize to inform the broader requirements for human-automation interaction through the visual channel. Potential applications span a number of fields, including visual representations of abstract information, 3D modeling, virtual environments, and teleoperation experiences.
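
    As a rough, hypothetical illustration of what an attentive-navigation-style viewpoint bias can look like (the function name, weighting, and linear blend below are assumptions, not the technique evaluated in this work), the sketch blends the viewer's freely chosen view direction with the direction toward a point of interest, with a single parameter trading viewer independence against automated redirection.

```python
# Minimal sketch: bias a user-controlled view direction toward a point of
# interest. All names and the blend weight are illustrative assumptions.
import numpy as np

def attentive_view_direction(user_dir, eye_pos, target_pos, guidance=0.3):
    """Blend the viewer's chosen view direction with the direction to a
    target. guidance=0 leaves the viewer in full control; guidance=1 locks
    the view onto the target."""
    user_dir = user_dir / np.linalg.norm(user_dir)
    to_target = target_pos - eye_pos
    to_target = to_target / np.linalg.norm(to_target)
    blended = (1.0 - guidance) * user_dir + guidance * to_target
    return blended / np.linalg.norm(blended)

# Example: the viewer looks along +x while a landmark sits off to the side.
eye = np.array([0.0, 0.0, 0.0])
looking = np.array([1.0, 0.0, 0.0])
landmark = np.array([5.0, 5.0, 0.0])
print(attentive_view_direction(looking, eye, landmark, guidance=0.25))
```

    Raising the guidance weight makes the redirection more insistent, which mirrors the intrusiveness trade-off described in the evaluation.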

    Intelligent camera control for graphical environments

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Program in Media Arts & Sciences, 1994. Includes bibliographical references (leaves 194-207). By Steven Mark Drucker, Ph.D.

    Eye Tracking: A Perceptual Interface for Content Based Image Retrieval

    In this thesis, visual search experiments are devised to explore the feasibility of an eye-gaze-driven search mechanism. The thesis first explores gaze behaviour on images possessing different levels of saliency. Eye behaviour was predominantly attracted to salient locations, but also required frequent reference to non-salient background regions, which indicated that information from scan paths might prove useful for image search. The thesis then specifically investigates the benefits of eye tracking as an image retrieval interface in terms of speed relative to selection by mouse, and in terms of the efficiency of eye tracking mechanisms in the task of retrieving target images. Results are analysed using ANOVA and significant findings are discussed. Results show that eye selection was faster than selection with a computer mouse, and that experience gained during visual tasks carried out using a mouse would benefit users subsequently transferred to an eye tracking system. Results of the image retrieval experiments show that users are able to navigate to a target image within a database, confirming the feasibility of an eye-gaze-driven search mechanism. Additional histogram analysis of the fixations, saccades, and pupil diameters in the human eye movement data revealed a new method of extracting intentions from gaze behaviour for image search, of which the user was not aware, and which promises even quicker search performance. The research has two implications for Content Based Image Retrieval: (i) improvements in query formulation for visual search and (ii) new methods for visual search using attentional weighting. Furthermore, it was demonstrated that users are able to find target images at sufficient speeds, indicating that pre-attentive activity plays a role in visual search. A review of current eye tracking technology, applications, visual perception research, and models of visual attention is also included, and the potential of the technology for commercial exploitation is discussed.
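
    As a hedged aside, the kind of gaze processing such an interface relies on can be sketched with a standard dispersion-threshold fixation detector (I-DT); the thresholds and data layout below are generic assumptions, not the apparatus or parameters used in the thesis.

```python
# Minimal I-DT-style fixation detection over (x, y) gaze samples in pixels.
# Thresholds are illustrative; real settings depend on tracker and display.

def _dispersion(points):
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples, max_dispersion=30.0, min_samples=5):
    """Return (start_index, end_index, centroid) for each detected fixation."""
    fixations = []
    i = 0
    while i + min_samples <= len(samples):
        j = i + min_samples
        if _dispersion(samples[i:j]) <= max_dispersion:
            # Grow the window while the samples stay spatially compact.
            while j < len(samples) and _dispersion(samples[i:j + 1]) <= max_dispersion:
                j += 1
            window = samples[i:j]
            cx = sum(p[0] for p in window) / len(window)
            cy = sum(p[1] for p in window) / len(window)
            fixations.append((i, j - 1, (cx, cy)))
            i = j
        else:
            i += 1
    return fixations

gaze = [(100, 100), (102, 99), (101, 101), (103, 100), (100, 102),
        (400, 300), (401, 299), (402, 301), (400, 300), (399, 302)]
print(detect_fixations(gaze))  # two fixations: one near (101, 100), one near (400, 300)
```

    Fixation centroids, along with the saccade and pupil statistics the thesis mentions, are the quantities a retrieval system could then weight when forming or refining a query.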

    Embodied interaction with visualization and spatial navigation in time-sensitive scenarios

    Paraphrasing the theory of embodied cognition, all aspects of our cognition are determined primarily by contextual information and by the means of physical interaction with data and information. In hybrid human-machine systems involving complex decision making, continuously maintaining a high level of attention, together with a deep understanding of the task performed and its context, is essential. Utilizing embodied interaction to interact with machines has the potential to promote thinking and learning, according to the theory of embodied cognition proposed by Lakoff. Additionally, a hybrid human-machine system utilizing natural and intuitive communication channels (e.g., gestures, speech, and body stances) should afford an array of cognitive benefits outstripping more static forms of interaction (e.g., a computer keyboard). This research proposes such a computational framework based on a Bayesian approach; the framework infers the operator's focus of attention from the operator's physical expressions. Specifically, this work aims to assess the effect of embodied interaction on attention during the solution of complex, time-sensitive, spatial navigational problems. Toward the goal of assessing the operator's level of attention, we present a method linking the operator's interaction utility, inference, and reasoning. The level of attention was inferred through networks coined Bayesian Attentional Networks (BANs). BANs are structures describing cause-effect relationships between the operator's attention, physical actions, and decision-making. The proposed framework also generated a representative BAN, called the Consensus (Majority) Model (CMM); the CMM consists of an iteratively derived graph agreed upon among candidate BANs obtained from experts and from the automatic learning process. Finally, the best combinations of interaction modalities and feedback were determined through particular utility functions. This methodology was applied to a spatial navigational scenario in which the operators interacted with dynamic images through a series of decision-making processes. Real-world experiments were conducted to assess the framework's ability to infer the operator's level of attention. Users were instructed to complete a series of spatial-navigational tasks using an assigned pairing of an interaction modality out of five categories (vision-based gesture, glove-based gesture, speech, feet, or body balance) and a feedback modality out of two (visual-based or auditory-based). Experimental results confirmed that physical expressions are a determining factor in the quality of the solutions to a spatial navigational problem. Moreover, it was found that the combination of foot gestures with visual feedback resulted in the best task performance (p < .001). Results also showed that the embodied interaction-based multimodal interface decreased execution errors in the cyber-physical scenarios (p < .001). Therefore, we conclude that appropriate use of interaction and feedback modalities allows operators to maintain their focus of attention, reduce errors, and enhance task performance in solving decision-making problems.
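
    To make the inference step concrete, here is a hedged toy sketch of Bayes' rule applied to an attention variable given an interaction modality and an observed action pattern; the states, modalities, and probabilities are invented for illustration and are not the structures or conditional tables learned for the BANs in this work.

```python
# Toy posterior over operator attention given a modality and an observation.
# All states and probabilities below are illustrative assumptions.

p_attention = {"high": 0.6, "low": 0.4}  # prior over attention

# P(observed input tempo | attention, modality)
p_action = {
    ("high", "foot_gesture"): {"steady": 0.8, "erratic": 0.2},
    ("low",  "foot_gesture"): {"steady": 0.3, "erratic": 0.7},
    ("high", "speech"):       {"steady": 0.7, "erratic": 0.3},
    ("low",  "speech"):       {"steady": 0.4, "erratic": 0.6},
}

def posterior_attention(modality, observation):
    """Bayes' rule: P(attention | observation, modality), normalized."""
    unnormalized = {
        a: p_attention[a] * p_action[(a, modality)][observation]
        for a in p_attention
    }
    z = sum(unnormalized.values())
    return {a: v / z for a, v in unnormalized.items()}

print(posterior_attention("foot_gesture", "erratic"))
# -> roughly {'high': 0.3, 'low': 0.7}: erratic foot input shifts belief toward low attention.
```

    A full BAN would chain several such conditional tables (physical actions, modality, feedback, decisions) rather than a single likelihood term.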

    A multidisciplinary research approach for experimental applications in road-driver interaction analysis

    This doctoral dissertation represents a cluster of the research activities conducted at the DICAM Department of the University of Bologna during a three-year Ph.D. course. Within the broader research topic of “road safety”, the presented research focuses on the investigation of the interaction between the road and drivers according to human factors principles, supported by the following strategies: 1) the multidisciplinary structure of the research team, covering the academic disciplines of Civil Engineering, Psychology, Neuroscience, and Computer Science Engineering; 2) the development of several experimental real-world driving tests aimed at providing investigators with knowledge and insights into the relation between the driver and the surrounding road environment by focusing on driver behaviour; 3) the use of innovative technologies for the experimental studies, capable of collecting data on both the vehicle and the user: a GPS data recorder for recording the kinematic parameters of the vehicle, an eye tracking device for monitoring the drivers’ visual behaviour, and a neural helmet for the detection of drivers’ cerebral activity (electroencephalography, EEG); 4) the use of mathematical-computational methodologies (deep learning) for the analysis of data from the experimental studies. The outcomes of this work consist of new knowledge on the causal relations between drivers’ behaviour and the road environment to be considered in infrastructure design. In particular, the ground-breaking results are represented by: the reliability and effectiveness of the methodology based on human EEG signals for objectively measuring the driver’s mental workload with respect to different road factors; and the successful approach for extracting latent features from multidimensional driving behaviour data using a deep learning technique, obtaining driving colour maps which represent an immediate visualization with potential impacts on road safety.
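
    As a hedged illustration of how raw EEG can be reduced to a scalar workload indicator (the band choice and ratio below are a common convention in the workload literature, not the specific pipeline of this dissertation), the sketch computes a theta/alpha band-power ratio from a Welch power spectral density.

```python
# Theta/alpha band-power ratio as a generic mental-workload index.
# Bands, channel handling, and sampling rate are illustrative assumptions.
import numpy as np
from scipy.signal import welch

def band_power(freqs, psd, lo, hi):
    mask = (freqs >= lo) & (freqs < hi)
    return np.trapz(psd[mask], freqs[mask])

def workload_index(eeg_channel, fs=256.0):
    """Higher theta (4-8 Hz) relative to alpha (8-13 Hz) power is often
    associated with higher mental workload."""
    freqs, psd = welch(eeg_channel, fs=fs, nperseg=int(fs * 2))
    theta = band_power(freqs, psd, 4.0, 8.0)
    alpha = band_power(freqs, psd, 8.0, 13.0)
    return theta / alpha

# Synthetic example: 10 s of noise with a strong 6 Hz (theta) component.
fs = 256.0
t = np.arange(0, 10, 1.0 / fs)
eeg = np.sin(2 * np.pi * 6.0 * t) + 0.5 * np.random.randn(t.size)
print(workload_index(eeg, fs=fs))
```

    This covers only the EEG channel; the GPS kinematics, visual behaviour, and deep-learning analyses mentioned above are separate data streams with their own processing.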

    De-augmentation of visual augmented reality (Dé-augmentation de la réalité augmentée visuelle)

    We anticipate a future in which people frequently have virtual content displayed in their field of view to augment reality. Situations where this virtual content interferes with users' perception of the physical world will thus be more frequent, with consequences ranging from mere annoyance to serious injuries. We argue for the need to give users agency over virtual augmentations, discussing the concept of de-augmenting augmented reality by selectively removing virtual content from the field of view. De-augmenting lets users target what actually interferes with their perception of the environment while keeping what is of interest. We contribute a framework that captures the different facets of de-augmentation. We discuss what it entails in terms of technical realization and interaction design, and end with three scenarios to illustrate what the user experience could be in a sample of domestic and professional situations.
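
    As a hedged sketch of what one facet of de-augmentation could reduce to in practice (the types, the overlap rule, and the "pinned" exception are assumptions for illustration, not the framework proposed in the paper), virtual overlays that cover a user-protected physical region are filtered out of the field of view while the rest are kept.

```python
# Minimal de-augmentation policy: hide overlays that occlude protected
# physical regions. Types and rules are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float

    def overlaps(self, other: "Rect") -> bool:
        return (self.x < other.x + other.w and other.x < self.x + self.w and
                self.y < other.y + other.h and other.y < self.y + self.h)

@dataclass
class Overlay:
    name: str
    bounds: Rect
    pinned: bool = False  # the user explicitly chose to keep this augmentation

def de_augment(overlays, protected_regions):
    """Return only the overlays that may remain in the field of view."""
    kept = []
    for overlay in overlays:
        blocking = any(overlay.bounds.overlaps(r) for r in protected_regions)
        if overlay.pinned or not blocking:
            kept.append(overlay)
    return kept

stove = Rect(0.4, 0.5, 0.2, 0.2)  # physical region the user wants unobstructed
overlays = [Overlay("recipe_panel", Rect(0.35, 0.45, 0.3, 0.3)),
            Overlay("clock", Rect(0.0, 0.0, 0.1, 0.1))]
print([o.name for o in de_augment(overlays, [stove])])  # ['clock']
```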

    Advanced Knowledge Application in Practice

    The integration and interdependency of the world economy lead towards the creation of a global market that offers more opportunities, but is also more complex and competitive than ever before. Therefore, widespread research activity is necessary if one is to remain successful in the market. This book is the result of research and development activities from a number of researchers worldwide, covering concrete fields of research.