
    Embodied geosensification: models, taxonomies and applications for engaging the body in immersive analytics of geospatial data

    This thesis examines how we can use immersive multisensory displays and body-focused interaction technologies to analyze geospatial data. It merges relevant aspects from an array of interdisciplinary research areas, from cartography to the cognitive sciences, to form three taxonomies that describe the senses, data representations, and interactions made possible by these technologies. These taxonomies are then integrated into an overarching design model for such "Embodied Geosensifications". This model provides guidance for system specification and is validated with practical examples.

    Interactive mixed reality media with real time 3D human capture

    Master's thesis (Master of Engineering).

    Spatial integration in computer-augmented realities

    In contrast to virtual reality, which immerses the user in a wholly computer-generated perceptual environment, augmented reality systems superimpose virtual entities on the user's view of the real world. This concept promises to enable new applications in a wide range of fields, but there are some challenging issues to be resolved. One issue relates to achieving accurate registration of virtual and real worlds. Accurate spatial registration is required not only with respect to lateral positioning, but also in depth. A limiting problem with existing optical see-through displays, typically used for augmenting reality, is that they are incapable of displaying a full range of depth cues. Most significantly, they are unable to occlude the real background and hence cannot produce interposition depth cueing. Neither are they able to modify the real-world view in the ways required to produce convincing common-illumination effects such as virtual shadows across real surfaces. Also, at present, there are no wholly satisfactory ways of determining suitable common-illumination models with which to compute the real-virtual light interactions necessary for producing such depth cues. This thesis establishes that interposition is essential for appropriate estimation of depth in augmented realities, and that the presence of shadows provides an important refining cue. It also extends the concept of a transparency alpha-channel to allow optical see-through systems to display appropriate depth cues. The generalised theory of the approach is described mathematically, and algorithms are developed to automate generation of display-surface images. Three practical physical display strategies are presented: occlusion using a transmissive mask, selective lighting using digital projection, and selective reflection using digital micromirror devices. With respect to obtaining a common illumination model, all current approaches either require prior knowledge of the light sources illuminating the real scene, or involve inserting some kind of probe into the scene with which to determine real light source position, shape, and intensity. This thesis presents an alternative approach that infers a plausible illumination from a limited view of the scene.
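    The transparency alpha-channel idea above can be sketched in a few lines. This is an illustrative toy, not the thesis's implementation; the function name, threshold, and list representation are assumptions. Given the virtual layer's per-pixel alpha, it derives the binary mask a transmissive panel would display so that real light is blocked wherever a virtual object should occlude the background:

    ```python
    def transmissive_mask(virtual_alpha, threshold=0.5):
        """Binary mask for a transmissive panel in an optical see-through
        display: 0 = block real light (virtual pixel occludes the scene),
        1 = pass real light. `virtual_alpha` is a row-major grid of
        per-pixel virtual coverage values in [0, 1]."""
        return [[0 if a >= threshold else 1 for a in row]
                for row in virtual_alpha]

    # Toy 1x4 scanline: background | partly covered | opaque virtual | background
    alpha = [[0.0, 0.4, 1.0, 0.1]]
    print(transmissive_mask(alpha))  # [[1, 1, 0, 1]]
    ```

    In a real optical see-through pipeline the mask would be registered to the user's viewpoint and placed in the optical path (LCD mask, projector, or micromirror device), but this thresholding step is the core of the interposition cue.
    
    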

    Visual Perception and Cognition in Image-Guided Intervention

    Surgical image visualization and interaction systems can dramatically affect the efficacy and efficiency of surgical training, planning, and interventions. This is even more profound in the case of minimally-invasive surgery, where restricted access to the operative field, in conjunction with a limited field of view, necessitates a visualization medium to provide patient-specific information at any given moment. Unfortunately, little research has been devoted to studying the human factors associated with medical image displays, and the need for robust, intuitive visualization and interaction interfaces has remained largely unfulfilled to this day. Failure to engineer efficient medical solutions and design intuitive visualization interfaces is argued to be one of the major barriers to the meaningful transfer of innovative technology to the operating room. This thesis was, therefore, motivated by the need to study various cognitive and perceptual aspects of human factors in surgical image visualization systems, to increase the efficiency and effectiveness of medical interfaces, and ultimately to improve patient outcomes. To this end, we chose four different minimally-invasive interventions in the realm of surgical training, planning, training for planning, and navigation. The first chapter involves the use of stereoendoscopes to reduce morbidity in endoscopic third ventriculostomy. The results of this study suggest that, compared with conventional endoscopes, the detection of the basilar artery on the surface of the third ventricle can be facilitated with the use of stereoendoscopes, increasing the safety of targeting in third ventriculostomy procedures. In the second chapter, a contour enhancement technique is described to improve preoperative planning of arteriovenous malformation interventions. The proposed method, particularly when combined with stereopsis, is shown to increase the speed and accuracy of understanding the spatial relationship between vascular structures. In the third chapter, an augmented-reality system is proposed to facilitate training for planning brain tumour resection. The results of our user study indicate that the proposed system improves subjects' performance, particularly novices', in formulating the optimal point of entry and surgical path, independent of the sensorimotor tasks performed. In the last chapter, the role of fully-immersive simulation environments on surgeons' non-technical skills in performing the vertebroplasty procedure is investigated. Our results suggest that while training may increase surgeons' technical skills, the introduction of crisis scenarios significantly disturbs their performance, emphasizing the need for realistic simulation environments as part of the training curriculum.

    Designing multi-sensory displays for abstract data

    The rapid increase in available information has led to many attempts to automatically locate patterns in large, abstract, multi-attributed information spaces. These techniques are often called data mining and have met with varying degrees of success. An alternative approach to automatic pattern detection is to keep the user in the exploration loop by developing displays for perceptual data mining. This approach allows a domain expert to search the data for useful relationships and can be effective when automated rules are hard to define. However, designing models of the abstract data and defining appropriate displays are critical tasks in building a useful system. Designing displays of abstract data is especially difficult when multi-sensory interaction is considered. New technology, such as Virtual Environments, enables such multi-sensory interaction. For example, interfaces can be designed that immerse the user in a 3D space and provide visual, auditory and haptic (tactile) feedback. It has been a goal of Virtual Environments to use multi-sensory interaction in an attempt to increase the human-to-computer bandwidth. This approach may assist the user to understand large information spaces and find patterns in them. However, while the motivation is simple enough, actually designing appropriate mappings between the abstract information and the human sensory channels is quite difficult. Designing intuitive multi-sensory displays of abstract data is complex and needs to carefully consider human perceptual capabilities, yet we interact with the real world every day in a multi-sensory way. Metaphors can describe mappings between the natural world and an abstract information space. This thesis develops a division of the multi-sensory design space called the MS-Taxonomy. The MS-Taxonomy provides a concept map of the design space based on temporal, spatial and direct metaphors. The detailed concepts within the taxonomy allow for discussion of low-level design issues. Furthermore, the concepts abstract to higher levels, allowing general design issues to be compared and discussed across the different senses. The MS-Taxonomy provides a categorisation of multi-sensory design options. However, designing effective multi-sensory displays requires more than a thorough understanding of design options. It is also useful to have guidelines to follow, and a process to describe the design steps. This thesis uses the structure of the MS-Taxonomy to develop the MS-Guidelines and the MS-Process. The MS-Guidelines capture design recommendations and the problems associated with different design choices. The MS-Process integrates the MS-Guidelines into a methodology for developing and evaluating multi-sensory displays. A detailed case study is used to validate the MS-Taxonomy, the MS-Guidelines and the MS-Process. The case study explores the design of multi-sensory displays within a domain where users wish to explore abstract data for patterns. This area, called Technical Analysis, involves the interpretation of patterns in stock market data. Following the MS-Process and using the MS-Guidelines, some new multi-sensory displays are designed for pattern detection in stock market data. The outcome from the case study includes some novel haptic-visual and auditory-visual designs that are prototyped and evaluated.
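    As one concrete reading of mapping an abstract attribute onto a sensory channel, the sketch below rescales a stock's closing prices into an audible pitch range so that rising prices are heard as rising tones. This is a hypothetical illustration of the general idea, not one of the thesis's prototype designs; the function name, pitch range, and data are assumptions.

    ```python
    def map_to_pitch(values, low_hz=220.0, high_hz=880.0):
        """Map an abstract data attribute onto an auditory display
        parameter: rescale values linearly into a pitch range (here two
        octaves, A3 to A5), preserving the ordering of the data."""
        lo, hi = min(values), max(values)
        span = (hi - lo) or 1.0  # avoid division by zero on a flat series
        return [low_hz + (v - lo) / span * (high_hz - low_hz) for v in values]

    closes = [101.0, 103.5, 102.0, 106.0]  # hypothetical closing prices
    print([round(f, 1) for f in map_to_pitch(closes)])  # [220.0, 550.0, 352.0, 880.0]
    ```

    A haptic mapping would take the same shape, rescaling the attribute into a force or vibration-amplitude range instead of a frequency range.
    
    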

    Visualisation and dynamic querying of large multivariate data sets

    The legitimacy and effectiveness of the current methods and theories that guide the construction of visualisations are in question, and there is a lack of scientific support for many of these methods. A review of existing visualisation techniques demonstrates some of the innate strengths and weaknesses of the approaches used. By focusing on the more specific task of developing visualisations for large sets of multivariate data, the lack of any kind of guidance in this development process is acknowledged. A prototype visualisation tool based on the well-documented techniques of Parallel Coordinates and Dynamic Queries has been developed taking these findings into account. Incorporating new and novel ideas that address identified weaknesses in current visualisations, this prototype also provides the basis for demonstrating, testing and evaluating these concepts.
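    The dynamic-query half of such a tool can be illustrated independently of any drawing code. The sketch below (function name, attributes, and data are hypothetical, not from the thesis) filters multivariate records with per-axis range constraints, as slider-driven dynamic queries do before the surviving polylines are redrawn across the parallel-coordinates axes:

    ```python
    def dynamic_query(records, ranges):
        """Dynamic-query filter: keep only records whose value on every
        constrained axis falls inside the user's slider range.
        `ranges` maps attribute name -> (lo, hi); axes with no active
        constraint are simply omitted from `ranges`."""
        return [r for r in records
                if all(lo <= r[attr] <= hi for attr, (lo, hi) in ranges.items())]

    cars = [  # hypothetical multivariate records, one polyline each
        {"mpg": 30, "hp": 90,  "weight": 2200},
        {"mpg": 18, "hp": 150, "weight": 3500},
        {"mpg": 25, "hp": 110, "weight": 2800},
    ]
    # Slider settings: 20 <= mpg <= 35 and weight <= 3000
    print(dynamic_query(cars, {"mpg": (20, 35), "weight": (0, 3000)}))
    ```

    Because each slider adjustment only re-evaluates simple range predicates, the filtered subset can be recomputed and redrawn at interactive rates, which is what makes the querying "dynamic".
    
    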

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research, 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
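    The spoke manipulation is easy to make precise. A minimal sketch, assuming fixation at the origin and positions expressed in degrees of visual angle (the function name is mine): each rectangle's centre is converted to polar coordinates, its eccentricity is changed by the shift, and the result is converted back, so the rectangle moves along the imaginary spoke through fixation while its direction from fixation is unchanged.

    ```python
    import math

    def shift_along_spoke(x, y, shift_deg):
        """Move a point radially along the spoke from central fixation
        (the origin) through (x, y): change its eccentricity by
        `shift_deg` while keeping its polar angle fixed."""
        r = math.hypot(x, y)          # eccentricity
        theta = math.atan2(y, x)      # direction of the spoke
        r2 = r + shift_deg
        return (r2 * math.cos(theta), r2 * math.sin(theta))

    # A rectangle centred 4 deg right of fixation, shifted outward by 1 deg:
    print(shift_along_spoke(4.0, 0.0, +1.0))  # (5.0, 0.0)
    ```

    In the experiment the sign of the shift (±1 degree, inward or outward) would be chosen per rectangle, breaking any inter-item grouping while preserving each item's direction from fixation.
    
    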