
    Sublimate: State-Changing Virtual and Physical Rendering to Augment Interaction with Shape Displays

    Recent research in 3D user interfaces pushes towards immersive graphics and actuated shape displays. Our work explores the hybrid of these directions, and we introduce sublimation and deposition as metaphors for the transitions between physical and virtual states. We discuss how digital models, handles, and controls can be interacted with as virtual 3D graphics or dynamic physical shapes, and how user interfaces can rapidly and fluidly switch between those representations. To explore this space, we developed two systems that integrate actuated shape displays and augmented reality (AR) for co-located physical shapes and 3D graphics. Our spatial optical see-through display provides a single user with head-tracked stereoscopic augmentation, whereas our handheld devices enable multi-user interaction through video see-through AR. We describe interaction techniques and applications that explore 3D interaction for these new modalities. We conclude by discussing the results from a user study that show how freehand interaction with physical shape displays and co-located graphics can outperform wand-based interaction with virtual 3D graphics. (Supported by the National Science Foundation (U.S.) Graduate Research Fellowship, Grant 1122374.)
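    To make the sublimation/deposition metaphor concrete, here is a minimal sketch (in Python, not the authors' implementation) of how one digital model might switch between a physical state driven on a pin-based shape display and a purely virtual AR state; the shape_display and ar_renderer interfaces are hypothetical placeholders.

        from enum import Enum

        class Representation(Enum):
            PHYSICAL = "physical"   # rendered as actuated pins plus AR graphics
            VIRTUAL = "virtual"     # rendered as co-located AR graphics only

        class HybridModel:
            """Hypothetical state holder for one model on a hybrid shape display."""

            def __init__(self, heightmap):
                self.heightmap = heightmap                  # 2D list of pin heights
                self.representation = Representation.PHYSICAL

            def sublimate(self):
                # physical -> virtual: the shape "evaporates" into graphics
                self.representation = Representation.VIRTUAL

            def deposit(self):
                # virtual -> physical: the shape condenses back onto the pins
                self.representation = Representation.PHYSICAL

            def render(self, shape_display, ar_renderer):
                ar_renderer.draw_heightmap(self.heightmap)  # graphics are always drawn
                if self.representation is Representation.PHYSICAL:
                    shape_display.set_heights(self.heightmap)
                else:
                    shape_display.retract_all_pins()        # flatten the physical surface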

    Barehand Mode Switching in Touch and Mid-Air Interfaces

    Raskin defines a mode as a distinct setting within an interface where the same user input produces results different from those it would produce in other settings. Most interfaces have multiple modes in which input is mapped to different actions, and mode switching is simply the transition from one mode to another. In touch interfaces, the current mode can change how a single touch is interpreted: for example, it could draw a line, pan the canvas, select a shape, or enter a command. In Virtual Reality (VR), a hand-gesture-based 3D modelling application may have different modes for object creation, selection, and transformation; depending on the mode, the movement of the hand is interpreted differently. One of the crucial factors determining the effectiveness of an interface is user productivity, and the mode-switching time of different input techniques, whether in a touch interface or a mid-air interface, affects that productivity. Moreover, when touch and mid-air interfaces such as VR are combined, making informed decisions about mode assignment becomes even more complicated. This thesis provides an empirical investigation to characterize the mode-switching phenomenon in barehand touch-based and mid-air interfaces. It explores the potential of using these input spaces together for a productivity application in VR, and it concludes with a step towards defining and evaluating the multi-faceted mode concept, its characteristics, and its utility when designing user interfaces more generally.
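    As a simple illustration of the mode concept described above, the sketch below (Python; a toy example, not from the thesis) shows how the same touch input is dispatched to different actions depending on the current mode; the canvas methods are hypothetical.

        from enum import Enum, auto

        class Mode(Enum):
            DRAW = auto()
            PAN = auto()
            SELECT = auto()
            COMMAND = auto()

        def handle_touch(mode, point, canvas):
            """The interpretation of one touch depends entirely on the active mode."""
            if mode is Mode.DRAW:
                canvas.add_stroke_point(point)      # draw a line
            elif mode is Mode.PAN:
                canvas.pan_to(point)                # move the canvas
            elif mode is Mode.SELECT:
                canvas.select_shape_at(point)       # pick a shape
            elif mode is Mode.COMMAND:
                canvas.begin_command_stroke(point)  # enter a command gesture

    Switching modes then amounts to reassigning mode before the next input arrives; the thesis studies how much that switch costs with different barehand techniques.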

    The State of the Art of Spatial Interfaces for 3D Visualization

    We survey the state of the art of spatial interfaces for 3D visualization. Interaction techniques are crucial to data visualization processes, and the visualization research community has been calling for more research on interaction for years. Yet research papers focusing on interaction techniques, in particular for 3D visualization purposes, are not always published in visualization venues, sometimes making it challenging to synthesize the latest interaction and visualization results. We therefore introduce a taxonomy of interaction techniques for 3D visualization. The taxonomy is organized along two axes: the primary source of input on the one hand, and the visualization task the techniques support on the other. Surveying the state of the art allows us to highlight specific challenges and missed opportunities for research in 3D visualization. In particular, we call for additional research in: (1) controlling 3D visualization widgets to help scientists better understand their data, (2) 3D interaction techniques for dissemination, which are under-explored yet show great promise for helping museums and science centers in their mission to share recent knowledge, and (3) developing new measures that move beyond traditional time and error metrics for evaluating visualizations that include spatial interaction.
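    The two-axis organization can be pictured as a simple lookup structure; the sketch below (Python) is only an illustration, and the example techniques and axis labels are placeholders rather than the categories used in the survey.

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Technique:
            name: str
            input_source: str        # first axis: where the input comes from
            visualization_task: str  # second axis: the task the technique supports

        TECHNIQUES = [
            Technique("ray-casting selection", "bare hand", "selection"),
            Technique("tangible cutting plane", "tangible prop", "filtering"),
            Technique("head-coupled viewpoint control", "head", "navigation"),
        ]

        def cell(input_source, visualization_task):
            """All techniques falling into one cell of the two-axis taxonomy."""
            return [t for t in TECHNIQUES
                    if t.input_source == input_source
                    and t.visualization_task == visualization_task]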

    Freeform User Interfaces for Graphical Computing

    Report number: Kō 15222; Date of degree conferral: 2000-03-29; Degree type: Doctorate by coursework; Degree: Doctor of Engineering; Degree certificate number: Doctor of Engineering No. 4717; Graduate school and department: Graduate School of Engineering, Department of Information Engineering

    Light on horizontal interactive surfaces: Input space for tabletop computing

    In the last 25 years we have witnessed the rise and growth of interactive tabletop research, both in academic and in industrial settings. The rising demand for digital support of human activities motivated the need to bring computational power to table surfaces. In this article, we review the state of the art of tabletop computing, highlighting core aspects that frame the input space of interactive tabletops: (a) developments in hardware technologies that have caused the proliferation of interactive horizontal surfaces, and (b) issues related to new classes of interaction modalities (multitouch, tangible, and touchless). A classification is presented that aims to give a detailed view of the current development of this research area and to define opportunities and challenges for novel touch- and gesture-based interactions between the human and the surrounding computational environment. © 2014 ACM. This work has been funded by the Integra (Amper Sistemas and CDTI, Spanish Ministry of Science and Innovation) and TIPEx (TIN2010-19859-C03-01) projects and by the Programa de Becas y Ayudas para la Realización de Estudios Oficiales de Máster y Doctorado en la Universidad Carlos III de Madrid, 2010.
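    The three modality classes mentioned above can be sketched as a small classifier over input events; this is a toy Python example with invented fields, not the article's classification procedure.

        from enum import Enum, auto

        class Modality(Enum):
            MULTITOUCH = auto()  # direct finger contact on the surface
            TANGIBLE = auto()    # tracked physical objects placed on the surface
            TOUCHLESS = auto()   # hands or gestures sensed above the surface

        def classify_event(has_fiducial_marker, hover_height_mm):
            """Assign an incoming tabletop input event to one modality class."""
            if has_fiducial_marker:
                return Modality.TANGIBLE
            if hover_height_mm > 0:
                return Modality.TOUCHLESS
            return Modality.MULTITOUCH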

    An Interactive Visual Analytics Framework for Gaze Data from 3D Medical Image Reading

    Doctoral dissertation -- Seoul National University Graduate School: Department of Electrical and Computer Engineering, February 2016. Jinwook Seo. We propose an interactive visual analytics framework for diagnostic gaze data on volumetric medical images. The framework is designed to compare gaze data from multiple readers with effective visualizations, which are tailored for volumetric gaze data with additional contextual information. Gaze pattern comparison is essential to understand how radiologists examine medical images and to identify factors influencing the examination. However, prior work on diagnostic gaze data using medical images acquired from volumetric imaging systems (e.g., computed tomography or magnetic resonance imaging) showed a number of limitations in comparative analysis. During diagnosis, radiologists scroll through a stack of images to build a 3D understanding of organs and lesions, so the resulting gaze patterns contain additional depth information compared to gaze-tracking studies with 2D stimuli. This additional spatial dimension increases the complexity of visually representing gaze data. A recent work proposed a visualization design based on direct volume rendering (DVR) for gaze patterns in volumetric images; however, effective and comprehensive gaze pattern comparison is still challenging due to the lack of interactive visualization tools for comparative gaze analysis. In this dissertation, we first present an effective visual representation and then propose an interactive analytics framework for multiple volumetric gaze datasets. We also take on the challenge of integrating crucial contextual information, such as pupil size and windowing (i.e., adjusting the brightness and contrast of an image), into the analysis process for more in-depth and ecologically valid findings. Among the interactive visualization components, a context-embedded interactive scatterplot (CIS) is specially designed to help users examine abstract gaze data in diverse contexts by embedding medical imaging representations well known to radiologists.
    We also present the results from case studies with chest and abdominal radiologists. The dissertation is organized into five chapters: Introduction, Related Work, GazeVis (visualization of volumetric gaze data, covering gaze field computation, interactions and dynamic queries, and an evaluation with radiologists), GazeDx (an interactive gaze analysis framework with comparative overviews, the CIS, interactive selection and filtering, and case studies), and Conclusion.
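    The gaze field idea mentioned for GazeVis can be read as a duration-weighted 3D density over the image volume; the sketch below (Python, using NumPy and SciPy) is a toy reconstruction under that assumption, not the dissertation's code.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def gaze_field(fixations, volume_shape, sigma=2.0):
            """Accumulate (x, y, slice, duration) fixations into a smoothed 3D gaze field."""
            field = np.zeros(volume_shape, dtype=np.float32)
            for x, y, z, duration in fixations:
                field[int(z), int(y), int(x)] += duration   # longer fixations weigh more
            # Gaussian smoothing spreads attention to neighbouring voxels,
            # analogous to the Gaussian blur control mentioned in the GazeVis chapter.
            return gaussian_filter(field, sigma=sigma)

        # Example: two fixations on a small 16 x 16 x 16 volume
        volume = gaze_field([(4, 5, 3, 0.8), (10, 9, 7, 1.2)], (16, 16, 16))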

    Authoring Tools for Augmented Reality Scenario Based Training Experiences

    Augmented Reality's (AR) scope and capabilities have grown considerably in the last few years. AR applications can be run across devices such as phones, wearables, and head-mounted displays (HMDs). Increasing research and commercial efforts in HMD capabilities allow end users to map a 3D environment and interact with virtual objects that respond to the physical aspects of the scene. Within this context, AR is an ideal format for in-situ training scenarios. However, building such AR scenarios requires proficiency in game-engine development environments and programming expertise. These difficulties can make it challenging for domain experts to create training content in AR. To address this problem, this thesis presents strategies and guidelines for building authoring tools that generate scenario-based training (SBT) experiences in AR. The authoring tools were built leveraging concepts from the 3D user interfaces and interaction techniques literature. We found, from early research in the field and our own experimentation, that scenario authoring and object behavior authoring are the substantial aspects an author needs to create a training experience. This work also presents a technique for authoring object component behaviors that achieved high usability scores, followed by an analysis of the different aspects of authoring object component behaviors across AR, VR, and Desktop. User studies were run to evaluate the authoring strategies, and the results provide insights into future directions for building AR/VR immersive authoring tools. Finally, we discuss how this knowledge can influence the development, guidelines, and strategies toward a more compelling set of tools for authoring augmented reality SBT experiences.
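    One common way to represent authored object behaviors is a trigger-action model; the sketch below (Python) illustrates that general idea as an assumption, and its names and triggers are hypothetical rather than the thesis's actual authoring-tool API.

        from dataclasses import dataclass, field
        from typing import Callable, List

        @dataclass
        class Behavior:
            """One authored behavior: when `trigger` fires, run the listed actions."""
            trigger: str                                      # e.g. "on_grab", "on_proximity"
            actions: List[str] = field(default_factory=list)  # e.g. ["highlight", "play_audio"]

        @dataclass
        class SceneObject:
            name: str
            behaviors: List[Behavior] = field(default_factory=list)

            def fire(self, trigger: str, run_action: Callable[[str], None]) -> None:
                """Dispatch every action authored for the given trigger."""
                for behavior in self.behaviors:
                    if behavior.trigger == trigger:
                        for action in behavior.actions:
                            run_action(action)

        # A domain expert attaches a behavior to a valve in a maintenance training scene.
        valve = SceneObject("valve", [Behavior("on_grab", ["highlight", "show_step_text"])])
        valve.fire("on_grab", print)   # stand-in for the engine's action runner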

    Designing for Effective Freehand Gestural Interaction
