
    Factors influencing visual attention switch in multi-display user interfaces: a survey

    Multi-display User Interfaces (MDUIs) enable people to take advantage of the distinct characteristics of different display categories. For example, combining mobile and large displays within the same system lets users interact with interface elements locally while having a large display space to show data. Although there is a large potential gain in performance and comfort, there is at least one main drawback that can override the benefits of MDUIs: the visual and physical separation between displays requires users to perform visual attention switches between displays. In this paper, we present a survey and analysis of existing data and classifications to identify factors that can affect visual attention switch in MDUIs. Our analysis and taxonomy bring attention to the often-ignored implications of visual attention switch and collect existing evidence to facilitate research and implementation of effective MDUIs.

    From Big Data to Big Displays: High-Performance Visualization at Blue Brain

    Blue Brain has pushed high-performance visualization (HPV) to complement its HPC strategy since its inception in 2007. In 2011, this strategy was accelerated through increased funding and strategic partnerships with other research institutions to develop innovative visualization solutions. We present the key elements of this HPV ecosystem, which integrates C++ visualization applications with novel collaborative display systems. We show how our strategy of transforming visualization engines into services enables a variety of use cases: integration with high-fidelity displays, building service-oriented architectures, linking into web applications, and providing remote services to Python applications. (Comment: ISC 2017 Visualization at Scale workshop)
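    The "visualization engine as a service" idea described above can be sketched minimally: a rendering engine is wrapped behind a network endpoint so that web or Python clients can drive it remotely. The sketch below is an illustrative Python analogue, not Blue Brain's actual C++ stack; the `render` function and the JSON response shape are assumptions.

    ```python
    import json
    import threading
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Hypothetical engine call; a real engine would return rendered image data.
    def render(scene):
        return {"scene": scene, "status": "rendered"}

    class RenderHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Treat the request path as the scene name and return JSON.
            body = json.dumps(render(self.path.lstrip("/"))).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, *args):
            pass  # silence per-request logging

    # Serve on an OS-assigned port in a background thread, then act as a client.
    server = HTTPServer(("127.0.0.1", 0), RenderHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    port = server.server_address[1]
    with urllib.request.urlopen(f"http://127.0.0.1:{port}/cortex") as resp:
        result = json.loads(resp.read())
    server.shutdown()
    print(result["status"])  # prints "rendered"
    ```

    Because the engine sits behind a plain HTTP/JSON boundary, the same service can back a web viewer, a Python script, or a display wall client without recompilation.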

    DIVERSE: a Software Toolkit to Integrate Distributed Simulations with Heterogeneous Virtual Environments

    We present DIVERSE (Device Independent Virtual Environments - Reconfigurable, Scalable, Extensible), a modular collection of complementary software packages that we have developed to facilitate the creation of distributed operator-in-the-loop simulations. In DIVERSE we introduce a novel implementation of remote shared memory (distributed shared memory) that uses Internet Protocol (IP) networks. We also introduce a new method that automatically extends hardware drivers (not in the operating system kernel driver sense) into inter-process and Internet hardware services. Using DIVERSE, a program can display in a CAVE™, ImmersaDesk™, head mounted display (HMD), desktop or laptop without modification. We have developed a method of configuring user programs at run-time by loading dynamic shared objects (DSOs), in contrast to the more common practice of creating interpreted configuration languages. We find that by loading DSOs the development time, complexity and size of DIVERSE and DIVERSE user applications are significantly reduced. Configurations to support different I/O devices, device emulators, visual displays, and any component of a user application, including interaction techniques, can be changed at run-time by loading different sets of DIVERSE DSOs. In addition, interpreted run-time configuration parsers have been implemented using DIVERSE DSOs; new ones can be created as needed. DIVERSE is free software, licensed under the terms of the GNU General Public License (GPL) and the GNU Lesser General Public License (LGPL). We describe the DIVERSE architecture and demonstrate how DIVERSE was used in the development of a specific application, an operator-in-the-loop Navy ship-board crane simulator, which runs unmodified on a desktop computer or in a CAVE with motion-base motion cueing.
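    The run-time configuration mechanism described above — loading compiled plugins instead of parsing an interpreted configuration language — can be illustrated with a small Python analogue using `importlib`. DIVERSE itself loads C++ DSOs via the platform loader; the plugin file name and the `configure` hook below are hypothetical.

    ```python
    import importlib.util
    import os
    import tempfile

    # A hypothetical display plugin, written to disk as if it were a DSO.
    PLUGIN_SOURCE = """\
    def configure(app):
        app["display"] = "desktop"
        app["stereo"] = False
    """

    def load_plugin(path):
        """Load a plugin module from a file path at run time and return it."""
        spec = importlib.util.spec_from_file_location("plugin", path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        return module

    # The application is configured by whatever plugin it is pointed at,
    # with no configuration-language parser involved.
    app = {}
    with tempfile.TemporaryDirectory() as d:
        path = os.path.join(d, "desktop_display.py")
        with open(path, "w") as f:
            f.write(PLUGIN_SOURCE)
        load_plugin(path).configure(app)
    print(app["display"])  # prints "desktop"
    ```

    Swapping `desktop_display` for a hypothetical `cave_display` plugin would retarget the same unmodified program, which mirrors the paper's claim about run-time reconfiguration via DSO sets.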

    A multi-projector CAVE system with commodity hardware and gesture-based interaction

    Spatially-immersive systems such as CAVEs provide users with surrounding worlds by projecting 3D models on multiple screens around the viewer. Compared to alternative immersive systems such as HMDs, CAVE systems are a powerful tool for collaborative inspection of virtual environments due to better use of peripheral vision, less sensitivity to tracking errors, and higher communication possibilities among users. Unfortunately, traditional CAVE setups require sophisticated equipment, including stereo-ready projectors and tracking systems, with high acquisition and maintenance costs. In this paper we present the design and construction of a passive-stereo, four-wall CAVE system based on commodity hardware. Our system works with any mix of a wide range of projector models that can be replaced independently at any time, and achieves high resolution and brightness at a minimum cost. The key ingredients of our CAVE are a self-calibration approach that guarantees continuity across the screens, as well as a gesture-based interaction approach based on a clever combination of skeletal data from multiple Kinect sensors.
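    The abstract does not spell out how skeletal data from the multiple Kinect sensors is combined. One plausible scheme — purely an assumption here, not the paper's documented method — is a confidence-weighted average of each joint's position across sensors, after all sensors have been registered into a common world coordinate frame:

    ```python
    def fuse_joint(observations):
        """Fuse one skeleton joint observed by several sensors.

        observations: list of ((x, y, z), confidence) pairs, one per Kinect,
        already expressed in a shared world coordinate frame.
        Returns the confidence-weighted mean position, or None if no sensor
        tracked the joint.
        """
        total = sum(conf for _, conf in observations)
        if total == 0:
            return None
        return tuple(
            sum(pos[i] * conf for pos, conf in observations) / total
            for i in range(3)
        )

    # Two sensors see the same hand; the sensor with the better view
    # reports higher confidence, so it dominates the fused estimate.
    hand = fuse_joint([((0.0, 1.2, 2.0), 0.9), ((0.2, 1.2, 2.0), 0.3)])
    ```

    Weighting by per-joint tracking confidence lets whichever sensor currently has the clearest view dominate, which helps when users occlude each other from one sensor's viewpoint.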

    Entry and access : how shareability comes about

    Shareability is a design principle that refers to how a system, interface, or device engages a group of collocated, co-present users in shared interactions around the same content (or the same object). Shareability is broken down into a set of components that facilitate or constrain how an interface (or product) is made shareable. Central are the notions of entry points and access points. Entry points invite and entice people into engagement, providing an advance overview, minimal barriers, and a honeypot effect that draws observers into the activity. Access points enable users to join a group's activity, allowing perceptual and manipulative access and fluidity of sharing. We show how these terms can be useful for informing analysis and empirical research.

    Cross-display attention switching in mobile interaction with large displays

    Mobile devices equipped with features such as cameras, network connectivity, and media players are increasingly used for tasks such as web browsing, document reading and photography. While the portability of mobile devices makes them desirable for pervasive access to information, their small screen real estate often restricts the amount of information that can be displayed and manipulated on them. On the other hand, large displays have become commonplace in many outdoor as well as indoor environments. While they provide an efficient way of presenting and disseminating information, they offer little support for digital interactivity or physical accessibility. Researchers argue that mobile phones provide an efficient and portable way of interacting with large displays, and that the latter can overcome the limitations of small mobile screens by providing a larger presentation and interaction space. However, distributing user interface (UI) elements across a mobile device and a large display can cause switching of visual attention, and that may affect task performance. This thesis specifically explores how the switching of visual attention across a handheld mobile device and a vertical large display can affect a single user's task performance during mobile interaction with large displays. It introduces a taxonomy based on the factors associated with the visual arrangement of Multi Display User Interfaces (MDUIs) that can influence visual attention switching during interaction with MDUIs. It presents an empirical analysis of the effects of different distributions of input and output across mobile and large displays on the user's task performance, subjective workload and preference in a multiple-widget selection task and in visual search tasks with maps, texts and photos.
Experimental results show that selecting multiple widgets replicated on the mobile device as well as on the large display is faster than selecting widgets shown only on the large display, despite the cost of initial attention switching in the former. On the other hand, a hybrid UI configuration where the visual output is distributed across the mobile and large displays is the worst, or equivalent to the worst, configuration in all the visual search tasks. A mobile device-controlled large display configuration performs best in the map search task and equal to best (i.e., tied with a mobile-only configuration) in the text- and photo-search tasks.