
    The WADER Environment: Facilitating Systematic Design of Touchless Interactions with Wall-sized Displays

    Poster abstract: Meeting rooms, design studios, and laboratories in industry as well as academia are increasingly adopting ultra-large, wall-sized displays (WSDs). Such adoption is expected only to increase, given the dropping cost of large-display technology and the growing need to visualize large volumes of data. To facilitate interaction and collaboration around WSDs, next-generation interaction modalities such as touchless input have opened up new, unprecedented opportunities. Yet to explore this uncharted design space, there is a lack of controlled experimental environments that support rapid, flexible design iterations and user evaluations of touchless interaction techniques. To address this problem, we propose the Wall Display Experience Research (WADER) environment, a reliable, reusable, and easily modifiable experimental environment that supports user studies on touchless interaction prototypes. The current deployment of WADER leverages off-the-shelf markerless sensors (Kinect™) and the 160″ × 60″, ultra-high-resolution, wall-sized display (15.3 million pixels) available at UITS at IUPUI. By varying design parameters, WADER enables batteries of experiments to be carried out quickly and efficiently, and it evaluates user experience by recording performance metrics. Within one month, we successfully conducted an 18-participant empirical study investigating alternative visual feedback designs for touchless selection and movement tasks. During this study, we iteratively designed and incrementally developed prototypes for different design alternatives and conducted eight empirical experiments. In a more recent RSFG-funded project, HCI researchers are leveraging WADER to explore and evaluate novel interaction techniques that enhance collaboration on WSDs in a context where users sit comfortably at a distance from the display. The establishment of the WADER environment is a significant step toward accelerating the iterative design of touchless user interactions for the next generation of wall-display interfaces.

    Chemostratigraphy - A tool for understanding transport processes at the continental margin off West-Africa

    Continental margins, as complex interfaces between continents and ocean basins, display a variety of gravity-driven depositional environments. Understanding the interaction of external and internal control mechanisms of sediment transport processes in these environments is important for reconstructing their sedimentary history. This study focuses on the geochemical imprints left in the sediment material and its corresponding fluid phase by gravity-driven sediment events and transport processes. High-resolution geochemical investigations of the sediments and their fluids provide a detailed characterization of the material, allowing conclusions about possible changes in the depositional environment and the related processes. The chemical composition of pore water may document recent changes in the sedimentation pattern caused by slide events, and modeling fluid concentration profiles helps estimate the event age. Geochemical fingerprinting of turbidites in a chemostratigraphic approach provides a more precise characterization of sediments and their corresponding sources and helps facilitate the reconstruction of transport pathways.
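    The event-age estimation mentioned above can be illustrated with a toy calculation: a minimal sketch assuming a one-dimensional diffusion model in which a slide event imposes a concentration step at the sediment surface that then relaxes by molecular diffusion. The model form, parameter values, and function names are illustrative assumptions, not taken from the study.

    ```python
    import math

    def step_profile(z, t, d, c_boundary, c_initial):
        """Pore-water concentration at depth z (m), t seconds after an event,
        for diffusivity d (m^2/s), relaxing from c_initial toward c_boundary.
        Classic half-space solution: c = c_b + (c_i - c_b) * erf(z / (2*sqrt(d*t)))."""
        return c_boundary + (c_initial - c_boundary) * math.erf(z / (2.0 * math.sqrt(d * t)))

    def estimate_event_age(z, c_obs, d, c_boundary, c_initial, t_lo=1.0, t_hi=1e14):
        """Invert the profile for the event age by geometric bisection over time.
        Assumes c_obs lies strictly between c_boundary and c_initial."""
        for _ in range(200):
            t_mid = math.sqrt(t_lo * t_hi)
            # Concentration at fixed depth decreases monotonically with time,
            # so a too-high modeled value means the event must be older.
            if step_profile(z, t_mid, d, c_boundary, c_initial) > c_obs:
                t_lo = t_mid
            else:
                t_hi = t_mid
        return math.sqrt(t_lo * t_hi)
    ```

    With a typical pore-water diffusivity near 1e-9 m²/s, a profile measured decades to centuries after a slide still carries a recoverable diffusion signal, which is what makes this dating approach workable.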

    A new method for interacting with multi-window applications on large, high resolution displays

    Physically large display walls can now be constructed using off-the-shelf computer hardware. The high resolution of these displays (e.g., 50 million pixels) means that a large quantity of data can be presented to users, so the displays are well suited to visualization applications. However, current methods of interacting with display walls are somewhat time-consuming. We have analyzed how users solve real visualization problems using three desktop applications (XmdvTool, Iris Explorer, and ArcView), and used a new taxonomy to classify users' actions and illustrate the deficiencies of current display wall interaction methods. Following this, we designed a novel method for interacting with display walls, which aims to let users interact as quickly as when a visualization application is used on a desktop system. Informal feedback gathered from our working prototype shows that interaction is both fast and fluid.

    Asynchronous displays for multi-UV search tasks

    Synchronous video has long been the preferred mode for controlling remote robots, with other modes such as asynchronous control used only when unavoidable, as in interplanetary robotics. We identify two basic problems for controlling multiple robots using synchronous displays: operator overload and information fusion. Synchronous displays from multiple robots can easily overwhelm an operator who must search video for targets. If targets are plentiful, the operator will likely miss targets that enter and leave unattended views while dealing with others that were noticed. The related fusion problem arises because robots' multiple fields of view may overlap, forcing the operator to reconcile different views from different perspectives and form an awareness of the environment by "piecing them together". We have conducted a series of experiments investigating the suitability of asynchronous displays for multi-UV search. Our first experiments involved static panoramas in which operators selected locations at which robots halted and panned their cameras to capture a record of what could be seen from each location. A subsequent experiment investigated the hypothesis that the relative performance of the panoramic display would improve as the number of robots increased, causing greater overload and fusion problems. In the subsequent Image Queue system, we used automated path planning and also automated the selection of imagery for presentation through a greedy selection of non-overlapping views. A fourth set of experiments used the SUAVE display, an asynchronous variant of the picture-in-picture technique for video from multiple UAVs. The panoramic displays, which addressed only the overload problem, led to performance similar to synchronous video, while the Image Queue and SUAVE displays, which also addressed fusion, led to improved performance on a number of measures.
In this paper we review our experiences in designing and testing asynchronous displays and discuss challenges to their use, including tracking dynamic targets. © 2012 by the American Institute of Aeronautics and Astronautics, Inc.
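    The greedy selection of non-overlapping views described for the Image Queue system can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: view footprints are simplified to axis-aligned rectangles, and the utility scores are hypothetical stand-ins for whatever ranking the real system used.

    ```python
    def overlaps(a, b):
        """Axis-aligned rectangle overlap test; rectangles are (x0, y0, x1, y1).
        Edge-touching rectangles are treated as non-overlapping."""
        return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

    def greedy_queue(candidates):
        """candidates: list of (utility, rect) pairs.
        Greedily queue the highest-utility image whose footprint does not
        overlap any image already queued; return the chosen rects in order."""
        chosen = []
        for utility, rect in sorted(candidates, key=lambda c: -c[0]):
            if all(not overlaps(rect, r) for r in chosen):
                chosen.append(rect)
        return chosen
    ```

    The design intent this illustrates is the fusion fix: by presenting only mutually non-overlapping views, the operator never has to reconcile the same scene seen from two perspectives.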