
    Target Acquisition in Multiscale Electronic Worlds

    Since the advent of graphical user interfaces, electronic information has grown exponentially, whereas the size of screen displays has stayed almost the same. Multiscale interfaces were designed to address this mismatch, allowing users to adjust the scale at which they interact with information objects. Although the technology has progressed quickly, the theory has lagged behind. Multiscale interfaces pose a stimulating theoretical challenge, reformulating the classic target-acquisition problem from the physical world into an infinitely rescalable electronic world. We address this challenge by extending Fitts’ original pointing paradigm: we introduce the scale variable, thus defining a multiscale pointing paradigm. This article reports on our theoretical and empirical results. We show that target-acquisition performance in a zooming interface must obey Fitts’ law, and more specifically, that target-acquisition time must be proportional to the index of difficulty. Moreover, we complement Fitts’ law by accounting for the effect of view size on pointing performance, showing that performance bandwidth is proportional to view size, up to a ceiling effect. The first empirical study shows that Fitts’ law does apply to a zoomable interface for indices of difficulty up to and beyond 30 bits, whereas classical Fitts’ law studies have been confined to the 2–10 bit range. The second study demonstrates a strong interaction between view size and task difficulty for multiscale pointing, and shows a surprisingly low ceiling. We conclude with implications of these findings for the design of multiscale user interfaces.
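
    To make the quantities concrete, here is a minimal Python sketch of the Shannon formulation of Fitts' law that the abstract builds on; the coefficient values are illustrative assumptions, not fitted results from the study.

    ```python
    import math

    def index_of_difficulty(distance: float, width: float) -> float:
        """Shannon formulation of Fitts' index of difficulty (ID), in bits."""
        return math.log2(distance / width + 1)

    def movement_time(distance: float, width: float,
                      a: float = 0.1, b: float = 0.2) -> float:
        """Fitts' law: MT = a + b * ID.

        a (intercept, s) and b (slope, s/bit) are placeholder values;
        real coefficients must be fitted to empirical pointing data.
        """
        return a + b * index_of_difficulty(distance, width)

    # In a multiscale world the distance/width ratio is unbounded, so ID can
    # grow far past the classical 2-10 bit range studied in physical pointing.
    print(index_of_difficulty(2**30, 1.0))   # ~30 bits
    print(movement_time(2**30, 1.0))         # ~6.1 s with placeholder a, b
    ```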

    Factors influencing visual attention switch in multi-display user interfaces: a survey

    Multi-display User Interfaces (MDUIs) enable people to take advantage of the distinct characteristics of different display categories. For example, combining mobile and large displays within the same system enables users to interact with user interface elements locally while simultaneously having a large display space to show data. Although there is a large potential gain in performance and comfort, there is at least one main drawback that can override the benefits of MDUIs: the visual and physical separation between displays requires that users perform visual attention switches between displays. In this paper, we present a survey and analysis of existing data and classifications to identify factors that can affect visual attention switch in MDUIs. Our analysis and taxonomy bring attention to the often-ignored implications of visual attention switch and collect existing evidence to facilitate research and implementation of effective MDUIs.

    Using Wii technology to explore real spaces via virtual environments for people who are blind

    Purpose – Virtual environments (VEs) that represent real spaces (RSs) give people who are blind the opportunity to build a cognitive map in advance that they will be able to use when arriving at the RS. Design – In this research study, Nintendo Wii-based technology was used for exploring VEs via the Wiici application. The Wiimote allows the user to interact with VEs by simulating walking and scanning the space. Findings – Through haptic and auditory feedback, users learned to explore new spaces. We examined the participants' abilities to explore new simple and complex places, construct a cognitive map, and perform orientation tasks in the RS. Originality – To our knowledge, this work presents the first virtual environment for people who are blind that allows participants to scan the environment and thereby construct map-model spatial representations.
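
    As a hedged illustration of the interaction loop described above (not the actual Wiici implementation; the step-detection threshold, class names, and units are assumptions), a Wiimote-style walking simulation might look like this in Python:

    ```python
    import math

    STEP_THRESHOLD = 1.2  # assumed accelerometer magnitude (g) counting as a step
    STEP_LENGTH = 0.5     # assumed virtual step length, in metres

    class Explorer:
        """Maps walking gestures and pointing direction to movement in the VE."""

        def __init__(self):
            self.x = self.y = 0.0
            self.heading = 0.0  # radians, set by the direction the Wiimote points

        def on_sample(self, accel_magnitude: float, yaw: float) -> None:
            """Advance one virtual step when a walking gesture is detected."""
            self.heading = yaw
            if accel_magnitude > STEP_THRESHOLD:
                self.x += STEP_LENGTH * math.cos(self.heading)
                self.y += STEP_LENGTH * math.sin(self.heading)
                self.emit_feedback()

        def emit_feedback(self) -> None:
            # Stand-in for the haptic (rumble) and auditory cues the study used
            # to convey walls, landmarks, and open space while scanning.
            print(f"step -> ({self.x:.1f}, {self.y:.1f})")

    explorer = Explorer()
    for magnitude, yaw in [(1.5, 0.0), (0.8, 0.0), (1.4, math.pi / 2)]:
        explorer.on_sample(magnitude, yaw)
    ```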

    Refining personal and social presence in virtual meetings

    Virtual worlds show promise for conducting meetings and conferences without the need for physical travel. Current experience suggests the major limitation to the more widespread adoption and acceptance of virtual conferences is the failure of existing environments to provide a sense of immersion and engagement, or of ‘being there’. These limitations are largely related to the appearance and control of avatars, and to the absence of means to convey non-verbal cues of facial expression and body language. This paper reports on a study involving the use of a mass-market motion sensor (Kinect™) and the mapping of participant action in the real world to avatar behaviour in the virtual world. This is coupled with full-motion video representation of participants’ faces on their avatars to resolve both identity and facial expression issues. The outcomes of a small-group trial meeting based on this technology show a very positive reaction from participants, and the potential for further exploration of these concepts.
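
    The paper's core mechanism is a real-to-virtual mapping plus a live face texture. A minimal sketch of that per-frame pipeline, with the joint names and pose representation as assumptions rather than the study's actual code, could look like this:

    ```python
    # Map tracked skeleton joints (e.g., from a Kinect) onto avatar bones and
    # attach a live video frame as the avatar's face texture.
    SKELETON_TO_BONE = {
        "head": "avatar_head",
        "hand_left": "avatar_hand_L",
        "hand_right": "avatar_hand_R",
    }

    def update_avatar(avatar_pose: dict, skeleton_frame: dict,
                      face_frame=None) -> dict:
        """Copy real-world joint positions onto the avatar once per frame."""
        for joint, bone in SKELETON_TO_BONE.items():
            if joint in skeleton_frame:
                avatar_pose[bone] = skeleton_frame[joint]  # (x, y, z) in metres
        if face_frame is not None:
            # Full-motion video of the participant's face, textured onto the
            # avatar's head to convey identity and expression.
            avatar_pose["face_texture"] = face_frame
        return avatar_pose

    pose = update_avatar({}, {"head": (0.0, 1.7, 2.1),
                              "hand_right": (0.4, 1.2, 1.9)})
    print(pose)
    ```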

    Tactons: structured tactile messages for non-visual information display

    Tactile displays are now becoming available in a form that can be easily used in a user interface. This paper describes a new form of tactile output. Tactons, or tactile icons, are structured, abstract messages that can be used to communicate information non-visually. A range of different parameters can be used for Tacton construction, including the frequency, amplitude, and duration of a tactile pulse, plus other parameters such as rhythm and location. Tactons have the potential to improve interaction in a range of different areas, particularly where the visual display is overloaded, limited in size, or not available, such as in interfaces for blind people or in mobile and wearable devices. This paper describes Tactons, the parameters used to construct them, and some possible ways to design them. Examples of where Tactons might prove useful in user interfaces are given.
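
    The parameter list above maps naturally onto a small data structure. The following Python sketch encodes one hypothetical Tacton, with field names and units as assumptions rather than anything specified in the paper:

    ```python
    from dataclasses import dataclass

    @dataclass
    class TactilePulse:
        frequency_hz: float  # vibration frequency of the pulse
        amplitude: float     # drive level, 0.0-1.0
        duration_ms: int     # pulse length

    @dataclass
    class Tacton:
        pulses: list         # rhythm: an ordered pattern of pulses
        body_location: str   # where the actuator sits on the body

    # A hypothetical two-pulse "message received" Tacton delivered at the wrist.
    message_received = Tacton(
        pulses=[TactilePulse(250, 0.8, 100), TactilePulse(250, 0.8, 100)],
        body_location="wrist",
    )
    print(message_received)
    ```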

    Space for Two to Think: Large, High-Resolution Displays for Co-located Collaborative Sensemaking

    Large, high-resolution displays carry the potential to enhance single-display groupware collaborative sensemaking for intelligence analysis tasks by providing space for common ground to develop, but it is up to the visual analytics tools to utilize this space effectively. In an exploratory study, we compared two tools (Jigsaw and a document viewer), adapted to support multiple input devices, to observe how the large display space was used in establishing and maintaining common ground during an intelligence analysis scenario using 50 textual documents. We discuss the spatial strategies employed by the pairs of participants, which were largely dependent on tool type (data-centric or function-centric), as well as how different visual analytics tools used collaboratively on large, high-resolution displays impact common ground in both process and solution. Using these findings, we suggest design considerations to enable future co-located collaborative sensemaking tools to take advantage of the benefits of collaborating on large, high-resolution displays.