
    Design and Evaluation of Menu Systems for Immersive Virtual Environments

    Interfaces for system control tasks in virtual environments (VEs) have not been extensively studied. This paper focuses on various types of menu systems to be used in such environments. We describe the design of the TULIP menu, a menu system using Pinch Gloves™, and compare it to two common alternatives: floating menus and pen and tablet menus. These three menus were compared in an empirical evaluation. The pen and tablet menu was found to be significantly faster, while users preferred TULIP. Subjective discomfort levels were also higher with the floating menus and pen and tablet menus.
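
    Since the abstract does not detail how TULIP lays out its items, the following Python sketch only illustrates the general idea of pinch-driven menu selection; the finger-to-item layout, the paging gesture, and all names are assumptions rather than the paper's design.

        # Hypothetical sketch of pinch-driven menu selection: a few items are
        # bound to fingers, and a pinky pinch pages through the rest. The
        # layout and paging scheme are assumptions, not the TULIP design.
        class PinchMenu:
            FINGERS = ("index", "middle", "ring")  # one item per finger

            def __init__(self, items):
                self.items = items
                self.page = 0

            def visible(self):
                start = self.page * len(self.FINGERS)
                return self.items[start:start + len(self.FINGERS)]

            def pinch(self, finger):
                """Thumb-to-finger pinch selects an item; pinky pages forward."""
                if finger == "pinky":
                    pages = -(-len(self.items) // len(self.FINGERS))  # ceil
                    self.page = (self.page + 1) % pages
                    return None
                shown = self.visible()
                idx = self.FINGERS.index(finger)
                return shown[idx] if idx < len(shown) else None

        menu = PinchMenu(["Move", "Rotate", "Scale", "Color", "Delete"])
        assert menu.pinch("middle") == "Rotate"
        menu.pinch("pinky")                 # page to the remaining items
        assert menu.pinch("index") == "Color"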

    Brain-Computer Interface: comparison of two control modes to drive a virtual robot

    A Brain-Computer Interface (BCI) is a system that enables communication and control based not on muscular movements but on brain activity. Some of these systems discriminate between different mental tasks, usually matching the number of mental tasks to the number of control commands. Previous research at the University of Málaga (UMA-BCI) has proposed a BCI system to freely control an external device, letting subjects choose among several navigation commands using only one active mental task (versus any other mental activity). Although the navigation paradigm proposed in this system has proved useful for continuous movements, a user who wants to move a medium or large distance must sustain the effort of the motor imagery (MI) task in order to maintain the command. The aim of this work was therefore to test a navigation paradigm based on a brain-switch mode for the 'forward' command: subjects used the mental task to toggle their state on/off, stopping if they were moving forward and vice versa. Initially, twelve healthy and untrained subjects participated in this study, but due to a lack of control in previous sessions, only four subjects took part in the experiment, in which they had to control a virtual robot using two paradigms: one based on continuous mode and another based on switch mode. Preliminary results show that both paradigms can be used to navigate through virtual environments, although with the first one the times needed to complete a path were notably lower. (Funding: Universidad de Málaga, Campus de Excelencia Internacional Andalucía Tech.)
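
    A minimal Python sketch of the difference between the two control modes, assuming an idealized per-step MI detection; the function names and the single-command simplification are assumptions, not the UMA-BCI implementation.

        # Minimal sketch contrasting the two control modes described above.
        # The per-step boolean "MI detected" signal is an idealization of the
        # classifier output, not the UMA-BCI system itself.

        def continuous_mode(mi_detected, moving):
            """Move only while the motor-imagery (MI) task is detected."""
            return mi_detected  # True -> 'forward', False -> 'stop'

        def switch_mode(mi_detected, moving):
            """Brain-switch: each MI detection toggles moving/stopped."""
            return (not moving) if mi_detected else moving

        # One simulated trial: MI is detected at steps 0 and 3 only.
        detections = [True, False, False, True, False]
        moving = False
        for d in detections:
            moving = switch_mode(d, moving)
        # The user moved through steps 0-2 without sustained effort,
        # then toggled to a stop at step 3.
        assert moving is False

    A real brain-switch would also need to debounce the classifier output, since one sustained MI detection spanning several time steps would otherwise toggle the state repeatedly.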

    Considerations in Designing Human-Computer Interfaces for Elderly People

    As computing devices continue to become more heavily integrated into our lives, proper design of human-computer interfaces becomes a more important topic of discussion. Efficient and useful human-computer interfaces need to take into account the abilities of the humans who will use them and adapt to the difficulties different users may face, such as those confronting elderly users. Interfaces that allow user-specific customization, while accounting for the multiple difficulties older users might face, can help the elderly use these newer computing devices properly and, in doing so, possibly achieve a better quality of life through the advanced technological support these devices offer. In this paper, we explore common problems the elderly face when using computing devices and solutions developed for these problems. The difficulties ultimately fall into several categories: cognitive, auditory, haptic, visual, and motor-based. We also present an idea for a new adaptive operating system with advanced customizations that would simplify computing for older users.

    Supporting Data mining of large databases by visual feedback queries

    In this paper, we describe a query system that provides visual relevance feedback when querying large databases. Our goal is to support the process of data mining by representing as many data items as possible on the display. By arranging and coloring the data items as pixels according to their relevance for the query, the user gets a visual impression of the resulting data set. Using an interactive query interface, the user may change the query dynamically and receives immediate feedback through the visual representation of the resulting data set. Furthermore, by using multiple windows for different parts of a complex query, the user gets visual feedback for each part and can therefore more easily understand the overall result. Our system can represent the largest amount of data that can be visualized on current display technology, provides valuable feedback in querying the database, and allows the user to find results that would otherwise remain hidden in the database.
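
    The following is a rough Python sketch of the core idea of relevance-ordered pixel coloring; the distance-based relevance function, row-major layout, and gray-scale shading are assumptions, not necessarily the paper's exact arrangement or color scale.

        # Rough sketch of relevance-ordered pixel coloring: each data item
        # becomes one pixel, placed by relevance rank and shaded by relevance.
        def relevance(item, query):
            # Smaller total distance to the query values -> more relevant.
            return 1.0 / (1.0 + sum(abs(a - q) for a, q in zip(item, query)))

        def pixel_image(items, query, width):
            """Return (x, y, shade) per item, most relevant first."""
            ranked = sorted(items, key=lambda it: -relevance(it, query))
            pixels = []
            for rank, item in enumerate(ranked):
                x, y = rank % width, rank // width          # row-major layout
                shade = int(255 * relevance(item, query))   # bright = relevant
                pixels.append((x, y, shade))
            return pixels

        items = [(1, 2), (5, 5), (1, 1)]
        for x, y, shade in pixel_image(items, query=(1, 1), width=2):
            print(x, y, shade)   # exact match lands first, shaded brightest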

    A game-based approach to the teaching of object-oriented programming languages

    Students often have difficulties when trying to understand the concepts of object-oriented programming (OOP). This paper presents a contribution to the teaching of OOP languages through a game-oriented approach based on interaction with tangible user interfaces (TUIs). A specific type of commercial distributed TUI (Sifteo cubes), in which several small physical devices have sensing, wireless communication and user-directed output capabilities, is applied to the teaching of the C# programming language, since the operation of these devices can be controlled by user programs written in C#. For our experiment, we selected a sample of students with sufficient knowledge of procedural programming and divided it into two groups: the first had a standard introductory C# course, whereas the second had an experimental C# course that included, in addition to the contents of the first, two demonstration programs that illustrated some basic OOP concepts using the TUI features. Both groups then completed two tests: a multiple-choice exam evaluating the acquisition of basic OOP concepts and a C# programming exercise. The analysis of the results indicates that the students who attended the course including the TUI demos showed a higher level of interest (i.e. they felt more motivated) than those who attended the standard introductory C# course, and they also achieved an overall better mark. We therefore conclude that the technological contribution of Sifteo cubes, used as a distributed TUI through which basic OOP concepts are represented in a tangible and visible way, has a positive influence on the learning of the C# language and of those basic concepts.

    Semi-Automated SVG Programming via Direct Manipulation

    Direct manipulation interfaces provide intuitive and interactive features to a broad range of users, but they often exhibit two limitations: the built-in features cannot possibly cover all use cases, and the internal representation of the content is not readily exposed. We believe that if direct manipulation interfaces were to (a) use general-purpose programs as the representation format, and (b) expose those programs to the user, then experts could customize these systems in powerful new ways and non-experts could enjoy some of the benefits of programmable systems. In recent work, we presented a prototype SVG editor called Sketch-n-Sketch that offered a step towards this vision. In that system, the user wrote a program in a general-purpose lambda-calculus to generate a graphic design and could then directly manipulate the output to indirectly change design parameters (i.e. constant literals) in the program in real-time during the manipulation. Unfortunately, the burden of programming the desired relationships rested entirely on the user. In this paper, we design and implement new features for Sketch-n-Sketch that assist in the programming process itself. Like typical direct manipulation systems, our extended Sketch-n-Sketch now provides GUI-based tools for drawing shapes, relating shapes to each other, and grouping shapes together. Unlike typical systems, however, each tool carries out the user's intention by transforming their general-purpose program. This novel, semi-automated programming workflow allows the user to rapidly create high-level, reusable abstractions in the program while at the same time retaining direct manipulation capabilities. In future work, our approach may be extended with more graphic design features or realized for other application domains.

    Comment: In the 29th ACM User Interface Software and Technology Symposium (UIST 2016).
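
    A toy Python illustration of the basic mechanism, in which dragging the output is mapped back to edits of constant literals in the source; the one-line string "program" and the naive substitution step are simplifications, not Sketch-n-Sketch's lambda-calculus or its tracing machinery.

        # Toy illustration of direct manipulation updating constant literals.
        # A real system traces which literals produced which output values;
        # here a regex stands in for that machinery.
        import re

        program = "circle(x=100, y=80, r=25)"

        def run(src):
            """'Evaluate' the program into an output shape (a dict)."""
            return {k: int(v) for k, v in re.findall(r"(\w+)=(\d+)", src)}

        def drag(src, dx, dy):
            """Map a drag on the rendered output back to x/y literal edits."""
            shape = run(src)
            src = re.sub(r"x=\d+", f"x={shape['x'] + dx}", src)
            src = re.sub(r"y=\d+", f"y={shape['y'] + dy}", src)
            return src

        program = drag(program, dx=30, dy=-10)
        print(program)   # circle(x=130, y=70, r=25)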

    Dynamic Composite Data Physicalization Using Wheeled Micro-Robots

    This paper introduces dynamic composite physicalizations, a new class of physical visualizations that use collections of self-propelled objects to represent data. Dynamic composite physicalizations can be used both to give physical form to well-known interactive visualization techniques and to explore new visualizations and interaction paradigms. We first propose a design space characterizing composite physicalizations based on previous work in the fields of Information Visualization and Human-Computer Interaction. We illustrate dynamic composite physicalizations in two scenarios demonstrating potential benefits for collaboration and decision making, as well as new opportunities for physical interaction. We then describe our implementation using wheeled micro-robots capable of locating themselves and sensing user input, before discussing limitations and opportunities for future work.
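
    As a hedged Python sketch of what "self-propelled objects representing data" could look like, the snippet below assigns one wheeled robot per data value to form a physical bar chart; the coordinate scheme and the drive_to call are hypothetical, not the paper's implementation.

        # Hedged sketch: one micro-robot per data value, each driven to a
        # floor position so the collection forms a physical bar chart.
        def bar_chart_targets(values, spacing_cm=5.0, scale_cm=2.0):
            """Return one (x, y) floor target per robot, one per value."""
            return [(i * spacing_cm, v * scale_cm) for i, v in enumerate(values)]

        data = [3, 7, 5, 9]
        for robot_id, (x, y) in enumerate(bar_chart_targets(data)):
            # A real system would command each robot, e.g.:
            # robots[robot_id].drive_to(x, y)   # hypothetical call
            print(f"robot {robot_id} -> ({x:.0f} cm, {y:.0f} cm)")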

    A Customizable Camera-based Human Computer Interaction System Allowing People With Disabilities Autonomous Hands-Free Navigation of Multiple Computing Tasks

    Many people suffer from conditions that lead to deterioration of motor control, making access to the computer using traditional input devices difficult. In particular, they may lose control of hand movement to the extent that the standard mouse cannot be used as a pointing device. Most current alternatives use markers or specialized hardware to track and translate a user's movement to pointer movement. These approaches may be perceived as intrusive, for example, wearable devices. Camera-based assistive systems that use visual tracking of features on the user's body often require cumbersome manual adjustment. This paper introduces an enhanced computer-vision-based strategy in which features, for example on a user's face, viewed through an inexpensive USB camera, are tracked and translated to pointer movement. The main contributions of this paper are (1) enhancing a video-based interface with a mechanism for mapping feature movement to pointer movement, which allows users to navigate to all areas of the screen even with very limited physical movement, and (2) providing a customizable, hierarchical navigation framework for human-computer interaction (HCI). This framework provides effective use of the vision-based interface system for accessing multiple applications in an autonomous setting. Experiments with several users show the effectiveness of the mapping strategy and its usage within the application framework as a practical tool for desktop users with disabilities. (Funding: National Science Foundation grants IIS-0093367, IIS-0329009, and 0202067.)
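
    Contribution (1) is a mapping from small feature movements to full-screen pointer positions; the Python sketch below shows one simple linear-gain version of such a mapping, where the working range, clamping, and screen size are assumptions rather than the paper's mechanism.

        # Hedged sketch of mapping small tracked-feature displacements to
        # full-screen pointer positions via a linear gain. The working range
        # and screen size are assumptions, not the paper's mechanism.
        SCREEN_W, SCREEN_H = 1920, 1080

        def to_pointer(feat_x, feat_y, center, range_px=40):
            """Map the feature's offset from a calibrated rest position to
            the whole screen, so +/- range_px of motion spans the display."""
            nx = (feat_x - center[0]) / range_px   # normalize to [-1, 1]
            ny = (feat_y - center[1]) / range_px
            nx = max(-1.0, min(1.0, nx))           # clamp at screen edges
            ny = max(-1.0, min(1.0, ny))
            return (int((nx + 1) / 2 * (SCREEN_W - 1)),
                    int((ny + 1) / 2 * (SCREEN_H - 1)))

        # A 20 px rightward feature move puts the pointer ~75% across.
        print(to_pointer(340, 250, center=(320, 240)))   # (1439, 674)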