    An Introduction to 3D User Interface Design

    3D user interface design is a critical component of any virtual environment (VE) application. In this paper, we present a broad overview of three-dimensional (3D) interaction and user interfaces. We discuss the effect of common VE hardware devices on user interaction, as well as interaction techniques for generic 3D tasks and the use of traditional two-dimensional interaction styles in 3D environments. We divide most user interaction tasks into three categories: navigation, selection/manipulation, and system control. Throughout the paper, our focus is on presenting not only the available techniques but also practical guidelines for 3D interaction design and widely held myths. Finally, we briefly discuss two approaches to 3D interaction design and some example applications with complex 3D interaction requirements. We also present an annotated online bibliography as a reference companion to this article.
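    One concrete example of a technique in the selection/manipulation category is ray-casting, in which a ray extending from the user's hand or pointer is intersected with scene objects and the nearest hit is selected. The sketch below is our own minimal illustration of the idea, not code from the paper; spheres stand in for arbitrary scene objects.

    import numpy as np

    def ray_cast_select(origin, direction, spheres):
        """Return the index of the nearest sphere hit by the ray, or None."""
        direction = direction / np.linalg.norm(direction)
        best, best_t = None, np.inf
        for i, (center, radius) in enumerate(spheres):
            oc = origin - center                       # vector from sphere center to ray origin
            b = oc @ direction
            disc = b * b - (oc @ oc - radius * radius)
            if disc >= 0:
                t = -b - np.sqrt(disc)                 # distance to nearest intersection
                if 0 < t < best_t:
                    best, best_t = i, t
        return best

    spheres = [(np.array([0.0, 0.0, -5.0]), 1.0), (np.array([2.0, 0.0, -8.0]), 1.0)]
    print(ray_cast_select(np.zeros(3), np.array([0.0, 0.0, -1.0]), spheres))  # -> 0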

    Piloting Multimodal Learning Analytics using Mobile Mixed Reality in Health Education

    © 2019 IEEE. Mobile mixed reality has been shown to increase achievement and lower cognitive load within spatial disciplines. However, traditional methods of assessment restrict examiners' ability to holistically assess spatial understanding. Multimodal learning analytics investigates how combinations of data types, such as spatial data and traditional assessment, can be combined to better understand both the learner and the learning environment. This paper explores the pedagogical possibilities of a smartphone-enabled mixed reality multimodal learning analytics case study for health education, focused on learning the anatomy of the heart. The context for this study is the first loop of a design-based research study exploring the acquisition and retention of knowledge by piloting the proposed system with practicing health experts. Outcomes from the pilot study showed engagement with and enthusiasm for the method among the experts, but also revealed problems in the pedagogical method that must be overcome before deployment with learners.

    Workplace Surfaces as Resource for Social Interactions

    Space and spatial arrangements play an important role in our everyday social interactions. The way we use and manage our surrounding space is not coincidental; on the contrary, it reflects the way we think, plan, and act. Within collaborative contexts, its ability to support social activities makes space an important component of human cognition in the post-cognitive era. As technology designers, we can learn a lot by rigorously understanding the role of space for the purpose of designing collaborative systems. In this paper, we describe an ethnographic study on the use of workplace surfaces in design studios. We introduce the idea of artful surfaces. Artful surfaces are full of informative, inspirational and creative artefacts that help designers accomplish their everyday design practices. The way these surfaces are created and used could provide information about how designers work. Using examples from our fieldwork, we show that artful surfaces have both functional and inspirational characteristics. We identify four types of artful surfaces: personal, shared, project-specific and live surfaces. We believe that a greater insight into how these artful surfaces are created and used could lead to better design of novel display technologies to support designers' everyday work.

    Virtual Texture Generated using Elastomeric Conductive Block Copolymer in Wireless Multimodal Haptic Glove.

    Haptic devices are in general more adept at mimicking the bulk properties of materials than at mimicking surface properties. This paper describes a haptic glove capable of producing sensations reminiscent of three types of near-surface properties: hardness, temperature, and roughness. To accomplish this mixed mode of stimulation, three types of haptic actuators were combined: vibrotactile motors, thermoelectric devices, and electrotactile electrodes made from a stretchable conductive polymer synthesized in our laboratory. This polymer consisted of a stretchable polyanion which served as a scaffold for the polymerization of poly(3,4-ethylenedioxythiophene) (PEDOT). The scaffold was synthesized using controlled radical polymerization to afford material of low dispersity, relatively high conductivity (0.1 S cm⁻¹), and low impedance relative to metals. The glove was equipped with flex sensors to make it possible to control a robotic hand and a hand in virtual reality (VR). In psychophysical experiments, human participants were able to discern combinations of electrotactile, vibrotactile, and thermal stimulation in VR. Participants trained to associate these sensations with roughness, hardness, and temperature had an overall accuracy of 98%, while untrained participants had an accuracy of 85%. Sensations could similarly be conveyed using a robotic hand equipped with sensors for pressure and temperature.
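    As a rough illustration of the mixed-mode stimulation described above, the sketch below maps three virtual near-surface properties onto the three actuator channels. All names, ranges, and mappings here are hypothetical; the paper's actual control scheme may differ.

    from dataclasses import dataclass

    @dataclass
    class VirtualMaterial:
        hardness: float       # 0..1, rendered by the vibrotactile motors
        temperature_c: float  # target temperature for the thermoelectric device
        roughness: float      # 0..1, rendered by the electrotactile electrodes

    def actuator_commands(m: VirtualMaterial) -> dict:
        """Convert virtual material properties into per-channel drive settings."""
        return {
            "vibrotactile_amplitude": min(1.0, max(0.0, m.hardness)),
            "thermoelectric_setpoint_c": min(40.0, max(10.0, m.temperature_c)),  # clamped to a safe skin range
            "electrotactile_rate_hz": 5.0 + 95.0 * min(1.0, max(0.0, m.roughness)),
        }

    print(actuator_commands(VirtualMaterial(hardness=0.8, temperature_c=15.0, roughness=0.3)))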

    Human-Computer Interaction for BCI Games: Usability and User Experience

    Brain-computer interfaces (BCI) still come with many issues, such as delays, poor recognition accuracy, long training times, and cumbersome hardware. Gamers are a large potential target group for this new interaction modality, but why would healthy subjects want to use it? BCI provides a combination of information and features that no other input modality can offer. But for general acceptance of this technology, usability and user experience will need to be taken into account when designing such systems. This paper discusses the consequences of applying knowledge from Human-Computer Interaction (HCI) to the design of BCI for games. The integration of HCI with BCI is illustrated by research examples and showcases, intended to take this promising technology out of the lab. Future research needs to move beyond feasibility tests to prove that BCI is also applicable in realistic, real-world settings.

    Emotion Detection Using Noninvasive Low Cost Sensors

    Emotion recognition from biometrics is relevant to a wide range of application domains, including healthcare. Existing approaches usually adopt multi-electrode sensors that can be expensive or uncomfortable to use in real-life situations. In this study, we investigate whether we can reliably recognize high vs. low emotional valence and arousal by relying on noninvasive, low-cost EEG, EMG, and GSR sensors. We report the results of an empirical study involving 19 subjects. We achieve state-of-the-art classification performance for both valence and arousal even in a cross-subject classification setting, which eliminates the need for individual training and tuning of classification models. (To appear in Proceedings of ACII 2017, the Seventh International Conference on Affective Computing and Intelligent Interaction, San Antonio, TX, USA, Oct. 23-26, 2017.)
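    A minimal sketch of the cross-subject evaluation setting described above, assuming scikit-learn: leave-one-subject-out cross-validation guarantees the classifier is never trained and tested on the same person. Feature extraction from the EEG, EMG, and GSR signals is abstracted into X with synthetic data; the paper's actual features and classifier may differ.

    import numpy as np
    from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(19 * 40, 12))     # 19 subjects x 40 trials, 12 features (synthetic)
    y = rng.integers(0, 2, size=19 * 40)   # high vs. low valence, binary labels
    groups = np.repeat(np.arange(19), 40)  # subject ID for each trial

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
    print(f"cross-subject accuracy: {scores.mean():.2f}")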

    Empowering and assisting natural human mobility: The Simbiosis walker

    This paper presents the complete development of the Simbiosis Smart Walker. The device is equipped with a set of sensor subsystems to acquire user-machine interaction forces and the temporal evolution of the user's feet during gait. The authors present an adaptive filtering technique used for the identification and separation of the different components found in the human-machine interaction forces. This technique allowed us to isolate the components related to the navigational commands and to develop a fuzzy logic controller to guide the device. The Smart Walker was clinically validated at the Spinal Cord Injury Hospital of Toledo, Spain, showing great acceptability among spinal cord injury patients and clinical staff.
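    The adaptive filtering step can be pictured as classic adaptive noise cancellation: the handlebar force mixes a slow navigational command with a cadence-locked oscillation from gait, and a reference correlated with gait (e.g., derived from the foot sensors) lets an LMS filter cancel the gait component so the command remains. The sketch below is our own illustration of that idea, not the paper's exact filter.

    import numpy as np

    fs = 100
    t = np.arange(0, 10, 1 / fs)                    # 10 s at 100 Hz
    command = 5.0 * np.tanh(0.5 * (t - 5))          # slow navigational command (N)
    gait = 2.0 * np.sin(2 * np.pi * 1.8 * t)        # ~1.8 Hz gait-induced oscillation
    force = command + gait                          # measured interaction force
    reference = np.sin(2 * np.pi * 1.8 * t + 0.4)   # gait reference from foot sensors

    order, mu = 8, 0.01                             # filter taps and LMS step size
    w = np.zeros(order)
    command_hat = np.zeros_like(force)
    for n in range(order, len(force)):
        x = reference[n - order + 1:n + 1][::-1]    # most recent reference samples
        e = force[n] - w @ x                        # residual = command estimate
        command_hat[n] = e
        w += mu * e * x                             # LMS weight update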

    Supervised cross-modal factor analysis for multiple modal data classification

    In this paper, we study the problem of learning from multimodal data for the purpose of document classification. In this problem, each document is composed of two different modalities of data, i.e., an image and a text. Cross-modal factor analysis (CFA) has been proposed to project the two modalities into a shared data space, so that the classification of an image or a text can be performed directly in this space. A disadvantage of CFA is that it ignores supervision information. In this paper, we improve CFA by incorporating supervision information to represent and classify both the image and text modalities of documents. We project both image and text data into a shared data space by factor analysis, and then train a class label predictor in the shared space to use the class label information. The factor analysis parameters and the predictor parameters are learned jointly by solving one single objective function. With this objective function, we minimize the distance between the projections of the image and text of the same document, and the classification error of the projection measured by a hinge loss function. The objective function is optimized by an alternating optimization strategy in an iterative algorithm. Experiments on two different multimodal document datasets show the advantage of the proposed algorithm over other CFA methods.
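    A hedged sketch of the joint objective described above, written in LaTeX with our own notation (the paper's symbols may differ): $x_i$ and $y_i$ are the image and text features of document $i$, $W_x$ and $W_y$ the factor analysis projections, $z_i$ the shared-space representation, $l_i \in \{-1, +1\}$ the class label, $w$ a linear predictor, and $C$ a trade-off weight:

    \min_{W_x, W_y, w} \; \sum_{i=1}^{n} \big\| x_i W_x - y_i W_y \big\|_2^2
        + C \sum_{i=1}^{n} \max\big( 0,\; 1 - l_i\, w^\top z_i \big),
    \qquad z_i = \tfrac{1}{2} \big( x_i W_x + y_i W_y \big)^\top .

    Alternating optimization then cycles between updating $(W_x, W_y)$ with $w$ fixed and updating $w$ with the projections fixed, each step decreasing the same objective.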