
    Navigation and interaction in a real-scale digital mock-up using natural language and user gesture

    This paper presents a new real-scale 3D system and summarizes first-hand results concerning multi-modal navigation and interaction interfaces. The work is part of the CALLISTO-SARI collaborative project, which aims at constructing an immersive room and developing a set of software tools and navigation/interaction interfaces. Two sets of interfaces are introduced here: 1) interaction devices, and 2) natural language (speech processing) and user gesture. A study of the system using a subjective instrument (Simulator Sickness Questionnaire, SSQ) and objective measurements (Center of Gravity, COG) shows that natural-language and gesture-based interfaces induced less cyber-sickness than device-based interfaces, suggesting that gesture-based interfaces are the more efficient of the two. (Funding: FUI CALLISTO-SARI.)
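    (Aside, not code from the paper: the SSQ comparison above relies on the standard weighted subscale scoring of Kennedy et al. (1993). A minimal Python sketch follows; the raw symptom sums in the example are hypothetical.)

```python
# Illustrative sketch (not from the paper): standard SSQ scoring with the
# Kennedy et al. (1993) conversion weights. Raw subscale sums come from
# 16 symptom items rated 0-3; some items count toward more than one subscale.

def ssq_scores(nausea_raw: float, oculomotor_raw: float,
               disorientation_raw: float) -> dict:
    """Convert raw SSQ subscale sums into weighted scores."""
    return {
        "nausea": nausea_raw * 9.54,
        "oculomotor": oculomotor_raw * 7.58,
        "disorientation": disorientation_raw * 13.92,
        "total": (nausea_raw + oculomotor_raw + disorientation_raw) * 3.74,
    }

# Hypothetical raw sums for a device-based vs. a gesture-based condition.
device = ssq_scores(5, 4, 3)
gesture = ssq_scores(2, 2, 1)
print(device["total"], gesture["total"])  # lower total -> less cyber-sickness
```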

    Computational Modeling and Experimental Research on Touchscreen Gestures, Audio/Speech Interaction, and Driving

    As humans are exposed to rapidly evolving complex systems, there are growing needs for humans and systems to use multiple communication modalities such as auditory, vocal (or speech), gesture, or visual channels; thus, it is important to evaluate multimodal human-machine interactions in multitasking conditions so as to improve human performance and safety. However, traditional methods of evaluating human performance and safety rely on experimental settings using human subjects, which are costly and time-consuming to conduct. To minimize the limitations of traditional usability tests, digital human models are often developed and used; they also help us better understand the underlying human mental processes in order to effectively improve safety and avoid mental overload. In this dissertation research, I combined computational cognitive modeling and experimental methods to study mental processes and identify differences in human performance and workload under various conditions. The computational cognitive models were implemented by extending the Queuing Network-Model Human Processor (QN-MHP) architecture, which enables simulation of human multi-task behaviors and multimodal interactions in human-machine systems. Three experiments were conducted to investigate human behaviors in multimodal and multitasking scenarios, addressing three specific research aims: to understand (1) how humans use finger movements to input information on touchscreen devices (touchscreen gestures), (2) how humans use auditory/vocal signals to interact with machines (audio/speech interaction), and (3) how humans drive vehicles (driving controls). Future research applications of computational modeling and experimental research are also discussed. Scientifically, the results of this dissertation contribute to a better understanding of the nature of touchscreen gestures, audio/speech interaction, and driving controls in human-machine systems, and of whether they benefit or jeopardize human performance and safety in multimodal and concurrent task environments. Moreover, in contrast to previous models of multitasking scenarios that focus mainly on visual processes, this study develops quantitative models of the combined effects of auditory, tactile, and visual factors on multitasking performance. From a practical perspective, the modeling work conducted in this research may help multimodal interface designers minimize the limitations of traditional usability tests and make quick design comparisons, less constrained by time-consuming factors such as developing prototypes and running human subjects. Furthermore, this research may help identify which elements of multimodal and multitasking scenarios increase workload and completion time, which can be used to reduce the number of accidents and injuries caused by distraction. (PhD dissertation, Industrial & Operations Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/143903/1/heejinj_1.pd)
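    (Aside, not the dissertation's model: QN-MHP treats concurrent tasks as customers flowing through serial perceptual, cognitive, and motor servers, so contention for a shared server delays one task. A minimal discrete-event sketch of that queueing idea; all stage names and durations below are hypothetical.)

```python
# Illustrative sketch (not the actual QN-MHP implementation): concurrent
# task pipelines competing for serial processing servers.
import heapq

def simulate(tasks):
    """tasks: {name: [(server, duration), ...]} pipelines.
    Each server handles one stage at a time, first-come first-served.
    Returns the completion time of each task."""
    server_free = {}                                  # server -> earliest free time
    events = [(0.0, name, 0) for name in tasks]       # (ready_time, task, stage index)
    heapq.heapify(events)
    finish = {}
    while events:
        ready, name, i = heapq.heappop(events)
        server, dur = tasks[name][i]
        start = max(ready, server_free.get(server, 0.0))
        server_free[server] = start + dur
        if i + 1 < len(tasks[name]):
            heapq.heappush(events, (start + dur, name, i + 1))
        else:
            finish[name] = start + dur
    return finish

# Hypothetical dual task: driving while answering a speech prompt.
# Both pipelines contend for the shared "cognitive" server.
print(simulate({
    "drive":  [("visual", 0.10), ("cognitive", 0.07), ("motor", 0.12)],
    "speech": [("auditory", 0.15), ("cognitive", 0.07), ("vocal", 0.20)],
}))
```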

    An investigation into alternative human-computer interaction in relation to ergonomics for gesture interface design

    Recent, innovative developments in the field of gesture interfaces as input techniques have the potential to provide a basic, lower-cost, point-and-click function for graphical user interfaces (GUIs). Since these gesture interfaces are not yet widely used, and indeed no tilt-based gesture interface is currently on the market, there is neither an international standard for the testing procedure nor a guideline for their ergonomic design and development; hence, the research area demands more design case studies on a practical basis. The purpose of the research is to investigate the design factors of gesture interfaces for the point-and-click task in the desktop computer environment. The key function of gesture interfaces is to transfer a specific body movement, in particular arm movement, into cursor movement on the two-dimensional graphical user interface (2D GUI) on a real-time basis. The initial literature review identified limitations related to cursor movement behaviour with gesture interfaces. Since cursor movement is the machine output of the gesture interface being designed, a new accuracy measure based on the calculation of cursor movement distance, together with an associated model, was proposed in order to validate continuous cursor movement. Furthermore, a design guideline with detailed design requirements and specifications for tilt-based gesture interfaces was suggested. In order to collect human performance data and cursor movement distances, a graphical measurement platform was designed and validated with an ordinary mouse. Since there are typically two types of gesture interface, the sweep-based and the tilt-based, and no commercial tilt-based gesture interface has yet been developed, a commercial sweep-based gesture interface, namely the P5 Glove, was studied, and the causes and effects of its discrete cursor movement on usability were investigated. According to the proposed design guideline, two versions of the tilt-based gesture interface were designed and validated through an iterative design process. Most of the phenomena and results from the trials undertaken, which are inter-related, were analyzed and discussed. The research has contributed new knowledge through design improvement of tilt-based gesture interfaces and improvement of discrete cursor movement by eliminating manual error compensation. This research reveals a relation between cursor movement behaviour and the adjusted R² for the prediction of movement time across models expanded from Fitts' Law. In such situations, the actual working area and joint ranges are large and appreciably different from those that had been planned. Further studies are suggested. The research was associated with the University Alliance Scheme, technically supported by Freescale Semiconductor Co., U.S.
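    (Aside, not the thesis code: the movement-time models mentioned above extend Fitts' Law, MT = a + b·log2(D/W + 1), and are compared via adjusted R². A minimal sketch of such a fit; the pointing data in the example are hypothetical.)

```python
# Illustrative sketch (not from the thesis): least-squares fit of the
# Shannon form of Fitts' Law and the adjusted R^2 goodness-of-fit measure.
import numpy as np

def fit_fitts(distances, widths, movement_times):
    """Fit MT = a + b * log2(D/W + 1); return (a, b, adjusted R^2)."""
    ids = np.log2(np.asarray(distances) / np.asarray(widths) + 1)  # index of difficulty
    mt = np.asarray(movement_times)
    b, a = np.polyfit(ids, mt, 1)            # slope b, intercept a
    pred = a + b * ids
    ss_res = np.sum((mt - pred) ** 2)
    ss_tot = np.sum((mt - mt.mean()) ** 2)
    r2 = 1 - ss_res / ss_tot
    n, k = len(mt), 1                         # observations, predictors
    adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)
    return a, b, adj_r2

# Hypothetical pointing data: target distances and widths (px), times (s).
print(fit_fitts([100, 200, 400, 800], [40, 40, 20, 20],
                [0.45, 0.55, 0.78, 0.95]))
```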

    Gestures in Machine Interaction

    Unencumbered gesture interaction (VGI) describes the use of unrestricted gestures in machine interaction. The development of such technology will enable users to interact with machines and virtual environments by performing actions like grasping, pinching or waving without the need for peripherals. Advances in image processing and pattern recognition make such interaction viable and, in some applications, more practical than current keyboard, mouse and touch-screen modes of interaction. VGI is emerging as a popular topic in human-computer interaction (HCI), computer vision and gesture research, and is developing into a topic with the potential to significantly impact the future of computer interaction, robot control and gaming. This thesis investigates whether an ergonomic model of VGI can be developed and implemented on consumer devices by considering some of the barriers currently preventing such a model of VGI from being widely adopted. This research aims to address the development of freehand gesture interfaces and an accompanying syntax; without detailed consideration of the evolution of this field, the development of un-ergonomic, inefficient interfaces that place undue strain on users becomes more likely. In the course of this thesis some novel design and methodological assertions are made. The Gestures in Machine Interaction (GiMI) syntax model and the Gesture-Face Layer (GFL), developed in the course of this research, are designed to facilitate ergonomic gesture interaction. GiMI is an interface syntax model designed to enable cursor control, browser navigation commands, and steering control for remote robots or vehicles. Through state-of-the-art image processing that facilitates three-dimensional (3D) recognition of human action, this research investigates how interface syntax can incorporate the broadest range of human actions. By advancing our understanding of ergonomic gesture syntax, this research aims to help future developers evaluate the efficiency of gesture interfaces, lexicons and syntax.
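    (Aside, the GiMI syntax itself is not reproduced here: the sketch below shows one illustrative way an interface layer might resolve recognized poses and motions into cursor, browser, and steering commands. All gesture and command names are hypothetical.)

```python
# Illustrative sketch (not the GiMI model): a minimal gesture-syntax table
# mapping (pose, motion) pairs to interface commands.

COMMANDS = {
    ("pinch", "move"):  "drag_cursor",
    ("open", "move"):   "move_cursor",
    ("wave", "left"):   "browser_back",
    ("wave", "right"):  "browser_forward",
    ("grasp", "twist"): "steer",
}

def interpret(pose: str, motion: str) -> str:
    """Resolve a recognized (pose, motion) pair; ignore unknown input."""
    return COMMANDS.get((pose, motion), "no_op")

print(interpret("wave", "left"))   # -> browser_back
print(interpret("fist", "shake"))  # -> no_op (unrecognized, safely ignored)
```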

    Design & Evaluation of an Accessible Hybrid Violin Platform


    Design of Equipment Rack with TRIZ Method to Reduce Searching Time in Change Over Activity (Case Study: PT. Jans2en Indonesia)

    Janssen is a manufacturing plant that works in furniture assembly. Component shortages often occur, causing work in process (WIP) to build up in the assembly section. In previous studies we analyzed the root causes with FMEA and found that the router section is the constraint of the system. The router section involves many non-value-added activities, such as searching and transportation, caused by messy workplaces and devices that are not put back in the right place, with the result that the time spent on every changeover is higher than it needs to be. Because many components pass through the router section, improvements are needed to minimize changeover time. The 5S method and a new rack designed with the TRIZ method are proposed to fix the work environment, with the expectation of eliminating non-value-added activities and reducing changeover time. Results show that non-value-added activities in the changeover of regular components can be reduced by 41%, eliminating 41.6% of the changeover time, while non-value-added activities in the changeover of new items can be reduced by 36.6%, eliminating 53.3% of the time. Keywords: changeover, kaizen, design, TRIZ method
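    (Aside: a worked example applying the reported time eliminations to a hypothetical baseline; the 60-minute figure is invented for illustration, only the percentages come from the abstract.)

```python
# Illustrative arithmetic: remaining changeover time after the reported
# eliminations, for a hypothetical 60-minute baseline changeover.

def remaining_time(baseline_minutes: float, eliminated_fraction: float) -> float:
    """Time left after removing the given fraction of the baseline."""
    return baseline_minutes * (1 - eliminated_fraction)

baseline = 60.0  # hypothetical changeover time in minutes
print(remaining_time(baseline, 0.416))  # regular components -> 35.04 min
print(remaining_time(baseline, 0.533))  # new items          -> 28.02 min
```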

    Virtual reality for assembly methods prototyping: a review

    Assembly planning and evaluation is an important component of the product design process in which details about how parts of a new product will be put together are formalized. A well designed assembly process should take into account various factors such as optimum assembly time and sequence, tooling and fixture requirements, ergonomics, operator safety, and accessibility, among others. Existing computer-based tools to support virtual assembly either concentrate solely on representation of the geometry of parts and fixtures and evaluation of clearances and tolerances or use simulated human mannequins to approximate human interaction in the assembly process. Virtual reality technology has the potential to support integration of natural human motions into the computer aided assembly planning environment (Ritchie et al. in Proc I MECH E Part B J Eng 213(5):461–474, 1999). This would allow evaluations of an assembler’s ability to manipulate and assemble parts and result in reduced time and cost for product design. This paper provides a review of the research in virtual assembly and categorizes the different approaches. Finally, critical requirements and directions for future research are presented
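    (Aside, not from the review: a minimal example of the kind of clearance evaluation that geometry-centred virtual assembly tools perform, here reduced to an axis-aligned bounding-box separation check. The box coordinates are hypothetical.)

```python
# Illustrative sketch (not from the review): per-axis separation between
# two axis-aligned bounding boxes, a crude stand-in for the clearance and
# tolerance checks performed by geometry-based virtual assembly tools.

def aabb_separation(a_min, a_max, b_min, b_max):
    """Largest per-axis gap between two boxes given as (x, y, z) tuples.
    Positive: separated by at least that much along some axis.
    Negative: the boxes interpenetrate on every axis (a collision)."""
    gaps = [max(bl - ah, al - bh)
            for al, ah, bl, bh in zip(a_min, a_max, b_min, b_max)]
    return max(gaps)

# Hypothetical part and fixture boxes, in millimetres.
part    = ((0, 0, 0), (10, 10, 10))
fixture = ((12, 0, 0), (20, 10, 10))
print(aabb_separation(*part, *fixture))  # 2 mm clearance along x
```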