
    A multi-projector CAVE system with commodity hardware and gesture-based interaction

    Spatially-immersive systems such as CAVEs provide users with surrounding worlds by projecting 3D models on multiple screens around the viewer. Compared to alternative immersive systems such as HMDs, CAVE systems are a powerful tool for collaborative inspection of virtual environments due to better use of peripheral vision, less sensitivity to tracking errors, and higher communication possibilities among users. Unfortunately, traditional CAVE setups require sophisticated equipment including stereo-ready projectors and tracking systems with high acquisition and maintenance costs. In this paper we present the design and construction of a passive-stereo, four-wall CAVE system based on commodity hardware. Our system works with any mix of a wide range of projector models that can be replaced independently at any time, and achieves high resolution and brightness at a minimum cost. The key ingredients of our CAVE are a self-calibration approach that guarantees continuity across the screen, as well as a gesture-based interaction approach based on a clever combination of skeletal data from multiple Kinect sensors.
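    The abstract names a combination of skeletal data from multiple Kinect sensors but does not spell out the fusion rule. A minimal Python sketch of one plausible rule, confidence-weighted averaging of each joint across sensors (the weighting scheme is an assumption, not the paper's published method):

```python
def fuse_joint(observations):
    """Fuse one skeleton joint seen by several Kinect sensors.

    observations: list of ((x, y, z), confidence) pairs, one per sensor.
    Returns the confidence-weighted mean position, or None when no
    sensor currently tracks the joint.
    """
    acc = [0.0, 0.0, 0.0]
    total = 0.0
    for (x, y, z), conf in observations:
        if conf <= 0.0:
            continue  # this sensor has lost the joint; ignore it
        acc[0] += conf * x
        acc[1] += conf * y
        acc[2] += conf * z
        total += conf
    if total == 0.0:
        return None
    return tuple(c / total for c in acc)

# Two sensors agree on a hand joint; a third reports zero confidence.
print(fuse_joint([((0.0, 1.0, 2.0), 0.5),
                  ((0.2, 1.0, 2.0), 0.5),
                  ((9.9, 9.9, 9.9), 0.0)]))
```

    Weighting by per-joint tracking confidence lets a sensor with a poor view of the user contribute less without being discarded outright.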

    Is movement better? Comparing sedentary and motion-based game controls for older adults

    Providing cognitive and physical stimulation for older adults is critical for their well-being. Video games offer the opportunity of engaging seniors, and research has shown a variety of positive effects of motion-based video games for older adults. However, little is known about the suitability of motion-based game controls for older adults and how their use is affected by age-related changes. In this paper, we present a study evaluating sedentary and motion-based game controls with a focus on differences between younger and older adults. Our results show that older adults can apply motion-based game controls efficiently, and that they enjoy motion-based interaction. We present design implications based on our study, and demonstrate how our findings can be applied both to motion-based game design and to general interaction design for older adults.

    Co-Fusion: Real-time Segmentation, Tracking and Fusion of Multiple Objects

    In this paper we introduce Co-Fusion, a dense SLAM system that takes a live stream of RGB-D images as input and segments the scene into different objects (using either motion or semantic cues) while simultaneously tracking and reconstructing their 3D shape in real time. We use a multiple model fitting approach where each object can move independently from the background and still be effectively tracked and its shape fused over time using only the information from pixels associated with that object label. Previous attempts to deal with dynamic scenes have typically considered moving regions as outliers, and consequently do not model their shape or track their motion over time. In contrast, we enable the robot to maintain 3D models for each of the segmented objects and to improve them over time through fusion. As a result, our system can enable a robot to maintain a scene description at the object level which has the potential to allow interactions with its working environment; even in the case of dynamic scenes. Comment: International Conference on Robotics and Automation (ICRA) 2017, http://visual.cs.ucl.ac.uk/pubs/cofusion, https://github.com/martinruenz/co-fusio
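    The multiple-model bookkeeping described above can be sketched in a few lines: every segmented label owns its own model, updated only from pixels carrying that label. This is a toy stand-in (per-object lists of depth samples rather than Co-Fusion's actual dense surface fusion):

```python
def fuse_frame(models, depth, labels):
    """Fuse one labelled RGB-D frame into per-object models.

    models: dict mapping object label -> list of depth samples fused so
            far (a stand-in for a real per-object 3D reconstruction).
    depth, labels: same-shape 2D lists; label 0 is the static background.
    Each pixel contributes only to the model carrying its own label.
    """
    for depth_row, label_row in zip(depth, labels):
        for d, label in zip(depth_row, label_row):
            models.setdefault(label, []).append(d)
    return models

models = {}
fuse_frame(models, [[1.0, 1.2, 2.0]], [[0, 0, 1]])  # background + object 1
fuse_frame(models, [[1.1, 3.0, 2.1]], [[0, 2, 1]])  # a new object 2 appears
print({label: len(samples) for label, samples in models.items()})
```

    Because each model only ever sees its own pixels, a moving object no longer corrupts the background reconstruction, which is the failure mode of treating motion as outliers.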

    Teaching Introductory Programming Concepts through a Gesture-Based Interface

    Computer programming is an integral part of a technology driven society, so there is a tremendous need to teach programming to a wider audience. One of the challenges in meeting this demand for programmers is that most traditional computer programming classes are targeted to university/college students with strong math backgrounds. To expand the computer programming workforce, we need to encourage a wider range of students to learn about programming. The goal of this research is to design and implement a gesture-driven interface to teach computer programming to young and non-traditional students. We designed our user interface based on the feedback from students attending the College of Engineering summer camps at the University of Arkansas. Our system uses the Microsoft Xbox Kinect to capture the movements of new programmers as they use our system. Our software then tracks and interprets student hand movements in order to recognize specific gestures which correspond to different programming constructs, and uses this information to create and execute programs using the Google Blockly visual programming framework. We focus on various gesture recognition algorithms to interpret user data as specific gestures, including template matching, sector quantization, and supervised machine learning clustering algorithms.
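    A hedged sketch of the template-matching approach named in the abstract: classify a hand trajectory by its mean point-to-point distance to stored gesture templates. The gesture vocabulary and the distance measure here are illustrative assumptions, not the thesis's exact algorithm:

```python
import math

def trajectory_distance(path_a, path_b):
    """Mean Euclidean distance between corresponding points.

    Assumes both trajectories were already resampled to equal length."""
    return sum(math.dist(a, b) for a, b in zip(path_a, path_b)) / len(path_a)

def classify(path, templates):
    """templates: dict gesture_name -> stored point list.

    Returns the name of the closest template."""
    return min(templates,
               key=lambda name: trajectory_distance(path, templates[name]))

templates = {
    "swipe_right": [(0, 0), (1, 0), (2, 0), (3, 0)],
    "swipe_up":    [(0, 0), (0, 1), (0, 2), (0, 3)],
}
# A noisy rightward hand motion should match the "swipe_right" template.
print(classify([(0, 0), (1.1, 0.1), (2.0, 0.0), (2.9, 0.0)], templates))
```

    In a real system each recognized gesture name would then be mapped to a Blockly programming construct.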

    Dismount Threat Recognition through Automatic Pose Identification

    The U.S. military has an increased need to rapidly identify nonconventional adversaries. Dismount detection systems are being developed to provide more information on and identify any potential threats. Current work in this area utilizes multispectral imagery to exploit the spectral properties of exposed skin and clothing. These methods are useful in the location and tracking of dismounts, but they do not directly discern a dismount's level of threat. Analyzing the actions that precede hostile events yields information about how the event occurred and uncovers warning signs that are useful in the prediction and prevention of future events. A dismount's posturing, or pose, indicates what he or she is about to do. Pose recognition and identification is a topic of study that can be utilized to discern this threat information. Pose recognition is the process of observing a scene through an imaging device, determining that a dismount is present, identifying the three dimensional (3D) position of the dismount's joints, and evaluating what the current configuration of the joints means. This thesis explores the use of automatic pose recognition to identify threatening poses and postures by means of an artificial neural network. Data are collected utilizing the depth camera and joint estimation software of the Kinect for Xbox 360. A threat determination is made based on the pose identified by the network. Accuracy is measured both by the correct identification of the pose presented to the network, and by proper threat discernment. The final network achieved approximately 81% accuracy for threat determination and 55% accuracy for pose identification with test sets of 26 unique poses. Overall, the high level of threat determination accuracy indicates that automatic pose recognition is a promising means of discerning whether a dismount is threatening or not.
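    The pipeline above, Kinect joint positions in, threat score out, can be illustrated with a tiny hand-written forward pass. The network shape, activation choices, and weights below are invented for illustration and are not taken from the thesis:

```python
import math

def threat_score(joints, w_hidden, w_out):
    """One forward pass: joints -> ReLU hidden layer -> sigmoid score.

    joints: flat list of skeleton joint coordinates.
    Returns a value in (0, 1) read as P(pose is threatening).
    """
    hidden = [max(0.0, sum(w * x for w, x in zip(row, joints)))  # ReLU
              for row in w_hidden]
    z = sum(w * h for w, h in zip(w_out, hidden))
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes to (0, 1)

# Toy network: two inputs, two hidden units, invented weights.
score = threat_score([0.5, -0.2], [[1.0, 0.0], [0.0, 1.0]], [1.0, 1.0])
print(round(score, 3))  # sigmoid(0.5) ~ 0.622
```

    The real classifier would take the full set of tracked joints as input and emit one output per pose class, with threat decided from the identified pose.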

    Integrated Measurement System of Postural Angle and Electromyography Signal for Manual Materials Handling Assessment

    Ergonomics practitioners and engineers require an integrated measurement system that allows them to study the interaction of work posture and muscle effort in manual materials handling (MMH) tasks so that strenuous postures and muscle strain can be avoided. However, far too little attention has been paid to developing an integrated measurement system of work posture and muscle activity for assessing MMH tasks. The aim of this study was to develop and test a prototype of an integrated system for measuring the work posture angles and electromyography (EMG) signals of a worker performing MMH tasks. The Microsoft Visual Studio software, a 3D camera (Microsoft Kinect), Advancer Technologies muscle sensors and a data acquisition device (NI DAQ USB-6000) were used to develop the integrated postural angle and EMG signal measurement system. Additionally, a graphical user interface was created in the system to enable users to assess work posture and muscle effort simultaneously. Based on the testing results, this study concluded that the patterns of the EMG signals depend on the postural angles, which is consistent with the findings of established works. Further study is required to enhance the validity, reliability and usability of the prototype so that it may help ergonomics practitioners and engineers assess work posture and muscle effort in MMH tasks.
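    Two building blocks such a system needs are the postural angle itself, recoverable from three Kinect joint positions, and an amplitude summary of the EMG window recorded at the same moment. The functions below are an illustrative sketch, not the prototype's code:

```python
import math

def joint_angle(a, b, c):
    """Postural angle at joint b (degrees) formed by 3D points a-b-c,
    e.g. the elbow angle from shoulder, elbow and wrist positions."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    norm = (math.sqrt(sum(x * x for x in v1))
            * math.sqrt(sum(x * x for x in v2)))
    return math.degrees(math.acos(dot / norm))

def emg_rms(window):
    """Root-mean-square of an EMG sample window, a common effort proxy."""
    return math.sqrt(sum(s * s for s in window) / len(window))

# A fully extended arm: shoulder, elbow and wrist are collinear.
print(joint_angle((0, 0, 0), (0, -1, 0), (0, -2, 0)))  # 180.0
print(round(emg_rms([3.0, -4.0, 3.0, -4.0]), 2))       # 3.54
```

    Logging the angle and the RMS value against a shared timestamp is what lets posture and muscle effort be assessed simultaneously.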

    Low Cost Open Source Modal Virtual Environment Interfaces Using Full Body Motion Tracking and Hand Gesture Recognition

    Virtual environments provide insightful and meaningful ways to explore data sets through immersive experiences. One of the ways immersion is achieved is through natural interaction methods instead of only a keyboard and mouse. Intuitive tracking systems for natural interfaces suitable for such environments are often expensive. Recently however, devices such as gesture tracking gloves and skeletal tracking systems have emerged in the consumer market. This project integrates gestural interfaces into an open source virtual reality toolkit using consumer grade input devices and generates a set of tools to enable multimodal gestural interface creation. The AnthroTronix AcceleGlove is used to augment body tracking data from a Microsoft Kinect with fine grained hand gesture data. The tools are found to be useful as a sample gestural interface is implemented using them. The project concludes by suggesting studies targeting gestural interfaces using such devices as well as other areas for further research
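    A minimal sketch of how such multimodal input might be fused: a command is chosen from the pair (coarse body posture from the Kinect, fine hand gesture from the glove). The posture/gesture vocabulary below is invented for illustration:

```python
# Hypothetical command table; names are assumptions, not the project's.
COMMANDS = {
    ("arm_raised", "fist"):  "grab_object",
    ("arm_raised", "open"):  "release_object",
    ("arm_down",   "point"): "select_menu_item",
}

def dispatch(body_posture, hand_gesture):
    """Map a (Kinect posture, glove gesture) pair to a command.

    Unrecognized combinations deliberately do nothing, so stray
    movements cannot trigger actions."""
    return COMMANDS.get((body_posture, hand_gesture), "no_op")

print(dispatch("arm_raised", "fist"))  # grab_object
print(dispatch("arm_down", "fist"))    # no_op
```

    Keying commands on the combination, rather than on either channel alone, is what makes the interface modal: the same hand gesture can mean different things in different body postures.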

    COFFEE: Context Observer For Fast Enthralling Entertainment

    Desktops, laptops, smartphones, tablets, and the Kinect, oh my! With so many devices available to the average consumer, the limitations and pitfalls of each interface are becoming more apparent. Swimming in devices, users often have to stop and think about how to interact with each device to accomplish the task at hand. The goal of this thesis is to minimize user cognitive effort in handling multiple devices by creating a context-aware hybrid interface. The context-aware system is explored through the hybridization of gesture and touch interfaces using a multi-touch coffee table and the next-generation Microsoft Kinect. Coupling gesture and touch interfaces creates a novel multimodal interface that can leverage the benefits of both. The hybrid interface is able to exploit the more intuitive and dynamic use of gestures while maintaining the precision of a tactile touch interface. Joining these two interfaces in an intuitive and context-aware way opens up a new avenue for design and innovation.
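    One plausible context rule for such a system (the abstract does not fix one) is to route input by user proximity: touch when the user can reach the table, Kinect gestures otherwise. Both the rule and the 1.2 m threshold below are assumptions for illustration:

```python
REACH_THRESHOLD_M = 1.2  # assumed arm's-reach distance to the table

def choose_mode(user_distance_m, touch_active):
    """Pick which interface the system should listen to right now."""
    if touch_active:
        return "touch"    # a finger already on the table always wins
    if user_distance_m <= REACH_THRESHOLD_M:
        return "touch"    # close enough to reach the surface
    return "gesture"      # too far away: fall back to Kinect gestures

print(choose_mode(0.8, False))  # touch
print(choose_mode(2.5, False))  # gesture
print(choose_mode(2.5, True))   # touch
```

    Resolving the mode automatically from context is what spares the user from stopping to think about which device to address.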