602 research outputs found

    Contactless Haptic Display Through Magnetic Field Control

    Haptic rendering enables people to touch, perceive, and manipulate virtual objects in a virtual environment. Using six cascaded identical hollow disk electromagnets and a small permanent magnet attached to an operator's finger, this paper proposes and develops an untethered haptic interface based on magnetic field control. The concentric hole through the six cascaded electromagnets provides the workspace, where the 3D position of the permanent magnet is tracked with a Microsoft Kinect sensor. The driving currents of the six electromagnets are calculated in real time to generate the desired magnetic force. Offline data from a finite element analysis (FEA) based simulation determines the relationship between the magnetic force, the driving currents, and the position of the permanent magnet. A set of experiments, including a virtual object recognition experiment, a virtual surface identification experiment, and a user perception evaluation, was conducted to demonstrate the proposed system, with Microsoft HoloLens holographic glasses used for visual rendering. The proposed magnetic haptic display provides an untethered, non-contact interface for natural haptic rendering applications, overcoming the mechanical-linkage constraints of traditional tool-based haptic devices.
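
    The abstract does not give the control law, but a common way to realise real-time current computation from an offline FEA lookup is to assume the force is approximately linear in the coil currents at a fixed magnet position, interpolate a position-dependent actuation matrix from the precomputed data, and solve a least-squares problem for the six currents. The sketch below illustrates that idea; the workspace grid, the contents of the actuation table, and the current limits are assumptions for illustration, not values from the paper.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical offline FEA lookup: actuation[ix, iy, iz] holds the 3x6 matrix A(p)
# mapping the six coil currents to the magnetic force on the finger magnet at
# grid position p. Grid extent, resolution, and current limits are assumed values.
xs = ys = zs = np.linspace(-0.05, 0.05, 11)       # assumed 10 cm cubic workspace
actuation = np.zeros((11, 11, 11, 3, 6))          # would be filled from FEA results
interp_A = RegularGridInterpolator((xs, ys, zs), actuation)

def coil_currents(position, desired_force, i_max=2.0):
    """Least-squares coil currents producing desired_force (N) at position (m)."""
    A = interp_A(np.atleast_2d(position))[0]      # interpolated 3x6 actuation matrix
    currents, *_ = np.linalg.lstsq(A, np.asarray(desired_force), rcond=None)
    return np.clip(currents, -i_max, i_max)       # respect assumed driver limits

# Example: request a 0.1 N downward force at the centre of the workspace.
print(coil_currents([0.0, 0.0, 0.0], [0.0, 0.0, -0.1]))
```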

    Interactive exploration of historic information via gesture recognition

    Developers of interactive exhibits often struggle to find appropriate input devices that enable intuitive control and permit visitors to engage effectively with the content. Recently, motion-sensing input devices such as the Microsoft Kinect or Panasonic D-Imager have become available, enabling gesture-based control of computer systems. These devices are attractive for exhibits because users can interact with their hands and are not required to physically touch any part of the system. In this thesis we investigate techniques that enable the raw data from these types of devices to be used to control an interactive exhibit. Object recognition and tracking techniques are used to analyse the user's hand, from which movement and clicks are processed. To show the effectiveness of the techniques, the gesture system is used to control an interactive system designed to inform the public about iconic buildings in the centre of Norwich, UK. We evaluate two methods of making selections in the test environment. At the time of experimentation the technologies were relatively new to the image processing environment. As a result of the research presented in this thesis, the techniques and methods used have been detailed and published [3] at the VSMM (Virtual Systems and Multimedia) 2012 conference, with the intention of further advancing the area.
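
    The abstract does not detail how "clicks" are detected from the tracked hand, so the sketch below is only a rough illustration of one plausible approach: flag a selection when the hand moves toward the sensor by more than a threshold within a short window. The threshold, window length, and sampling rate are all assumptions, not the methods evaluated in the thesis.

```python
from collections import deque

class PushClickDetector:
    """Toy push-to-select detector over a stream of hand depth samples (metres).

    Purely illustrative of the kind of processing described in the thesis, not
    its actual selection methods; the travel threshold and window are assumed.
    """
    def __init__(self, push_distance=0.08, window=15):
        self.history = deque(maxlen=window)   # ~0.5 s of samples at 30 Hz (assumed)
        self.push_distance = push_distance    # forward travel that counts as a click

    def update(self, hand_depth):
        """Feed one depth sample; return True when a push gesture is detected."""
        self.history.append(hand_depth)
        if len(self.history) < self.history.maxlen:
            return False
        # Click when the hand has moved towards the sensor (depth decreased)
        # by more than push_distance over the window.
        return self.history[0] - self.history[-1] > self.push_distance
```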

    Designing 3D scenarios and interaction tasks for immersive environments

    In today's world, immersive reality, such as virtual and mixed reality, is one of the most attractive research fields. Virtual Reality (VR) has huge potential for use in scientific and educational domains by providing users with real-time interaction and manipulation. The key concept in immersive technologies is to provide a high level of immersive sensation to the user, which is one of the main challenges in this field. Wearable technologies play a key role in enhancing the immersive sensation and the degree of embodiment in virtual and mixed reality interaction tasks. This project report presents an application study in which the user interacts with virtual objects, such as grabbing objects and opening or closing doors and drawers, while wearing a sensory cyberglove developed in our lab (Cyberglove-HT). Furthermore, it presents the development of a methodology that provides inertial measurement unit (IMU)-based gesture recognition. The interaction tasks and 3D immersive scenarios were designed in Unity 3D. Additionally, we developed inertial sensor-based gesture recognition by employing a Long Short-Term Memory (LSTM) network. In order to distinguish the effect of wearable technologies on the user experience in immersive environments, we conducted an experimental study comparing the Cyberglove-HT to standard VR controllers (HTC Vive Controller). The quantitative and subjective results indicate that we were able to enhance the immersive sensation and self-embodiment with the Cyberglove-HT. A publication [1] resulted from this work, which was developed in the framework of the R&D project Human Tracking and Perception in Dynamic Immersive Rooms (HTPDIR).
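
    As a rough sketch of what IMU-based gesture recognition with an LSTM can look like, a minimal classifier over fixed-length windows of 6-axis IMU data might be built as below. The window length, channel count, layer sizes, and number of gesture classes are assumptions for illustration; the report's actual architecture is not given in the abstract.

```python
import numpy as np
import tensorflow as tf

# Assumed window length, channel count (3-axis accel + 3-axis gyro), and number
# of gesture classes; none of these values come from the report.
WINDOW, CHANNELS, N_GESTURES = 100, 6, 5

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, CHANNELS)),
    tf.keras.layers.LSTM(64),                         # summarises the IMU sequence
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(N_GESTURES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy data just to show the expected input/label shapes.
x = np.random.randn(32, WINDOW, CHANNELS).astype("float32")
y = np.random.randint(0, N_GESTURES, size=32)
model.fit(x, y, epochs=1, verbose=0)
```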

    Development of a user experience enhanced teleoperation approach

    In this paper, we have investigated various techniques that can be used to enhance the user experience of robot teleoperation. In our teleoperation system design, the human operator is provided with both immersive visual feedback and an intuitive skill transfer interface, so that when controlling a telerobot arm, the user is able to feel present in a first-person perspective in terms of both visual and haptic sense. Several devices are integrated, including an Omni haptic joystick, a MYO armband, an Oculus Rift DK2 headset, and a Kinect v2 camera. The surface electromyography (sEMG) signal allows the operator to naturally and efficiently transfer his or her motion skill to the robot, based on a properly designed elastic force feedback. For visual feedback, the operator can control the pose of a camera on the head of the robot via the wearable visual headset, so that the operator perceives the scene from the robot's perspective. Extensive tests have been performed with human subjects to evaluate the design, and the experimental results show that superior performance and a better user experience are achieved by the proposed method in comparison with traditional methods.
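
    The abstract mentions a "properly designed elastic force feedback" driven by sEMG but gives no formula. One plausible, simplified reading is a virtual spring on the haptic joystick whose stiffness scales with the operator's muscle activation; the sketch below illustrates that interpretation only. The envelope estimate and the stiffness range are assumed values, not the paper's design.

```python
import numpy as np

def semg_activation(raw_window, gain=5.0):
    """Crude muscle-activation estimate: rectified mean of a raw sEMG window,
    scaled into [0, 1]. The gain is an assumed normalisation constant."""
    return float(np.clip(gain * np.mean(np.abs(raw_window)), 0.0, 1.0))

def elastic_feedback_force(displacement, activation, k_min=50.0, k_max=400.0):
    """Spring-like force opposing joystick displacement, stiffened by activation.

    A simplified reading of 'elastic force feedback'; the stiffness range (N/m)
    and the linear mapping are assumptions, not the paper's design.
    """
    k = k_min + activation * (k_max - k_min)
    return -k * np.asarray(displacement, dtype=float)

# Example: 2 cm displacement along x with moderate muscle activation.
print(elastic_feedback_force([0.02, 0.0, 0.0], activation=0.5))
```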

    Real Time Interactive Presentation Apparatus based on Depth Image Recognition

    This research concerns human-computer interaction, where the goal is to move beyond conventional input devices towards natural interaction. Kinect is one of the tools able to provide the user with a Natural User Interface (NUI). It has the capability to track hand gestures and interpret the corresponding actions from the depth data stream. The human hand is tracked as a point cloud and synchronized simultaneously. The method starts by collecting the depth image, which is analyzed by a random decision forest algorithm. The algorithm chooses a set of thresholds and feature splits, and then provides the body skeleton information. In this project, hand gestures are divided into several actions; for example, waving to the right or left of the head position is interpreted as a next or previous slide command. The wave is measured as an angle with the head as the center point. Moreover, a pushing action triggers a new pop-up window for a specific slide containing more detailed information. The results of the implementation are quite promising: the user can control PowerPoint and is even able to design the presentation in different ways. Furthermore, we also present a new style of presentation using a WPF form connected to a database as a dynamic presentation tool.
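
    As a toy illustration of the mapping described above (wave angle about the head to slide commands), the sketch below computes the hand's angle relative to the head in the frontal plane and maps it to a command. It assumes skeleton joints arrive as (x, y, z) coordinates in metres, with assumed angle thresholds; it is not the project's implementation.

```python
import math

# Assumed thresholds (degrees); skeleton joints are (x, y, z) tuples in metres,
# x increasing to the presenter's right and y increasing upwards.
NEXT_ANGLE, PREV_ANGLE = 30.0, -30.0

def wave_command(hand, head):
    """Map the hand's angle about the head in the frontal plane to a slide command."""
    dx, dy = hand[0] - head[0], hand[1] - head[1]
    angle = math.degrees(math.atan2(dx, -dy))   # 0 deg = hand straight below the head
    if angle > NEXT_ANGLE:
        return "NEXT_SLIDE"
    if angle < PREV_ANGLE:
        return "PREVIOUS_SLIDE"
    return None

# Hand raised well to the right of the head -> advance the slide deck.
print(wave_command(hand=(0.5, 0.1, 2.0), head=(0.0, 0.4, 2.0)))
```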

    Novel haptic interface for viewing 3D images

    In recent years there has been an explosion of devices and systems capable of displaying stereoscopic 3D images. While these systems provide an improved experience over traditional two-dimensional displays, they often fall short on user immersion, usually improving depth perception only by relying on the stereopsis phenomenon. We propose a system that improves user experience and immersion through position-dependent rendering of the scene and the ability to touch the scene. This system uses depth maps to represent the geometry of the scene. Depth maps can be easily obtained during the rendering process or derived from binocular stereo images by calculating their horizontal disparity. This geometry is then used as input to render the scene on a 3D display, perform the haptic rendering calculations, and produce a position-dependent render of the scene. The author presents two main contributions. First, since haptic devices have a finite workspace and limited resolution, we use what we call detail mapping algorithms. These algorithms compress the geometry information contained in a depth map, by reducing the contrast among pixels, in such a way that it can be rendered on a display medium of limited resolution without losing detail. Second, the unique combination of a depth camera as a motion capture system, a 3D display, and a haptic device to enhance user experience. While developing this system we paid special attention to the cost and availability of the hardware. We decided to use only off-the-shelf, mass-consumer-oriented hardware so our experiments can be easily implemented and replicated. As an additional benefit, the total cost of the hardware did not exceed one thousand dollars, making it affordable for many individuals and institutions.
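
    The detail mapping algorithms described above compress the depth range so the scene fits the haptic device's limited workspace. As a greatly simplified stand-in, not the author's algorithm, the sketch below linearly remaps a depth map into an assumed reachable z-range of the device.

```python
import numpy as np

def remap_depth_to_device(depth, z_min=0.0, z_max=0.04):
    """Linearly remap a depth map (any units) into the device's reachable z-range.

    A greatly simplified stand-in for the detail-mapping algorithms described
    above; the 4 cm device range is an assumed value.
    """
    depth = np.asarray(depth, dtype=float)
    d_min, d_max = depth.min(), depth.max()
    if d_max == d_min:                                # flat scene: use mid-range
        return np.full(depth.shape, 0.5 * (z_min + z_max))
    return z_min + (depth - d_min) / (d_max - d_min) * (z_max - z_min)

# Example: an 8-bit Kinect-style depth image remapped to a 0-4 cm haptic range.
print(remap_depth_to_device(np.array([[0, 64], [128, 255]])))
```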

    Multi-touch 3D Exploratory Analysis of Ocean Flow Models

    Modern ocean flow simulations are generating increasingly complex, multi-layer 3D ocean flow models. However, most researchers are still using traditional 2D visualizations to visualize these models one slice at a time. Properly designed 3D visualization tools can be highly effective for revealing the complex, dynamic flow patterns and structures present in these models. However, the transition from visualizing ocean flow patterns in 2D to 3D presents many challenges, including occlusion and depth ambiguity. Further complications arise from the interaction methods required to navigate, explore, and interact with these 3D datasets. We present a system that employs a combination of stereoscopic rendering, to best reveal and illustrate 3D structures and patterns, and multi-touch interaction, to allow for natural and efficient navigation and manipulation within the 3D environment. Exploratory visual analysis is facilitated through the use of a highly interactive toolset which leverages a smart particle system. Multi-touch gestures allow users to quickly position dye-emitting tools within the 3D model. Finally, we illustrate the potential applications of our system through examples of real-world significance.
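
    The smart particle system driven by dye-emitting tools ultimately needs to advect particles through the sampled flow field. The sketch below shows a generic midpoint (RK2) advection step, assuming a `velocity_at(p)` sampler for the ocean model is available; both the integrator and the time step are illustrative choices, not details from the system described above.

```python
import numpy as np

def advect_particles(positions, velocity_at, dt=0.1):
    """Advance dye particles one midpoint (RK2) step through a sampled flow field.

    `velocity_at(p)` is assumed to return the interpolated (u, v, w) velocity of
    the ocean model at position p; the integrator and time step are illustrative
    choices, not details taken from the system described above.
    """
    positions = np.asarray(positions, dtype=float)
    k1 = np.array([velocity_at(p) for p in positions])
    midpoints = positions + 0.5 * dt * k1
    k2 = np.array([velocity_at(p) for p in midpoints])
    return positions + dt * k2

# Example with a toy solid-body-rotation flow about the z axis.
swirl = lambda p: np.array([-p[1], p[0], 0.0])
print(advect_particles([[1.0, 0.0, 0.0]], swirl))
```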