
    GEMINI: A Generic Multi-Modal Natural Interface Framework for Videogames

    In recent years, videogame companies have recognized the role of player engagement as a major factor in user experience and enjoyment. This has encouraged greater investment in new types of game controllers such as the WiiMote, Rock Band instruments, and the Kinect. However, the native software of these controllers was not originally designed to be used in other game applications. This work addresses this issue by building a middleware framework, which maps body poses or voice commands to actions in any game. This not only warrants a more natural and customized user experience but also defines an interoperable virtual controller. In this version of the framework, body poses and voice commands are recognized through the Kinect's built-in cameras and microphones, respectively. The acquired data is then translated into the native interaction scheme in real time using a lightweight method based on spatial restrictions. The system is also prepared to use Nintendo's Wiimote as an auxiliary and unobtrusive gamepad for physically or verbally impractical commands. System validation was performed by analyzing the performance of certain tasks and examining user reports. Both confirmed this approach as a practical and appealing alternative to the game's native interaction scheme. In sum, this framework provides a game-controlling tool that is fully customizable and highly flexible, thus expanding the market of game consumers. Comment: WorldCIST'13 International Conference
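    The abstract describes mapping body poses to game actions through spatial restrictions on skeleton joints. Below is a minimal sketch of that idea under stated assumptions: the joint names, the margin threshold, the pose table, and the emit_key callback are all illustrative, not the framework's actual API.

```python
# Sketch: pose-to-action mapping via spatial restrictions (illustrative only).
from dataclasses import dataclass

@dataclass
class Joint:
    x: float
    y: float
    z: float

def right_hand_above_head(hand: Joint, head: Joint, margin: float = 0.10) -> bool:
    # Spatial restriction: fires only while the hand is at least
    # `margin` metres above the head (assumed units and threshold).
    return hand.y > head.y + margin

POSE_RULES = {"jump": right_hand_above_head}   # pose name -> spatial predicate
KEY_BINDINGS = {"jump": "SPACE"}               # pose name -> native game control

def translate(skeleton: dict, emit_key) -> None:
    """Translate recognized poses into the game's native interaction scheme."""
    for pose, predicate in POSE_RULES.items():
        if predicate(skeleton["right_hand"], skeleton["head"]):
            emit_key(KEY_BINDINGS[pose])
```

    Because both the pose rules and the key bindings are plain tables, remapping a controller to a new game amounts to editing data rather than code, which is what makes such a virtual controller interoperable.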

    MolecularRift, a Gesture Based Interaction Tool for Controlling Molecules in 3-D

    Visualization of molecular models is a vital part of modern drug design. Improved visualization methods increase conceptual understanding and enable faster and better decision making. Virtual reality goggles such as the Oculus Rift have opened new opportunities for such visualizations. A new interactive visualization tool (MolecularRift), which lets the user experience molecular models in a virtual reality environment, was developed in collaboration with AstraZeneca. To create a more natural way to interact with the tool, users can steer and control molecules through hand gestures. The gestures are recorded using depth data from a Microsoft Kinect v2 sensor and interpreted using per-pixel algorithms that operate only on the captured frames, freeing the user from additional devices such as a mouse, keyboard, touchpad, or even piezoresistive gloves. MolecularRift was developed from a usability perspective using an iterative development process and test-group evaluations. The iterations allowed an agile process in which features could easily be evaluated to monitor behavior and performance, resulting in a user-optimized tool. We conclude with reflections on virtual reality's capabilities in chemistry and possibilities for future projects.

    Virtual reality is the future. New technologies are constantly being developed, and as computing power improves we find new ways of using them together. We have developed a new interactive visualization tool (Molecular Rift) that lets the user experience molecular models in virtual reality. Today's pharmaceutical industry is in constant need of new methods for visualizing potential drugs in 3-D. Several tools already exist for visualizing molecules in 3-D stereo. Our newly developed virtual reality techniques give drug developers the opportunity to "step into" the molecular structures and experience them in an entirely new way.
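    The abstract mentions per-pixel processing of Kinect depth frames to track hand gestures. A minimal sketch of that style of processing follows; the depth band (500-900 mm) and the idea of steering a molecule from the hand centroid are assumptions for illustration, and a real pipeline would read frames from the Kinect v2 SDK.

```python
# Sketch: per-pixel hand segmentation on a single depth frame (illustrative).
import numpy as np

def segment_hand(depth_mm: np.ndarray, near: int = 500, far: int = 900) -> np.ndarray:
    """Keep only pixels within a near depth band where the hand is expected."""
    return (depth_mm > near) & (depth_mm < far)

def hand_centroid(mask: np.ndarray):
    """Centroid of the segmented pixels, usable to steer a molecule's rotation."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None   # no hand in the frame
    return float(xs.mean()), float(ys.mean())
```

    Working purely per pixel on each captured frame, as here, requires no worn markers or gloves, which matches the hands-free interaction goal described above.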

    Multisensory integration across exteroceptive and interoceptive domains modulates self-experience in the rubber-hand illusion

    Identifying with a body is central to being a conscious self. The now classic "rubber hand illusion" demonstrates that the experience of body ownership can be modulated by manipulating the timing of exteroceptive (visual and tactile) body-related feedback. Moreover, the strength of this modulation is related to individual differences in sensitivity to internal bodily signals (interoception). However, the interaction of exteroceptive and interoceptive signals in determining the experience of body ownership within an individual remains poorly understood. Here, we demonstrate that this depends on the online integration of exteroceptive and interoceptive signals by implementing an innovative "cardiac rubber hand illusion" that combined computer-generated augmented reality with feedback of interoceptive (cardiac) information. We show that both subjective and objective measures of virtual-hand ownership are enhanced by cardio-visual feedback in time with the actual heartbeat, as compared to asynchronous feedback. We further show that these measures correlate with individual differences in interoceptive sensitivity, and are also modulated by the integration of proprioceptive signals instantiated using real-time visual remapping of finger movements to the virtual hand. Our results demonstrate that interoceptive signals directly influence the experience of body ownership via multisensory integration, and they lend support to models of conscious selfhood based on interoceptive predictive coding.
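    The key experimental manipulation is the timing of the visual feedback relative to the heartbeat. A small sketch of how the two conditions could be scheduled is shown below; the rate offset and the assumption of a list of detected beat timestamps are illustrative, not the study's actual implementation.

```python
# Sketch: flash times for synchronous vs asynchronous cardio-visual feedback.
def flash_onsets(beat_times, synchronous=True, offset_hz=0.2):
    """Return visual flash times for one trial (beat_times: >=2 timestamps, s).

    Synchronous: one flash per detected heartbeat.
    Asynchronous control: flashes at the mean heart rate shifted by
    `offset_hz`, so they are uncorrelated with the actual beats.
    """
    if synchronous:
        return list(beat_times)
    duration = beat_times[-1] - beat_times[0]
    mean_hz = (len(beat_times) - 1) / duration      # mean beats per second
    period = 1.0 / (mean_hz + offset_hz)            # deliberately offset rate
    n = int(duration / period)
    return [beat_times[0] + i * period for i in range(n)]
```

    Holding the flash rate close to the true heart rate in the asynchronous condition matters: it ensures the conditions differ only in beat-locked timing, the variable the ownership effect is attributed to.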

    Time Complexity of Color Camera Depth Map Hand Edge Closing Recognition Algorithm

    The objective of this paper is to calculate the time complexity of the color-camera depth-map hand-edge closing algorithm used in hand gesture recognition. Hand gesture recognition here is performed through human-computer interaction using a color camera and a depth map, and the time complexity of the underlying algorithms is analyzed using 2D minima methods, brute force, and plane sweep. Human-computer interaction is an essential component of most people's daily lives. The goal of gesture recognition research is to establish a system that can classify specific human gestures and use them to convey information for device control. Such methods differ in their input types, classifiers, and techniques for identifying hand gestures. This paper presents the "color camera depth map hand edge recognition" algorithm, analyzes its time complexity, and reports a simulation in MATLAB.
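    To make the complexity comparison concrete, here is a worked illustration using the classic 2D minima problem (points not dominated in both coordinates) as a stand-in for the hand-edge point sets; the data and problem framing are illustrative, not taken from the paper.

```python
# Sketch: brute force O(n^2) vs plane-sweep-style O(n log n) 2D minima.
def minima_brute_force(points):
    """O(n^2): test every point against every other point."""
    return [p for p in points
            if not any(q != p and q[0] <= p[0] and q[1] <= p[1] for q in points)]

def minima_sweep(points):
    """O(n log n): sort by x, then sweep keeping the smallest y seen so far."""
    result, best_y = [], float("inf")
    for x, y in sorted(points):        # sorting dominates the running time
        if y < best_y:                 # not dominated by any point to the left
            result.append((x, y))
            best_y = y
    return result
```

    For an edge map with tens of thousands of points per frame, the gap between n^2 and n log n comparisons is exactly the kind of difference a real-time recognition pipeline must account for.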

    Using data visualization to deduce faces expressions

    International conference held in Turkey, 6-8 September 2018. Collecting and examining multi-modal sensor data of a human face in real time is an important problem in computer vision, with applications in medical and monitoring analysis, entertainment, and security. Despite advances in the field, many issues remain open in the identification of facial expressions. Different algorithms and approaches have been developed to find patterns and characteristics that can help automatic expression identification. One way to study data is through data visualizations. Data visualization turns numbers and letters into aesthetically pleasing visuals, making it easy to recognize patterns and find exceptions. In this article, we use information visualization as a tool to analyse data points and find possible existing patterns in four different facial expressions.
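    A minimal sketch of the kind of visualization described above follows: scatter-plotting two sensor-derived features, coloured by expression label, to look for clusters. The file name, column names, and choice of features are assumptions for illustration only.

```python
# Sketch: visual pattern-finding across four expression classes (illustrative).
import matplotlib.pyplot as plt
import pandas as pd

# Assumed CSV layout: one row per frame, columns mouth_open, brow_raise, expression.
df = pd.read_csv("face_sensor_data.csv")
for label, group in df.groupby("expression"):   # four expressions expected
    plt.scatter(group["mouth_open"], group["brow_raise"], label=label, alpha=0.5)
plt.xlabel("mouth opening")
plt.ylabel("brow raise")
plt.legend()
plt.show()
```

    If the four expressions form separable clusters in such a plot, that is visual evidence the chosen features carry discriminative information, before any classifier is trained.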

    Gesture Based PC Interface with Kinect Sensor

    This research targets the development and implementation of a clickless interface on the basis of the Kinect sensor. The idea behind this work is to remove the keyboard, mouse, and other mechanical switches from the interaction between user and computer, allowing the user to explore the full functionality of a Windows PC equipped with a Kinect sensor using gestures alone. The main results of the work are three standalone applications. The first implements a clickless interface for controlling the Windows desktop and includes a dedicated mode for controlling presentations. The second application has a purely scientific role and allows the coordinates of the human skeleton joints to be acquired in real time. The third application implements gesture-based control of a Pioneer robot with a robotic arm. The results of the present research belong to the area of human-machine interaction and may find applications in areas where the use of mechanical interface elements is impossible or complicated, for example surgical medicine, where the surgeon's gloves must remain sterile.
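    The core of the first application is mapping a tracked hand joint into screen coordinates to drive the cursor. A minimal sketch of such a mapping is given below; the interaction-box bounds, screen size, and the idea of clamping to screen edges are assumptions, and the skeleton stream itself would come from the Kinect SDK.

```python
# Sketch: map a hand joint (skeleton space, metres) to cursor pixels.
SCREEN_W, SCREEN_H = 1920, 1080

def hand_to_cursor(hand_x, hand_y, box=(-0.3, 0.3, 0.9, 1.5)):
    """Linearly map hand coordinates inside an assumed interaction box
    (x_min, x_max, y_min, y_max) to screen pixels, clamped to the screen."""
    x0, x1, y0, y1 = box
    px = (hand_x - x0) / (x1 - x0) * SCREEN_W
    py = (1.0 - (hand_y - y0) / (y1 - y0)) * SCREEN_H  # screen y grows downward
    return (int(min(max(px, 0), SCREEN_W - 1)),
            int(min(max(py, 0), SCREEN_H - 1)))
```

    Restricting the mapping to a small box in front of the user, rather than the whole tracking volume, keeps small comfortable hand movements sufficient to reach the entire screen.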