3,395 research outputs found

    Touching the invisible: Localizing ultrasonic haptic cues

    While mid-air gestures offer new possibilities to interact with or around devices, some situations, such as interacting with applications, playing games or navigating, may require visual attention to be focused on a main task. Ultrasonic haptic feedback can provide 3D spatial haptic cues that do not demand visual attention in these contexts. In this paper, we present an initial study of active exploration of ultrasonic haptic virtual points that investigates spatial localization with and without the use of the visual modality. Our results show that when haptic feedback gives the location of a widget, users perform 50% more accurately than with visual feedback alone. When given only a haptic location of a widget, users are more than 30% more accurate than when given a visual location. When users are aware of the location of the haptic feedback, active exploration decreases the minimum recommended widget size from 2 cm² to 1 cm², compared to the passive exploration of previous studies. Our results will allow designers to create better mid-air interactions using this new form of haptic feedback.
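
    The abstract reports the study design and results rather than the rendering mechanism, but the underlying technique is well known: an ultrasonic phased array renders a focal point in mid-air by delaying each transducer so all emissions arrive at the point in phase. The sketch below illustrates that focusing computation only; the array layout, 40 kHz carrier, and function names are assumptions for illustration, not the authors' implementation.

        # Illustrative sketch (assumptions, not the authors' implementation):
        # an ultrasonic phased array focuses on a 3-D point by delaying each
        # transducer so that all emissions arrive at the point in phase.
        import numpy as np

        SPEED_OF_SOUND = 343.0   # m/s in air at room temperature
        CARRIER_HZ = 40_000.0    # a typical 40 kHz ultrasonic transducer

        def focus_phases(transducer_xy, focal_point):
            """Phase offset (radians) per transducer so that all emissions
            arrive at focal_point simultaneously and in phase."""
            positions = np.column_stack(
                [transducer_xy, np.zeros(len(transducer_xy))])  # z = 0 plane
            distances = np.linalg.norm(
                positions - np.asarray(focal_point), axis=1)
            # Nearer transducers fire later so all wavefronts coincide.
            delays = (distances.max() - distances) / SPEED_OF_SOUND
            return (2 * np.pi * CARRIER_HZ * delays) % (2 * np.pi)

        # Example: a 16 x 16 array with 10 mm pitch, focal point 20 cm above
        # the array centre (roughly where a haptic widget would be rendered).
        xs, ys = np.meshgrid(np.arange(16) * 0.01, np.arange(16) * 0.01)
        array_xy = np.column_stack([xs.ravel(), ys.ravel()])
        array_xy -= array_xy.mean(axis=0)
        phases = focus_phases(array_xy, (0.0, 0.0, 0.2))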

    Multi-Person Motion Tracking via RF Body Reflections

    Recently, we have witnessed the emergence of technologies that can localize a user and track her gestures based purely on radio reflections off the person's body. These technologies work even if the user is behind a wall or obstruction. However, to be fully practical, they need to address major challenges such as scaling to multiple people, accurately localizing them and tracking their gestures, and localizing static users rather than requiring the user to move to be detectable. This paper presents WiZ, the first multi-person, centimeter-scale motion tracking system that pinpoints people's locations based purely on RF reflections off their bodies. WiZ can also locate static users by sensing the minute changes in their RF reflections caused by breathing. Further, it can track concurrent gestures made by different individuals, even when they carry no wireless device. We implement a prototype of WiZ and show that it can localize up to five users, each with a median accuracy of 8-18 cm in the x dimension and 7-11 cm in the y dimension. WiZ can also detect 3D pointing gestures of multiple users with a median orientation error of 8-16 degrees for each of them. Finally, WiZ can track breathing motion and output the breath count of multiple people with high accuracy.
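
    The abstract gives no implementation details, but the breathing-sensing idea can be illustrated simply: a static person becomes detectable through the small, periodic variation that chest motion induces in their RF reflection, so the dominant frequency in the breathing band of the reflected signal gives the breath rate. The sketch below assumes a sampling rate and signal model; it is not the WiZ implementation.

        # Toy sketch (assumed signal model, not the WiZ implementation):
        # chest motion modulates a person's RF reflection slowly and
        # periodically; the dominant frequency in the breathing band
        # (0.1-0.5 Hz) gives the breath rate.
        import numpy as np

        def breaths_per_minute(reflection, sample_rate_hz):
            """reflection: 1-D reflected-signal samples over ~1 minute."""
            x = reflection - np.mean(reflection)    # drop the static part
            spectrum = np.abs(np.fft.rfft(x))
            freqs = np.fft.rfftfreq(len(x), d=1.0 / sample_rate_hz)
            band = (freqs >= 0.1) & (freqs <= 0.5)  # 6-30 breaths/minute
            return 60.0 * freqs[band][np.argmax(spectrum[band])]

        # Synthetic check: 0.25 Hz chest motion (15 breaths/min) plus noise.
        t = np.arange(0, 60, 0.1)                   # 10 Hz for one minute
        sig = 1.0 + 0.05 * np.sin(2 * np.pi * 0.25 * t) \
                  + 0.01 * np.random.randn(len(t))
        print(breaths_per_minute(sig, 10.0))        # ~15.0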

    Classification of Humans into Ayurvedic Prakruti Types using Computer Vision

    Ayurveda, a 5000-year-old Indian medical science, believes that the universe, and hence humans, are made up of five elements, namely ether, fire, water, earth, and air. The three Doshas (Tridosha), Vata, Pitta, and Kapha, originate from combinations of these elements. Every person has a unique combination of Tridosha elements contributing to that person's 'Prakruti'. Prakruti governs the physiological and psychological tendencies of all living beings, as well as the way they interact with the environment. This balance influences physiological features such as the texture and colour of skin, hair and eyes, the length of the fingers, the shape of the palm, body frame, strength of digestion and many more, as well as psychological features such as a person's nature (introverted, extroverted, calm, excitable, intense, laid-back) and their reaction to stress and disease. All these features are coded in a person's constitution at the time of their creation and do not change throughout their lifetime. Ayurvedic doctors analyze the Prakruti of a person either by assessing physical features manually and/or by examining the nature of the heartbeat (pulse). Based on this analysis, they diagnose, prevent, and cure disease in patients by prescribing precision medicine. This project focuses on identifying the Prakruti of a person by analysing facial features such as hair, eyes, nose, lips, and skin colour using facial recognition techniques from computer vision. This is the first research of its kind in this problem area to attempt to bring image processing into the domain of Ayurveda.
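
    The abstract does not specify the pipeline; the following is only a generic sketch of the stated approach: detect a face, derive a crude feature vector from it, and train a standard supervised classifier over the three dominant-dosha classes. The feature choices, the classifier, and the synthetic training data are all assumptions for illustration, not the paper's code.

        # Generic sketch (assumed pipeline, classifier, and data; not the
        # paper's code): face detection, a crude feature vector, and a
        # standard supervised classifier over the three dominant doshas.
        import cv2
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        PRAKRUTI_TYPES = ["Vata", "Pitta", "Kapha"]

        detector = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

        def face_features(image_bgr):
            """Toy feature vector: mean skin colour plus face aspect ratio.
            A real system would use landmarks for eyes, nose, lips, hair."""
            gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
            faces = detector.detectMultiScale(gray, 1.1, 5)
            if len(faces) == 0:
                return None
            x, y, w, h = faces[0]
            crop = image_bgr[y:y + h, x:x + w]
            return np.concatenate([crop.reshape(-1, 3).mean(axis=0), [w / h]])

        # Synthetic stand-in for a labelled dataset; real labels would come
        # from Ayurvedic practitioners' assessments, as the abstract says.
        rng = np.random.default_rng(0)
        X = rng.random((30, 4))                     # 30 fake feature vectors
        y = rng.integers(0, 3, size=30)             # fake dosha labels
        clf = RandomForestClassifier(n_estimators=50).fit(X, y)
        print(PRAKRUTI_TYPES[int(clf.predict(X[:1])[0])])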

    Virtual acoustics displays

    The real-time acoustic display capabilities developed for the Virtual Environment Workstation (VIEW) Project at NASA-Ames are described. The acoustic display is capable of generating localized acoustic cues in real time over headphones. An auditory symbology, a related collection of representational auditory 'objects' or 'icons', can be designed using ACE (Auditory Cue Editor), which links both discrete and continuously varying acoustic parameters with information or events in the display. During a given display scenario, the symbology can be dynamically coordinated in real time with 3-D visual objects, speech, and gestural displays. The types of displays feasible with the system range from simple warnings and alarms to the acoustic representation of multidimensional data or events.
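
    ACE itself is not described in code here; the sketch below only illustrates the core idea such an editor builds on, namely a mapping from data values or events to acoustic parameters that a synthesis layer then renders. The specific pitch/loudness mapping and the tone synthesis are assumptions for illustration.

        # Illustrative sketch (assumed mapping and synthesis; not ACE/VIEW
        # code): an auditory "icon" is ultimately a mapping from data values
        # or events to acoustic parameters, rendered by a synthesis layer.
        import numpy as np

        def map_to_cue(value, lo, hi):
            """Map a data value in [lo, hi] to (frequency Hz, amplitude)."""
            t = float(np.clip((value - lo) / (hi - lo), 0.0, 1.0))
            return 220.0 * 2.0 ** (2.0 * t), 0.2 + 0.8 * t  # 2-octave sweep

        def render_tone(freq, amp, dur_s=0.2, rate=44_100):
            """Synthesize one cue as a float32 PCM buffer (playback omitted)."""
            t = np.arange(int(dur_s * rate)) / rate
            return (amp * np.sin(2 * np.pi * freq * t)).astype(np.float32)

        # A continuously varying parameter (e.g. altitude) becomes a rising
        # tone; a discrete event would instead trigger a fixed, recognizable
        # icon such as an alarm chirp.
        buffers = [render_tone(*map_to_cue(v, 0.0, 100.0)) for v in (10, 50, 90)]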

    This Far, No Further: Introducing Virtual Borders to Mobile Robots Using a Laser Pointer

    We address the problem of controlling the workspace of a 3-DoF mobile robot. In a human-robot shared space, robots should navigate in a human-acceptable way according to the users' demands. For this purpose, we employ virtual borders, i.e. non-physical borders, that allow a user to restrict the robot's workspace. To this end, we propose an interaction method based on a laser pointer to intuitively define virtual borders. This interaction method uses a previously developed framework based on robot guidance to change the robot's navigational behavior. Furthermore, we extend this framework to increase flexibility by considering different types of virtual borders, i.e. polygons and curves separating an area. We evaluated our method with 15 non-expert users with respect to correctness, accuracy, and teaching time. The experimental results revealed high accuracy and a teaching time linear in the border length, with the borders correctly incorporated into the robot's navigational map. Finally, our user study showed that non-expert users can employ our interaction method.

    Comment: Accepted at 2019 Third IEEE International Conference on Robotic Computing (IRC); supplementary video: https://youtu.be/lKsGp8xtyI
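
    The authors' framework is not given in the abstract; as a rough sketch of what incorporating a border into the robot's navigational map could look like, the code below rasterizes a user-defined polygon into a ROS-style occupancy grid so a planner treats the border like a wall. The grid convention, resolution, and cell values are assumptions; an open curve separating an area would be handled the same way, minus the closing edge.

        # Rough sketch (assumed grid convention and values; not the authors'
        # framework): rasterize a user-defined polygonal virtual border into
        # an occupancy grid so the planner treats it like a wall.
        import numpy as np

        def add_virtual_border(grid, polygon, resolution=0.05, occupied=100):
            """grid: 2-D int array, row = y cell (ROS-style occupancy grid).
            polygon: list of (x, y) world coordinates in metres."""
            pts = polygon + [polygon[0]]            # close the polygon
            for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
                n = max(2, 2 * int(np.hypot(x1 - x0, y1 - y0) / resolution))
                for t in np.linspace(0.0, 1.0, n):  # densely sample the edge
                    cx = int((x0 + t * (x1 - x0)) / resolution)
                    cy = int((y0 + t * (y1 - y0)) / resolution)
                    if 0 <= cy < grid.shape[0] and 0 <= cx < grid.shape[1]:
                        grid[cy, cx] = occupied
            return grid

        # Example: fence off a 1 m x 1 m corner of a 10 m x 10 m map.
        grid = np.zeros((200, 200), dtype=np.int8)
        add_virtual_border(grid, [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)])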

    Sub-Nanosecond Time of Flight on Commercial Wi-Fi Cards

    Time-of-flight, i.e., the time incurred by a signal to travel from transmitter to receiver, is perhaps the most intuitive way to measure distances using wireless signals. It is used in major positioning systems such as GPS, RADAR, and SONAR. However, attempts at using time-of-flight for indoor localization have failed to deliver acceptable accuracy due to fundamental limitations in measuring time on Wi-Fi and other consumer RF technologies. While the research community has developed alternatives for RF-based indoor localization that do not require time-of-flight, those approaches have their own limitations that hamper their use in practice. In particular, many existing approaches need receivers with large antenna arrays, while commercial Wi-Fi nodes have two or three antennas. Other systems require fingerprinting the environment to create signal maps. More fundamentally, none of these methods support indoor positioning between a pair of Wi-Fi devices without third-party support. In this paper, we present a set of algorithms that measure time-of-flight to sub-nanosecond accuracy on commercial Wi-Fi cards. We implement these algorithms and demonstrate a system that achieves accurate device-to-device localization, i.e., one that enables a pair of Wi-Fi devices to locate each other without any support from the infrastructure, not even the locations of the access points.

    Comment: 14 pages
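
    A short worked illustration of why sub-nanosecond accuracy is the crux: radio waves travel at the speed of light, so each nanosecond of timing error corresponds to roughly 30 cm of ranging error, and centimetre-scale indoor localization therefore needs timing resolved to a small fraction of a nanosecond. This is basic physics, not the paper's algorithm.

        # Worked arithmetic (basic physics, not the paper's algorithm): at
        # the speed of light, each nanosecond of timing error is ~30 cm of
        # ranging error, hence the need for sub-nanosecond time-of-flight.
        C = 299_792_458.0                           # speed of light, m/s

        def ranging_error_cm(timing_error_ns):
            """Distance error implied by a one-way timing error."""
            return C * timing_error_ns * 1e-9 * 100.0

        for err_ns in (10.0, 1.0, 0.1):
            print(f"{err_ns:4.1f} ns -> {ranging_error_cm(err_ns):6.1f} cm")
        # 10.0 ns -> ~300 cm; 1.0 ns -> ~30 cm; 0.1 ns -> ~3 cm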

    Handling discourse: Gestures, reference tracking, and communication strategies in early L2

    Get PDF
    The production of cohesive discourse, especially maintained reference, poses problems for early second language (L2) speakers. This paper considers a communicative account of overexplicit L2 discourse by focusing on the interdependence between spoken and gestural cohesion, the latter being expressed by anchoring of referents in gesture space. Specifically, this study investigates whether overexplicit maintained reference in speech (lexical noun phrases [NPs]) and gesture (anaphoric gestures) constitutes an interactional communication strategy. We examine L2 speech and gestures of 16 Dutch learners of French retelling stories to addressees under two visibility conditions. The results indicate that the overexplicit properties of L2 speech are not motivated by interactional strategic concerns. The results for anaphoric gestures are more complex. Although their presence is not interactionally