    TransparentHMD: Revealing the HMD User's Face to Bystanders

    While the eyes are very important in human communication, once a user puts on a head-mounted display (HMD), the face is obscured from the outside world's perspective. This leads to communication problems when bystanders approach or collaborate with an HMD user. We introduce transparentHMD, which employs a head-coupled perspective technique to produce an illusion of a transparent HMD to bystanders. We created a self-contained system, based on a mobile device mounted on the HMD with the screen facing bystanders. By tracking the relative position of the bystander using the smartphone's camera, we render an adapting perspective view in real time that creates the illusion of a transparent HMD. By revealing the user's face to bystanders, our easy-to-implement system allows for opportunities to investigate a plethora of research questions, particularly related to collaborative VR systems.
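The head-coupled perspective effect can be sketched with similar-triangles geometry: a virtual face rendered at a fixed depth behind the screen must shift on the screen plane as the tracked bystander moves, so that it appears stationary in space. The function below is an illustrative sketch under assumed coordinate conventions, not the paper's implementation:

```python
def head_coupled_shift(bystander_pos, face_depth):
    """Parallax shift (in screen-plane units) that keeps a virtual face,
    rendered `face_depth` units *behind* the screen, fixed in space for
    a bystander at `bystander_pos` = (x, y, z) in the screen's frame
    (origin at screen centre, z > 0 toward the bystander).

    By similar triangles, the screen-plane projection of a point at
    depth d behind the screen moves with the viewer by d / (z + d) of
    the viewer's lateral offset.
    """
    x, y, z = bystander_pos
    scale = face_depth / (z + face_depth)
    return scale * x, scale * y


# A bystander 0.5 m away, 0.1 m to the right; face rendered 0.05 m deep:
sx, sy = head_coupled_shift((0.1, 0.0, 0.5), 0.05)
```

In a real system this shift would drive an off-axis projection each frame, with `bystander_pos` updated from the smartphone camera's face tracking.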

    DolphinAttack: Inaudible Voice Commands

    Speech recognition (SR) systems such as Siri or Google Now have become an increasingly popular human-computer interaction method, and have turned various systems into voice-controllable systems (VCS). Prior work on attacking VCS shows that hidden voice commands that are incomprehensible to people can control the systems. Hidden voice commands, though hidden, are nonetheless audible. In this work, we design a completely inaudible attack, DolphinAttack, that modulates voice commands on ultrasonic carriers (e.g., f > 20 kHz) to achieve inaudibility. By leveraging the nonlinearity of the microphone circuits, the modulated low-frequency audio commands can be successfully demodulated, recovered, and more importantly interpreted by the speech recognition systems. We validate DolphinAttack on popular speech recognition systems, including Siri, Google Now, Samsung S Voice, Huawei HiVoice, Cortana, and Alexa. By injecting a sequence of inaudible voice commands, we show a few proof-of-concept attacks, which include activating Siri to initiate a FaceTime call on an iPhone, activating Google Now to switch the phone to airplane mode, and even manipulating the navigation system in an Audi automobile. We propose hardware and software defense solutions. We validate that it is feasible to detect DolphinAttack by classifying the audio using a support vector machine (SVM), and suggest re-designing voice-controllable systems to be resilient to inaudible voice command attacks.
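The modulate-then-demodulate chain the abstract describes can be sketched numerically: an audible tone stands in for a voice command, is amplitude-modulated onto an ultrasonic carrier (so everything emitted is above 20 kHz), and a quadratic term stands in for the microphone circuit's nonlinearity, which squares the envelope and puts energy back into the audible band. The carrier frequency, test tone, and nonlinearity coefficient below are illustrative choices, not values from the paper:

```python
import numpy as np

fs = 192_000                    # sample rate high enough for ultrasound
fc = 25_000                     # ultrasonic carrier, above 20 kHz
fm = 1_000                      # audible tone standing in for speech
t = np.arange(0, 0.01, 1 / fs)  # 10 ms of signal

baseband = np.cos(2 * np.pi * fm * t)
carrier = np.cos(2 * np.pi * fc * t)
transmitted = (1 + baseband) * carrier     # AM: all emitted energy is ultrasonic

# Microphone circuits are weakly nonlinear: out ~ x + b*x^2.  The x^2
# term squares the AM envelope, recreating the fm component in-band.
received = transmitted + 0.1 * transmitted ** 2

spectrum = np.abs(np.fft.rfft(received))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
audible = (freqs > 100) & (freqs < 20_000)
recovered = freqs[audible][np.argmax(spectrum[audible])]
print(recovered)  # -> 1000.0: the tone reappears despite inaudible emission
```

The transmitted signal itself has spectral content only at fc and fc ± fm (all ultrasonic); it is the squaring that demodulates, which is why the defense discussed in the abstract targets the signal chain rather than the speech recognizer.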