2,198 research outputs found

    Reconstructing passively travelled manoeuvres: Visuo-vestibular interactions.

    We recently published a study of the reconstruction of passively travelled trajectories from optic flow. Perception was prone to illusions in a number of conditions, and not always veridical in the other conditions. Part of the illusory reconstructed trajectories could be explained if we assume that the subjects based their reconstruction on the ego-motion percept obtained during the stimulus's initial moments. In the current paper, we test this hypothesis using a novel paradigm. If indeed the final reconstruction is governed by the initial percept, then additional, extra-retinal information that modifies the initial percept should predictably alter the final reconstruction. We supplied extra-retinal stimuli tuned to supplement the information that was under-represented or ambiguous in the optic flow: the subjects were physically displaced or rotated at the onset of the visual stimulus. A highly asymmetric velocity profile (high acceleration, very low deceleration) was used. Subjects were required to guide an input device (a model vehicle whose position and orientation we measured) along the perceived trajectory. We show for the first time that a vestibular stimulus of short duration can influence the perception of a much longer-lasting visual stimulus. Perception of the ego-motion translation component in the visual stimulus was improved by a linear physical displacement; perception of the ego-motion rotation component, by a physical rotation. This led to a more veridical reconstruction in some conditions, but could also lead to less veridical reconstructions in other conditions.

    Visuo-vestibular interaction in the reconstruction of travelled trajectories

    We recently published a study of the reconstruction of passively travelled trajectories from optic flow. Perception was prone to illusions in a number of conditions, and not always veridical in the others. Part of the illusory reconstructed trajectories could be explained by assuming that subjects base their reconstruction on the ego-motion percept built during the stimulus's initial moments. In the current paper, we test this hypothesis using a novel paradigm: if the final reconstruction is governed by the initial percept, providing additional, extra-retinal information that modifies the initial percept should predictably alter the final reconstruction. The extra-retinal stimulus was tuned to supplement the information that was under-represented or ambiguous in the optic flow: the subjects were physically displaced or rotated at the onset of the visual stimulus. A highly asymmetric velocity profile (high acceleration, very low deceleration) was used. Subjects were required to guide an input device (a model vehicle whose position and orientation we measured) along the perceived trajectory. We show for the first time that a vestibular stimulus of short duration can influence the perception of a much longer-lasting visual stimulus. Perception of the ego-motion translation component in the visual stimulus was improved by a linear physical displacement; perception of the ego-motion rotation component, by a physical rotation. This led to a more veridical reconstruction in some conditions, but to a less veridical reconstruction in other conditions.

    PILOT: Password and PIN Information Leakage from Obfuscated Typing Videos

    This paper studies leakage of user passwords and PINs based on observations of typing feedback on screens or from projectors, in the form of masked characters that indicate keystrokes. To this end, we developed an attack called Password and PIN Information Leakage from Obfuscated Typing Videos (PILOT). Our attack extracts inter-keystroke timing information from videos of the password-masking characters displayed when users type their password on a computer, or their PIN at an ATM. We conducted several experiments in various attack scenarios. Results indicate that, while in some cases leakage is minor, it is quite substantial in others. By leveraging inter-keystroke timings, PILOT recovers 8-character alphanumeric passwords in as few as 19 attempts. When guessing PINs, PILOT significantly improved on both random guessing and the attack strategy adopted in our prior work [4]. In particular, we were able to guess about 3% of the PINs within 10 attempts, a 26-fold improvement over random guessing. Our results strongly indicate that secure password-masking GUIs must take into account the information leakage identified in this paper.
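The abstract's core idea, that inter-keystroke timings alone carry information about which keys were pressed, can be sketched as a toy candidate-ranking step. Everything below (the crude key grid, the linear distance-to-time model, the candidate list) is an illustrative assumption, not the PILOT authors' actual model:

```python
# Toy sketch of timing-based guess ranking (illustrative model only; the
# real PILOT attack extracts timings from video and uses trained models).

# Crude QWERTY-like grid: key -> (column, row). Purely an assumption.
KEY_POS = {c: (i % 10, i // 10) for i, c in enumerate("qwertyuiopasdfghjklzxcvbnm")}

def predicted_gaps(word, base=0.08, per_unit=0.02):
    """Assume the gap between two keystrokes grows linearly with the
    Euclidean distance between the keys (a Fitts's-law-style simplification)."""
    out = []
    for a, b in zip(word, word[1:]):
        (x1, y1), (x2, y2) = KEY_POS[a], KEY_POS[b]
        out.append(base + per_unit * ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5)
    return out

def rank_candidates(observed_gaps, candidates):
    """Order candidate passwords by how well their predicted gap pattern
    matches the observed inter-keystroke gaps (smallest squared error first)."""
    def err(word):
        return sum((p - o) ** 2
                   for p, o in zip(predicted_gaps(word), observed_gaps))
    return sorted(candidates, key=err)
```

Even under a model this crude, observing only the masking-character timings is enough to move the true password toward the front of a guess list, which is the kind of leakage the paper quantifies.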

    A stereo display prototype with multiple focal distances

    Correcting for optical aberrations using multilayer displays

    Optical aberrations of the human eye are currently corrected using eyeglasses, contact lenses, or surgery. We describe a fourth option: modifying the composition of displayed content such that the perceived image appears in focus, after passing through an eye with known optical defects. Prior approaches synthesize pre-filtered images by deconvolving the content by the point spread function of the aberrated eye. Such methods have not led to practical applications, due to severely reduced contrast and ringing artifacts. We address these limitations by introducing multilayer pre-filtering, implemented using stacks of semi-transparent, light-emitting layers. By optimizing the layer positions and the partition of spatial frequencies between layers, contrast is improved and ringing artifacts are eliminated. We assess design constraints for multilayer displays; autostereoscopic light field displays are identified as a preferred, thin form factor architecture, allowing synthetic layers to be displaced in response to viewer movement and refractive errors. We assess the benefits of multilayer pre-filtering versus prior light field pre-distortion methods, showing that pre-filtering works within the constraints of current display resolutions. We conclude by analyzing benefits and limitations using a prototype multilayer LCD. Funding: National Science Foundation (U.S.) (Grant IIS-1116452); Alfred P. Sloan Foundation (Research Fellowship); United States Defense Advanced Research Projects Agency (Young Faculty Award); Vodafone (Firm) (Wireless Innovation Award).
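The single-layer pre-filtering baseline that the abstract criticizes can be sketched as frequency-domain deconvolution of the target image by the eye's point spread function (PSF). The Wiener-style regularization constant, the centred-PSF convention, and the delta-PSF sanity check are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def wiener_prefilter(image, psf, k=1e-2):
    """Return a pre-filtered image that, after being blurred by `psf`
    (the aberrated eye), approximates `image`. Both are 2-D arrays of
    the same shape; `psf` is centred and sums to 1. `k` is an assumed
    Wiener regularization constant."""
    H = np.fft.fft2(np.fft.ifftshift(psf))      # optical transfer function
    G = np.conj(H) / (np.abs(H) ** 2 + k)       # regularized inverse filter
    return np.real(np.fft.ifft2(np.fft.fft2(image) * G))
```

Near the zeros of the transfer function H, the inverse filter is clamped by k; those suppressed frequencies are exactly where the contrast loss and ringing mentioned in the abstract originate, and multilayer pre-filtering works by partitioning spatial frequencies across layers so no single layer must invert them alone.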

    Audio Beat Detection with Application to Robot Drumming

    This Drumming Robot thesis demonstrates the design of a robot that can play drums in rhythm with an external audio source. The audio source can be either a pre-recorded .wav file or a live sample .wav file from a microphone. The dominant beats-per-minute (BPM) of the audio is extracted, and the robot drums in time to that BPM. A Fourier-analysis-based BPM detection algorithm developed by Eric Scheirer ("Tempo and beat analysis of acoustic musical signals") was adopted and implemented. In contrast to other popular algorithms, the main advantage of Scheirer's algorithm is that it does not require decomposing the audio into notes beforehand and can therefore be automated. By contrast, the feature-set detection and classification method of McKinney and Breebaart ("Features for Audio and Music Classification") characterizes music genre through static features and is not suitable for real-time control of a robot. A host computer inputs audio from the environment (via microphone) and extracts the BPM data with the Scheirer algorithm, which is then sent to a robot controller. A commercially available robot controller was used to drive the Drumming Robot servo motors and to interface with the host. The robot motion control task and the input-audio BPM detection task are deliberately separated in this implementation. One advantage is that each task could be developed independently. The main advantage of this approach, however, is that it creates a generic interface between the input-logic and robot-control functions, so each can be used independently in other robots or control systems. Extracted BPM data is useful not just for the Drumming Robot but for any robotic system that interacts in real time with the sound environment, such as dancing robots. By the same token, the Drumming Robot can be controlled by any BPM information source, provided the control signals are compatible.
The Robot Theater at Portland State University features animated robots with the goal of performing music and acting out scenes for the entertainment of audiences passing through the halls of the FAB building. The Robot Drummer idea was conceived following the construction of a Handshaking Robot class project involving the DIM robot located in the PSU Robot Theater. By adding a second arm to the DIM torso and powering movement with servo motors and a robot controller, the motions of drumming could be performed for the Robot Theater. Audience members can play music, clap, or otherwise make rhythmic sounds, and a microphone inputs the audio to be processed to control the motion of the Drumming Robot.
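As a much-simplified illustration of the tempo-extraction step (Scheirer's actual algorithm runs a bank of comb-filter resonators over several frequency subbands; this toy version just autocorrelates an onset-energy envelope, with the frame rate and tempo search range as assumed parameters):

```python
import numpy as np

def estimate_bpm(envelope, frame_rate, lo=60, hi=180):
    """Toy tempo estimate: pick the autocorrelation lag of the
    onset-energy envelope that best explains its periodicity.
    `envelope` holds one onset-strength value per analysis frame;
    `frame_rate` is frames per second; [lo, hi] is the BPM search range."""
    env = envelope - envelope.mean()
    ac = np.correlate(env, env, mode="full")[len(env) - 1:]  # lags 0..N-1
    min_lag = int(frame_rate * 60 / hi)      # fastest plausible tempo
    max_lag = int(frame_rate * 60 / lo)      # slowest plausible tempo
    best = min_lag + np.argmax(ac[min_lag:max_lag + 1])
    return 60.0 * frame_rate / best
```

Restricting the search to a plausible lag window avoids the trivial lag-0 peak and octave errors (half or double tempo), which is the same practical concern the comb-filterbank approach handles with resonators tuned across the tempo range.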