
    Sensing with the Motor Cortex

    The primary motor cortex is a critical node in the network of brain regions responsible for voluntary motor behavior. It has been less appreciated, however, that the motor cortex exhibits sensory responses in a variety of modalities, including vision and somatosensation. We review current work that emphasizes the heterogeneity of sensorimotor responses in the motor cortex and focus on its implications for cortical control of movement as well as for brain-machine interface development.

    Active PinScreen: Exploring Spatio-Temporal Tactile Feedback for Multi-Finger Interaction

    Multiple fingers are often used for efficient interaction with handheld computing devices. Currently, any tactile feedback provided is felt on the finger pad or the palm with coarse granularity. In contrast, we present a new tactile feedback technique, Active PinScreen, that applies localised stimuli to multiple fingers with fine spatial and temporal resolution. The tactile screen uses an array of solenoid-actuated magnetic pins with a millimetre-scale form factor which could be deployed for back-of-device handheld use without instrumenting the user. As well as presenting a detailed description of the prototype, we outline potential design configurations and applications of the Active PinScreen and evaluate the human factors of tactile interaction with multiple fingers in a controlled user evaluation. The results of our study show a high recognition rate for directional and patterned stimulation across different grip orientations as well as within- and between-fingers. We end the paper with a discussion of our main findings, limitations of the current design, and directions for future work.
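    The paper does not publish its driver code; the sketch below is a hypothetical Python illustration of how a spatio-temporal pattern, here a left-to-right directional sweep, might be rendered on a small pin array. The 4x4 grid size, frame interval, and set_pin interface are assumptions for illustration, not details from the paper.

        import time

        GRID_W, GRID_H = 4, 4  # assumed pin-array dimensions, not from the paper

        def set_pin(col, row, active):
            """Placeholder for the solenoid driver; swap in real hardware I/O."""
            print(f"pin({col},{row}) -> {'up' if active else 'down'}")

        def sweep_right(frame_ms=50.0):
            """Raise one column of pins at a time, left to right: a directional cue."""
            for col in range(GRID_W):
                for row in range(GRID_H):
                    set_pin(col, row, True)   # raise the whole column
                time.sleep(frame_ms / 1000.0)
                for row in range(GRID_H):
                    set_pin(col, row, False)  # drop it before the next frame

        sweep_right()

    Patterned stimuli, such as the within- and between-finger patterns evaluated in the study, would be expressed the same way, as timed sequences of pin states.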

    Doctor of Philosophy

    The study of haptic interfaces focuses on the use of the sense of touch in human-machine interaction. This dissertation presents a detailed investigation of lateral skin stretch at the fingertip as a means of direction communication. Such tactile communication has applications in a variety of situations where traditional audio and visual channels are inconvenient, unsafe, or already saturated. Examples include handheld consumer electronics, where tactile communication would allow a user to control a device without having to look at it, and in-car navigation systems, where the audio and visual directions provided by existing GPS devices can distract the driver's attention away from the road. Lateral skin stretch, the displacement of the skin of the fingerpad in a plane tangent to the fingerpad, is a highly effective means of communicating directional information. Users are able to correctly identify the direction of skin stretch stimuli with skin displacements as small as 0.1 mm at rates as slow as 2 mm/s. Such stimuli can be rendered by a small, portable device suitable for integration into handheld devices. The design of the device-finger interface affects the ability of the user to perceive the stimuli accurately. A properly designed conical aperture effectively constrains the motion of the finger and provides an interface that is practical for use in handheld devices. When a handheld device renders directional tactile cues on the fingerpad, the user must often mentally rotate those cues from the reference frame of the finger to the world-centered reference frame where those cues are to be applied. Such mental rotation incurs a cognitive cost, requiring additional time to mentally process the stimuli. The magnitude of this cost is a function of the angle of rotation and of the specific orientations of the arm, wrist, and finger. Even with the difficulties imposed by the required mental rotations, lateral skin stretch is a promising means of communicating information through the sense of touch, with the potential to substantially improve certain types of human-machine interaction.
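    As a minimal sketch of the reference-frame problem described above (illustrative Python, not code from the dissertation), the following computes the rotation a directional cue undergoes between the world frame and the finger frame; reducing finger pose to a single azimuth angle is a simplifying assumption.

        def world_to_finger(cue_azimuth_deg, finger_azimuth_deg):
            """Re-express a world-frame cue direction in the finger-centered frame."""
            return (cue_azimuth_deg - finger_azimuth_deg) % 360.0

        # Example: a "north" cue (0 deg) with the finger pointing east (90 deg)
        # must be rendered as a 270-degree skin stretch in the finger's frame;
        # the user performs the inverse of this rotation mentally, which is the
        # cognitive cost the dissertation measures.
        print(world_to_finger(0.0, 90.0))  # 270.0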

    An oscillatory interference model of grid cell firing

    We expand upon our proposal that the oscillatory interference mechanism proposed for the phase precession effect in place cells underlies the grid-like firing pattern of dorsomedial entorhinal grid cells (O'Keefe and Burgess (2005) Hippocampus 15:853-866). The original one-dimensional interference model is generalized to an appropriate two-dimensional mechanism. Specifically, dendritic subunits of layer II medial entorhinal stellate cells provide multiple linear interference patterns along different directions, with their product determining the firing of the cell. Connection of appropriate speed- and direction-dependent inputs onto dendritic subunits could result from an unsupervised learning rule which maximizes postsynaptic firing (e.g. competitive learning). These inputs cause the intrinsic oscillation of subunit membrane potential to increase above theta frequency by an amount proportional to the animal's speed of running in the "preferred" direction. The phase difference between this oscillation and a somatic input at theta frequency essentially integrates velocity, so that the interference of the two oscillations reflects distance traveled in the preferred direction. The overall grid pattern is maintained in environmental location by phase reset of the grid cell by place cells receiving sensory input from the environment, and environmental boundaries in particular. We also outline possible variations on the basic model, including the generation of grid-like firing via the interaction of multiple cells rather than via multiple dendritic subunits. Predictions of the interference model are given for the frequency composition of EEG power spectra and temporal autocorrelograms of grid cell firing as functions of the speed and direction of running and the novelty of the environment. © 2007 Wiley-Liss, Inc.
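    To make the geometry of the mechanism concrete, here is a minimal NumPy sketch (our illustration, not the authors' code) of its static spatial consequence: the product of linear interference patterns along three directions 60 degrees apart yields a hexagonal, grid-like firing map. The spatial scale parameter beta is an arbitrary assumption.

        import numpy as np

        def grid_rate(xy, beta=2.0 * np.pi / 0.5, offset=0.0):
            """Grid-cell-like rate at 2D positions xy (N x 2); grid spacing ~0.5 m."""
            rate = np.ones(len(xy))
            for k in range(3):  # three dendritic subunits, directions 60 deg apart
                d = offset + k * np.pi / 3.0
                u = np.array([np.cos(d), np.sin(d)])         # preferred direction
                rate *= 0.5 * (1.0 + np.cos(beta * xy @ u))  # one interference pattern
            return rate

        # Evaluate over a 2 m x 2 m arena; the peaks of `rates` form a hexagonal grid.
        xs = np.linspace(0.0, 2.0, 200)
        xy = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)
        rates = grid_rate(xy).reshape(200, 200)

    In the full temporal model, the displacement term beta * xy @ u arises dynamically as the accumulated phase difference between the speed-modulated subunit oscillation and the somatic theta input.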

    MetaSpace II: Object and full-body tracking for interaction and navigation in social VR

    MetaSpace II (MS2) is a social Virtual Reality (VR) system where multiple users can not only see and hear but also interact with each other, grasp and manipulate objects, walk around in space, and get tactile feedback. MS2 allows walking in physical space by tracking each user's skeleton in real time and allows users to feel by employing passive haptics, i.e., when users touch or manipulate an object in the virtual world, they simultaneously touch or manipulate a corresponding object in the physical world. To enable these elements in VR, MS2 creates a correspondence in spatial layout and object placement by building the virtual world on top of a 3D scan of the real world. Through the association between the real and virtual world, users are able to walk freely while wearing a head-mounted device, avoid obstacles like walls and furniture, and interact with people and objects. Most current VR environments are designed for a single-user experience where interactions with virtual objects are mediated by hand-held input devices or hand gestures, and users are only shown a representation of their hands floating in front of the camera as seen from a first-person perspective. We believe that representing each user as a full-body avatar controlled by the natural movements of the person in the real world (see Figure 1d) can greatly enhance believability and a user's sense of immersion in VR.
    Comment: 10 pages, 9 figures. Video: http://living.media.mit.edu/projects/metaspace-ii

    Augmenting the Spatial Perception Capabilities of Users Who Are Blind

    People who are blind face a series of challenges and limitations resulting from their inability to see, forcing them to either seek the assistance of a sighted individual or work around the challenge by way of an inefficient adaptation (e.g., following the walls in a room in order to reach a door rather than walking in a straight line to the door). These challenges are directly related to blind users' lack of the spatial perception capabilities normally provided by the human vision system. In order to overcome these spatial perception related challenges, modern technologies can be used to convey spatial perception data through sensory substitution interfaces. This work is the culmination of several projects which address varying spatial perception problems for blind users. First we consider the development of non-visual natural user interfaces for interacting with large displays. This work explores the haptic interaction space in order to find useful and efficient haptic encodings for the spatial layout of items on large displays. Multiple interaction techniques are presented which build on prior research (Folmer et al. 2012), and the efficiency and usability of the most efficient of these encodings is evaluated with blind children. Next we evaluate the use of wearable technology in aiding navigation of blind individuals through large open spaces lacking the tactile landmarks used during traditional white cane navigation. We explore the design of a computer vision application with an unobtrusive aural interface to minimize veering of the user while crossing a large open space. Together, these projects represent an exploration into the use of modern technology in augmenting the spatial perception capabilities of blind users.

    Tissue-conducted spatial sound fields

    We describe experiments using multiple cranial transducers to achieve auditory spatial perceptual impressions via bone conduction (BC) and tissue conduction (TC), bypassing the peripheral hearing apparatus. This could be useful in cases of peripheral hearing damage or where ear occlusion is undesirable. Previous work (e.g. Stanley and Walker 2006; MacDonald and Letowski 2006) indicated that robust lateralization is feasible via tissue conduction. We have utilized discrete signals, stereo, and first-order ambisonics to investigate control of externalization, range, direction in azimuth and elevation, movement, and spaciousness. Early results indicate robust and coherent effects. Current technological implementations are presented and potential development paths discussed.
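    For readers unfamiliar with first-order ambisonics, the sketch below shows the standard B-format encoding equations (with the traditional FuMa 1/sqrt(2) weighting on W). How the four channels are decoded to the cranial transducer array is specific to this work and is not shown; the example source placement is ours.

        import numpy as np

        def encode_b_format(signal, azimuth, elevation):
            """Encode a mono signal at (azimuth, elevation), in radians, to W/X/Y/Z."""
            w = signal / np.sqrt(2.0)                         # omnidirectional component
            x = signal * np.cos(azimuth) * np.cos(elevation)  # front-back
            y = signal * np.sin(azimuth) * np.cos(elevation)  # left-right
            z = signal * np.sin(elevation)                    # up-down
            return w, x, y, z

        # Example: a 1 kHz tone placed 45 degrees to the left, level with the head.
        t = np.linspace(0.0, 1.0, 48000, endpoint=False)
        tone = np.sin(2.0 * np.pi * 1000.0 * t)
        w, x, y, z = encode_b_format(tone, np.deg2rad(45.0), 0.0)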

    A Hybrid Visual Control Scheme to Assist the Visually Impaired with Guided Reaching Tasks

    In recent years, numerous researchers have been working towards adapting technology developed for robotic control for use in high-technology assistive devices for the visually impaired. Such devices have been shown to help visually impaired people live with a greater degree of confidence and independence. However, most prior work has focused on a single problem from mobile robotics, namely navigation in an unknown environment. In this work we address the design and performance of an assistive device application to aid the visually impaired with a guided reaching task. The device follows an eye-in-hand, IBLM visual servoing configuration with a single camera and vibrotactile feedback to the user to direct guided tracking during the reaching task. We present a model for the system that employs a hybrid control scheme based on a Discrete Event System (DES) approach. This approach avoids significant problems inherent in the competing classical control or conventional visual servoing models for upper limb movement found in the literature. The proposed hybrid model parameterizes the partitioning of the image state-space, producing a variable-size targeting window for compensatory tracking in the reaching task. The partitioning is created by positioning hypersurface boundaries within the state space which, when crossed, trigger events that cause DES-controller state transitions and enable differing control laws. A set of metrics encompassing accuracy (D), precision (θ_e), and overall tracking performance (ψ) is also proposed to quantify system performance so that the effects of parameter variations and alternate controller configurations can be compared. To this end, a prototype called aiReach was constructed and experiments were conducted testing the functional use of the system and other supporting aspects of system behaviour with participant volunteers. Results are presented validating the system design and demonstrating effective use of a two-parameter partitioning scheme that employs a targeting window with an additional hysteresis region to filter perturbations due to natural proprioceptive limitations, enabling precise control of upper limb movement. Results from the experiments show that accuracy performance increased with the dual-parameter hysteresis target window model (0.91 ≤ D ≤ 1, μ(D) = 0.9644, σ(D) = 0.0172) over the single-parameter fixed window model (0.82 ≤ D ≤ 0.98, μ(D) = 0.9205, σ(D) = 0.0297), while the precision metric θ_e remained relatively unchanged. In addition, the overall tracking performance metric produces scores which correctly rank the performance of the guided reaching tasks from most difficult to easiest.
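    The thesis implementation is not reproduced here; the following is an illustrative Python sketch of the core idea behind the dual-parameter targeting window, a hysteresis band expressed as a tiny discrete-event controller. The names r_inner and r_outer and the two-state machine are our assumptions for illustration.

        class HysteresisWindow:
            """Tracking error must fall below r_inner to count as on-target, and
            must exceed r_outer before corrective guidance re-engages, so small
            proprioceptive tremor near the boundary does not toggle the state."""

            def __init__(self, r_inner, r_outer):
                assert r_inner < r_outer
                self.r_inner, self.r_outer = r_inner, r_outer
                self.state = "CORRECTING"

            def update(self, error):
                if self.state == "CORRECTING" and error <= self.r_inner:
                    self.state = "ON_TARGET"   # event: crossed inner boundary
                elif self.state == "ON_TARGET" and error >= self.r_outer:
                    self.state = "CORRECTING"  # event: crossed outer boundary
                return self.state

        window = HysteresisWindow(r_inner=0.05, r_outer=0.15)
        for e in (0.30, 0.10, 0.04, 0.10, 0.20):
            print(window.update(e))
        # CORRECTING, CORRECTING, ON_TARGET, ON_TARGET, CORRECTING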