
    A novel haptic model and environment for maxillofacial surgical operation planning and manipulation

    This paper presents a practical method and a new haptic model to support the manipulation of bones and bone segments during surgical operation planning in a virtual environment with a haptic interface. To perform an effective dental surgery, all operation-related information about the patient must be available beforehand so that the operation can be planned and complications avoided. A haptic interface with an accurate virtual patient model to support the planning of bone cuts is therefore critical and useful for surgeons. The proposed system uses DICOM images taken from a digital tomography scanner to create a mesh model of the filtered skull, from which the jaw bone can be isolated for further use. A novel bone-cutting solution uses the haptic tool to define the cutting plane in the bone; this new approach creates three new meshes from the original model. In this way computational cost is optimized and real-time feedback can be achieved during all bone manipulations. During mesh cutting, a friction profile predefined in the haptic system simulates the force-feedback feel of different bone densities.
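The plane-based cut described above can be sketched as a triangle classification pass: a cutting plane defined by the haptic tool partitions the mesh's triangles into the sets from which the new meshes would be built. This Python sketch is hypothetical (the names and the deferred re-triangulation step are assumptions, not the paper's code):

```python
def split_mesh_by_plane(vertices, triangles, plane_point, plane_normal):
    """Classify each triangle of a mesh against a cutting plane.

    Sketch of the first step of a plane-based cut: triangles wholly on
    one side go to that side's new mesh; triangles straddling the plane
    would be re-triangulated to form the third mesh.
    """
    def signed_distance(v):
        # Dot product of (vertex - plane point) with the plane normal.
        return sum((v[i] - plane_point[i]) * plane_normal[i] for i in range(3))

    positive, negative, straddling = [], [], []
    for tri in triangles:
        d = [signed_distance(vertices[i]) for i in tri]
        if all(x >= 0 for x in d):
            positive.append(tri)
        elif all(x <= 0 for x in d):
            negative.append(tri)
        else:
            straddling.append(tri)  # crosses the plane: needs re-triangulation
    return positive, negative, straddling
```

Classifying against the plane first keeps the expensive re-triangulation confined to the (usually small) straddling set, which is one way the computational cost stays compatible with real-time haptic rates.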

    Engineering data compendium. Human perception and performance. User's guide

    The Engineering Data Compendium was the product of a research and development program (the Integrated Perceptual Information for Designers project) aimed at facilitating the application of basic research findings in human performance to the design of military crew systems. The principal objective was to develop a workable strategy for (1) identifying and distilling information of potential value to system design from the existing research literature, and (2) presenting this technical information in a way that would aid its accessibility, interpretability, and applicability for system designers. The present four volumes of the Engineering Data Compendium represent the first implementation of this strategy. This first volume, the User's Guide, contains a description of the program and instructions for its use.

    A comparison of three materials used for tactile symbols to communicate colour to children and young people with visual impairments

    A series of 14 tactile symbols was developed to represent different colours and shades for children and young people who are blind or visually impaired. A study compared three methods of producing the symbols: (1) embroidered thread, (2) heated ‘swell’ paper, and (3) plastic produced by Additive Manufacturing (AM; three-dimensional printing). For all three materials, recognition times for particular symbols varied between 2.40 and 3.95 s. The average times across all colours were 2.26 s for the AM material, 3.20 s for swell paper, and 4.03 s for embroidered symbols. These findings can be explained by the AM material (polylactide) being firmer and more easily perceived by touch than the other two materials. While AM plastic offers a potentially useful means of communicating colour for appropriate objects, traditional media remain important in certain contexts.

    Master of Science

    Haptic interactions with smartphones are generally restricted to vibrotactile feedback, which offers limited distinction between delivered tactile cues. The lateral movement of a small, high-friction contactor at the fingerpad can be used to induce skin stretch tangent to the skin's surface. This method has been shown to reliably communicate four cardinal directions with 1 mm translations of the device's contactor when finger motion is properly restrained. While earlier research used a thimble to restrain the finger, this interface has been made portable by incorporating a simple conical hole as a finger restraint. An initial portable design used RC hobby servos and the conical-hole finger restraint, but its shape and size were not compatible with smartphone form factors, and it had significant compliance and backlash that had to be compensated for with additional control schemes. In contrast, this thesis presents the design, fabrication, and testing of a low-profile skin-stretch display (LPSSD) with a novel actuation design for delivering complex tactile cues with minimal backlash or hysteresis of the skin contactor, or "tactor." The flatter mechanism features embedded sensors for fingertip cursor control and selection. The device's nonlinear tactor motions are compensated for using table look-up and high-frequency open-loop control to create direction cues with 1.8 mm radial tactor displacements in 16 directions (distributed evenly every 22.5°) before returning to center. Two LPSSDs were incorporated into a smartphone peripheral and used in single-handed and bimanual tests to identify the 16 directions. Users also participated in "relative" identification tests, in which they were first given a reference cue in the forward/north direction, followed by the cue direction they were to identify.
    Tests were performed with the user's thumbs oriented in the forward direction and with thumbs angled slightly inward, similar to the angled-thumb orientation of console game controllers. Users showed increased performance with the angled-thumb orientation. They performed similarly whether stimuli were delivered to the right or left thumb, and performed significantly better when judging direction cues with both thumbs simultaneously. Participants also performed slightly better at identifying relative direction cues than absolute ones.
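The 16-direction cue set (1.8 mm radial displacement, one cue every 22.5°) lends itself to a precomputed table of tactor end points, in the spirit of the table look-up control described above. This Python sketch is hypothetical; the function name and the axis convention (y forward, "north" as cue index 0) are assumptions:

```python
import math

RADIUS_MM = 1.8      # radial tactor displacement from the abstract
N_DIRECTIONS = 16    # cues distributed evenly every 22.5 degrees

def direction_table():
    """Precompute (x, y) tactor targets in mm for the 16 direction cues."""
    table = []
    for i in range(N_DIRECTIONS):
        theta = math.radians(i * 360.0 / N_DIRECTIONS)  # 22.5 deg steps
        # Index 0 points forward/north; angles advance clockwise.
        table.append((RADIUS_MM * math.sin(theta), RADIUS_MM * math.cos(theta)))
    return table
```

A table like this lets the open-loop controller drive the tactor out to the stored point and back to center without trigonometry in the control loop.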

    Multimodal Accessibility of Documents


    Sensory Communication

    Contains table of contents for Section 2 and reports on five research projects. Supported by: National Institutes of Health Contract 2 R01 DC00117; National Institutes of Health Contract 1 R01 DC02032; National Institutes of Health Contract 2 P01 DC00361; National Institutes of Health Contract N01 DC22402; National Institutes of Health Grant R01-DC001001; National Institutes of Health Grant R01-DC00270; National Institutes of Health Grant 5 R01 DC00126; National Institutes of Health Grant R29-DC00625; U.S. Navy - Office of Naval Research Grant N00014-88-K-0604; U.S. Navy - Office of Naval Research Grant N00014-91-J-1454; U.S. Navy - Office of Naval Research Grant N00014-92-J-1814; U.S. Navy - Naval Air Warfare Center Training Systems Division Contract N61339-94-C-0087; U.S. Navy - Naval Air Warfare Center Training Systems Division Contract N61339-93-C-0055; U.S. Navy - Office of Naval Research Grant N00014-93-1-1198; National Aeronautics and Space Administration/Ames Research Center Grant NCC 2-77.

    Combining physical constraints with geometric constraint-based modeling for virtual assembly

    The research presented in this dissertation aims to create a virtual assembly environment capable of simulating the constant and subtle interactions (hand-part, part-part) that occur during manual assembly, and of providing appropriate feedback to the user in real time. A virtual assembly system called SHARP (System for Haptic Assembly and Realistic Prototyping) is created, which utilizes simulated physical constraints for part placement during assembly. The first approach taken in this research utilized Voxmap Point Shell (VPS) software to implement collision detection and physics-based modeling in SHARP. A volumetric approach, in which complex CAD models were represented by numerous small cubic voxel elements, was used to obtain fast physics update rates (500-1000 Hz). A novel dual-handed haptic interface was developed and integrated into the system, allowing the user to manipulate parts with both hands simultaneously. However, the coarse model approximations used for collision detection and physics-based modeling only allowed assembly when the minimum clearance was limited to ∼8-10%. To address the low-clearance assembly problem, the second effort focused on importing accurate parametric CAD (B-Rep) models into SHARP. These accurate B-Rep representations are used for collision detection as well as for simulating physical contacts more accurately. A new hybrid approach is presented, which combines simulated physical constraints with geometric constraints that can be defined at runtime. Different case studies are used to identify the combination of methods (collision detection, physical constraints, geometric constraints) best able to simulate intricate interactions and environment behavior during manual assembly. An innovative automatic constraint recognition algorithm is created and integrated into SHARP.
    The feature-based approach used in the algorithm design facilitates faster identification of the potential geometric constraints that need to be defined. This results in optimized system performance while providing a more natural user experience during assembly.
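The clearance limitation of the voxel-based first approach can be illustrated with a minimal occupancy-grid collision test. This is a sketch, not the VPS implementation: two parts whose sampled surface points fall into the same grid cell are reported as colliding, so parts separated by less than one voxel can never be assembled.

```python
def voxelize(points, voxel_size):
    """Map 3D points to the set of occupied grid cells (integer triples)."""
    return {tuple(int(c // voxel_size) for c in p) for p in points}

def voxel_collision(points_a, points_b, voxel_size):
    """Coarse collision test: do the two parts occupy any common cell?

    Illustrates why coarse voxel approximations block low-clearance
    assembly: any gap smaller than voxel_size reads as a collision.
    """
    return bool(voxelize(points_a, voxel_size) & voxelize(points_b, voxel_size))
```

Shrinking the voxel size raises the achievable clearance resolution but grows the cell sets, which is the speed-versus-accuracy trade-off that motivated the move to exact B-Rep collision checks.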

    Trajectory Correction for Visually Impaired Athletes on 100 m Paralympic Races

    The paper reports an experimental study carried out in Manaus (Amazonas, Brazil) with the participation of eight visually impaired athletes in 100 m sprint Paralympic races. A trajectory correction system was used, based on an accelerometer and a gyroscope for motion detection, an algorithm to track the athletes' trajectories, and a haptic actuator for interaction with the athletes. The experimental results show the relevance of using such systems in Paralympic 100 m races for visually impaired athletes, mainly with the purpose of increasing their autonomy by mimicking their guides.
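A guidance system of this kind ultimately maps an estimated lateral deviation to a left or right haptic cue. The sketch below is hypothetical; the dead-band threshold and the sign convention are assumptions, not taken from the paper:

```python
# Assumed convention: positive deviation means the athlete has drifted
# right of the lane centre, so the cue asks for a correction to the left.
DEAD_BAND_M = 0.15  # assumed threshold; small drift produces no cue

def haptic_cue(lateral_deviation_m):
    """Return 'left', 'right', or None for a drift estimate in metres."""
    if lateral_deviation_m > DEAD_BAND_M:
        return "left"
    if lateral_deviation_m < -DEAD_BAND_M:
        return "right"
    return None
```

The dead band matters in practice: cueing every centimetre of drift would flood the athlete with vibrations, while a threshold lets small oscillations self-correct.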

    Optimizing The Design Of Multimodal User Interfaces

    Due to a current lack of principle-driven multimodal user interface design guidelines, designers may encounter difficulties when choosing the most appropriate display modality for given users or specific tasks (e.g., verbal versus spatial tasks). The development of multimodal display guidelines from both a user and a task-domain perspective is thus critical to achieving successful human-system interaction. Specifically, there is a need to determine how to design task information presentation (e.g., via which modalities) to capitalize on an individual operator's information processing capabilities and the inherent efficiencies associated with redundant sensory information, thereby alleviating information overload. The present effort addresses this issue by proposing a theoretical framework (Architecture for Multi-Modal Optimization, AMMO) from which multimodal display design guidelines and adaptive automation strategies may be derived. The framework's foundation extends, at a functional working memory (WM) level, existing information processing theories and models with the latest findings in cognitive psychology, neuroscience, and other allied sciences. The utility of AMMO lies in its ability to provide designers with strategies for directing system design, as well as dynamic adaptation strategies (i.e., multimodal mitigation strategies) in support of real-time operations. To validate specific components of AMMO, a subset of AMMO-derived multimodal design guidelines was evaluated in a simulated weapons-control multitasking environment. The results demonstrated significant improvements in user response time and accuracy when multimodal display cues (i.e., auditory and tactile, individually and in combination) were used to augment the visual display of information, thereby distributing human information processing across multiple sensory and WM resources.
    These results provide initial empirical support for the overall AMMO model and a subset of the principle-driven multimodal design guidelines derived from it. The empirically validated guidelines may be applicable to a wide range of information-intensive, computer-based multitasking environments.

    Tactile perception of spatially distributed vibratory stimuli on the fingerpad

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2005. Includes bibliographical references (p. 86-87). Using a pin-array tactile display as a stimulator of the fingerpad, a psychophysical study of vibrotactile perception was conducted. Passive touch with low-frequency vibratory stimuli could serve as an alternative to active touch for the presented stimuli: polygons, round shapes, and gratings. Regarding the effect of frequency on texture discrimination, the highest proportions of correct answers corresponded to the most sensitive frequency ranges of each mechanoreceptor. Spatial acuity decreased as stimulus frequency increased when the stimuli were presented by an equal number of contactors. By analogy with color vision, a spatial configuration of multiple contactors was proposed to deliver an intermediate pitch using a compound waveform, defined as a sinusoidal stimulus presented by four contactors vibrating at 30 Hz and 240 Hz. The subjects perceived the compound waveform and the pure tone as qualitatively different. When the high-frequency component had three times the intensity of the other component, the perceived frequency of the compound waveform was about 120 Hz, much lower than the 240 Hz component frequency. The experimental results were explained by the hypothesis of a ratio code: a neural mechanism signaling the frequency of vibratory stimuli based on the ratio of the activated populations of mechanoreceptors. In addition, the intensity of the components also affected the overall perceived frequency. by Minseung Ahn. S.M.
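The compound stimulus described in the thesis (a 30 Hz and a 240 Hz sinusoid, with the high-frequency component at three times the intensity of the low one) can be written as a sum of two sine terms. The sampling function below is a sketch, not the thesis apparatus; the parameter names are assumptions:

```python
import math

def compound_waveform(t, low_hz=30.0, high_hz=240.0, ratio=3.0):
    """Sample the two-component stimulus at time t (seconds).

    ratio scales the high-frequency component's amplitude relative to
    the low-frequency one (3.0 matches the condition in the abstract).
    """
    return (math.sin(2 * math.pi * low_hz * t)
            + ratio * math.sin(2 * math.pi * high_hz * t))
```

Because 240 Hz is an integer multiple of 30 Hz, the compound signal repeats every 1/30 s, which is what lets a ratio-code mechanism assign it a single intermediate perceived pitch.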