
    Pivot calibration concept for sensor attached mobile c-arms

    Medical augmented reality has been actively studied for decades, and many methods have been proposed to revolutionize clinical procedures. One example is the camera augmented mobile C-arm (CAMC), which provides real-time video augmentation onto medical images by rigidly mounting and calibrating a camera to the imaging device. Since then, several CAMC variations have been suggested by calibrating 2D/3D cameras, trackers, and more recently a Microsoft HoloLens to the C-arm. Different calibration methods have been applied to establish the correspondence between the rigidly attached sensor and the imaging device. A crucial step for these methods is the acquisition of X-ray images or 3D reconstruction volumes, therefore requiring the emission of ionizing radiation. In this work, we analyze the mechanical motion of the device and propose an alternative method to calibrate sensors to the C-arm without emitting any radiation. Given a sensor rigidly attached to the device, we introduce an extended pivot calibration concept to compute the fixed translation from the sensor to the C-arm rotation center. The fixed relationship between the sensor and the rotation center can be formulated as a pivot calibration problem with the pivot point moving on a locus. Our method exploits the rigid C-arm motion, which describes a torus surface, to solve this calibration problem. We explain the geometry of the C-arm motion and its relation to the attached sensor, propose a calibration algorithm, and show its robustness against noise, as well as trajectory and observed pose density, by computer simulations. We discuss this geometry-based formulation and its potential extensions to different C-arm applications.
    Comment: Accepted for Image-Guided Procedures, Robotic Interventions, and Modeling 2020, Houston, TX, US

    Design and Implementation of an Interactive Animatronic System for Guest Response Analysis

    In theme park-based entertainment applications, there is a need for interactive, autonomous animatronic systems to create engaging and compelling experiences for guests. The animatronic figures must identify guests and recognize their status in dynamic interactions for enhanced acceptance and effectiveness as socially interactive agents, within the general framework of human-robot interaction. The design and implementation of an interactive, autonomous animatronic system in the form of a tabletop dragon, and a comparison of guest responses in its passive and interactive modes, are presented in this work. The dragon's capabilities include a four-degrees-of-freedom head, moving wings, tail, jaw, blinking eyes, and sound effects. Human identification, using a depth camera (Carmine from PrimeSense), an open-source middleware (NITE from OpenNI), Java-based Processing, and an Arduino microcontroller, has been implemented into the system in order to track one or more guests within the field of view of the camera. The details of the design and construction of the dragon model, algorithm development for interactive autonomous behavior using a vision system, the experimental setup, and implementation results under different conditions are presented. Guest experiences with the dragon operating in passive and interactive configurations have been compared both quantitatively and qualitatively through surveys and observations, for different age groups, from elementary school children to college students. The statistical significance of the survey results is presented along with a discussion on the scope of further work.
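    The core of such a vision-driven behavior is mapping a tracked guest position to head pose commands. A minimal sketch of that mapping is shown below; the camera-frame convention (x right, y up, z forward/depth) and the function name are assumptions for illustration, not taken from the paper's Processing/Arduino implementation.

```python
import math

def head_angles(x, y, z):
    """Map a tracked guest position (camera frame, metres) to pan and
    tilt angles in degrees for an animatronic head.

    Assumed convention: x right, y up, z forward (depth from camera).
    """
    pan = math.degrees(math.atan2(x, z))                  # yaw toward guest
    tilt = math.degrees(math.atan2(y, math.hypot(x, z)))  # pitch toward guest
    return pan, tilt
```

    For example, a guest standing two metres ahead and two metres to the right yields a 45-degree pan and zero tilt; the resulting angles would be clamped to the servo limits before being sent to the microcontroller.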

    SAFER: Search and Find Emergency Rover

    When disaster strikes and causes a structure to collapse, it poses a unique challenge to search and rescue teams as they assess the situation and search for survivors. Currently, there are very few tools these teams can use to gather important information about the situation while allowing members to stay at a safe distance. SAFER, the Search and Find Emergency Rover, is an unmanned, remotely operated vehicle that can provide early reconnaissance to search and rescue teams so they may have more information to prepare themselves for the dangers that lie inside the wreckage. Over the past year, this team has restored a bare, non-operational chassis inherited from Roverwerx 2012 into a rugged and operational rover with increased functionality and reliability. SAFER uses a 360-degree camera to deliver real-time visual reconnaissance to the operator, who can remain safely stationed on the outskirts of the disaster. With strong drive motors providing enough torque to traverse steep obstacles and enough power to travel at up to 3 ft/s, SAFER can cover ground quickly and effectively over its 1-3 hour battery life, maximizing reconnaissance for the team. Additionally, SAFER carries three flashing beacons that the operator can drop when a victim is found, so that team members entering the scene can easily locate victims. In the future, other teams may wish to improve upon this iteration by adding thermal imaging, air quality sensors, and potentially a robotic arm with a camera that can see into spaces too small for the entire rover to enter.

    Pedaling and Braking Forces and Their Effect on Suspension Performance

    The goal of this project is to produce a quantifiable relationship between pedaling and braking forces and suspension behavior. The test results will serve as a benchmark against which other suspension designs can be compared in an objective, numerical manner. This report details the work completed towards determining this relationship and is primarily focused on the design development of the data acquisition system and the analysis of braking forces, pedaling forces, and suspension response. A 2011 Specialized Stumpjumper FSR Expert Evo was used as the development platform. The system will be used to measure various rear suspension designs for comparison.

    Some NASA contributions to human factors engineering: A survey

    This survey presents NASA's contributions to the state of the art of human factors engineering and indicates that these contributions have a variety of applications to non-aerospace activities. Emphasis is placed on contributions relating to man's sensory, motor, decision-making, and cognitive behavior and on applications that advance human factors technology.

    Current Advances in Internet of Underground Things

    The latest developments in the Internet of Underground Things (IOUT) are covered in this chapter. First, the IOUT architecture is discussed, followed by an explanation of the challenges faced in this paradigm. Comprehensive coverage of the different IOUT components is then presented, including communications, sensing, and system integration with the cloud. Applications of IOUT across various disciplines are also surveyed in depth, including decision agriculture, pipeline monitoring, border control, and oil wells.

    Decision Agriculture

    In this chapter, the latest developments in the field of decision agriculture are discussed. The practice of management zones in digital agriculture is described for efficient and smart farming, and the methodology for delineating management zones is presented. Modeling of decision support systems is explained, along with a discussion of the issues and challenges in this area. Precision agriculture technology is also considered. The chapter then surveys the state of decision agriculture technologies in countries such as Bulgaria, Denmark, France, Israel, Malaysia, Pakistan, the United Kingdom, Ukraine, and Sweden. Finally, different field factors, such as GPS accuracy and crop growth, are also analyzed.

    Internet of underground things in precision agriculture: Architecture and technology aspects

    The projected increases in world population and the need for food have recently motivated the adoption of information technology solutions in crop fields within precision agriculture approaches. The Internet of Underground Things (IOUT), which consists of sensors and communication devices partly or completely buried underground for real-time soil sensing and monitoring, emerges from this need. This new paradigm facilitates seamless integration of underground sensors, machinery, and irrigation systems with the complex social network of growers, agronomists, crop consultants, and advisors. In this paper, state-of-the-art communication architectures are reviewed, and the underlying sensing technology and communication mechanisms for IOUT are presented. Moreover, recent advances in the theory and applications of wireless underground communication are also reported. Finally, major challenges in IOUT design and implementation are identified.

    Concept and Design of a Hand-held Mobile Robot System for Craniotomy

    This work demonstrates a highly intuitive robot for surgical craniotomy procedures. Utilising a wheeled hand-held robot to navigate the craniotomy drill over a patient's skull, the system does not remove the surgeon from the procedure, but supports them during this critical phase of the operation.

    Docking Haptics: Extending the Reach of Haptics by Dynamic Combinations of Grounded and Worn Devices

    Grounded haptic devices can provide a variety of forces but have limited working volumes. Wearable haptic devices operate over a large volume but are relatively restricted in the types of stimuli they can generate. We propose the concept of docking haptics, in which different types of haptic devices are dynamically docked at run time. This creates a hybrid system, where the potential feedback depends on the user's location. We show a prototype docking haptic workspace, combining a grounded six-degrees-of-freedom force feedback arm with a hand exoskeleton. We are able to create the sensation of weight on the hand when it is within reach of the grounded device; away from the grounded device, hand-referenced force feedback is still available. A user study demonstrates that users can successfully discriminate weight when using docking haptics, but not with the exoskeleton alone. Such hybrid systems would be able to change configuration further, for example docking two grounded devices to a hand in order to deliver twice the force, or to extend the working volume. We suggest that the docking haptics concept can thus extend the practical utility of haptics in user interfaces.
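    The location-dependent feedback the abstract describes can be sketched as a simple mode selection: inside the grounded arm's reach the devices are docked and a world-grounded weight force is available, outside it only hand-referenced forces remain. The reach radius, names, and data shapes below are illustrative assumptions, not details from the paper.

```python
import math
from dataclasses import dataclass

GROUNDED_REACH = 0.8  # assumed working radius of the grounded arm, metres

@dataclass
class Feedback:
    weight: tuple  # world-grounded force; only available while docked
    grasp: tuple   # hand-referenced exoskeleton force; always available

def render(hand_pos, weight_force, grasp_force):
    """Select active haptic channels for the current hand position.

    Within the grounded arm's reach the two devices are docked and a
    net weight force can be grounded; beyond it, only the worn
    exoskeleton's hand-referenced feedback remains.
    """
    docked = math.sqrt(sum(c * c for c in hand_pos)) <= GROUNDED_REACH
    return Feedback(weight=weight_force if docked else (0.0, 0.0, 0.0),
                    grasp=grasp_force)
```

    A real system would also handle the mechanical dock/undock transition and blend forces smoothly at the reach boundary rather than switching abruptly.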