
    Autonomous Robot Navigation with Rich Information Mapping in Nuclear Storage Environments

    This paper presents our approach to developing a method for an unmanned ground vehicle (UGV) to perform inspection tasks in nuclear environments using rich information maps. To reduce inspectors' exposure to elevated radiation levels, an autonomous navigation framework for the UGV has been developed to perform routine inspections such as counting containers, recording their ID tags and performing gamma measurements on some of them. In order to achieve autonomy, a rich information map is generated which includes not only the 2D global cost map consisting of obstacle locations for path planning, but also the location and orientation information for the objects of interest from the inspector's perspective. The UGV's autonomy framework uses this information to prioritize the locations it navigates to in order to perform the inspections. In this paper, we present our method of generating this rich information map, originally developed to meet the requirements of the International Atomic Energy Agency (IAEA) Robotics Challenge. We demonstrate the performance of our method in a simulated testbed environment containing uranium hexafluoride (UF6) storage container mock-ups.
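    The abstract does not specify the map's data format or the prioritization rule; the sketch below only illustrates the idea of a rich information map entry that couples a container's pose and inspection needs with a simple, hypothetical distance-based visit order. All field and function names are assumptions, not the authors' implementation.

from dataclasses import dataclass
import heapq
import math

@dataclass
class InspectionTarget:
    container_id: str     # ID tag to be recorded during the inspection
    pose_xy: tuple        # (x, y) position in the global cost-map frame
    yaw: float            # orientation of the tag-facing side, radians
    needs_gamma: bool     # whether a gamma measurement is requested here

def build_schedule(targets, robot_xy):
    """Order targets by straight-line distance from the robot (illustrative rule only)."""
    heap = []
    for i, t in enumerate(targets):
        dist = math.hypot(t.pose_xy[0] - robot_xy[0], t.pose_xy[1] - robot_xy[1])
        heapq.heappush(heap, (dist, i, t))   # the index i breaks ties without comparing targets
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

targets = [InspectionTarget("UF6-042", (3.0, 1.5), 1.57, True),
           InspectionTarget("UF6-007", (0.5, 0.2), 0.0, False)]
print([t.container_id for t in build_schedule(targets, (0.0, 0.0))])   # ['UF6-007', 'UF6-042']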

    An optically actuated surface scanning probe

    We demonstrate the use of an extended, optically trapped probe that is capable of imaging surface topography with nanometre precision whilst applying ultra-low, femtonewton-scale forces. This degree of precision and sensitivity is achieved through three distinct strategies. First, the probe itself is shaped in such a way as to soften the trap along the sensing axis and stiffen it in the transverse directions. Next, these characteristics are enhanced by selectively position-clamping independent motions of the probe. Finally, force clamping is used to refine the surface contact response. Detailed analyses are presented for each of these mechanisms. To test our sensor, we scan it laterally over a calibration sample consisting of a series of graduated steps, and demonstrate a height resolution of ∼11 nm. Using the equipartition theorem, we estimate that an average force of only ∼140 fN is exerted on the sample during the scan, making this technique ideal for the investigation of delicate biological samples.
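    The force figure quoted above follows from the equipartition theorem: the probe's thermal position variance gives the trap stiffness via k·⟨x²⟩ = k_B·T, and the contact force is then stiffness times deflection. A minimal numerical sketch of that estimate is given below; the 30 nm fluctuation and deflection values are placeholders chosen to reproduce the femtonewton scale, not values reported in the paper.

K_B = 1.380649e-23   # Boltzmann constant, J/K

def trap_stiffness(var_x_m2, temperature_k=298.0):
    """Stiffness along the soft sensing axis from the position variance <x^2> (equipartition)."""
    return K_B * temperature_k / var_x_m2

def contact_force(stiffness_n_per_m, deflection_m):
    """Force exerted on the sample for a given probe deflection."""
    return stiffness_n_per_m * deflection_m

k = trap_stiffness(var_x_m2=(30e-9) ** 2)          # ~30 nm rms thermal fluctuations (placeholder)
f = contact_force(k, 30e-9)                        # 30 nm deflection (placeholder)
print(f"k = {k:.2e} N/m, F = {f*1e15:.0f} fN")     # ~1.4e-13 N, i.e. on the order of 100 fN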

    Quantifying Cross-scatter Contamination in Biplane Fluoroscopy Motion Analysis Systems

    Biplane fluoroscopy is used for dynamic in vivo three-dimensional motion analysis of various joints of the body. Cross-scatter between the two fluoroscopy systems may limit tracking accuracy. This study measured the magnitude and effects of cross-scatter in biplane fluoroscopic images. Four cylindrical phantoms of 4-, 6-, 8-, and 10-in. diameter were imaged at varying kVp levels to determine the cross-scatter fraction and contrast-to-noise ratio (CNR). Monte Carlo simulations quantified the effect of the gantry angle on the cross-scatter fraction. A cadaver foot with implanted beads was also imaged. The effect of cross-scatter on marker-based tracking accuracy was investigated. Results demonstrated that the cross-scatter fraction, averaged across kVp, varied from 0.15 for the 4-in. cylinder to 0.89 for the 10-in. cylinder. The average decrease in CNR due to cross-scatter ranged from 5% for the 4-in. cylinder to 36% for the 10-in. cylinder. In simulations, the cross-scatter fraction increased with the gantry angle for the 8- and 10-in. cylinders. Cross-scatter significantly increased static-tracking error by 15%, 25%, and 38% for the 6-, 8-, and 10-in. phantoms, respectively, with no significant effect for the foot specimen. The results demonstrated submillimeter marker-based tracking for a range of phantom sizes, despite cross-scatter degradation.
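    For reference, a common way to quantify these two metrics, and the one assumed in this sketch, is to measure the cross-scatter fraction as the signal added by the opposing tube relative to the single-tube (primary) signal, and the CNR as the object/background contrast divided by the background noise. The paper's exact definitions may differ.

import numpy as np

def cross_scatter_fraction(image_both_tubes, image_single_tube):
    """Added cross-scatter signal relative to the primary (single-tube) signal."""
    primary = image_single_tube.mean()
    scatter = image_both_tubes.mean() - primary
    return scatter / primary

def cnr(roi_object, roi_background):
    """Contrast-to-noise ratio between an object ROI and a background ROI."""
    return abs(roi_object.mean() - roi_background.mean()) / roi_background.std()

# Synthetic example: ~40% cross-scatter added to a ~100-count primary image.
rng = np.random.default_rng(0)
primary = rng.normal(100.0, 5.0, size=(64, 64))
both = primary + rng.normal(40.0, 5.0, size=(64, 64))
print(round(cross_scatter_fraction(both, primary), 2))   # ~0.4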

    Reliable vision-guided grasping

    Automated assembly of truss structures in space requires vision-guided servoing for grasping a strut when its position and orientation are uncertain. This paper presents a methodology for efficient and robust vision-guided robot grasping alignment. The vision-guided grasping problem is related to vision-guided 'docking' problems. It differs from other hand-in-eye visual servoing problems, such as tracking, in that the distance from the target is a relevant servo parameter. The methodology described in this paper is a hierarchy of levels in which the vision/robot interface is decreasingly 'intelligent' and increasingly fast. Speed is achieved primarily by information reduction, which exploits region-of-interest windows in the image plane and feature motion prediction. These reductions invariably require stringent assumptions about the image. Therefore, at a higher level, these assumptions are verified using slower, more reliable methods. This hierarchy provides robust error recovery: when a lower-level routine fails, the next-higher routine is called, and so on. A working system is described which visually aligns a robot to grasp a cylindrical strut. The system uses a single camera mounted on the end effector of a robot and requires only crude calibration parameters. The grasping procedure is fast and reliable, with a multi-level error recovery system.
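    The hierarchy described above can be pictured as a simple fallback chain: the fastest routine, which assumes a small region-of-interest window around the predicted feature, is tried first, and each failure escalates to a slower, more reliable level. The sketch below shows only that control flow; the three routines are stubs, and their names and the three-level split are assumptions rather than the paper's actual modules.

def track_in_roi_window(image, predicted_px):
    """Fast: search only a small region-of-interest window around the predicted feature."""
    ...   # stub; return a feature location, or None on failure

def detect_in_full_image(image):
    """Slower: drop the ROI assumption and search the whole image."""
    ...   # stub

def full_model_verification(image):
    """Slowest, most reliable: verify the strut model against all image evidence."""
    ...   # stub

def locate_strut(image, predicted_px):
    """Try each level in order; a None result means the level failed, so escalate."""
    levels = (lambda im: track_in_roi_window(im, predicted_px),
              detect_in_full_image,
              full_model_verification)
    for routine in levels:
        result = routine(image)
        if result is not None:
            return result            # success at this level, no need to go slower
    raise RuntimeError("all vision levels failed; abort the grasp and re-acquire")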

    Embodied Precision: Intranasal Oxytocin Modulates Multisensory Integration

    © 2018 Massachusetts Institute of Technology. Multisensory integration processes are fundamental to our sense of self as embodied beings. Bodily illusions, such as the rubber hand illusion (RHI) and the size-weight illusion (SWI), allow us to investigate how the brain resolves conflicting multisensory evidence during perceptual inference in relation to different facets of body representation. In the RHI, synchronous tactile stimulation of a participant's hidden hand and a visible rubber hand creates illusory body ownership; in the SWI, the perceived size of the body can modulate the estimated weight of external objects. According to Bayesian models, such illusions arise as an attempt to explain the causes of multisensory perception and may reflect the attenuation of somatosensory precision, which is required to resolve perceptual hypotheses about conflicting multisensory input. Recent hypotheses propose that the precision of sensorimotor representations is determined by modulators of synaptic gain, like dopamine, acetylcholine, and oxytocin. However, these neuromodulatory hypotheses have not been tested in the context of embodied multisensory integration. The present double-blind, placebo-controlled, crossover study (N = 41 healthy volunteers) aimed to investigate the effect of intranasal oxytocin (IN-OT) on multisensory integration processes, tested by means of the RHI and the SWI. Results showed that IN-OT enhanced the subjective feeling of ownership in the RHI only when synchronous tactile stimulation was involved. Furthermore, IN-OT increased an embodied version of the SWI (quantified as estimation error during a weight estimation task). These findings suggest that oxytocin might modulate processes of visuotactile multisensory integration by increasing the precision of top-down signals against bottom-up sensory input.
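    The Bayesian account invoked above is usually formalized as precision-weighted cue combination: each sensory estimate is weighted by its precision (inverse variance), so attenuating or boosting one channel's precision shifts the fused percept. The toy numbers below only illustrate that mechanism and are not data from the study.

def fuse(estimates, precisions):
    """Precision-weighted average of independent Gaussian cues: (posterior mean, posterior precision)."""
    total = sum(precisions)
    mean = sum(p * x for x, p in zip(estimates, precisions)) / total
    return mean, total

# Toy RHI-like conflict: vision places the hand at 0 cm (rubber hand), proprioception at 15 cm.
vision, proprioception = 0.0, 15.0
print(fuse([vision, proprioception], precisions=[4.0, 1.0]))   # somatosensory precision attenuated -> (3.0, 5.0)
print(fuse([vision, proprioception], precisions=[1.0, 4.0]))   # somatosensory precision dominant  -> (12.0, 5.0)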

    The Performance of MLEM for Dynamic Imaging From Simulated Few-View, Multi-Pinhole SPECT

    Stationary small-animal SPECT systems are being developed for rapid dynamic imaging from limited angular views. This work quantified, through simulations, the performance of Maximum Likelihood Expectation Maximization (MLEM) for reconstructing a time-activity curve (TAC) with an uptake duration of a few seconds from a stationary, three-camera multi-pinhole SPECT system. The study also quantified the benefits of a heuristic method of initializing the reconstruction with a prior image reconstructed from a conventional number of views, for example from data acquired during the late-study portion of the dynamic TAC. We refer to MLEM reconstruction initialized by a prior-image initial guess (IG) as MLEMig. The effect of the prior-image initial guess on the depiction of contrast between two regions of a static phantom was quantified over a range of angular sampling schemes. A TAC was modeled from the experimentally measured uptake of 99mTc-hexamethylpropyleneamine oxime (HMPAO) in the rat lung. The resulting time series of simulated images was quantitatively analyzed with respect to the accuracy of the estimated exponential washin and washout parameters. In both the static and dynamic phantom studies, the prior-image initial guess improved the spatial depiction of the phantom, for example through better definition of the cylinder boundaries and more accurate quantification of relative contrast between cylinders. In the dynamic study, there was ~50% error in relative contrast for MLEM reconstructions compared to ~25-30% error for MLEMig. In the static phantom study, the benefits of the initial guess decreased as the number of views increased. The prior-image initial guess introduced an additive offset in the reconstructed dynamic images, likely due to biases introduced by the prior image. MLEM initialized with a uniform initial guess yielded images that faithfully reproduced the time dependence of the simulated TAC; there were no statistically significant differences between the mean exponential washin/washout parameters estimated from MLEM reconstructions and the true values. Washout parameters estimated from MLEMig reconstructions did not differ significantly from the true values; however, the estimated washin parameter differed significantly from the true value in some cases. Overall, MLEM reconstruction from few views and a uniform initial guess accurately quantified the time dependence of the TAC while introducing errors in the spatial depiction of the object. Initializing the reconstruction with a late-study initial guess improved spatial accuracy while decreasing temporal accuracy in some cases.
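    As context for the comparison above, the standard MLEM update and the role of the initial guess can be sketched in a few lines: the estimate is multiplicatively corrected by the back-projected ratio of measured to forward-projected counts, and MLEMig simply replaces the uniform starting image with a prior image. The system matrix A for the few-view, multi-pinhole geometry is assumed given; this is a generic sketch, not the authors' code.

import numpy as np

def mlem(A, y, n_iters=50, x0=None, eps=1e-12):
    """MLEM update x <- x / (A^T 1) * A^T (y / (A x)), starting from x0 or a uniform image."""
    x = np.ones(A.shape[1]) if x0 is None else x0.astype(float).copy()
    sensitivity = A.T @ np.ones(A.shape[0])           # A^T 1, the sensitivity image
    for _ in range(n_iters):
        projection = A @ x                            # forward-project the current estimate
        ratio = y / np.maximum(projection, eps)       # measured counts / estimated counts
        x *= (A.T @ ratio) / np.maximum(sensitivity, eps)
    return x

# Uniform initial guess vs. a late-study prior image used as the initial guess (MLEMig):
# x_mlem   = mlem(A, y)
# x_mlemig = mlem(A, y, x0=late_study_image.ravel())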