    Improved Fourier Mellin Invariant for Robust Rotation Estimation with Omni-cameras

    Spectral methods such as the improved Fourier Mellin Invariant (iFMI) transform have proved faster, more robust, and more accurate than feature-based methods for image registration. However, iFMI is restricted to camera motion in 2D space and has not yet been applied to omni-camera images. In this work, we extend the iFMI method and apply a motion model to estimate an omni-camera's pose as it moves in 3D space. This is particularly useful in field robotics, both to get a rapid and comprehensive view of unstructured environments and to estimate the robot pose robustly. In the experiment section, we compare the extended iFMI method against the ORB and AKAZE feature-based approaches on three datasets showing different types of environments: office, lawn, and urban scenery (MPI-omni dataset). The results show that our method improves the accuracy of robot pose estimation two to four times with respect to the feature registration techniques, while offering lower processing times. Furthermore, the iFMI approach presents the best performance against the motion blur typically present in mobile robotics.

    Comment: 5 pages, 4 figures, 1 table
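    The principle behind the Fourier-Mellin approach can be illustrated with a minimal sketch: the magnitude of an image's Fourier spectrum is invariant to translation, and an in-plane rotation of the image becomes a circular shift along the angular axis of a log-polar resampling of that spectrum, which phase correlation can recover. The NumPy sketch below (nearest-neighbour sampling; the function names are ours) is a simplified 2D illustration of this idea, not the paper's extended 3D method.

```python
import numpy as np

def log_polar(mag, n_ang=180, n_rad=96):
    """Resample an fftshift-ed spectrum magnitude onto a log-polar grid."""
    h, w = mag.shape
    cy, cx = h // 2, w // 2          # DC bin after fftshift
    rmax = min(cy, cx)
    # 180 degrees suffice: the magnitude spectrum is point-symmetric
    thetas = np.linspace(0.0, np.pi, n_ang, endpoint=False)
    rhos = np.exp(np.linspace(0.0, np.log(rmax), n_rad))
    t, r = np.meshgrid(thetas, rhos, indexing="ij")
    ys = np.clip(np.round(cy + r * np.sin(t)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + r * np.cos(t)).astype(int), 0, w - 1)
    return mag[ys, xs]

def estimate_rotation(img_a, img_b):
    """Estimate the in-plane rotation (degrees, mod 180) from img_a to img_b."""
    ma = np.abs(np.fft.fftshift(np.fft.fft2(img_a)))
    mb = np.abs(np.fft.fftshift(np.fft.fft2(img_b)))
    la, lb = log_polar(ma), log_polar(mb)
    # phase correlation: a circular shift appears as a sharp correlation peak
    cross = np.fft.fft2(la) * np.conj(np.fft.fft2(lb))
    corr = np.abs(np.fft.ifft2(cross / (np.abs(cross) + 1e-12)))
    shift = np.unravel_index(np.argmax(corr), corr.shape)[0]
    if shift > la.shape[0] // 2:     # wrap to the signed range
        shift -= la.shape[0]
    return shift * 180.0 / la.shape[0]
```

    Because only spectrum magnitudes are compared, the estimate is unaffected by image translation; a radial shift in the same log-polar map would, analogously, encode scale.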

    A 3D Framework for Characterizing Microstructure Evolution of Li-Ion Batteries

    Lithium-ion batteries are commonly found in many modern consumer devices, ranging from portable computers and mobile phones to hybrid- and fully-electric vehicles. While improving efficiency and reliability is of critical importance for increasing market adoption of the technology, research on these topics is, to date, largely restricted to empirical observations and computational simulations. In the present study, it is proposed to use the modern technique of X-ray microscopy to characterize a sample of commercial 18650 cylindrical Li-ion batteries in both their pristine and aged states. By coupling this approach with 3D and 4D data analysis techniques, the present study aimed to create a research framework for characterizing the microstructure evolution that leads to capacity fade in a commercial battery. The results demonstrated the unique capability of the microscopy technique to observe the evolution of these batteries under aging conditions, and a workflow was successfully developed for future research studies.

    Coordinates and maps of the Apollo 17 landing site

    We carried out an extensive cartographic analysis of the Apollo 17 landing site and determined and mapped the positions of the astronauts, their equipment, and lunar landmarks with accuracies of better than ±1 m in most cases. To determine coordinates in a lunar body-fixed coordinate frame, we applied least squares (2-D) network adjustments to angular measurements made in astronaut imagery (Hasselblad frames). The measured angular networks were accurately tied to lunar landmarks provided by a 0.5 m/pixel, controlled Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) orthomosaic of the entire Taurus-Littrow Valley. Furthermore, by applying triangulation to measurements made in Hasselblad frames providing stereo views, we were able to relate individual instruments of the Apollo Lunar Surface Experiment Package (ALSEP) to specific features captured in LROC imagery and, also, to determine coordinates of astronaut equipment or other surface features not captured in the orbital images, for example, the deployed geophones and Explosive Packages (EPs) of the Lunar Seismic Profiling Experiment (LSPE) or the Lunar Roving Vehicle (LRV) at major sampling stops. Our results were integrated into a new LROC NAC-based Apollo 17 Traverse Map and also used to generate a series of large-scale maps of all nine traverse stations and of the ALSEP area. In addition, we provide crater measurements, profiles of the navigated traverse paths, and improved ranges of the sources and receivers of the active seismic experiment LSPE.
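    The kind of 2-D adjustment described above can be illustrated in miniature: each angular (bearing) measurement from a known station constrains the unknown point to a ray, and the overdetermined system is solved in the least-squares sense. The sketch below is a generic NumPy illustration of that principle; the station layout and function name are invented for the example and are not the paper's actual network.

```python
import numpy as np

def intersect_bearings(stations, bearings_deg):
    """Least-squares intersection of bearing rays observed from known 2-D stations.

    Each bearing theta from station s constrains the unknown point p to the
    line n . (p - s) = 0, where n = (-sin theta, cos theta) is the ray normal.
    """
    s = np.asarray(stations, dtype=float)
    th = np.radians(np.asarray(bearings_deg, dtype=float))
    A = np.column_stack([-np.sin(th), np.cos(th)])   # one ray normal per observation
    b = np.einsum("ij,ij->i", A, s)                  # n . s for each station
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p
```

    With three or more well-distributed stations the system is overdetermined, so small errors in the individual bearings average out rather than propagate directly into the solved position.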

    3D Visual Perception for Self-Driving Cars using a Multi-Camera System: Calibration, Mapping, Localization, and Obstacle Detection

    Cameras are a crucial exteroceptive sensor for self-driving cars, as they are low-cost and small, provide appearance information about the environment, and work in various weather conditions. They can be used for multiple purposes such as visual navigation and obstacle detection. We can use a surround multi-camera system to cover the full 360-degree field of view around the car. In this way, we avoid blind spots, which can otherwise lead to accidents. To minimize the number of cameras needed for surround perception, we utilize fisheye cameras. Consequently, standard vision pipelines for 3D mapping, visual localization, obstacle detection, etc. need to be adapted to take full advantage of the availability of multiple cameras rather than treating each camera individually. In addition, processing of fisheye images has to be supported. In this paper, we describe the camera calibration and subsequent processing pipeline for multi-fisheye-camera systems developed as part of the V-Charge project. This project seeks to enable automated valet parking for self-driving cars. Our pipeline is able to precisely calibrate multi-camera systems, build sparse 3D maps for visual navigation, visually localize the car with respect to these maps, generate accurate dense maps, and detect obstacles based on real-time depth map extraction.
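    Fisheye lenses are typically described by a projection model in which the image radius grows with the angle from the optical axis rather than with its tangent; the equidistant model r = f·θ is one common choice. The sketch below is a generic illustration with made-up parameter values — the V-Charge pipeline's actual calibration model may differ.

```python
import numpy as np

def project_equidistant(P, f, cx, cy):
    """Project a 3-D point onto a fisheye image with the equidistant model r = f*theta."""
    X, Y, Z = P
    theta = np.arctan2(np.hypot(X, Y), Z)   # angle from the optical axis
    phi = np.arctan2(Y, X)                  # azimuth around the axis
    r = f * theta                           # image radius grows linearly with theta
    return cx + r * np.cos(phi), cy + r * np.sin(phi)

def unproject_equidistant(u, v, f, cx, cy):
    """Back-project a pixel to a unit viewing ray under the same model."""
    dx, dy = u - cx, v - cy
    theta = np.hypot(dx, dy) / f
    phi = np.arctan2(dy, dx)
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])
```

    Unlike the pinhole model, this mapping stays finite as the viewing angle approaches 90 degrees, which is why it can represent fields of view near or beyond 180 degrees.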

    EchoFusion: Tracking and Reconstruction of Objects in 4D Freehand Ultrasound Imaging without External Trackers

    Ultrasound (US) is the most widely used fetal imaging technique. However, US images have a limited capture range and suffer from view-dependent artefacts such as acoustic shadows. Compounding overlapping 3D US acquisitions into a high-resolution volume can extend the field of view and remove image artefacts, which is useful for retrospective analysis, including population-based studies. However, such volume reconstruction requires information about the relative transformations between the probe positions from which the individual volumes were acquired. In prenatal US scans, the fetus can move independently of the mother, so external trackers such as electromagnetic or optical tracking cannot capture the motion between the probe and the moving fetus. We provide a novel methodology for image-based tracking and volume reconstruction by combining recent advances in deep learning and simultaneous localisation and mapping (SLAM). Tracking semantics are established through the use of a Residual 3D U-Net, and its output is fed to the SLAM algorithm. As a proof of concept, experiments are conducted on US volumes taken from a whole-body fetal phantom and from the heads of real fetuses. For fetal head segmentation, we also introduce a novel weak annotation approach to minimise the manual effort required for ground truth annotation. We evaluate our method qualitatively, and quantitatively with respect to tissue discrimination accuracy and tracking robustness.

    Comment: MICCAI Workshop on Perinatal, Preterm and Paediatric Image analysis (PIPPI), 201
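    At the core of any such tracking pipeline is the estimation of a rigid transform between corresponding 3-D points, for example between landmarks of consecutive volumes. A minimal stand-in for that step is the closed-form Kabsch/Procrustes alignment, sketched below as a generic illustration — it is not the paper's actual SLAM formulation, which must also handle correspondence search and drift.

```python
import numpy as np

def rigid_align(A, B):
    """Closed-form least-squares rigid transform (R, t) with B ~ A @ R.T + t (Kabsch)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)                 # 3x3 cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t
```

    In a full pipeline this closed-form step would typically sit inside a robust loop (e.g. RANSAC) to reject the mismatched correspondences that real segmentations produce.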