Overcoming the Challenges Associated with Image-based Mapping of Small Bodies in Preparation for the OSIRIS-REx Mission to (101955) Bennu
The OSIRIS-REx Asteroid Sample Return Mission is the third mission in NASA's
New Frontiers Program and is the first U.S. mission to return samples from an
asteroid to Earth. The most important decision ahead of the OSIRIS-REx team is
the selection of a prime sample-site on the surface of asteroid (101955) Bennu.
Mission success hinges on identifying a site that is safe and has regolith that
can readily be ingested by the spacecraft's sampling mechanism. To inform this
mission-critical decision, the surface of Bennu is mapped using the OSIRIS-REx
Camera Suite and the images are used to develop several foundational data
products. Acquiring the necessary inputs to these data products requires
observational strategies that are defined specifically to overcome the
challenges associated with mapping a small irregular body. We present these
strategies in the context of assessing candidate sample-sites at Bennu
according to a framework of decisions regarding the relative safety,
sampleability, and scientific value across the asteroid's surface. To create
data products that aid these assessments, we describe the best practices
developed by the OSIRIS-REx team for image-based mapping of irregular small
bodies. We emphasize the importance of using 3D shape models and the ability to
work in body-fixed rectangular coordinates when dealing with planetary surfaces
that cannot be uniquely addressed by body-fixed latitude and longitude.
Comment: 31 pages, 10 figures, 2 tables
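The addressing problem the abstract alludes to can be made concrete: on a shape model with overhangs or caves, a single (latitude, longitude) pair maps to several surface points at different radii, so only the full rectangular (x, y, z) triple identifies a point uniquely. A minimal sketch of the conversion (the function name and radius parameterization are illustrative, not from the paper):

```python
import math

def latlon_to_cartesian(lat_deg, lon_deg, r):
    """Body-fixed spherical -> rectangular coordinates.
    On an irregular body, an overhang places multiple surface points
    (different r) at the same lat/lon, which is why mapping work is
    carried out in rectangular (x, y, z) rather than lat/lon alone."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return (r * math.cos(lat) * math.cos(lon),
            r * math.cos(lat) * math.sin(lon),
            r * math.sin(lat))
```

Two surface points under an overhang share the same (lat, lon) but differ in r, so they collapse to one address in latitude/longitude while remaining distinct in (x, y, z).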
Self-correction of 3D reconstruction from multi-view stereo images
We present a self-correction approach to improving the
3D reconstruction of a multi-view 3D photogrammetry system.
The self-correction approach has been able to repair
the reconstructed 3D surface damaged by depth discontinuities.
Due to self-occlusion, multi-view range images
have to be acquired and integrated into a watertight nonredundant
mesh model in order to cover the extended surface
of an imaged object. The integrated surface often suffers
from “dent” artifacts produced by depth discontinuities
in the multi-view range images. In this paper we propose
a novel approach to correcting the 3D integrated surface
such that the dent artifacts can be repaired automatically.
We show examples of 3D reconstruction to demonstrate the
improvement that can be achieved by the self-correction
approach. This self-correction approach can be extended
to integrate range images obtained from alternative range
capture devices
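The depth discontinuities that produce the "dent" artifacts can be located in a range image before integration. A toy sketch of one common heuristic, flagging pixels whose depth jumps sharply to a neighbor (the function name and threshold are illustrative assumptions, not the paper's algorithm):

```python
def discontinuity_mask(depth, jump=0.05):
    """Flag range-image pixels lying on a depth discontinuity: any
    pixel whose depth differs from a 4-neighbour by more than `jump`
    is marked. `depth` is a list of rows of depth values."""
    h, w = len(depth), len(depth[0])
    mask = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0), (0, -1), (-1, 0)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < h and 0 <= nx < w
                        and abs(depth[y][x] - depth[ny][nx]) > jump):
                    mask[y][x] = True  # sits on a depth step
                    break
    return mask
```

Pixels flagged this way are exactly the ones whose triangles would bridge foreground and background during integration, producing the dent artifacts the paper corrects.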
Estimating Epipolar Geometry With The Use of a Camera Mounted Orientation Sensor
Context: Image processing and computer vision are rapidly becoming commonplace, and the amount of information about a scene, such as 3D geometry, that can be obtained from an image or multiple images is steadily increasing, driven by rising sensor resolutions, the wide availability of imaging devices, and an active research community. In parallel, advances in hardware design and manufacturing allow devices such as gyroscopes, accelerometers, magnetometers, and GPS receivers to be included alongside imaging devices at the consumer level.
Aims: This work investigates the use of orientation sensors in computer vision as sources of data to aid image processing and the determination of a scene's geometry, in particular the epipolar geometry of a pair of images. It devises a hybrid methodology from two sets of previous works in order to exploit the information available from orientation sensors alongside data gathered from image-processing techniques.
Method: A readily available consumer-level orientation sensor was used alongside a digital camera to capture images of a set of scenes and record the orientation of the camera. The fundamental matrix of each image pair was calculated using a variety of techniques, both incorporating data from the orientation sensor and excluding it.
Results: Some methodologies could not produce an acceptable fundamental matrix for certain image pairs. A method described in the literature that used an orientation sensor always produced a result; however, in cases where the hybrid or purely computer-vision methods also produced a result, it was the least accurate.
Conclusion: Results from this work show that capturing orientation data alongside an imaging device can improve both the accuracy and reliability of calculations of the scene's geometry. However, noise from the orientation sensor can limit this accuracy, and further research is needed to determine the magnitude of this problem and methods of mitigation.
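The purely image-based baseline in this kind of work is fundamental-matrix estimation from point correspondences. A self-contained sketch of the classic normalized eight-point algorithm, verified on synthetic projections (a standard technique; the thesis compared several methods, and this setup is illustrative, not its data):

```python
import numpy as np

def eight_point(x1, x2):
    """Normalized eight-point estimate of the fundamental matrix F
    with x2^T F x1 = 0, from N >= 8 correspondences.
    x1, x2: (N, 2) pixel coordinates in images 1 and 2."""
    def normalize(pts):
        # Translate to the centroid; scale so mean distance is sqrt(2)
        c = pts.mean(axis=0)
        s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
        T = np.array([[s, 0, -s * c[0]],
                      [0, s, -s * c[1]],
                      [0, 0, 1.0]])
        ph = np.column_stack([pts, np.ones(len(pts))]) @ T.T
        return ph, T

    p1, T1 = normalize(np.asarray(x1, float))
    p2, T2 = normalize(np.asarray(x2, float))
    # One row of the homogeneous system A f = 0 per correspondence
    A = np.column_stack([
        p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
        p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
        p1[:, 0], p1[:, 1], np.ones(len(p1))])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)
    # Enforce the rank-2 constraint, then undo the normalization
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    F = T2.T @ F @ T1
    return F / np.linalg.norm(F)
```

With noise-free correspondences the epipolar residuals x2^T F x1 vanish to numerical precision; the sensor-assisted methods in the thesis trade some of this accuracy for robustness on pairs where correspondence-only estimation fails.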
The Cyborg Astrobiologist: Testing a Novelty-Detection Algorithm on Two Mobile Exploration Systems at Rivas Vaciamadrid in Spain and at the Mars Desert Research Station in Utah
(ABRIDGED) In previous work, two platforms have been developed for testing
computer-vision algorithms for robotic planetary exploration (McGuire et al.
2004b,2005; Bartolo et al. 2007). The wearable-computer platform has been
tested at geological and astrobiological field sites in Spain (Rivas
Vaciamadrid and Riba de Santiuste), and the phone-camera has been tested at a
geological field site in Malta. In this work, we (i) apply a Hopfield
neural-network algorithm for novelty detection based upon color, (ii) integrate
a field-capable digital microscope on the wearable computer platform, (iii)
test this novelty detection with the digital microscope at Rivas Vaciamadrid,
(iv) develop a Bluetooth communication mode for the phone-camera platform, in
order to allow access to a mobile processing computer at the field sites, and
(v) test the novelty detection on the Bluetooth-enabled phone-camera connected
to a netbook computer at the Mars Desert Research Station in Utah. This systems
engineering and field testing have together allowed us to develop a real-time
computer-vision system that is capable, for example, of identifying lichens as
novel within a series of images acquired in semi-arid desert environments. We
acquired sequences of images of geologic outcrops in Utah and Spain consisting
of various rock types and colors to test this algorithm. The algorithm robustly
recognized previously-observed units by their color, while requiring only a
single image or a few images to learn colors as familiar, demonstrating its
fast learning capability.
Comment: 28 pages, 12 figures, accepted for publication in the International Journal of Astrobiology
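The color-based novelty detection described above can be illustrated with a much-simplified stand-in: a set of quantized colors learned from one "familiar" image, with any pixel falling in an unseen color bin flagged as novel. This mimics the fast single-image learning reported in the abstract, but it is not the paper's Hopfield-network algorithm, and the class and parameter names are hypothetical:

```python
class ColorNoveltyDetector:
    """Toy color-novelty detector in the spirit of the system above:
    quantized RGB colors seen during learning become 'familiar';
    pixels whose quantized color was never observed count as novel.
    (A simplification, not the Hopfield-network algorithm.)"""

    def __init__(self, bins=8):
        self.step = 256 // bins   # width of each color bin per channel
        self.seen = set()

    def _quantize(self, rgb):
        return tuple(c // self.step for c in rgb)

    def learn(self, pixels):
        # A single image's pixels suffice to mark its colors familiar
        for p in pixels:
            self.seen.add(self._quantize(p))

    def novelty_fraction(self, pixels):
        novel = sum(1 for p in pixels if self._quantize(p) not in self.seen)
        return novel / len(pixels)
```

Learning reddish outcrop colors from one image and then scoring pixels of green lichen would yield a high novelty fraction, mirroring how the fielded system singled out lichens as novel in semi-arid desert scenes.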