
    Gazing at the Solar System: Capturing the Evolution of Dunes, Faults, Volcanoes, and Ice from Space

    Gazing imaging holds promise for improved understanding of the surface characteristics and processes of Earth and other solar system bodies. The evolution of earthquake fault zones, the migration of sand dunes, and the retreat of ice masses can be understood by observing changing features over time. To gaze or stare means to look steadily, intently, and with fixed attention, offering the ability to probe the characteristics of a target deeply and allowing retrieval of 3D structure and of changes on fine and coarse scales. Observing surface reflectance and 3D structure from multiple perspectives allows for a more complete view of a surface than conventional remote imaging. A gaze from low Earth orbit (LEO) could last several minutes, allowing video capture of dynamic processes, while repeat passes enable monitoring on time scales of days to years. Numerous vantage points are available during a gaze (Figure 1). Features in the scene are projected into each image frame, enabling the recovery of dense 3D structure. The recovery is robust to errors in spacecraft position and attitude knowledge because features are observed from many different perspectives. The combination of a varying look angle and the solar illumination allows texture and reflectance properties to be recovered and permits the separation of atmospheric effects. Applications are numerous and diverse, including glacier and ice sheet flux, sand dune migration, geohazards from earthquakes, volcanoes, landslides, rivers and floods, animal migrations, ecosystem changes, geysers on Enceladus, and ice structure on Europa. The Keck Institute for Space Studies (KISS) hosted a workshop in June 2014 to explore the opportunities and challenges of gazing imaging. The goals of the workshop were to develop and discuss the broad scientific questions that can be addressed using spaceborne gazing, the specific types of targets and applications, the resolution and spectral bands needed to achieve the science objectives, and possible instrument configurations for future missions. The workshop participants found that gazing imaging offers the ability to measure morphology, composition, and reflectance simultaneously, and to measure their variability over time. Gazing imaging can be applied to better understand the consequences of climate change and natural hazard processes, through the study of continuous and episodic processes in both domains.
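At its core, the dense 3D recovery described above amounts to triangulating scene features observed from many vantage points along the orbit. A minimal sketch of multi-ray least-squares triangulation (the function name and the two-ray setup are illustrative, not from the workshop report):

```python
import numpy as np

def triangulate(origins, directions):
    """Least-squares intersection of viewing rays.

    Each ray (origin o, unit direction d) constrains the scene point p
    through the projector (I - d d^T)(p - o) = 0; stacking all rays
    gives a small 3x3 linear system with a closed-form solution.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = np.asarray(d, float)
        d = d / np.linalg.norm(d)
        M = np.eye(3) - np.outer(d, d)   # projects onto the plane normal to d
        A += M
        b += M @ np.asarray(o, float)
    return np.linalg.solve(A, b)

# Two vantage points observing the same surface feature at (0, 0, 10):
p = triangulate(
    origins=[[-1.0, 0.0, 0.0], [1.0, 0.0, 0.0]],
    directions=[[1.0, 0.0, 10.0], [-1.0, 0.0, 10.0]],
)
```

With more vantage points, the same system simply accumulates more rows, which is why errors in any single viewing geometry average out.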

    Probabilistic ToF and Stereo Data Fusion Based on Mixed Pixel Measurement Models

    This paper proposes a method for fusing data acquired by a ToF camera and a stereo pair, based on a model of depth measurement by ToF cameras that also accounts for depth-discontinuity artifacts due to the mixed-pixel effect. The model is exploited within both ML and MAP-MRF frameworks for ToF and stereo data fusion. The proposed MAP-MRF framework is characterized by site-dependent range values, an important feature since it can be used both to improve accuracy and to decrease the computational complexity of standard MAP-MRF approaches. To optimize the site-dependent global cost function characteristic of the proposed MAP-MRF approach, the paper also introduces an extension to Loopy Belief Propagation that can be used in other contexts. Experimental data validate the proposed ToF measurement model and the effectiveness of the proposed fusion techniques.
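The ML side of such a fusion has a simple core: under independent Gaussian noise, the maximum-likelihood combination of two depth readings is their inverse-variance weighted average. A sketch of that core only (it omits the paper's mixed-pixel model and the MRF prior; names are illustrative):

```python
import numpy as np

def fuse_depth(z_tof, var_tof, z_stereo, var_stereo):
    """Per-pixel maximum-likelihood fusion of two independent Gaussian
    depth measurements: the inverse-variance weighted average, together
    with the (reduced) variance of the fused estimate."""
    w_t = 1.0 / np.asarray(var_tof, float)
    w_s = 1.0 / np.asarray(var_stereo, float)
    z = (w_t * z_tof + w_s * z_stereo) / (w_t + w_s)
    return z, 1.0 / (w_t + w_s)

# A ToF pixel reads 2.0 m and stereo reads 4.0 m, both with unit variance:
z, var = fuse_depth(2.0, 1.0, 4.0, 1.0)   # z = 3.0, var = 0.5
```

The fused variance is always below either input variance, which is the formal sense in which fusion improves on each sensor alone.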

    Measurement of Sea Wave Spatial Spectra from High-Resolution Optical Aerospace Imagery

    The chapter is devoted to the development of methods for the remote measurement of the spatial spectra of waves on sea and ocean surfaces. It is shown that under most natural conditions of optical image formation, the brightness field is nonlinearly modulated by the slopes of water-surface elements. Methods are proposed for reconstructing surface-wave spectra from optical image spectra with allowance for such modulation. The methods are based on numerical simulation of the water surface, taking into account wave-formation conditions and the conditions under which light reaches the sea surface from the upper and lower hemispheres. Using the simulation results, special operators are built to retrieve wave spectra from the spectra of aerospace images. These retrieval operators are presented as analytical expressions depending on sets of parameters determined by the image-formation conditions. Results of experimental studies of sea wave spectra in various water areas using high-resolution satellite optical imagery are presented. In these studies, the spatial spectral characteristics of sea waves estimated from remote sensing data were compared with the corresponding characteristics measured by contact instruments under controlled conditions.
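The starting point of any such retrieval is the raw spatial spectrum of the image itself; the chapter's operators then correct it for the nonlinear slope modulation. A sketch of only that first step, computing a 2D power spectrum of an image patch (function name and patch setup are illustrative):

```python
import numpy as np

def spatial_spectrum(img, dx):
    """2D spatial power spectrum of a square image patch.

    dx is the ground pixel size in metres; returns the (shifted)
    wavenumber axis in rad/m and the power spectrum with zero
    wavenumber at the centre."""
    img = img - img.mean()                 # remove the mean brightness level
    F = np.fft.fftshift(np.fft.fft2(img))
    S = np.abs(F) ** 2 / img.size
    k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(img.shape[0], d=dx))
    return k, S

# A synthetic "wave" of 8 m wavelength on a 64 x 64 patch with 1 m pixels:
x = np.arange(64)
img = np.sin(2 * np.pi * x / 8)[None, :].repeat(64, axis=0)
k, S = spatial_spectrum(img, dx=1.0)
iy, ix = np.unravel_index(np.argmax(S), S.shape)
# |k[ix]| recovers 2*pi/8 rad/m, the wavenumber of the input wave
```

Converting this image spectrum into a wave-height spectrum requires the modulation-aware operators developed in the chapter, which are not reproduced here.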

    Towards Scalable Multi-View Reconstruction of Geometry and Materials

    In this paper, we propose a novel method for the joint recovery of camera pose, object geometry, and the spatially-varying Bidirectional Reflectance Distribution Function (svBRDF) of 3D scenes that exceed object scale and hence cannot be captured with stationary light stages. The input is a set of high-resolution RGB-D images captured by a mobile, hand-held capture system with point lights for active illumination. Compared to previous works that jointly estimate geometry and materials from a hand-held scanner, we formulate this problem using a single objective function that can be minimized using off-the-shelf gradient-based solvers. To facilitate scalability to large numbers of observation views and optimization variables, we introduce a distributed optimization algorithm that reconstructs 2.5D keyframe-based representations of the scene. A novel multi-view consistency regularizer effectively synchronizes neighboring keyframes such that the local optimization results allow for seamless integration into a globally consistent 3D model. We provide a study on the importance of each component in our formulation and show that our method compares favorably to baselines. We further demonstrate that our method accurately reconstructs various objects and materials and scales to spatially larger scenes. We believe this work represents a significant step towards making geometry and material estimation from hand-held scanners scalable.
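The interplay between per-keyframe data terms and a consistency regularizer can be seen in a deliberately tiny toy problem: two keyframes each hold a scalar estimate of an overlapping surface depth, and a quadratic coupling term pulls the neighbours into agreement under gradient descent. This is only an illustration of the synchronization idea, not the paper's actual objective or solver:

```python
def synchronize(d1, d2, lam=10.0, lr=0.02, steps=500):
    """Gradient descent on (x1-d1)^2 + (x2-d2)^2 + lam*(x1-x2)^2.

    d1, d2 play the role of each keyframe's own observation; the
    lam-weighted term mimics the multi-view consistency regularizer
    that synchronizes neighboring keyframes."""
    x1, x2 = d1, d2
    for _ in range(steps):
        g1 = 2 * (x1 - d1) + 2 * lam * (x1 - x2)   # gradient w.r.t. x1
        g2 = 2 * (x2 - d2) - 2 * lam * (x1 - x2)   # gradient w.r.t. x2
        x1, x2 = x1 - lr * g1, x2 - lr * g2
    return x1, x2

x1, x2 = synchronize(1.0, 2.0)
# At the optimum, x1 - x2 = (d1 - d2) / (1 + 2*lam) = -1/21, while the
# mean of the estimates stays at the mean of the observations (1.5).
```

Raising lam drives the residual disagreement towards zero, which is the scalar analogue of neighbouring keyframes fusing seamlessly into one consistent model.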

    Accurate depth from defocus estimation with video-rate implementation

    The science of measuring depth from images at video rate using "defocus" has been investigated. The method requires two differently focused images acquired from a single viewpoint using a single camera. The relative blur between the images is used to determine the in-focus axial point of each pixel and hence its depth. The depth estimation algorithm of Watanabe and Nayar was employed to recover the depth estimates, but the broadband filters, referred to as rational filters, were designed using a new procedure: the Two Step Polynomial Approach. The filters designed by the new model were largely insensitive to object texture and were shown to model the blur more precisely than the previous method. Experiments with real planar images demonstrated a maximum RMS depth error of 1.18% for the proposed filters, compared to 1.54% for the previous design. The software required five 2D convolutions to be processed in parallel, and these were effectively implemented on an FPGA using a two-channel, five-stage pipelined architecture; however, the precision of the filter coefficients and variables had to be limited within the processor. The number of multipliers required for each convolution was reduced from 49 to 10 (a 79.5% reduction) using a triangular design procedure. Experimental results suggest that the pipelined processor provides depth estimates comparable in accuracy to the full-precision Matlab output, and generates 400 x 400 pixel depth maps in 13.06 msec, which is faster than video rate. The defocused images (near- and far-focused) were optically registered for magnification using telecentric optics. A frequency-domain approach based on phase correlation was employed to measure the radial shifts due to magnification and to optimally position the external aperture. The telecentric optics ensured correct pixel-to-pixel registration between the defocused images and provided more accurate depth estimates.
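The underlying cue is the relative blur between the two images: wherever the near-focused image retains more high-frequency contrast than the far-focused one, the surface lies closer to the near focal plane. A 1-D sketch of that cue (the normalized contrast ratio; the rational-filter mapping from ratio to metric depth is not reproduced, and all names are illustrative):

```python
import numpy as np

def gaussian_blur_1d(signal, sigma):
    """Blur a 1D signal with a sampled, normalized Gaussian kernel."""
    radius = max(1, int(4 * sigma))
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-t**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()
    return np.convolve(signal, kernel, mode="same")

def defocus_ratio(i_near, i_far):
    """Normalized contrast ratio in (-1, 1): positive where the
    near-focused image is sharper than the far-focused one."""
    e_near = np.abs(np.gradient(i_near))
    e_far = np.abs(np.gradient(i_far))
    return (e_near - e_far) / (e_near + e_far + 1e-12)

# A textured surface close to the near focal plane: the near-focused
# image is blurred only slightly, the far-focused one strongly.
t = np.arange(200.0)
texture = np.sin(2 * np.pi * t / 10)
ratio = defocus_ratio(gaussian_blur_1d(texture, 0.5),
                      gaussian_blur_1d(texture, 2.0))
# ratio is positive on average over the interior of the signal
```

Normalizing by the summed contrast is what makes such a ratio largely insensitive to the object's own texture, the property the rational filters were designed to preserve.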

    Bimanual robotic manipulation based on potential fields

    Dual manipulation is a natural skill for humans but not so easy to achieve for a robot. The presence of two end effectors implies the need to consider the temporal and spatial constraints they generate while moving together. Consequently, synchronization between the arms is required to perform coordinated actions (e.g., lifting a box) and to avoid self-collisions between the manipulators. Moreover, the challenges increase in dynamic environments, where the arms must respond quickly to changes in the position of obstacles or target objects. To meet these demands, approaches such as optimization-based motion planners and imitation learning can be employed, but they have limitations such as high computational cost or the need for a large dataset. Sampling-based motion planners can be a viable solution thanks to their speed and low computational cost, but in their basic implementation the environment is assumed to be static. An alternative approach relies on improved Artificial Potential Fields (APF): they are intuitive, computationally cheap, and, most importantly, usable in dynamic environments. However, they lack the precision needed for manipulation actions, and dynamic goals are not considered. This thesis proposes a system for bimanual robotic manipulation based on a combination of improved Artificial Potential Fields (APF) and the sampling-based motion planner RRTConnect. The basic idea is to use improved APF to bring the end effectors near their target goals while reacting to changes in the surrounding environment; only then is RRTConnect triggered to perform the manipulation task. In this way, it is possible to take advantage of the strengths of both methods. To improve this system, the APF have been extended to consider dynamic goals, and a self-collision avoidance system has been developed. The conducted experiments demonstrate that the proposed system responds adeptly to changes in the position of obstacles and target objects. Moreover, the self-collision avoidance system enables faster dual-manipulation routines than sequential arm movements.
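For readers unfamiliar with APF, the classic formulation combines a quadratic attraction towards the goal with a repulsion that activates near obstacles. The sketch below shows this basic scheme only, not the thesis's improved APF with dynamic goals or its self-collision terms; gains and the scenario are illustrative:

```python
import numpy as np

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5, rho0=1.0, step=0.05):
    """One step of the classic artificial potential field: quadratic
    attraction towards the goal plus a repulsive term active within
    distance rho0 of each obstacle. The net force is clamped to unit
    norm so a single step stays bounded near obstacles."""
    force = k_att * (goal - pos)
    for ob in obstacles:
        diff = pos - ob
        rho = np.linalg.norm(diff)
        if 0.0 < rho < rho0:
            force += k_rep * (1.0 / rho - 1.0 / rho0) / rho**2 * (diff / rho)
    norm = np.linalg.norm(force)
    if norm > 1.0:
        force = force / norm
    return pos + step * force

# Steer one end effector to a goal while skirting a nearby obstacle:
pos = np.array([0.0, 0.0])
goal = np.array([2.0, 0.0])
obstacles = [np.array([1.0, 0.3])]
for _ in range(1000):
    pos = apf_step(pos, goal, obstacles)
# pos ends up at the goal, having bent around the obstacle
```

Because each step only reads the current obstacle positions, the same update keeps working when obstacles move between steps, which is the property that makes APF attractive in dynamic environments.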