High-quality dense stereo vision for whole body imaging and obesity assessment
The prevalence of obesity has necessitated the development of safe and convenient tools for timely assessment and monitoring of this condition across a broad range of the population. Three-dimensional (3D) body imaging has become a new means of obesity assessment. Moreover, it generates body shape information that is meaningful for fitness, ergonomics, and personalized clothing. In previous work, our lab developed a prototype active stereo vision system that demonstrated the potential to fulfill this goal. However, the prototype required four computer projectors to cast artificial textures on the body to facilitate stereo matching on texture-deficient surfaces (e.g., skin). This decreases the mobility of the system when it is used to collect data from a large population. In addition, the resolution of the generated 3D images was limited by the cameras and projectors available during the project. The study reported in this dissertation highlights our continued effort to improve the capability of 3D body imaging through simplified hardware for passive stereo and advanced computation techniques.
The system utilizes high-resolution single-lens reflex (SLR) cameras, which have recently become widely available, and is configured in a two-stance design to image the front and back surfaces of a person. A total of eight cameras form four stereo units, each covering a quarter of the body surface. The stereo units are individually calibrated with a specific pattern to determine the cameras' intrinsic and extrinsic parameters for stereo matching. The global orientation and position of each stereo unit within a common world coordinate system is calculated through a 3D registration step. The stereo calibration and 3D registration procedures do not need to be repeated for a deployed system as long as the cameras' relative positions have not changed. This property contributes to the portability of the system and greatly simplifies maintenance. The image acquisition time is around two seconds for a whole-body capture. The system works in an indoor environment with moderate ambient light.
Advanced stereo computation algorithms were developed by taking advantage of the high-resolution images and by tackling the ambiguity problem in stereo matching. A multi-scale, coarse-to-fine matching framework is proposed to match large-scale textures at a low resolution and refine the matched results at higher resolutions. This matching strategy reduces the complexity of the computation and avoids ambiguous matching at the native resolution. The pixel-to-pixel stereo matching algorithm follows a classic four-step strategy consisting of matching cost computation, cost aggregation, disparity computation, and disparity refinement.
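As a concrete illustration, the classic four-step pipeline can be sketched in a few lines of NumPy. This is a generic toy matcher with assumed parameter values (absolute-difference cost, box-window aggregation, winner-take-all selection, sub-pixel parabola refinement), not the dissertation's implementation, and it omits the multi-scale pyramid:

```python
import numpy as np

def box_sum(a, r):
    """Windowed sum over a (2r+1)x(2r+1) box via an integral image."""
    k = 2 * r + 1
    p = np.pad(a, r, mode='edge')
    ii = np.pad(p.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    return ii[k:, k:] - ii[:-k, k:] - ii[k:, :-k] + ii[:-k, :-k]

def stereo_disparity(left, right, max_disp=8, r=2):
    """Toy four-step pixel-to-pixel stereo matcher on a rectified pair."""
    h, w = left.shape
    # Step 1 -- matching cost: absolute intensity difference per disparity.
    BIG = 1e6                       # sentinel cost where no match is possible
    cost = np.full((max_disp, h, w), BIG)
    for d in range(max_disp):
        cost[d, :, d:] = np.abs(left[:, d:] - right[:, :w - d])
    # Step 2 -- cost aggregation: sum the cost over a local square window.
    agg = np.stack([box_sum(cost[d], r) for d in range(max_disp)])
    # Step 3 -- disparity computation: winner-take-all minimum cost.
    d0 = np.argmin(agg, axis=0)
    # Step 4 -- disparity refinement: sub-pixel parabola fit around the winner.
    dm = np.take_along_axis(agg, np.maximum(d0 - 1, 0)[None], 0)[0]
    dp = np.take_along_axis(agg, np.minimum(d0 + 1, max_disp - 1)[None], 0)[0]
    dc = np.take_along_axis(agg, d0[None], 0)[0]
    denom = dm - 2 * dc + dp
    valid = (d0 > 0) & (d0 < max_disp - 1) & (denom > 1e-12)
    offset = np.where(valid, (dm - dp) / np.where(valid, 2 * denom, 1.0), 0.0)
    return d0 + offset
```

On a synthetic pair that is an exact horizontal shift of a random texture, the recovered interior disparities fall within half a pixel of the true shift.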
The system performance has been evaluated on mannequins and human subjects in comparison with other measurement methods. The geometrical measurements from the reconstructed 3D body models, including body circumferences and whole-body volume, were found to be highly repeatable and consistent with manual and other instrumental measurements (CV 0.99). The agreement of percent body fat (%BF) estimation on human subjects between stereo and dual-energy X-ray absorptiometry (DEXA) was improved over the previous active stereo system, and the 95% limits of agreement were reduced by half. The limits of agreement we achieved for %BF estimation are among the narrowest reported in comparative studies of commercial air displacement plethysmography (ADP) and DEXA. In practice, %BF estimation through a two-component model is sensitive to the body volume measurement, and the estimation of lung volume can be a source of variation. Protocols for this type of measurement should be created with an awareness of this factor.
Biomedical Engineering
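For context, a two-component model converts body mass and volume into %BF via body density; the Siri equation is one standard choice, used here as an assumption since the dissertation does not name its equation. The sketch also shows why the lung-volume correction matters:

```python
def percent_body_fat(mass_kg, body_volume_l, lung_volume_l):
    """Two-component %BF sketch: body density (kg/L, i.e. g/cm^3) from mass
    and lung-corrected body volume, then the Siri equation
    %BF = 495/density - 450. Illustrative only."""
    density = mass_kg / (body_volume_l - lung_volume_l)
    return 495.0 / density - 450.0
```

For a 70 kg subject with a 66.5 L measured volume, shifting the assumed lung volume from 1.3 L to 1.8 L changes the estimate by several %BF points, illustrating the sensitivity noted above.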
Gazing at the Solar System: Capturing the Evolution of Dunes, Faults, Volcanoes, and Ice from Space
Gazing imaging holds promise for improved understanding of surface characteristics and processes of Earth and solar system bodies. Evolution of earthquake fault zones, migration of sand dunes, and retreat of ice masses can be understood by observing changing features over time. To gaze or stare means to look steadily, intently, and with fixed attention, offering the ability to probe the characteristics of a target deeply, allowing retrieval of 3D structure and changes on fine and coarse scales. Observing surface reflectance and 3D structure from multiple perspectives allows for a more complete view of a surface than conventional remote imaging. A gaze from low Earth orbit (LEO) could last several minutes, allowing for video capture of dynamic processes. Repeat passes enable monitoring on time scales of days to years.
Numerous vantage points are available during a gaze (Figure 1). Features in the scene are projected into each image frame, enabling the recovery of dense 3D structure. The recovery is robust to errors in spacecraft position and attitude knowledge because the features are observed from many different perspectives. The combination of a varying look angle and the solar illumination allows the recovery of texture and reflectance properties and permits the separation of atmospheric effects. Applications are numerous and diverse, including, for example, glacier and ice sheet flux, sand dune migration, geohazards from earthquakes, volcanoes, and landslides, rivers and floods, animal migrations, ecosystem changes, geysers on Enceladus, and ice structure on Europa.
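The core geometric step behind recovering 3D structure from multiple vantage points can be illustrated with a minimal two-view linear (DLT) triangulation. An actual mission pipeline would bundle-adjust many frames jointly, so this is only a sketch with assumed camera parameters:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one feature seen in two frames.
    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) pixel positions.
    Each observation contributes two rows of a homogeneous linear system
    A X = 0; the 3D point is the null vector of A (last right singular
    vector), dehomogenized."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```

With noise-free projections from two known camera matrices, the point is recovered exactly; with many frames and a long gaze, redundancy makes the estimate robust to per-frame pose error, as described above.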
The Keck Institute for Space Studies (KISS) hosted a workshop in June of
2014 to explore opportunities and challenges of gazing imaging. The goals of the
workshop were to develop and discuss the broad scientific questions that can be
addressed using spaceborne gazing, specific types of targets and applications,
the resolution and spectral bands needed to achieve the science objectives, and
possible instrument configurations for future missions.
The workshop participants found that gazing imaging offers the ability to measure morphology, composition, and reflectance simultaneously and to measure their variability over time. Gazing imaging can be applied to better understand the consequences of climate change and natural hazard processes through the study of continuous and episodic processes in both domains.
Multi-modal dictionary learning for image separation with application in art investigation
In support of art investigation, we propose a new source separation method
that unmixes a single X-ray scan acquired from double-sided paintings. In this
problem, the X-ray signals to be separated have similar morphological
characteristics, which brings previous source separation methods to their
limits. Our solution is to use photographs taken of the front and the back
of the panel to drive the separation process. The crux of our approach is
the coupling of the two imaging modalities (photographs and X-rays) through a
novel coupled dictionary learning framework able to capture both common and
disparate features across the modalities using parsimonious representations;
the common component models features shared by the multi-modal images, whereas
the innovation component captures modality-specific information. As such, our
model enables the formulation of appropriately regularized convex optimization
procedures that lead to the accurate separation of the X-rays. Our dictionary
learning framework can be tailored both to a single- and a multi-scale
framework, with the latter leading to a significant performance improvement.
Moreover, to further improve the visual quality of the separated images, we
propose to train coupled dictionaries that ignore certain parts of the painting
corresponding to craquelure. Experimentation on synthetic and real data, taken
from a digital acquisition of the Ghent Altarpiece (1432), confirms the
superiority of our method over the state-of-the-art morphological component
analysis technique, which uses either fixed or trained dictionaries to perform
image separation.
Comment: submitted to IEEE Transactions on Image Processing.
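For intuition, the baseline idea of separating a mixture using two sparsifying dictionaries (the setting of the MCA technique the paper compares against) can be sketched with a few ISTA iterations; the paper's coupled common-plus-innovation model extends this with dictionaries learned jointly across modalities. The dictionaries and sizes below are synthetic assumptions, not the paper's data:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_separate(y, D1, D2, lam=0.01, iters=3000):
    """Split y into D1 @ a1 + D2 @ a2 with sparse codes a1, a2, by running
    ISTA on min ||y - D a||^2 / 2 + lam * ||a||_1 with D = [D1 D2]."""
    D = np.hstack([D1, D2])
    L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        a = soft(a + D.T @ (y - D @ a) / L, lam / L)
    n1 = D1.shape[1]
    return D1 @ a[:n1], D2 @ a[n1:]
```

When the two dictionaries are sufficiently incoherent and each component is truly sparse in its own dictionary, the recovered pieces closely match the true components; the difficulty the paper addresses is precisely the case where the components have similar morphology, so fixed dictionaries of this kind no longer suffice.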
Pose Performance of LIDAR-Based Relative Navigation for Non-Cooperative Objects
Flash LIDAR is an important new sensing technology for relative navigation; these sensors have shown promising results during rendezvous and docking applications involving a cooperative vehicle. An area of recent interest is the application of this technology to pose estimation with non-cooperative client vehicles, in support of on-orbit satellite servicing activities and asteroid redirect missions. The capability for autonomous rendezvous with non-cooperative satellites will enable the refueling and servicing of satellites (particularly those designed without servicing in mind), allowing these vehicles to continue operating rather than being retired. Rendezvous with an asteroid will give further insight into the origin of individual asteroids. This research investigates numerous issues surrounding pose performance using LIDAR. To analyze the characteristics of the data produced by Flash LIDAR, simulated and laboratory testing have been completed. Observations of common asteroid materials were made with a surrogate LIDAR, characterizing the reflectivity of the materials. A custom Iterative Closest Point (ICP) algorithm was created to estimate the position and orientation of the LIDAR relative to the observed object. The performance of standardized pose estimation techniques (including ICP) has been examined using non-cooperative data, as well as the characteristics of the materials that will potentially be observed during missions. For the hardware tests, a SwissRanger time-of-flight (ToF) camera was used as a surrogate Flash LIDAR.
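A minimal point-to-point ICP of the kind described (nearest-neighbour correspondence followed by an SVD-based rigid fit, iterated) can be sketched as follows. This is a generic textbook variant, not the custom algorithm developed in the research, and the brute-force matching is only practical for small clouds:

```python
import numpy as np

def best_fit_transform(A, B):
    """Least-squares rigid transform (R, t) mapping points A onto B
    (Kabsch/SVD solution). A, B are (n, 3) arrays of paired points."""
    ca, cb = A.mean(0), B.mean(0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

def icp(source, target, iters=50):
    """Basic point-to-point ICP: estimate R, t aligning source to target."""
    R, t = np.eye(3), np.zeros(3)
    src = source.copy()
    for _ in range(iters):
        # Brute-force nearest-neighbour correspondences.
        d = np.linalg.norm(src[:, None] - target[None], axis=2)
        matched = target[np.argmin(d, axis=1)]
        dR, dt = best_fit_transform(src, matched)
        src = src @ dR.T + dt
        R, t = dR @ R, dR @ t + dt    # compose incremental transform
    return R, t
```

Flash-LIDAR point clouds of non-cooperative objects stress exactly the weak points of this loop: noisy ranges, material-dependent reflectivity, and no fiducials to seed the correspondences, which motivates the performance characterization described above.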
Development of inventory datasets through remote sensing and direct observation data for earthquake loss estimation
This report summarizes the lessons learnt in extracting exposure information for the three study sites addressed in SYNER-G: Thessaloniki, Vienna, and Messina. Fine-scale information on exposed elements, which for SYNER-G include buildings, civil engineering works, and population, is one of the variables used to quantify risk. Collecting data and creating exposure inventories is a very time-demanding job, and all possible data-gathering techniques should be used to address the problem of data shortage. This report focuses on combining direct observation and remote sensing data for the development of exposure models for seismic risk assessment. Chapter 2 summarizes the methods for collecting, processing, and archiving inventory datasets. Chapter 3 deals with the integration of different data sources for optimum inventory datasets, whilst Chapters 4, 5, and 6 provide case studies in which combinations of direct observation and remote sensing have been used. The cities of Vienna (Austria), Thessaloniki (Greece), and Messina (Italy) were chosen to test the proposed approaches.
JRC.G.5 - European Laboratory for Structural Assessment