Magnetic Resonance Imaging of the Brain in Moving Subjects: Application to Fetal, Neonatal and Adult Brain Studies
Imaging in the presence of subject motion has been an ongoing challenge for
magnetic resonance imaging (MRI). Motion makes MRI data inconsistent, causing
artifacts in conventional anatomical imaging as well as invalidating diffusion
tensor imaging (DTI) reconstruction. This thesis addresses some of the important
issues in the acquisition and reconstruction of anatomical and DTI data from
moving subjects, and proposes methods to achieve high-resolution, high
signal-to-noise ratio (SNR) volume data.
An approach has been developed that uses multiple overlapped dynamic single shot
slice by slice imaging combined with retrospective alignment and data fusion to
produce self-consistent 3D volume images under subject motion. We term this
method snapshot MRI with volume reconstruction, or SVR. The SVR method
has been applied successfully in brain studies of subjects who cannot stay still,
and in some cases were moving substantially during scanning: awake
neonates, deliberately moving adults and, especially, fetuses, for which no
conventional high-resolution 3D method is currently available. Fine structure of the
in-utero fetal brain is clearly revealed for the first time with substantially improved
SNR. The SVR method has been extended to correct motion artifacts from
conventional multi-slice sequences when the subject drifts in position during data
acquisition.
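The fusion step of the SVR scheme described above can be sketched in a few lines: scatter aligned slice samples into a volume grid and average where slices overlap. This is only an illustration on invented toy data; the actual method also interleaves slice-to-volume registration and outlier handling, which are omitted here.

```python
import numpy as np

def fuse_slices(samples, shape):
    """Scatter aligned slice samples (voxel index, value) into a volume and
    average where slices overlap. Only the fusion step is shown; SVR also
    interleaves slice-to-volume registration and outlier handling."""
    acc = np.zeros(shape)
    cnt = np.zeros(shape)
    for idx, val in samples:
        acc[idx] += val
        cnt[idx] += 1
    # Average only where at least one sample landed; empty voxels stay 0
    return np.divide(acc, cnt, out=np.zeros(shape), where=cnt > 0)

# Invented toy data: two slices contribute to the same voxel after alignment
samples = [((0, 0, 0), 1.0), ((0, 0, 0), 3.0), ((1, 0, 0), 2.0)]
volume = fuse_slices(samples, (2, 2, 2))
```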
Besides anatomical imaging, the SVR method has also been further extended to
DTI reconstruction in the presence of subject motion. This has been validated
successfully on a deliberately moving adult and then applied to in-utero
fetal brain imaging, for which no conventional high-resolution 3D method is
currently available. Excellent high-resolution 3D apparent diffusion coefficient
(ADC) maps of the fetal brain have been achieved for the first time, as well as
promising fractional anisotropy (FA) maps.
Pilot clinical studies using SVR reconstructed data to study fetal brain development
in-utero have been performed. Growth curves for the normally developing fetal
brain have been derived by quantifying cerebral and cerebellar volumes as
well as several one-dimensional measurements. A Verhulst model is proposed to
describe these growth curves, and this approach achieves a correlation above
0.99 between the fitted model and the actual data.
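The Verhulst (logistic) model behind these growth curves can be illustrated with invented numbers; the parameters K, r and t0 below are hypothetical, not the thesis's fitted values.

```python
import numpy as np

def verhulst(t, K, r, t0):
    """Verhulst (logistic) growth: carrying capacity K, rate r, midpoint t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# Invented gestational ages (weeks) and simulated volumes (arbitrary units)
t = np.linspace(20, 38, 10)
rng = np.random.default_rng(0)
volume = verhulst(t, K=350.0, r=0.25, t0=30.0) + rng.normal(0.0, 2.0, t.size)

# Pearson correlation between the model curve and the noisy "measurements"
r_corr = np.corrcoef(verhulst(t, 350.0, 0.25, 30.0), volume)[0, 1]
```

Here the correlation is computed against the generating parameters simply to mirror the above-0.99 figure quoted in the abstract; a real study would fit K, r and t0 to measured volumes.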
Sensor architectures and technologies for upper limb 3d surface reconstruction: A review
3D digital models of the upper limb anatomy represent the starting point for the design of bespoke devices, such as orthoses and prostheses, which can be modeled on the actual patient’s anatomy using CAD (Computer Aided Design) tools. Ongoing research on optical scanning methodologies has enabled technologies that reconstruct the surface of the upper limb anatomy through procedures characterized by minimal discomfort for the patient. However, 3D optical scanning of upper limbs is a complex task that requires solving problematic aspects, such as the difficulty of keeping the hand in a stable position and the presence of artefacts due to involuntary movements. The scientific literature has investigated different approaches in this regard, either by integrating commercial devices into customized sensor architectures or by developing innovative 3D acquisition techniques. The present work gives an overview of the state of the art of optical technologies and sensor architectures for the surface acquisition of upper limb anatomies. The review analyzes the working principles of existing devices and proposes a categorization of the approaches based on handling, pre/post-processing effort, and potential for real-time scanning. An in-depth analysis of the strengths and weaknesses of the approaches proposed by the research community is also provided, to give valuable support in selecting the most appropriate solution for a specific application.
Detection and elimination of rock face vegetation from terrestrial LIDAR data using the virtual articulating conical probe algorithm
A common use of terrestrial lidar is to conduct studies involving change detection of natural or engineered surfaces. Change detection involves many technical steps beyond the initial data acquisition: data structuring, registration, and elimination of data artifacts such as parallax errors, near-field obstructions, and vegetation. Of these, vegetation detection and elimination with terrestrial lidar scanning (TLS) presents a completely different set of issues when compared to vegetation elimination from aerial lidar scanning (ALS). With ALS, the ground footprint of the lidar laser beam is very large, and the data acquisition hardware supports multi-return waveforms. Also, the underlying surface topography is relatively smooth compared to the overlying vegetation which has a high spatial frequency. On the other hand, with most TLS systems, the width of the lidar laser beam is very small, and the data acquisition hardware supports only first-return signals. For the case where vegetation is covering a rock face, the underlying rock surface is not smooth because rock joints and sharp block edges have a high spatial frequency very similar to the overlying vegetation. Traditional ALS approaches to eliminate vegetation take advantage of the contrast in spatial frequency between the underlying ground surface and the overlying vegetation. When the ALS approach is used on vegetated rock faces, the algorithm, as expected, eliminates the vegetation, but also digitally erodes the sharp corners of the underlying rock. A new method that analyzes the slope of a surface along with relative depth and contiguity information is proposed as a way of differentiating high spatial frequency vegetative cover from similar high spatial frequency rock surfaces. 
This method, named the Virtual Articulating Conical Probe (VACP) algorithm, offers a solution for detecting and eliminating rock face vegetation from TLS point cloud data without affecting the geometry of the underlying rock surface. Such a tool could prove invaluable to the geotechnical engineer for quantifying rates of vertical-face rock loss that impact civil infrastructure safety.
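A minimal sketch of the depth-residual idea, one ingredient of the slope/depth/contiguity analysis described above: flag points that sit far off a plane fitted to their nearest neighbours. This is not the VACP algorithm itself, only a toy stand-in on invented data.

```python
import numpy as np

def flag_vegetation(points, k=8, residual_thresh=0.2):
    """Toy depth-residual filter: flag points far off a local plane fitted to
    their k nearest neighbours (a stand-in for one part of VACP; the real
    algorithm also uses slope and contiguity, omitted here)."""
    flags = np.zeros(len(points), dtype=bool)
    for i, p in enumerate(points):
        d = np.linalg.norm(points - p, axis=1)
        nn = points[np.argsort(d)[1:k + 1]]          # skip the point itself
        centroid = nn.mean(axis=0)
        _, _, vt = np.linalg.svd(nn - centroid)
        normal = vt[-1]                              # smallest singular direction
        flags[i] = abs(np.dot(p - centroid, normal)) > residual_thresh
    return flags

# Invented example: a flat 5x5 "rock face" plus one point hovering above it
grid = np.array([[x, y, 0.0] for x in range(5) for y in range(5)], dtype=float)
cloud = np.vstack([grid, [2.0, 2.0, 0.5]])           # last point mimics vegetation
veg = flag_vegetation(cloud)
```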
Real-time spatial modeling to detect and track resources on construction sites
For more than 10 years the U.S. construction industry has experienced over 1,000
fatalities annually. Many fatalities may have been prevented had the individuals and
equipment involved been more aware of and alert to the physical state of the environment
around them. Awareness may be improved by automatic 3D (three-dimensional) sensing
and modeling of the job site environment in real-time. Existing 3D modeling approaches
based on range scanning techniques are capable of modeling static objects only, and thus
cannot model in real-time dynamic objects in an environment comprised of moving
humans, equipment, and materials. Emerging prototype 3D video range cameras offer
another alternative by facilitating affordable, wide field of view, automated static and
dynamic object detection and tracking at frame rates better than 1Hz (real-time).
This dissertation presents empirical work and a methodology to rapidly create a
spatial model of construction sites and in particular to detect, model, and track the position, dimension, direction, and velocity of static and moving project resources in real-time, based on range data obtained from a three-dimensional video range camera in a
static or moving position. Existing construction site 3D modeling approaches based on
optical range sensing technologies (laser scanners, rangefinders, etc.) and 3D modeling
approaches (dense, sparse, etc.) that offered potential solutions for this research are
reviewed. The choice of an emerging sensing tool and preliminary experiments with this
prototype sensing technology are discussed. These findings led to the development of a
range data processing algorithm based on three-dimensional occupancy grids which is
demonstrated in detail. Testing and validation of the proposed algorithms have been
conducted to quantify the performance of sensor and algorithm through extensive
experimentation involving static and moving objects. Experiments in indoor laboratory
and outdoor construction environments have been conducted with construction resources
such as humans, equipment, materials, or structures to verify the accuracy of the
occupancy grid modeling approach. Results show that modeling objects and measuring
their position, dimension, direction, and speed had an accuracy level compatible with the
requirements of active safety features for construction. Results demonstrate that video
rate 3D data acquisition and analysis of construction environments can support effective
detection, tracking, and convex hull modeling of objects. Exploiting rapidly generated
three-dimensional models for improved visualization, communications, and process
control has inherent value, broad application, and potential impact, e.g. as-built vs. as-planned comparison, condition assessment, maintenance, operations, and construction
activities control. In combination with effective management practices, this sensing
approach has the potential to assist equipment operators in avoiding incidents that result in
human injury, death, or collateral damage on construction sites.
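The occupancy-grid modeling described above can be sketched as follows: quantize range points into voxels, then difference two frames to expose newly occupied cells. Cell size, extent and the two "frames" are invented for illustration and are not the dissertation's values.

```python
import numpy as np

def occupancy_grid(points, cell=0.5, extent=10.0):
    """Quantize 3D range points into a boolean voxel grid; cell size and
    extent are invented for illustration, not the dissertation's values."""
    n = int(extent / cell)
    grid = np.zeros((n, n, n), dtype=bool)
    idx = np.clip((points / cell).astype(int), 0, n - 1)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid

# Two hypothetical frames: a small object moves 1 m along x between them
frame_a = np.array([[2.0, 5.0, 1.0], [2.2, 5.0, 1.5]])
frame_b = frame_a + np.array([1.0, 0.0, 0.0])
moved = occupancy_grid(frame_b) & ~occupancy_grid(frame_a)   # newly occupied cells
```

Differencing grids between frames is the simplest way to separate static structure from moving resources; a full tracker would additionally cluster the changed cells and estimate velocities over time.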
Automatic Alignment of 3D Multi-Sensor Point Clouds
Automatic 3D point cloud alignment is a major research topic in photogrammetry, computer vision and computer graphics. In this research, two keypoint feature matching approaches have been developed and proposed for the automatic alignment of 3D point clouds, which have been acquired from different sensor platforms and are in different 3D conformal coordinate systems.
The first proposed approach is based on 3D keypoint feature matching. First, surface curvature information is utilized for scale-invariant 3D keypoint extraction. Adaptive non-maxima suppression (ANMS) is then applied to retain the most distinct and well-distributed set of keypoints. Afterwards, every keypoint is characterized by a scale-, rotation- and translation-invariant 3D surface descriptor, called the radial geodesic distance-slope histogram. Similar keypoint descriptors on the source and target datasets are then matched using bipartite graph matching, followed by a modified RANSAC for outlier removal.
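Adaptive non-maxima suppression is commonly implemented by giving each keypoint a suppression radius equal to its distance to the nearest stronger keypoint and keeping the largest radii. The sketch below assumes that common formulation, which may differ in detail from the thesis's variant.

```python
import numpy as np

def anms(coords, strengths, n_keep):
    """Keep the n_keep keypoints with the largest suppression radii, where a
    keypoint's radius is its distance to the nearest *stronger* keypoint."""
    radii = np.full(len(coords), np.inf)             # global maximum keeps inf
    for i in range(len(coords)):
        stronger = strengths > strengths[i]
        if stronger.any():
            radii[i] = np.linalg.norm(coords[stronger] - coords[i], axis=1).min()
    return np.argsort(-radii)[:n_keep]

# Invented keypoints: two strong ones crowd a weaker one near the origin
coords = np.array([[0.0, 0.0], [10.0, 0.0], [0.1, 0.0]])
strengths = np.array([5.0, 4.0, 3.0])
keep = anms(coords, strengths, n_keep=2)             # drops the crowded weak point
```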
The second proposed method is based on 2D keypoint matching performed on height map images of the 3D point clouds. Height map images are generated by projecting the 3D point clouds onto a planimetric plane. Afterwards, a multi-scale wavelet 2D keypoint detector with ANMS is proposed to extract keypoints on the height maps. Then, a scale, rotation and translation-invariant 2D descriptor referred to as the Gabor, Log-Polar-Rapid Transform descriptor is computed for all keypoints. Finally, source and target height map keypoint correspondences are determined using a bi-directional nearest neighbour matching, together with the modified-RANSAC for outlier removal.
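The height-map generation step can be sketched as a rasterization of the cloud onto the x-y plane. Keeping the maximum z per cell is one common convention assumed here, since the abstract does not specify the exact rule.

```python
import numpy as np

def height_map(points, cell=1.0, size=4):
    """Rasterize a 3D cloud onto the x-y plane; each pixel keeps the highest
    z falling in its cell (empty cells stay at -inf)."""
    img = np.full((size, size), -np.inf)
    ix = np.clip((points[:, 0] / cell).astype(int), 0, size - 1)
    iy = np.clip((points[:, 1] / cell).astype(int), 0, size - 1)
    for x, y, z in zip(ix, iy, points[:, 2]):
        img[y, x] = max(img[y, x], z)
    return img

# Invented cloud: two points share a cell, so the higher one wins
cloud = np.array([[0.5, 0.5, 1.0], [0.6, 0.4, 3.0], [2.5, 3.5, 2.0]])
hm = height_map(cloud)
```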
Each method is assessed on multi-sensor, urban and non-urban 3D point cloud datasets. Results show that, unlike the 3D-based method, the height map-based approach is able to align source and target datasets with differences in point density, point distribution and missing point data. Findings also show that the 3D-based method obtained lower transformation errors and a greater number of correspondences when the source and target have similar point characteristics. The 3D-based approach attained absolute mean alignment differences in the range of 0.23 m to 2.81 m, whereas the height map approach ranged from 0.17 m to 1.21 m. These differences meet the proximity requirements set by the data characteristics and permit the further application of fine co-registration approaches.
3D SEM Surface Reconstruction from Multi-View Images
The scanning electron microscope (SEM), a promising imaging instrument, has been used to determine surface properties such as the composition or geometry of specimens by achieving increased magnification, contrast, and resolution. SEM micrographs, however, remain two-dimensional (2D). Knowledge of the three-dimensional (3D) surface structure is critical in many real-world applications: 3D surfaces recovered from SEM images provide the true anatomic shapes of micro-scale samples, allowing quantitative measurements and informative visualization of the systems being investigated. This thesis presents a novel multi-view approach to 3D SEM surface reconstruction. We investigate an approach to reconstructing 3D surfaces from stereo SEM image pairs and then discuss how 3D point clouds may be registered to generate more complete 3D shapes from multiple views of a microscopic specimen. We then introduce a method based on the KAZE feature detection algorithm that reconstructs 3D surfaces from multiple views of an object. Numerous results are presented to show the effectiveness of the presented approaches.
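For the stereo step, height is classically recovered from the parallax between two tilted micrographs. The relation below is one commonly quoted form, stated here as an assumption rather than the thesis's exact model.

```python
import numpy as np

def height_from_parallax(parallax, tilt_deg, magnification):
    """Assumed stereo-SEM relation: h = p / (2 * M * sin(tilt/2)), with the
    two micrographs taken at +/- tilt/2 about the eucentric axis. This is a
    commonly quoted form, not necessarily the thesis's exact model."""
    return parallax / (2.0 * magnification * np.sin(np.radians(tilt_deg) / 2.0))

# Invented numbers: 10 degrees total tilt, 500x magnification
h = height_from_parallax(1.0, 10.0, 500.0)   # relative height units
```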