Exploiting low-cost 3D imagery for the purposes of detecting and analyzing pavement distresses
Road pavement conditions have significant impacts on safety, travel times, costs, and environmental effects. It is the responsibility of road agencies to ensure these conditions are kept in an acceptable state. To this end, agencies are tasked with implementing pavement management systems (PMSs) which effectively allocate resources towards maintenance and rehabilitation. These systems, however, require accurate data. Currently, most agencies rely on manual distress surveys and, as a result, there is significant research into quick and low-cost pavement distress identification methods. Recent proposals have included the use of structure-from-motion techniques based on datasets from unmanned aerial vehicles (UAVs) and cameras, producing accurate 3D models and associated point clouds. The challenge with these datasets is then identifying and describing distresses. This paper focuses on images of pavement distresses in the city of Palermo, Italy, captured with mobile phone cameras. The work aims to assess the accuracy of using mobile phones for these surveys and to identify strategies for segmenting the generated 3D imagery, applying 3D segmentation algorithms that detect shapes in the point clouds so that physical parameters can be measured and severity assessed. Case studies are considered for pavement distresses defined by the measurement of the affected area, such as different types of cracking and depressions. The use of mobile phones and the identification of these patterns on the 3D models provide further steps towards low-cost data acquisition and analysis for a PMS.
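One simple strategy for detecting depressions in a pavement point cloud, sketched here under the assumption that the intact road surface is approximately planar, is to fit a least-squares plane and flag points falling well below it. The function names and the 2 cm threshold are illustrative, not taken from the paper.

```python
import numpy as np

def fit_road_plane(points):
    """Least-squares plane z = a*x + b*y + c fitted to an N x 3 point cloud."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs  # (a, b, c)

def flag_depressions(points, depth_threshold=0.02):
    """Boolean mask of points lying more than depth_threshold (in the
    cloud's units, e.g. metres) below the fitted pavement plane."""
    a, b, c = fit_road_plane(points)
    expected_z = a * points[:, 0] + b * points[:, 1] + c
    residual = points[:, 2] - expected_z
    return residual < -depth_threshold
```

A robust fit (e.g. RANSAC) would be preferable on real scans, where cracks and depressions themselves bias a plain least-squares plane.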
Computerized Analysis of Magnetic Resonance Images to Study Cerebral Anatomy in Developing Neonates
The study of cerebral anatomy in developing neonates is of great importance for
the understanding of brain development during the early period of life. This
dissertation therefore focuses on three challenges in the modelling of cerebral
anatomy in neonates during brain development. The methods that have been
developed all use Magnetic Resonance Images (MRI) as source data.
To facilitate study of vascular development in the neonatal period, a set of image
analysis algorithms is developed to automatically extract and model cerebral
vessel trees. The whole process consists of cerebral vessel tracking from
automatically placed seed points, vessel tree generation, and vasculature
registration and matching. These algorithms have been tested on clinical Time-of-
Flight (TOF) MR angiographic datasets.
To facilitate study of the neonatal cortex a complete cerebral cortex segmentation
and reconstruction pipeline has been developed. Segmentation of the neonatal
cortex is not effectively done by existing algorithms designed for the adult brain
because the contrast between grey and white matter is reversed. This causes voxels
containing tissue mixtures to be incorrectly labelled by conventional methods. The
neonatal cortical segmentation method that has been developed is based on a novel
expectation-maximization (EM) method with explicit correction for mislabelled
partial volume voxels. Based on the resulting cortical segmentation, an implicit
surface evolution technique is adopted for the reconstruction of the cortex in
neonates. The performance of the method is investigated by performing a detailed
landmark study.
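The EM-based tissue classification underlying this pipeline can be illustrated with a minimal two-class Gaussian mixture fitted to voxel intensities. This is a toy sketch only: it omits the partial-volume correction and any spatial priors, and all names are illustrative.

```python
import numpy as np

def em_two_class(intensities, n_iter=50):
    """EM for a two-component 1-D Gaussian mixture, a toy stand-in for
    grey/white matter classification from voxel intensities.
    Returns (means, stds, mixing weights, hard labels)."""
    x = np.asarray(intensities, float)
    mu = np.percentile(x, [25, 75])          # crude initialisation
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each class for each voxel
        resp = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update mixture parameters from the responsibilities
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        sigma = np.maximum(sigma, 1e-6)      # guard against collapse
    return mu, sigma, pi, resp.argmax(axis=1)
```

The thesis method extends this scheme with an explicit model for mislabelled partial-volume voxels, which a plain mixture of pure-tissue classes cannot represent.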
To facilitate study of cortical development, a cortical surface registration algorithm
for aligning cortical surfaces is developed. The method first inflates extracted
cortical surfaces and then performs a non-rigid surface registration using free-form
deformations (FFDs) to remove residual misalignment. Validation experiments using
data labelled by an expert observer demonstrate that the method can capture local
changes and follow the growth of specific sulci.
Linear chemically sensitive electron tomography using DualEELS and dictionary-based compressed sensing
We have investigated the use of DualEELS in elementally sensitive tilt series tomography in the scanning transmission electron microscope. A procedure is implemented using deconvolution to remove the effects of multiple scattering, followed by normalisation by the zero loss peak intensity. This is performed to produce a signal that is linearly dependent on the projected density of the element in each pixel. This method is compared with one that does not include deconvolution (although normalisation by the zero loss peak intensity is still performed). Additionally, we compare the 3D reconstruction using a new compressed sensing algorithm, DLET, with the well-established SIRT algorithm. VC precipitates, which are extracted from a steel on a carbon replica, are used in this study. It is found that the use of this linear signal results in a very even density throughout the precipitates. However, when deconvolution is omitted, a slight density reduction is observed in the cores of the precipitates (a so-called cupping artefact). Additionally, it is clearly demonstrated that the 3D morphology is much better reproduced using the DLET algorithm, with very little elongation in the missing wedge direction. It is therefore concluded that reliable elementally sensitive tilt tomography using EELS requires the appropriate use of DualEELS together with a suitable reconstruction algorithm, such as the compressed sensing based reconstruction algorithm used here, to make the best use of the limited data volume and signal-to-noise ratio inherent in core-loss EELS.
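The well-established SIRT algorithm mentioned above can be sketched as a preconditioned iterative solver for the projection system Ax = b. This is a generic textbook version with a non-negativity constraint, not the authors' implementation.

```python
import numpy as np

def sirt(A, b, n_iter=100):
    """Simultaneous Iterative Reconstruction Technique for A @ x = b,
    where rows of A are ray sums over the pixels of the reconstruction x."""
    A = np.asarray(A, float)
    b = np.asarray(b, float)
    # Inverse row/column sums act as the standard SIRT preconditioners
    R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)
    C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + C * (A.T @ (R * (b - A @ x)))
        x = np.maximum(x, 0.0)  # non-negativity: density cannot be negative
    return x
```

Compressed-sensing reconstructions such as DLET add a sparsity prior on top of the data term, which is what suppresses the missing-wedge elongation that SIRT exhibits.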
Three-dimensional reconstruction of the tissue-specific multielemental distribution within Ceriodaphnia dubia via multimodal registration using laser ablation ICP-mass spectrometry and X-ray spectroscopic techniques
In this work, the three-dimensional elemental distribution profile within the freshwater crustacean Ceriodaphnia dubia was constructed at a spatial resolution down to 5 μm via a data fusion approach employing state-of-the-art laser ablation inductively coupled plasma time-of-flight mass spectrometry (LA-ICP-TOFMS) and laboratory-based absorption microcomputed tomography (μ-CT). C. dubia was exposed to elevated Cu, Ni, and Zn concentrations, chemically fixed, dehydrated, stained, and embedded prior to μ-CT analysis. Subsequently, the sample was cut into 5 μm thin sections that were subjected to LA-ICP-TOFMS imaging. Multimodal image registration was performed to spatially align the 2D LA-ICP-TOFMS images relative to the corresponding slices of the 3D μ-CT reconstruction. Mass channels corresponding to the isotopes of a single element were merged to improve the signal-to-noise ratios within the elemental images. In order to aid the visual interpretation of the data, the LA-ICP-TOFMS data were projected onto the μ-CT voxels representing tissue. Additionally, the image resolution and elemental sensitivity were compared to those obtained with synchrotron-radiation-based 3D confocal μ-X-ray fluorescence imaging of a chemically fixed and air-dried C. dubia specimen.
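Merging the mass channels of a single element, as described above, can be as simple as a weighted average of the per-isotope count images. The exact weighting used in the paper is not specified, so this sketch assumes a normalised weighted sum (equal weights by default); the function name is illustrative.

```python
import numpy as np

def merge_isotope_channels(channel_images, weights=None):
    """Merge per-isotope count images of one element into a single map.
    channel_images: iterable of same-shape 2-D arrays, one per isotope.
    weights: optional relative weights (e.g. natural abundances)."""
    stack = np.stack([np.asarray(im, float) for im in channel_images])
    if weights is None:
        weights = np.ones(len(stack))
    w = np.asarray(weights, float)
    w = w / w.sum()                     # normalise so the scale is preserved
    return np.tensordot(w, stack, axes=1)
```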
Detection of leukocytes stained with acridine orange using unique spectral features acquired from an image-based spectrometer
A leukocyte differential count can be used to diagnose a myriad of blood disorders, such as infections and allergies, and to monitor the efficacy of disease treatments. In recent years, attention has been focused on developing point-of-care (POC) systems to provide this test in global health settings. Acridine orange (AO) is an amphipathic, vital dye that intercalates leukocyte nucleic acids and acidic vesicles. It has been utilized by POC systems to identify the three main leukocyte subtypes: granulocytes, monocytes, and lymphocytes. Subtypes of leukocytes can be characterized using a fluorescence microscope, where AO has a 450 nm excitation wavelength and two peak emission wavelengths, at 525 nm (green) and 650 nm (red), depending on the cellular content and concentration of AO in the cells. The full spectra of AO stained leukocytes have not been fully explored for POC applications. Optical instruments, such as a spectrometer that utilizes a diffraction grating, can give specific spectral data by separating polychromatic light into distinct wavelengths. The spectral data from this setup can be used to create object-specific emission profiles.
Yellow-green and crimson microspheres were used to model the emission peaks and profiles of AO stained leukocytes. Whole blood was collected via finger stick and stained with AO to gather preliminary leukocyte emission profiles. A MATLAB algorithm was designed to analyze the spectral data within the images acquired using the image-based spectrometer. The algorithm utilized watershed segmentation and centroid location functions to isolate independent spectra from an image. The output spectra represent the average line intensity profiles for each pixel across a slice of an object. First steps were also taken in processing video frames of manually translated microspheres. The high-speed frame rate allowed objects to appear in multiple consecutive images. A function was applied to each image cycle to identify repeating centroid locations.
The yellow-green (515 nm) and crimson (645 nm) microspheres exhibited a distinct separation in colorimetric emission with a peak-to-peak difference of 36 pixels, which is related to the 130 nm peak emission difference. Two AO stained leukocytes exhibited distinct spectral profiles and peaks across different wavelengths. This could be due to variations in the staining method (incubation period and concentration) affecting the emissions, or to variations in cellular content indicating different leukocyte subtypes. The algorithm was also effective when isolating unique centroids between video frames.
We have demonstrated the ability to extract spectral information from data acquired with the image-based spectrometer for microspheres (as a control) and AO stained leukocytes. We determined that the spectral information from yellow-green and crimson microspheres could be used to represent the wavelength range of AO stained leukocytes, thus providing a calibration tool. Also, preliminary spectral information was successfully extracted from yellow-green microspheres translated under the linear slit using stationary images and video frames, thus demonstrating the feasibility of collecting data from a large number of objects.
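A simplified stand-in for the MATLAB spectral-extraction step might look as follows: objects are segmented by thresholding and connected-component labelling (rather than watershed), and each object's spectrum is taken as the mean intensity profile along the dispersion axis (image columns), averaged over the object's rows. The names and threshold are illustrative.

```python
import numpy as np
from scipy import ndimage

def extract_spectra(image, threshold):
    """Return [(centroid, profile), ...] for each bright object in a
    spectrometer image. profile is the mean line-intensity profile across
    the object's rows, i.e. its emission spectrum along the columns."""
    mask = image > threshold
    labels, n = ndimage.label(mask)       # connected-component segmentation
    spectra = []
    for obj in range(1, n + 1):
        rows, cols = np.nonzero(labels == obj)
        centroid = (rows.mean(), cols.mean())
        profile = image[rows.min():rows.max() + 1].mean(axis=0)
        spectra.append((centroid, profile))
    return spectra
```

The paper's watershed step matters when objects touch; for well-separated microspheres, plain connected components behave equivalently.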
Multiscale Phenomenology of the Cosmic Web
We analyze the structure and connectivity of the distinct morphologies that
define the Cosmic Web. With the help of our Multiscale Morphology Filter (MMF),
we dissect the matter distribution of a cosmological ΛCDM N-body
simulation into clusters, filaments and walls. The MMF is ideally
suited to address both the anisotropic morphological character of filaments and
sheets, as well as the multiscale nature of the hierarchically evolved cosmic
matter distribution. The results of our study may be summarized as follows:
i).- While all morphologies occupy a roughly well defined range in density,
this alone is not sufficient to differentiate between them given their overlap.
Environment defined only in terms of density fails to incorporate the intrinsic
dynamics of each morphology. This plays an important role in both linear and
non-linear interactions between haloes. ii).- Most of the mass in the Universe
is concentrated in filaments, narrowly followed by clusters. In terms of
volume, clusters only represent a minute fraction, and filaments not more than
9%. Walls are relatively inconspicuous in terms of mass and volume. iii).- On
average, massive clusters are connected to more filaments than low-mass
clusters: the least massive clusters have on average two connecting
filaments, while the most massive clusters have on average five connecting
filaments. iv).- Density profiles indicate that the typical width of
filaments is 2 h^-1 Mpc. Walls have less well defined boundaries, with
widths between 5-8 h^-1 Mpc. In their interior, filaments have a power-law
density profile with slope approximately -2, corresponding to an isothermal
density profile.
Comment: 28 pages, 22 figures, accepted for publication in MNRAS. For a
high-res version see http://www.astro.rug.nl/~weygaert/webmorph_mmf.pdf
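The quoted interior slope (about -2, i.e. an isothermal profile with rho ∝ r^-2) corresponds to fitting a power law rho(r) ∝ r^gamma to the measured density profile, for example by linear regression in log-log space:

```python
import numpy as np

def power_law_slope(r, rho):
    """Estimate the exponent gamma of rho ~ r**gamma from sampled radii r
    and densities rho, via a least-squares fit of log(rho) vs log(r)."""
    slope, _intercept = np.polyfit(np.log(r), np.log(rho), 1)
    return slope
```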
Deep and superficial amygdala nuclei projections revealed in vivo by probabilistic tractography
Copyright © 2011 Society for Neuroscience and the authors. The Journal of Neuroscience uses a Creative Commons Attribution-NonCommercial-ShareAlike licence: http://creativecommons.org/licenses/by-nc-sa/4.0/.
Despite a homogeneous macroscopic appearance on magnetic resonance images, subregions of the amygdala express distinct functional profiles as well as corresponding differences in connectivity. In particular, histological analysis shows stronger connections for superficial (i.e., centromedial and cortical), compared with deep (i.e., basolateral and other), amygdala nuclei to lateral orbitofrontal cortex, and stronger connections of deep, compared with superficial, nuclei to polymodal areas in the temporal pole. Here, we use diffusion-weighted imaging with probabilistic tractography to investigate these connections in humans. We use a data-driven approach to segment the amygdala into two subregions using k-means clustering. The identified subregions are spatially contiguous and their location corresponds to deep and superficial nuclear groups. Quantification of the connection strength between these amygdala clusters and individual target regions corresponds to qualitative histological findings in non-human primates, indicating such findings can be extrapolated to humans. We propose that connectivity profiles provide a potentially powerful approach for in vivo amygdala parcellation and can serve as a guide in studies that exploit functional and anatomical neuroimaging. Funded by the Wellcome Trust, a Max Planck Research Award, and the Swiss National Science Foundation.
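The data-driven parcellation described above rests on plain k-means over voxelwise connectivity profiles. A minimal sketch (illustrative, not the authors' code), with rows as voxels and columns as connection probabilities to the target regions:

```python
import numpy as np

def kmeans(X, k=2, n_iter=100, seed=0):
    """Plain k-means: partition the rows of X into k clusters.
    Returns (labels, cluster centers)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        # Assign each row to its nearest center
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centers; keep the old center if a cluster went empty
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers
```

With k=2 and one row of connection probabilities per amygdala voxel, the two resulting clusters would correspond to the deep and superficial nuclear groups reported in the paper.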
HeadOn: Real-time Reenactment of Human Portrait Videos
We propose HeadOn, the first real-time source-to-target reenactment approach
for complete human portrait videos that enables transfer of torso and head
motion, face expression, and eye gaze. Given a short RGB-D video of the target
actor, we automatically construct a personalized geometry proxy that embeds a
parametric head, eye, and kinematic torso model. A novel real-time reenactment
algorithm employs this proxy to photo-realistically map the captured motion
from the source actor to the target actor. On top of the coarse geometric
proxy, we propose a video-based rendering technique that composites the
modified target portrait video via view- and pose-dependent texturing, and
creates photo-realistic imagery of the target actor under novel torso and head
poses, facial expressions, and gaze directions. To this end, we propose a
robust tracking of the face and torso of the source actor. We extensively
evaluate our approach and show significant improvements in enabling much
greater flexibility in creating realistic reenacted output videos.
Comment: Video: https://www.youtube.com/watch?v=7Dg49wv2c_g Presented at Siggraph'18