7 research outputs found

    Aerial reconstructions via probabilistic data fusion

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013. Cataloged from PDF version of thesis. Includes bibliographical references (pages 133-136). In this thesis we propose a probabilistic model that incorporates noisy multi-modal measurements, aerial images and Light Detection and Ranging (LiDAR), to recover scene geometry and appearance and build a 3D photo-realistic model of a given scene. In urban environments, these reconstructions have many applications, such as surveillance and urban planning. The proposed probabilistic model can be viewed as a data-fusion model in which the two data sources complement each other and allow for better results than when only one is present. Moreover, this modeling approach has the advantages that it captures uncertainty in the reconstruction and can easily incorporate additional scene measurements when sensor models are available. Furthermore, the results obtained with the proposed method are qualitatively comparable to those obtained with traditional structure from motion, despite differences in modeling approach and reconstruction goals. The appearance and geometry trade-off present in the model between the different data sources can be used to obtain a similar (and sometimes superior) reconstruction of complex urban scenes with fewer image observations than traditional reconstruction methods. Extending beyond reconstruction, the proposed model has two alluring features: first, we are able to determine absolute scale and orientation, and second, we are able to detect moving objects. From an implementation standpoint, this thesis shows how to leverage the power of graphics processing units (GPUs) and parallel programming to allow fast inference, achieving real-time rendering of scenes with hundreds of thousands of geometric primitives and inferring latent appearance, camera pose, and geometry on the order of seconds each. by Randi Cabezas. S.M.
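
    As a rough illustration of the data-fusion idea described in this abstract (not the thesis's actual implementation), the sketch below combines a Gaussian image-appearance likelihood, a Gaussian LiDAR-range likelihood, and a smoothness prior into a single log-posterior over a toy 1D height field; all function and variable names are hypothetical.

        import numpy as np

        def render_intensity(heights):
            # Toy appearance model: predicted image intensity as a function of surface height.
            return 1.0 / (1.0 + heights)

        def render_range(heights):
            # Toy LiDAR model: a nadir-looking sensor at fixed altitude measures altitude minus height.
            sensor_altitude = 100.0
            return sensor_altitude - heights

        def log_posterior(heights, image, lidar_ranges, sigma_img=0.1, sigma_lidar=0.05):
            # Both sensors constrain the same latent geometry, so their log-likelihoods add.
            log_lik_img = -0.5 * np.sum((image - render_intensity(heights)) ** 2) / sigma_img ** 2
            log_lik_lidar = -0.5 * np.sum((lidar_ranges - render_range(heights)) ** 2) / sigma_lidar ** 2
            # Smoothness prior: neighboring surface heights should be similar.
            log_prior = -0.5 * np.sum(np.diff(heights) ** 2)
            return log_lik_img + log_lik_lidar + log_prior

    Maximizing such a log-posterior over the latent geometry is one simple way to see how each modality can compensate for missing or noisy observations in the other.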

    Atomic-resolution spectroscopic imaging of ensembles of nanocatalyst particles across the life of a fuel cell

    The thousandfold increase in data-collection speed enabled by aberration-corrected optics allows us to overcome an electron-microscopy paradox: how to obtain atomic-resolution chemical structure in individual nanoparticles, yet record a statistically significant sample from an inhomogeneous population. This allowed us to map hundreds of Pt-Co nanoparticles to show atomic-scale elemental distributions across different stages of catalyst aging in a proton-exchange-membrane fuel cell, and to relate Pt-shell thickness to treatment, particle size, surface orientation, and ordering. Comment: 28 pages, 5 figures; accepted, Nano Letters.

    Large-scale probabilistic aerial reconstruction

    Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (pages 153-167). While much emphasis has been placed on large-scale 3D scene reconstruction from a single data source such as images or distance sensors, models that jointly utilize multiple data types remain largely unexplored. In this work, we present a Bayesian formulation of scene reconstruction from multi-modal data, as well as two critical components that enable large-scale reconstructions with adaptive resolution and high-level scene understanding with meaningful prior-probability distributions. Our first contribution is to formulate the 3D reconstruction problem within the Bayesian framework. We develop an integrated probabilistic model that allows us to naturally represent uncertainty and to fuse complementary information provided by different sensor modalities (imagery and LiDAR). Maximum-a-posteriori inference within this model leverages GPGPUs for efficient likelihood evaluations. Our dense reconstructions (a triangular mesh with texture information) are feasible with fewer observations of a given modality by relying on others, without sacrificing quality. Secondly, to enable large-scale reconstructions, our formulation supports adaptive resolution in both appearance and geometry. This change is motivated by the need for a representation that can adjust to wide variability in data quality and availability. By coupling edge transformations within a reversible-jump MCMC framework, we allow changes in the number of triangles and in mesh connectivity. We demonstrate that these data-driven updates lead to more accurate representations while reducing modeling assumptions and using fewer triangles. Lastly, to enable high-level scene understanding, we include a categorization of reconstruction elements in our formulation. This scene-specific classification of triangles is estimated from semantic annotations (which are noisy and incomplete) and other scene features (e.g., geometry and appearance). The categorization provides a class-specific prior-probability distribution, helping to obtain more accurate and interpretable representations by regularizing the reconstruction. Collectively, these models enable complex reasoning about urban scenes by fusing all available data across modalities, a crucial necessity for future autonomous agents and large-scale augmented-reality applications. by Randi Cabezas. Ph.D.
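
    The edge-transformation idea above can be caricatured with a much simpler trans-dimensional sampler: below, a 1D piecewise-constant fit gains or loses breakpoints, and each move is accepted with a Metropolis ratio. This is only a sketch under strong simplifying assumptions (it is not a triangular mesh, and the full reversible-jump dimension-matching terms are omitted); all names are hypothetical.

        import numpy as np

        rng = np.random.default_rng(0)

        def log_score(breakpoints, xs, ys, penalty=5.0):
            # Data fit of a piecewise-constant model minus a complexity penalty per segment
            # (a stand-in for the fit/parsimony trade-off of an adaptive mesh).
            edges = np.concatenate(([xs.min()], np.sort(breakpoints), [xs.max()]))
            log_lik = 0.0
            for lo, hi in zip(edges[:-1], edges[1:]):
                mask = (xs >= lo) & (xs <= hi)
                if mask.any():
                    log_lik += -0.5 * np.sum((ys[mask] - ys[mask].mean()) ** 2)
            return log_lik - penalty * (len(breakpoints) + 1)

        def trans_dimensional_step(breakpoints, xs, ys):
            # Propose adding ("split") or removing ("merge") a breakpoint, then accept or reject.
            proposal = list(breakpoints)
            if len(proposal) == 0 or rng.random() < 0.5:
                proposal.append(rng.uniform(xs.min(), xs.max()))
            else:
                proposal.pop(rng.integers(len(proposal)))
            log_ratio = log_score(np.array(proposal), xs, ys) - log_score(np.array(breakpoints), xs, ys)
            return proposal if np.log(rng.random()) < log_ratio else list(breakpoints)

    Repeated steps of this kind let the number of model elements grow where the data demand detail and shrink where they do not, which is the spirit of the adaptive-resolution updates described in the abstract.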

    Atomic-Resolution Spectroscopic Imaging of Ensembles of Nanocatalyst Particles Across the Life of a Fuel Cell

    The thousand-fold increase in data-collection speed enabled by aberration-corrected optics allows us to overcome an electron microscopy paradox: how to obtain atomic-resolution chemical structure in individual nanoparticles yet record a statistically significant sample from an inhomogeneous population. This allowed us to map hundreds of Pt–Co nanoparticles to show atomic-scale elemental distributions across different stages of the catalyst aging in a proton-exchange-membrane fuel cell, and relate Pt-shell thickness to treatment, particle size, surface orientation, and ordering.

    Robust Machine Learning-Based Correction on Automatic Segmentation of the Cerebellum and Brainstem

    Automated segmentation is a useful method for studying large brain structures such as the cerebellum and brainstem. However, automated segmentation may lead to inaccuracies and/or undesirable boundaries. The goal of the present study was to investigate whether SegAdapter, a machine learning-based method, is useful for automatically correcting large segmentation errors and disagreements in anatomical definition. We further assessed the robustness of the method in handling training-set size, differences in head-coil usage, and the amount of brain atrophy. High-resolution T1-weighted images were acquired from 30 healthy controls scanned with either an 8-channel or 32-channel head coil. Ten patients, who suffered from brain atrophy because of fragile X-associated tremor/ataxia syndrome, were scanned using the 32-channel head coil. The initial segmentations of the cerebellum and brainstem were generated automatically using FreeSurfer. Subsequently, FreeSurfer's segmentations were both manually corrected to serve as the gold standard and automatically corrected by SegAdapter. Using only 5 scans in the training set, spatial overlap with manual segmentation, measured by the Dice coefficient, improved significantly from 0.956 (for FreeSurfer segmentation) to 0.978 (for SegAdapter-corrected segmentation) for the cerebellum and from 0.821 to 0.954 for the brainstem. Reducing the training-set size to 2 scans decreased the Dice coefficient by only ≤0.002 for the cerebellum and ≤0.005 for the brainstem compared with a training-set size of 5 scans in corrective learning. The method was also robust in handling differences between the training set and the test set in head-coil usage and the amount of brain atrophy, which reduced spatial overlap by only <0.01. These results suggest that the combination of automated segmentation and corrective learning provides a valuable method for accurate and efficient segmentation of the cerebellum and brainstem, particularly in large-scale neuroimaging studies, and potentially for segmenting other neural regions as well.
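
    For reference, the Dice coefficient used above is the standard overlap measure between two binary segmentation masks, 2|A∩B| / (|A| + |B|). The short example below computes it on toy arrays; the masks and values are illustrative only, not the study's data.

        import numpy as np

        def dice_coefficient(seg_a, seg_b):
            # Dice overlap between two binary masks: 2*|A ∩ B| / (|A| + |B|).
            seg_a = seg_a.astype(bool)
            seg_b = seg_b.astype(bool)
            intersection = np.logical_and(seg_a, seg_b).sum()
            total = seg_a.sum() + seg_b.sum()
            return 2.0 * intersection / total if total > 0 else 1.0

        # Toy comparison of an automatic segmentation against a manual gold standard.
        auto_mask = np.zeros((10, 10), dtype=bool)
        auto_mask[2:8, 2:8] = True
        manual_mask = np.zeros((10, 10), dtype=bool)
        manual_mask[3:9, 3:9] = True
        print(round(dice_coefficient(auto_mask, manual_mask), 3))  # 0.694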

    Challenges in studying preimplantation embryo-maternal interaction in cattle

    A comprehensive understanding of the complex embryo-maternal interactions during the preimplantation period requires analysis of the very early stages of pregnancy, encompassing early embryonic development, maternal recognition, and the events leading to implantation. Although embryo development to the blastocyst stage is somewhat autonomous (i.e., it does not require contact with the maternal reproductive tract and can be successfully recapitulated in vitro), many studies on ruminant embryo production have focused on the fundamental questions of why (i) only 30%-40% of immature oocytes develop to the blastocyst stage and (ii) the quality of such blastocysts continually lags behind that of blastocysts produced in vivo. Clear evidence indicates that in vitro culture conditions are far from optimal, with deficiencies manifested in short- and long-term effects on the embryo. Thus, enhanced knowledge of the mechanisms controlling embryo-maternal interactions would allow the design of novel strategies to improve in vitro embryo conditions and reproductive outcomes in cattle. This work was supported by the Spanish Ministry of Science, Innovation and Universities (AGL2015-70140-R); Science Foundation Ireland (13/IA/1983); and the European Union H2020 Marie Sklodowska-Curie (MSCA) Innovative Training Network (ITN), REP BIOTECH - 675526. The authors are members of the COST Action 16119 "In vitro 3D total cell guidance and fitness (Cellfit)". Peer reviewed