42 research outputs found

    Structured prediction of unobserved voxels from a single depth image

    Building a complete 3D model of a scene, given only a single depth image, is underconstrained. To acquire a full volumetric model, one needs either multiple views, or a single view together with a library of unambiguous 3D models that will fit the shape of each individual object in the scene. We hypothesize that objects of dissimilar semantic classes often share similar 3D shape components, enabling a limited dataset to model the shape of a wide range of objects, and hence estimate their hidden geometry. Exploring this hypothesis, we propose an algorithm that can complete the unobserved geometry of tabletop-sized objects, based on a supervised model trained on already available volumetric elements. Our model maps from a local observation in a single depth image to an estimate of the surface shape in the surrounding neighborhood. We validate our approach both qualitatively and quantitatively on a range of indoor object collections and challenging real scenes.

    From nanometers to centimeters: Imaging across spatial scales with smart computer-aided microscopy

    Microscopes have been an invaluable tool throughout the history of the life sciences, as they allow researchers to observe the minuscule details of living systems in space and time. However, modern biology studies complex and non-obvious phenotypes and their distributions in populations, and thus requires that microscopes evolve from visual aids for anecdotal observation into instruments for objective and quantitative measurements. To this end, many cutting-edge developments in microscopy are fuelled by innovations in the computational processing of the generated images. Computational tools can be applied in the early stages of an experiment, where they allow for reconstruction of images with higher resolution and contrast or more colors compared to raw data. In the final analysis stage, state-of-the-art image analysis pipelines seek to extract interpretable and humanly tractable information from the high-dimensional space of images. In the work presented in this thesis, I performed super-resolution microscopy and wrote image analysis pipelines to derive quantitative information about multiple biological processes. I contributed to studies on the regulation of DNMT1 by implementing machine learning-based segmentation of replication sites in images and performing quantitative statistical analysis of the recruitment of multiple DNMT1 mutants. To study the spatiotemporal distribution of the DNA damage response, I performed STED microscopy and could provide a lower bound on the size of the elementary spatial units of DNA repair. In this project, I also wrote image analysis pipelines and performed statistical analysis to show a decoupling of DNA density and heterochromatin marks during repair. More on the experimental side, I helped establish a protocol for many-fold color multiplexing by iterative labelling of diverse structures via DNA hybridization. Turning from small-scale details to the distribution of phenotypes in a population, I wrote a reusable pipeline for fitting models of cell cycle stage distribution and inhibition curves to high-throughput measurements to quickly quantify the effects of innovative antiproliferative antibody-drug conjugates. The main focus of the thesis is BigStitcher, a tool for the management and alignment of terabyte-sized image datasets. Such enormous datasets are nowadays generated routinely with light-sheet microscopy and sample preparation techniques such as clearing or expansion. Their sheer size, high dimensionality and unique optical properties pose a serious bottleneck for researchers and require specialized processing tools, as the images often do not fit into the main memory of most computers. BigStitcher primarily allows for fast registration of such many-dimensional datasets on conventional hardware using optimized multi-resolution alignment algorithms. The software can also correct a variety of aberrations such as fixed-pattern noise, chromatic shifts and even complex sample-induced distortions. A defining feature of BigStitcher, as well as of the various image analysis scripts developed in this work, is their interactivity. A central goal was to leverage the user's expertise at key moments and bring innovations from the big-data world to the lab, with its smaller and much more diverse datasets, without replacing scientists with automated black-box pipelines.
To this end, BigStitcher was implemented as a user-friendly plug-in for the open-source image processing platform Fiji and provides users with a nearly instantaneous preview of the aligned images and opportunities for manual control of all processing steps. With its powerful features and ease of use, BigStitcher paves the way to the routine application of light-sheet microscopy and other methods producing equally large datasets.
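
As an illustration of the inhibition-curve fitting mentioned in the abstract above, the following is a minimal sketch that fits a standard four-parameter logistic (Hill) model to dose-response measurements with SciPy's curve_fit. The model form, parameter names and data points are illustrative assumptions, not the thesis's actual pipeline.

```python
# Hedged sketch: fitting a four-parameter logistic inhibition curve to
# high-throughput dose-response measurements (synthetic data, not from the thesis).
import numpy as np
from scipy.optimize import curve_fit

def four_pl(dose, bottom, top, ic50, hill):
    """Four-parameter logistic: response as a function of dose."""
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** hill)

# Synthetic measurements: cell viability (%) at a dilution series of a
# hypothetical antibody-drug conjugate concentration (nM).
dose = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
viability = np.array([98.0, 95.0, 85.0, 60.0, 30.0, 12.0, 6.0])

params, _ = curve_fit(four_pl, dose, viability, p0=[5.0, 100.0, 3.0, 1.0])
bottom, top, ic50, hill = params
print(f"Estimated IC50: {ic50:.2f} nM (Hill slope {hill:.2f})")
```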

    Learning to Complete 3D Scenes from Single Depth Images

    Building a complete 3D model of a scene given only a single depth image is underconstrained. To acquire a full volumetric model, one typically needs either multiple views, or a single view together with a library of unambiguous 3D models that will fit the shape of each individual object in the scene. In this thesis, we present alternative methods for inferring the hidden geometry of table-top scenes. We first introduce two depth-image datasets consisting of multiple scenes, each with a ground truth voxel occupancy grid. We then introduce three methods for predicting voxel occupancy. The first predicts the occupancy of each voxel using a novel feature vector which measures the relationship between the query voxel and surfaces in the scene observed by the depth camera. We use a Random Forest to map each voxel of unknown state to a prediction of occupancy. We observed that predicting the occupancy of each voxel independently can lead to noisy solutions. We hypothesize that objects of dissimilar semantic classes often share similar 3D shape components, enabling a limited dataset to model the shape of a wide range of objects, and hence estimate their hidden geometry. Demonstrating this hypothesis, we propose an algorithm that can make structured completions of unobserved geometry. Finally, we propose an alternative framework for understanding the 3D geometry of scenes using the observation that individual objects can appear in multiple different scenes, but in different configurations. We introduce a supervised method to find regions corresponding to the same object across different scenes. We demonstrate that it is possible to then use these groupings of partially observed objects to reconstruct missing geometry. We then perform a critical review of the approaches we have taken, including an assessment of our metrics and datasets, before proposing extensions and future work.
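
As an illustration of the per-voxel occupancy prediction described in this abstract, the following is a minimal sketch using scikit-learn's RandomForestClassifier. The feature vectors and labels are random placeholders, not the thesis's actual voxel features or datasets.

```python
# Hypothetical sketch: per-voxel occupancy prediction with a Random Forest.
# Feature vectors and labels are placeholders, not the thesis's actual features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Training data: one feature vector per voxel (e.g. encoding its relationship
# to surfaces observed by the depth camera), with a binary occupancy label
# taken from the ground-truth voxel grid.
X_train = rng.normal(size=(10_000, 32))      # placeholder features
y_train = rng.integers(0, 2, size=10_000)    # placeholder occupancy labels

forest = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
forest.fit(X_train, y_train)

# At test time, each voxel of unknown state receives an occupancy probability;
# thresholding these probabilities yields a completed voxel occupancy grid.
X_unknown = rng.normal(size=(5_000, 32))
occupancy_prob = forest.predict_proba(X_unknown)[:, 1]
completed = occupancy_prob > 0.5
```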

    Biosensors

    A biosensor is defined as a detecting device that combines a transducer with a biologically sensitive and selective component. When a specific target molecule interacts with the biological component, a signal proportional to the concentration of the substance is produced at the transducer level. Biosensors can therefore measure compounds present in the environment, in chemical processes, in food and in the human body at low cost compared with traditional analytical techniques. This book covers a wide range of aspects and issues related to biosensor technology, bringing together researchers from 11 different countries. The book consists of 16 chapters written by 53 authors. The first four chapters describe several aspects of nanotechnology applied to biosensors. The subsequent section, comprising three chapters, is devoted to biosensor applications in the fields of drug discovery, diagnostics and bacteria detection. The principles behind optical biosensors and some of their applications are discussed in chapters 8 to 11. The last five chapters deal with microelectronics, interfacing circuits, signal transmission, biotelemetry and algorithms applied to biosensing.

    Cortical control of forelimb movement

    Cortical control of movement is mediated by widespread projections impacting many nervous system regions in a top-down manner. Although much knowledge about cortical circuitry has been accumulated from local cortical microcircuits and from cortico-cortical and cortico-subcortical networks, how cortex communicates with regions closer to motor execution, including the brainstem, is less well understood. In this dissertation, we investigate the organization of cortico-medulla projections and their roles in controlling forelimb movement. We focus on anatomical and functional relationships between cortex and the lateral rostral medulla (LatRM), a region in the caudal brainstem shown to be key in the control of forelimb movement. Our findings reveal a precise anatomical and functional organization between different cortical regions and matched postsynaptic neurons in the caudal brainstem, tuned to different phases of one carefully orchestrated behavior. These findings advance our knowledge of the circuit mechanisms involved in the control of body movements and unravel the logic of how the top-level control region in the mammalian nervous system, the cortex, intersects with a high degree of specificity with command centers in the brainstem and beyond.

    Dynamics of stochastic membrane rupture events
