    Correlated Multimodal Imaging in Life Sciences:Expanding the Biomedical Horizon

    The frontiers of bioimaging are currently being pushed toward the integration and correlation of several modalities to tackle biomedical research questions holistically and across multiple scales. Correlated Multimodal Imaging (CMI) gathers information about exactly the same specimen with two or more complementary modalities that, in combination, create a composite and complementary view of the sample (including insights into structure, function, dynamics and molecular composition). CMI makes it possible to describe biomedical processes within their overall spatio-temporal context and to gain a mechanistic understanding of cells, tissues, diseases or organisms by untangling their molecular mechanisms within their native environment. The two best-established CMI implementations for small animals and model organisms are hardware-fused platforms in preclinical imaging (Hybrid Imaging) and Correlated Light and Electron Microscopy (CLEM) in biological imaging. Although the merits of Preclinical Hybrid Imaging (PHI) and CLEM are well established, both approaches would benefit from standardization of protocols, ontologies and data handling, and from the development of optimized and advanced implementations. Specifically, CMI pipelines that aim at bridging preclinical and biological imaging beyond CLEM and PHI are rare but bear great potential to substantially advance both bioimaging and biomedical research. CMI faces three mai…

    Image-guided ToF depth upsampling: a survey

    Recently, there has been remarkable growth of interest in the development and applications of time-of-flight (ToF) depth cameras. Despite continuing improvements in their characteristics, the practical applicability of ToF cameras is still limited by the low resolution and quality of their depth measurements. This has motivated many researchers to combine ToF cameras with other sensors in order to enhance and upsample depth images. In this paper, we review the approaches that couple ToF depth images with high-resolution optical images. Other classes of upsampling methods are also briefly discussed. Finally, we provide an overview of the performance evaluation tests presented in the related studies.
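
    One representative family of methods covered by this kind of survey is joint bilateral upsampling, which weights each low-resolution depth sample by both its spatial distance to the target pixel and the similarity of the corresponding pixels in the high-resolution optical image. The sketch below is a minimal, unoptimized NumPy illustration of that general idea, not code from the surveyed work; the function name and parameter values are placeholders.

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, guide_hr, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Upsample a low-resolution depth map using a high-resolution guide image.

    depth_lr : (h, w) low-resolution depth map, np.nan where invalid
    guide_hr : (H, W) high-resolution intensity image scaled to [0, 1]
    Returns an (H, W) upsampled depth map.
    """
    H, W = guide_hr.shape
    h, w = depth_lr.shape
    sy, sx = h / H, w / W                      # high-res -> low-res scale factors
    out = np.full((H, W), np.nan)

    for y in range(H):
        for x in range(W):
            cy, cx = y * sy, x * sx            # position of (y, x) in the low-res grid
            acc, norm = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ly, lx = int(round(cy)) + dy, int(round(cx)) + dx
                    if not (0 <= ly < h and 0 <= lx < w):
                        continue
                    d = depth_lr[ly, lx]
                    if np.isnan(d):
                        continue
                    # spatial weight, measured in the low-res grid
                    ws = np.exp(-((ly - cy) ** 2 + (lx - cx) ** 2) / (2 * sigma_s ** 2))
                    # range weight from the guide image: compare the target pixel with
                    # the guide pixel corresponding to this low-res depth sample
                    gy = min(int(ly / sy), H - 1)
                    gx = min(int(lx / sx), W - 1)
                    wr = np.exp(-((guide_hr[y, x] - guide_hr[gy, gx]) ** 2) / (2 * sigma_r ** 2))
                    acc += ws * wr * d
                    norm += ws * wr
            if norm > 0:
                out[y, x] = acc / norm
    return out
```

    Practical implementations vectorize or GPU-accelerate this filter; the nested loops here only make the two weighting terms explicit.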

    Investigating the Mechanism of Bacterial Cell Division with Superresolution Microscopy

    The molecular mechanisms that drive bacterial cytokinesis are attractive antibiotic targets that remain poorly understood. The machinery that performs cytokinesis in bacteria has been termed the 'divisome' (see Chapter 1 for description). The most widely conserved divisome protein, FtsZ, is an essential tubulin homolog that polymerizes into protofilaments in a nucleotide-dependent manner. These protofilaments assemble at midcell to form the 'Z-ring', which has been the prevailing candidate for constrictive force generation during cell division. However, it has been difficult to experimentally test proposed Z-ring force generation models in vivo due to the small size of bacteria (< 1 μm diameter for E. coli) compared to the diffraction-limited resolution of light (~ 0.3 μm). In this work, quantitative superresolution and time-lapse microscopy were applied to examine whether Z-ring structure and function indeed play limiting roles in driving E. coli cell constriction (Chapter 2). Surprisingly, these studies revealed that the rate of septum closure during constriction is robust to substantial changes in many Z-ring properties, including the GTPase activity of FtsZ, the molecular density of the Z-ring, the timing of Z-ring disassembly, and the absence of Z-ring assembly regulators. Further investigation revealed that the septum closure rate is instead highly coupled to the rate of cell wall growth and elongation, and can be modulated by coordination with chromosome segregation. Taken together, these results challenge the Z-ring-centric view of constriction force generation and suggest that cell wall synthesis and chromosome segregation likely drive the rate and progress of cell constriction in bacteria. These investigations were made possible by advancements in quantitative superresolution microscopy techniques (see Chapter 3 for an overview). One major obstacle encountered during the course of this work, and shared by those using localization-based superresolution microscopy techniques, was the overestimation of molecule numbers caused by fluorophore photoblinking. Chapter 4 therefore describes a systematic characterization of the effects of photoblinking on the accurate construction and analysis of superresolution images. These characterizations enabled the development of a simple method for identifying optimal clustering thresholds and an empirical criterion for evaluating whether an imaging condition is appropriate for accurate superresolution image reconstruction. Both the threshold selection method and the imaging condition criterion are easy to implement within existing PALM clustering algorithms and experimental conditions.
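
    A common way to compensate for photoblinking-induced over-counting is to merge localizations that recur within a small distance and within a maximum dark time into a single molecule, which is where the clustering thresholds mentioned above come in. The snippet below is a generic, hypothetical sketch of that grouping step; it does not reproduce the threshold-selection method developed in Chapter 4, and the distance and dark-time values are placeholders.

```python
import numpy as np

def merge_blinking_events(locs, max_dist=30.0, max_dark_frames=5):
    """Group localizations that likely stem from one blinking fluorophore.

    locs : (N, 3) array with columns (x_nm, y_nm, frame), sorted by frame.
    Two localizations get the same molecule id if they lie within max_dist
    nanometers of each other and are separated by at most max_dark_frames
    frames of dark time.
    Returns an integer molecule id per localization.
    """
    ids = -np.ones(len(locs), dtype=int)
    active = {}            # molecule id -> (x, y, last_frame)
    next_id = 0
    for i, (x, y, f) in enumerate(locs):
        # retire molecules whose dark time exceeded the threshold
        active = {m: (mx, my, mf) for m, (mx, my, mf) in active.items()
                  if f - mf <= max_dark_frames}
        # link to the closest active molecule within max_dist, if any
        best, best_d = None, max_dist
        for m, (mx, my, mf) in active.items():
            d = np.hypot(x - mx, y - my)
            if d <= best_d:
                best, best_d = m, d
        if best is None:
            best, next_id = next_id, next_id + 1
        ids[i] = best
        active[best] = (x, y, f)
    return ids
```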

    Development Of A High Performance Mosaicing And Super-Resolution Algorithm

    In this dissertation, a high-performance mosaicing and super-resolution algorithm is described. The scale-invariant feature transform (SIFT)-based mosaicing algorithm builds an initial mosaic, which is iteratively updated by the robust super-resolution algorithm to achieve the final high-resolution mosaic. Two different types of datasets are used for testing: high-altitude balloon data and unmanned aerial vehicle data. To evaluate our algorithm, five performance metrics are employed: mean square error, peak signal-to-noise ratio, singular value decomposition, slope of the reciprocal singular value curve, and cumulative probability of blur detection. Extensive testing shows that the proposed algorithm is effective in improving the captured aerial data and that the performance metrics provide an accurate quantitative evaluation of the algorithm.
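
    Of the five metrics, mean square error and peak signal-to-noise ratio are standard and easy to reproduce; the remaining three are more specialized. A minimal NumPy sketch of the two standard metrics (illustrative only, not the dissertation's evaluation code):

```python
import numpy as np

def mse(reference, test):
    """Mean squared error between two images of the same shape."""
    diff = reference.astype(np.float64) - test.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in decibels; higher means closer to the reference."""
    err = mse(reference, test)
    if err == 0.0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / err)
```

    A super-resolved mosaic that better matches a ground-truth image yields a lower MSE and therefore a higher PSNR.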

    Precise Depth Image Based Real-Time 3D Difference Detection

    3D difference detection is the task of verifying whether the 3D geometry of a real object exactly corresponds to a 3D model of this object. This thesis introduces real-time 3D difference detection with a hand-held depth camera. In contrast to previous works, with the proposed approach, geometric differences can be detected in real time and from arbitrary viewpoints. Therefore, the scan position of the 3D difference detection can be changed on the fly, during the 3D scan. Thus, the user can move the scan position closer to the object to inspect details or to bypass occlusions. The main research questions addressed by this thesis are: Q1: How can 3D differences be detected in real time and from arbitrary viewpoints using a single depth camera? Q2: Extending the first question, how can 3D differences be detected with a high precision? Q3: Which accuracy can be achieved with concrete setups of the proposed concept for real-time, depth image based 3D difference detection? This thesis answers Q1 by introducing a real-time approach for depth image based 3D difference detection. The real-time difference detection is based on an algorithm which maps the 3D measurements of a depth camera onto an arbitrary 3D model in real time by fusing computer vision (depth imaging and pose estimation) with a computer graphics based analysis-by-synthesis approach. The thesis then answers Q2 by providing solutions for enhancing the 3D difference detection accuracy, both by precise pose estimation and by reducing depth measurement noise. A precise variant of the 3D difference detection concept is proposed, which combines two main aspects. First, the precision of the depth camera's pose estimation is improved by coupling the depth camera with a very precise coordinate measuring machine. Second, measurement noise of the captured depth images is reduced, and missing depth information is filled in, by extending the 3D difference detection with 3D reconstruction. The accuracy of the proposed 3D difference detection is quantified by a quantitative evaluation, which provides an answer to Q3. The accuracy is evaluated both for the basic setup and for the variants that focus on high precision. The quantitative evaluation using real-world data covers both the accuracy which can be achieved with a time-of-flight camera (SwissRanger 4000) and with a structured-light depth camera (Kinect). With the basic setup and the structured-light depth camera, differences of 8 to 24 millimeters can be detected from one meter measurement distance. With the enhancements proposed for precise 3D difference detection, differences of 4 to 12 millimeters can be detected from one meter measurement distance using the same depth camera. By addressing these three research questions, this thesis provides a solution for precise real-time 3D difference detection based on depth images. With the proposed approach, dense 3D differences can be detected in real time and from arbitrary viewpoints using a single depth camera. Furthermore, by coupling the depth camera with a coordinate measuring machine and by integrating 3D reconstruction into the 3D difference detection, 3D differences can be detected in real time and with high precision.
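
    At its core, depth-image based difference detection of this kind compares each measured depth value with the depth synthesized by rendering the 3D model from the estimated camera pose (the analysis-by-synthesis step). The following is a hypothetical NumPy sketch of that per-pixel comparison, assuming the model has already been rendered into a depth buffer aligned with the camera; pose estimation, noise reduction, and 3D reconstruction, which the thesis addresses separately, are omitted.

```python
import numpy as np

def detect_depth_differences(measured_depth, rendered_depth, threshold_mm=10.0):
    """Flag pixels where the captured depth deviates from the model's synthetic depth.

    measured_depth : (H, W) depth image from the camera, in millimeters, np.nan where invalid
    rendered_depth : (H, W) depth image rendered from the 3D model at the estimated
                     camera pose, in millimeters, np.nan where the model is not hit
    Returns a boolean difference mask and the signed per-pixel deviation.
    """
    deviation = measured_depth - rendered_depth
    valid = ~np.isnan(measured_depth) & ~np.isnan(rendered_depth)
    mask = valid & (np.abs(deviation) > threshold_mm)
    return mask, deviation
```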

    Enhancing Multi-View 3D-Reconstruction Using Multi-Frame Super Resolution

    Multi-view stereo is a popular method for 3D reconstruction. Super resolution is a technique used to produce high-resolution output from low-resolution input. Since the quality of 3D reconstruction is directly dependent on the input, a simple path is to improve the resolution of the input. In this dissertation, we explore the idea of using super resolution to improve 3D reconstruction at the input stage of the multi-view stereo framework. In particular, we show that multi-view stereo, when combined with multi-frame super resolution, produces a more accurate 3D reconstruction. The proposed method utilizes images with sub-pixel camera movements to produce high-resolution output. This enhanced output is fed through the multi-view stereo pipeline to produce an improved 3D model. As a performance test, the improved 3D model is compared to similarly generated 3D reconstructions that use bicubic upsampling and single-image super resolution at the input stage of the multi-view stereo framework. This is done by comparing the point clouds of the generated models to a reference model using three metrics: average, median, and max distance. The model whose metrics are closest to the reference model is considered the better model. The overall experimental results show that the models generated using our technique have point clouds whose average, median, and max distances are, respectively, 4.3%, 8.8%, and 6% closer to the reference model. This indicates an improvement in 3D reconstruction using our technique. In addition, our technique has a significant speed advantage over the single-image super resolution analogs, being at least 6.8x faster. The use of multi-frame super resolution in conjunction with the multi-view stereo framework is a practical solution for enhancing the quality of 3D reconstruction and shows promising results over single-image up-sampling techniques.
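
    The point-cloud comparison described above (average, median, and max distance to a reference model) can be sketched with a nearest-neighbour query; the snippet below is an illustrative Python/SciPy version, not the evaluation code used in the dissertation.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_reference_distances(points, reference_points):
    """Summarize how far a reconstructed point cloud lies from a reference cloud.

    points, reference_points : (N, 3) and (M, 3) arrays of XYZ coordinates.
    Each reconstructed point is matched to its nearest neighbour in the
    reference cloud; the distances are summarized with the three metrics
    used in the comparison above.
    """
    tree = cKDTree(reference_points)
    dists, _ = tree.query(points, k=1)
    return {
        "average": float(np.mean(dists)),
        "median": float(np.median(dists)),
        "max": float(np.max(dists)),
    }
```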

    Optical Coherence Tomography and Its Non-medical Applications

    Optical coherence tomography (OCT) is a promising non-invasive, non-contact 3D imaging technique that can be used to evaluate and inspect material surfaces, multilayer polymer films, fiber coils, and coatings. OCT can be used for the examination of cultural heritage objects and for 3D imaging of microstructures. With its subsurface 3D fingerprint imaging capability, OCT could be a valuable tool for enhancing security in biometric applications. OCT can also be used to evaluate fastener flushness for improving the aerodynamic performance of high-speed aircraft. More and more non-medical applications of OCT are emerging. In this book, we present some recent advancements in OCT technology and its non-medical applications.

    Nuclear accessibility of beta-actin mRNA is measured by 3D single-molecule real-time tracking

    Imaging single proteins or RNAs allows direct visualization of the inner workings of the cell. Typically, three-dimensional (3D) images are acquired by sequentially capturing a series of 2D sections. The time required to step through the sample often impedes imaging of large numbers of rapidly moving molecules. Here we applied multifocus microscopy (MFM) to instantaneously capture 3D single-molecule real-time images in live cells, visualizing cell nuclei at 10 volumes per second. We developed image analysis techniques to analyze messenger RNA (mRNA) diffusion in the entire volume of the nucleus. Combining MFM with precise registration between fluorescently labeled mRNA, nuclear pore complexes, and chromatin, we obtained globally optimal image alignment within 80-nm precision using transformation models. We show that beta-actin mRNAs freely access the entire nucleus and that fewer than 60% of mRNAs are more than 0.5 μm away from a nuclear pore, and we do so while, for the first time, accounting for the spatial inhomogeneity of nuclear organization.
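
    Once 3D trajectories are captured at 10 volumes per second, a first-pass characterization of mRNA mobility is the mean squared displacement (MSD). The sketch below is a simplified illustration of a lag-1 MSD estimate assuming free 3D diffusion (MSD = 6·D·Δt); it is not the image analysis pipeline developed in the paper and ignores localization error and confinement.

```python
import numpy as np

def diffusion_coefficient_3d(track, dt):
    """Estimate a diffusion coefficient from one 3D trajectory.

    track : (T, 3) array of positions in micrometers, one row per captured volume
    dt    : time between volumes in seconds (0.1 s at 10 volumes per second)
    Uses the lag-1 mean squared displacement and MSD = 6 * D * dt for free
    3D diffusion; localization error and confinement are ignored.
    """
    steps = np.diff(track, axis=0)              # displacements between consecutive volumes
    msd_lag1 = np.mean(np.sum(steps ** 2, axis=1))
    return msd_lag1 / (6.0 * dt)
```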

    Single-molecule localization microscopy analysis with ImageJ

    ImageJ is a versatile and powerful tool for quantitative image analysis in microscopy. It is open-source, platform-independent software that gives students and researchers an accessible yet thorough introduction to image analysis. The image-processing package Fiji, in particular, is a valuable and powerful extension of ImageJ. Several plugins and macros for single-molecule localization microscopy (SMLM) have been developed during the last decade. These tools cover the steps from single-molecule localization and image reconstruction to SMLM data postprocessing such as density analysis, image registration, and resolution estimation. This article describes how ImageJ/Fiji can be used for image analysis, reviews existing extensions for SMLM, and aims to introduce and motivate novices and advanced SMLM users alike to explore the possibilities of ImageJ/Fiji for automated and quantitative data analysis.
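
    The reconstruction step that such SMLM tools perform can be approximated outside ImageJ as well: bin the localization coordinates into a fine pixel grid and blur the result. The following is a generic Python sketch of that histogram rendering, intended only to illustrate the concept; the ImageJ/Fiji plugins reviewed in the article provide this and more sophisticated renderers through their own interfaces.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def render_smlm_image(x_nm, y_nm, pixel_size_nm=10.0, blur_sigma_nm=20.0):
    """Reconstruct a super-resolved image from localization coordinates.

    The localizations are binned into a fine pixel grid (2D histogram) and
    blurred with a Gaussian, a common simple rendering scheme for SMLM data.
    """
    x = x_nm - np.min(x_nm)
    y = y_nm - np.min(y_nm)
    nx = int(np.ceil(np.max(x) / pixel_size_nm)) + 1
    ny = int(np.ceil(np.max(y) / pixel_size_nm)) + 1
    img, _, _ = np.histogram2d(
        y, x,
        bins=(ny, nx),
        range=((0, ny * pixel_size_nm), (0, nx * pixel_size_nm)),
    )
    return gaussian_filter(img, sigma=blur_sigma_nm / pixel_size_nm)
```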