
    MAxSIM: Multi-Angle-Crossing Structured Illumination Microscopy With Height-Controlled Mirror for 3D Topological Mapping of Live Cells

    Mapping 3D plasma membrane topology in live cells can bring unprecedented insights into cell biology. Widefield-based super-resolution methods such as 3D-structured illumination microscopy (3D-SIM) can achieve twice the axial (~300 nm) and lateral (~100 nm) resolution of widefield microscopy in real time in live cells. However, a twofold resolution enhancement is not sufficient to visualize nanoscale fine structures of the plasma membrane. Axial interferometry methods, including fluorescence light interference contrast microscopy and its derivatives (e.g., scanning angle interference microscopy), can determine nanoscale axial locations of proteins on and near the plasma membrane. We therefore combined the super-resolution lateral imaging of 2D-SIM with axial interferometry to develop multi-angle-crossing structured illumination microscopy (MAxSIM), which generates multiple incident angles by fast, optoelectronic creation of diffraction patterns. Axial localization accuracy can be enhanced by placing cells on a bottom glass substrate, locating a custom height-controlled mirror (HCM) at a fixed axial position above the glass substrate, and optimizing the height reconstruction algorithm for noisy experimental data. The HCM also enables imaging of both the apical and basal surfaces of a cell. MAxSIM with HCM offers high-fidelity nanoscale 3D topological mapping of cell plasma membranes with near-real-time (~0.5 Hz) imaging of live cells and 3D single-molecule tracking.
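
    As a rough illustration of the axial-interferometry principle that MAxSIM builds on (in the spirit of scanning angle interference microscopy), the sketch below fits a fluorophore's height above an ideal mirror from its excitation intensity measured at several incidence angles. The standing-wave model, wavelength, refractive index, and grid-search fit are illustrative assumptions, not the paper's actual HCM geometry or height-reconstruction algorithm.

```python
import numpy as np

WAVELENGTH = 488e-9   # excitation wavelength in metres (assumed)
N_MEDIUM = 1.33       # refractive index of the imaging medium (assumed)

def model_intensity(theta, h, amp, offset):
    """Excitation intensity at incidence angle theta (rad) for a fluorophore
    at height h (m) above an ideal mirror: a standing-wave modulation."""
    phase = 2.0 * np.pi * N_MEDIUM * h * np.cos(theta) / WAVELENGTH
    return amp * np.sin(phase) ** 2 + offset

def fit_height(thetas, intensities, h_max=500e-9, n_grid=501):
    """Grid-search the height; amplitude and offset are solved by linear
    least squares, which keeps the fit stable on noisy per-pixel data."""
    best_h, best_err = 0.0, np.inf
    for h in np.linspace(0.0, h_max, n_grid):
        basis = np.sin(2.0 * np.pi * N_MEDIUM * h * np.cos(thetas) / WAVELENGTH) ** 2
        X = np.stack([basis, np.ones_like(basis)], axis=1)
        coef, *_ = np.linalg.lstsq(X, intensities, rcond=None)
        err = np.sum((X @ coef - intensities) ** 2)
        if err < best_err:
            best_h, best_err = h, err
    return best_h

# Synthetic check: recover a 150 nm height from a noisy 16-angle sweep.
angles = np.deg2rad(np.linspace(5, 45, 16))
meas = model_intensity(angles, 150e-9, amp=1.0, offset=0.1)
meas = meas + np.random.default_rng(0).normal(0.0, 0.02, meas.shape)
print(f"estimated height: {fit_height(angles, meas) * 1e9:.0f} nm")
```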

    Simultaneous super-resolution, tracking and mapping

    This paper proposes a new visual SLAM technique that not only integrates 6DOF pose and dense structure but also simultaneously integrates the color information contained in the images over time. This involves developing an inverse model for creating a super-resolution map from many low-resolution images. Contrary to classic super-resolution techniques, this is achieved here by taking into account full 3D translation and rotation within a dense localisation and mapping framework. This not only accounts for the full range of image deformations but also allows a novel criterion to be proposed for combining the low-resolution images, based on the difference in resolution between images in 6D space. Several results are given showing that this technique runs in real time (30 Hz) and is able to map large-scale environments in high resolution whilst simultaneously improving the accuracy and robustness of the tracking.
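
    A minimal sketch of the map-fusion step described above: each low-resolution frame is warped into the high-resolution map frame and blended with a per-pixel weight, so finer observations dominate. In the paper the warp comes from the estimated 6DOF pose and dense structure and the weight from the resolution difference between views in 6D space; here both are assumed to be given as inputs.

```python
import numpy as np

def fuse_super_resolution(map_shape, observations):
    """observations: iterable of (values, rows, cols, weights) tuples, one per
    low-resolution frame, where values are pixel intensities already warped
    into integer coordinates of the high-resolution map and weights encode
    how finely that frame samples the surface at those points."""
    acc = np.zeros(map_shape)
    wsum = np.zeros(map_shape)
    for values, rows, cols, weights in observations:
        np.add.at(acc, (rows, cols), weights * values)   # weighted accumulation
        np.add.at(wsum, (rows, cols), weights)
    with np.errstate(invalid="ignore", divide="ignore"):
        return acc / wsum   # weighted mean; NaN where no frame contributed
```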

    Computational localization microscopy with extended axial range

    A new single-aperture 3D particle-localization and tracking technique is presented that demonstrates an increase in depth range by more than an order of magnitude without compromising optical resolution or throughput. We exploit the extended depth range and depth-dependent translation of an Airy-beam PSF for 3D localization over an extended volume in a single snapshot. The technique is applicable to all bright-field and fluorescence modalities for particle localization and tracking, ranging from super-resolution microscopy through to the tracking of fluorescent beads and endogenous particles within cells. We demonstrate and validate its application to real-time 3D velocity imaging of fluid flow in capillaries using fluorescent tracer beads. An axial localization precision of 50 nm was obtained over a depth range of 120 μm using a 0.4 NA, 20× microscope objective. We believe this to be the highest ratio of axial range to precision reported to date.
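
    The sketch below illustrates, under simplifying assumptions, how a depth-dependent lateral translation of the PSF can be turned into a 3D localization: a bead z-stack calibrates depth as a function of the shift of the Airy main lobe, and each snapshot is then localized laterally (here by a plain centroid) and axially by inverting that calibration. The translation axis, polynomial calibration, and centroid localizer are placeholders, not the paper's processing pipeline.

```python
import numpy as np

def centroid(roi):
    """Intensity-weighted centroid (row, col) of a background-subtracted spot."""
    roi = np.clip(roi - np.median(roi), 0.0, None)
    rows, cols = np.indices(roi.shape)
    total = roi.sum()
    return (rows * roi).sum() / total, (cols * roi).sum() / total

def calibrate_depth(shifts_px, depths_um, order=3):
    """Fit depth as a polynomial of the lateral PSF shift, from a bead z-stack."""
    return np.polyfit(shifts_px, depths_um, order)

def localize_3d(roi, ref_col, coeffs, pixel_um):
    """(x, y, z): lateral position from the centroid, depth from the shift of
    the Airy main lobe along the (assumed) column axis relative to ref_col."""
    row, col = centroid(roi)
    z_um = np.polyval(coeffs, col - ref_col)
    return col * pixel_um, row * pixel_um, z_um
```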

    3D Face Tracking and Texture Fusion in the Wild

    We present a fully automatic approach to real-time 3D face reconstruction from monocular in-the-wild videos. Using cascaded-regressor-based face tracking and 3D Morphable Face Model shape fitting, we obtain a semi-dense 3D face shape. We further use texture information from multiple frames to build a holistic 3D face representation from the video. Our system captures facial expressions and does not require any person-specific training. We demonstrate the robustness of our approach on the challenging 300 Videos in the Wild (300-VW) dataset. Our real-time fitting framework is available as an open-source library at http://4dface.org.
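
    As a hedged sketch of the texture-fusion idea (the fusion actually implemented in the 4dface library may differ), each frame can contribute a remapped UV texture together with a per-texel confidence, for instance derived from how frontally the surface was seen, and the holistic texture is their confidence-weighted mean:

```python
import numpy as np

def fuse_textures(textures, confidences, eps=1e-6):
    """textures: (N, H, W, 3) per-frame UV texture maps; confidences: (N, H, W)
    per-texel weights in [0, 1]. Returns the confidence-weighted mean texture;
    texels never seen with confidence fall back towards zero."""
    w = np.asarray(confidences, dtype=float)[..., None]
    t = np.asarray(textures, dtype=float)
    return (t * w).sum(axis=0) / np.clip(w.sum(axis=0), eps, None)
```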

    Real-Time Panoramic Tracking for Event Cameras

    Event cameras are a paradigm shift in camera technology. Instead of full frames, the sensor captures a sparse set of events caused by intensity changes. Since only the changes are transferred, these cameras are able to capture quick movements of objects in the scene or of the camera itself. In this work we propose a novel method to perform camera tracking of event cameras in a panoramic setting with three degrees of freedom. We propose a direct camera tracking formulation, similar to the state of the art in visual odometry. We show that the minimal information needed for simultaneous tracking and mapping is the spatial position of events, without using the appearance of the imaged scene point. We verify the robustness to fast camera movements and dynamic objects in the scene on a recently proposed dataset and on self-recorded sequences. Comment: Accepted to the International Conference on Computational Photography 2017.
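
    The mapping half of such a tracking-and-mapping loop can be sketched as follows: under a pure-rotation (3DOF) model, each event's pixel position is back-projected to a ray, rotated into the world frame, and accumulated into an equirectangular panorama. The rotation estimation itself (aligning new events to the map) is omitted, and the intrinsics and rotation are assumed given; this is an illustration rather than the paper's formulation.

```python
import numpy as np

def events_to_panorama(events_xy, R, K, pano_shape):
    """events_xy: (N, 2) pixel coordinates of events; R: 3x3 world-from-camera
    rotation; K: 3x3 camera intrinsics. Returns an event-count panorama."""
    h, w = pano_shape
    ones = np.ones((events_xy.shape[0], 1))
    rays = np.linalg.inv(K) @ np.hstack([events_xy, ones]).T   # camera-frame rays
    rays = R @ rays                                            # rotate into world
    rays /= np.linalg.norm(rays, axis=0, keepdims=True)
    lon = np.arctan2(rays[0], rays[2])                  # longitude in [-pi, pi]
    lat = np.arcsin(np.clip(rays[1], -1.0, 1.0))        # latitude in [-pi/2, pi/2]
    cols = ((lon + np.pi) / (2.0 * np.pi) * (w - 1)).astype(int)
    rows = ((lat + np.pi / 2.0) / np.pi * (h - 1)).astype(int)
    pano = np.zeros(pano_shape)
    np.add.at(pano, (rows, cols), 1.0)                  # accumulate event counts
    return pano
```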

    On using gait to enhance frontal face extraction

    Visual surveillance finds increasing deployment for monitoring urban environments. Operators need to be able to determine identity from surveillance images and often use face recognition for this purpose. In surveillance environments, it is necessary to handle pose variation of the human head, low frame rates, and low-resolution input images. We describe the first use of gait to enable face acquisition and recognition, by analysis of 3-D head motion and gait trajectory, combined with super-resolution analysis. We use region- and distance-based refinement of head pose estimation. We develop a direct mapping to relate the 2-D image with a 3-D model. In gait trajectory analysis, we model the looming effect so as to obtain the correct face region. Based on head position and the gait trajectory, we can reconstruct high-quality frontal face images which are demonstrated to be suitable for face recognition. The contributions of this research include the construction of a 3-D model for pose estimation from planar imagery and the first use of gait information to enhance the face extraction process, allowing for deployment in surveillance scenarios.
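
    A very simplified sketch of the multi-frame super-resolution step: once the face crops have been pose-corrected and their sub-pixel offsets estimated (in the paper this comes from the 3-D head model and gait trajectory), each crop is upsampled, registered, and averaged. The interpolation and plain averaging below are illustrative stand-ins for the paper's super-resolution analysis.

```python
import numpy as np
from scipy.ndimage import shift, zoom

def superresolve(face_crops, offsets, scale=4):
    """face_crops: same-size 2D grayscale crops of the pose-corrected face;
    offsets: per-crop (dy, dx) sub-pixel offsets in low-resolution pixels,
    relative to the first crop. Upsample, register, and average the stack."""
    stack = [shift(zoom(img, scale, order=3), np.asarray(off) * scale, order=3)
             for img, off in zip(face_crops, offsets)]
    return np.mean(stack, axis=0)
```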

    Fuzzy Fibers: Uncertainty in dMRI Tractography

    Fiber tracking based on diffusion-weighted Magnetic Resonance Imaging (dMRI) allows for noninvasive reconstruction of fiber bundles in the human brain. In this chapter, we discuss sources of error and uncertainty in this technique, and review strategies that afford a more reliable interpretation of the results. This includes methods for computing and rendering probabilistic tractograms, which estimate precision in the face of measurement noise and artifacts. However, we also address aspects that have received less attention so far, such as model selection, partial voluming, and the impact of parameters, both in preprocessing and in fiber tracking itself. We conclude by suggesting directions for future research.
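
    A toy sketch of the probabilistic tractography idea mentioned above: at every step the propagation direction is sampled from a dispersion model around the local principal diffusion direction, and repeating the walk many times yields per-voxel visitation counts that can be rendered as a probabilistic tractogram. The fibre-orientation estimate, dispersion model, and stopping rule here are placeholders, not any specific method from the chapter.

```python
import numpy as np

def probabilistic_track(seed, principal_dir, n_streamlines=1000, n_steps=200,
                        step=0.5, jitter=0.27, rng=np.random.default_rng(0)):
    """principal_dir(p) -> unit 3-vector (local fibre orientation) or None to
    stop (e.g. outside the mask or low anisotropy). jitter is the std of the
    Gaussian perturbation added to each direction component, a crude
    dispersion model (~0.27 corresponds to roughly 15 degrees)."""
    visits = {}
    for _ in range(n_streamlines):
        p, prev = np.asarray(seed, dtype=float), None
        for _ in range(n_steps):
            d = principal_dir(p)
            if d is None:
                break
            d = np.asarray(d, dtype=float) + rng.normal(0.0, jitter, 3)
            d /= np.linalg.norm(d)
            if prev is not None and np.dot(d, prev) < 0.0:
                d = -d                      # keep propagating forward
            p = p + step * d
            prev = d
            key = tuple(np.round(p).astype(int))
            visits[key] = visits.get(key, 0) + 1   # per-voxel visit count
    return visits
```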

    Super-Resolution Microscopy: A Virus’ Eye View of the Cell

    It is difficult to observe the molecular choreography between viruses and host cell components, as they exist on a spatial scale beyond the reach of conventional microscopy. However, novel super-resolution microscopy techniques have cast aside technical limitations to reveal a nanoscale view of virus replication and cell biology. This article provides an introduction to super-resolution imaging, in particular localisation microscopy, and explores the application of such technologies to the study of viruses and tetraspanins, the topic of this special issue.
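
    The core computational step of localisation microscopy can be sketched as fitting a 2D Gaussian approximation of the PSF to a small camera region around a single blinking fluorophore; the fitted centre locates the molecule far more precisely than the diffraction limit, and accumulating many such localisations over thousands of frames builds the super-resolved image. The ROI handling, pixel scale, and initial guesses below are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, x0, y0, sigma, amp, offset):
    """Isotropic 2D Gaussian, flattened to match curve_fit's expectations."""
    x, y = coords
    g = amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2.0 * sigma ** 2)) + offset
    return g.ravel()

def localize(roi, pixel_nm=100.0):
    """Fit a single-emitter ROI and return its (x, y) position in nanometres."""
    y, x = np.indices(roi.shape)
    p0 = (roi.shape[1] / 2.0, roi.shape[0] / 2.0, 1.5,
          float(roi.max() - roi.min()), float(roi.min()))
    popt, _ = curve_fit(gauss2d, (x, y), roi.ravel(), p0=p0)
    return popt[0] * pixel_nm, popt[1] * pixel_nm
```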