
    Comparison of Multiscale Imaging Methods for Brain Research

    A major challenge in neuroscience is how to study structural alterations in the brain. Even small changes in synaptic composition can have severe consequences for bodily function, and many neuropathological diseases are attributable to the disorganization of particular synaptic proteins. Yet detecting, describing, and quantitatively evaluating such often subtle deviations from the normal physiological state remains very challenging. Here, we compared several commercially available light microscopes side by side for their suitability in visualizing synaptic components in larger parts of the brain at low, extended, and super-resolution. The microscopy technologies included stereo, widefield, deconvolution, confocal, and super-resolution set-ups. We also analyzed the impact of adaptive optics, a motorized objective correction collar, and CUDA graphics card technology on imaging quality and acquisition speed. Our observations evaluate a basic set of techniques that allow multi-color brain imaging from centimeter to nanometer scales. The comparative multi-modal strategy we established can serve as a guide for researchers in selecting the most appropriate light microscopy method for specific questions in brain research, and we also give insights into recent developments such as optical aberration correction.

    Convex Relaxations for Particle-Gradient Flow with Applications in Super-Resolution Single-Molecule Localization Microscopy

    Single-molecule localization microscopy (SMLM) techniques have become advanced bioanalytical tools by quantifying the positions and orientations of molecules in space and time at the nanoscale. With the noisy and heterogeneous nature of SMLM datasets in mind, we discuss leveraging particle-gradient flow (1) for quantifying the accuracy of localization algorithms with and without ground truth and (2) as a basis for novel, model-driven localization algorithms with empirically robust performance. Using experimental data, we demonstrate that overlapping images of molecules, a typical consequence of densely packed biological structures, cause biases in position estimates and reconstruction artifacts. To minimize such biases, we develop a novel sparse deconvolution algorithm by relaxing a particle-gradient flow algorithm (called relaxed-gradient flow, or RGF). In contrast to previous methods based on sequential source matching or grid-based strategies, RGF detects source molecules based on the estimated “gradient flux.” RGF reconstructs experimental images of microtubules with much greater accuracy in terms of separation and diameter. We further extend RGF to the problem of jointly estimating molecular position and orientation. By lifting the optimization from first-order to second-order orientational moments, we derive an efficient version of RGF that exhibits robustness to instrumental mismatches. Finally, we discuss the fundamental problem of quantifying the accuracy of a localization estimate without ground truth. We show that by computing measurement stability under a well-chosen perturbation, with accurate knowledge of the imaging system, we can robustly quantify the confidence of individual localizations without ground-truth knowledge of the sample. To demonstrate the broad applicability of our method, termed Wasserstein-induced flux, we measure the accuracy of various reconstruction algorithms directly on experimental data.
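
    As a rough illustration of the particle-gradient-flow idea described above, the sketch below fits a sum of pixelated Gaussian PSFs to a single SMLM frame by plain gradient descent on emitter positions and brightnesses. The function names, the isotropic Gaussian PSF model, and the fixed step size are assumptions made for illustration; this is not the paper's relaxed-gradient-flow (RGF) algorithm, which additionally uses the estimated gradient flux to detect sources.

```python
# Minimal sketch of particle-gradient flow for 2-D SMLM localization.
# Illustration only; function names, PSF model, and step size are assumptions.
import numpy as np

def psf(grid_x, grid_y, x, y, sigma=1.3):
    """Pixelated isotropic Gaussian PSF centred at (x, y)."""
    return np.exp(-((grid_x - x) ** 2 + (grid_y - y) ** 2) / (2 * sigma ** 2))

def render(grid_x, grid_y, pos, w, sigma=1.3):
    """Forward model: weighted sum of PSFs over all particles."""
    img = np.zeros_like(grid_x, dtype=float)
    for (x, y), wk in zip(pos, w):
        img += wk * psf(grid_x, grid_y, x, y, sigma)
    return img

def particle_gradient_flow(frame, pos, w, sigma=1.3, step=0.05, n_iter=500):
    """Gradient descent on 0.5 * ||render - frame||^2 over positions and weights."""
    gy, gx = np.mgrid[0:frame.shape[0], 0:frame.shape[1]].astype(float)
    pos, w = pos.astype(float), w.astype(float)
    for _ in range(n_iter):
        resid = render(gx, gy, pos, w, sigma) - frame      # data-fit residual
        for k, (x, y) in enumerate(pos):
            p = psf(gx, gy, x, y, sigma)
            # gradients w.r.t. brightness and position (chain rule on the Gaussian)
            g_w = np.sum(resid * p)
            g_x = np.sum(resid * w[k] * p * (gx - x) / sigma ** 2)
            g_y = np.sum(resid * w[k] * p * (gy - y) / sigma ** 2)
            w[k] -= step * g_w
            pos[k] -= step * np.array([g_x, g_y])
        w = np.maximum(w, 0.0)                             # keep brightness non-negative
    return pos, w

# Usage: recover two nearby emitters from a noisy synthetic frame
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    gy, gx = np.mgrid[0:21, 0:21].astype(float)
    frame = render(gx, gy, np.array([[9.0, 10.0], [12.0, 10.5]]), np.array([1.0, 0.8]))
    frame += 0.02 * rng.standard_normal(frame.shape)
    init_pos = np.array([[8.0, 9.0], [13.0, 11.0]])
    est_pos, est_w = particle_gradient_flow(frame, init_pos, np.array([0.5, 0.5]))
```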

    Accelerated Wavelet-Regularized Deconvolution For 3-D Fluorescence Microscopy

    Modern deconvolution algorithms are often specified as minimization problems involving a non-quadratic regularization functional. When the latter is a wavelet-domain l1-norm that favors sparse solutions, the problem can be solved by a simple iterative shrinkage/thresholding algorithm (ISTA). This approach provides state-of-the-art results in 2-D, but is harder to deploy in 3-D because of its slow convergence.
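
    For context, a minimal 2-D sketch of the kind of wavelet-regularized ISTA deconvolution the abstract refers to is given below, assuming an FFT-based blur operator with a symmetric Gaussian PSF and the PyWavelets package; the function names and parameter choices are illustrative, not the authors' implementation.

```python
# Minimal 2-D sketch of wavelet-regularized ISTA deconvolution.
# Illustration only; `ista_deconvolve`, the Gaussian PSF, and the step size are assumptions.
import numpy as np
import pywt                              # PyWavelets
from scipy.signal import fftconvolve

def blur(x, psf):
    """Apply the blur operator A via FFT-based convolution."""
    return fftconvolve(x, psf, mode="same")

def soft_threshold_coeffs(coeffs, t):
    """Soft-threshold every detail coefficient of a wavedec2 decomposition."""
    out = [coeffs[0]]                    # keep the approximation band untouched
    for band in coeffs[1:]:
        out.append(tuple(pywt.threshold(c, t, mode="soft") for c in band))
    return out

def ista_deconvolve(y, psf, lam=0.01, step=None, n_iter=100, wavelet="db4", level=3):
    """ISTA for min_x 0.5*||A x - y||^2 + lam*||W x||_1 with an orthonormal wavelet W."""
    if step is None:
        # 1/L with L = ||A||^2 bounded by (sum of PSF values)^2 for a normalized PSF
        step = 1.0 / max(psf.sum() ** 2, 1e-12)
    x = y.copy()
    for _ in range(n_iter):
        grad = blur(blur(x, psf) - y, psf)        # A^T (A x - y); A is symmetric here
        z = x - step * grad                       # gradient step on the data-fit term
        coeffs = pywt.wavedec2(z, wavelet, level=level)
        coeffs = soft_threshold_coeffs(coeffs, lam * step)
        x = pywt.waverec2(coeffs, wavelet)[: y.shape[0], : y.shape[1]]
    return x

# Usage: deconvolve a synthetic blurred, noisy image
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = np.zeros((64, 64)); truth[20:24, 30:34] = 1.0
    g = np.exp(-np.linspace(-3, 3, 9) ** 2); psf = np.outer(g, g); psf /= psf.sum()
    y = blur(truth, psf) + 0.01 * rng.standard_normal(truth.shape)
    x_hat = ista_deconvolve(y, psf, lam=0.02, n_iter=200)
```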