
    Iterative Solvers for Physics-based Simulations and Displays

    Realistic computer-generated images and simulations require complex models to properly capture the many subtle behaviors of each physical phenomenon. The mathematical equations underlying these models are complicated and cannot be solved analytically. Numerical procedures must thus be used to obtain approximate solutions. These procedures are often iterative algorithms, where an initial guess is progressively improved until it converges to a desired solution. Iterative methods are a convenient and efficient way to compute solutions to complex systems, and are at the core of most modern simulation methods. In this thesis by publication, we present three papers where iterative algorithms play a major role in a simulation or rendering method. First, we propose a method to improve the visual quality of fluid simulations. By creating a high-resolution surface representation around an input fluid simulation, stabilized with iterative methods, we introduce additional details atop the simulation. Second, we describe a method to compute fluid simulations using model reduction. We design a novel vector-field basis to represent fluid velocity, creating a method specifically tailored to improve all iterative components of the simulation. Finally, we present an algorithm to compute high-quality images for multifocal displays in a virtual reality context. Displaying images on multiple display layers incurs significant additional costs, but we formulate the image decomposition problem so as to allow an efficient solution using a simple iterative algorithm.
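
    As a concrete illustration of the kind of iterative algorithm described above, here is a minimal sketch (not taken from the thesis) of Jacobi iteration: an initial guess for the solution of a linear system A x = b is progressively refined until it converges. The simulation and display methods in the thesis use more specialized iterative schemes, but they follow the same guess-and-refine structure.

        import numpy as np

        def jacobi(A, b, x0=None, tol=1e-8, max_iter=1000):
            """Iteratively solve A x = b, starting from the guess x0."""
            n = len(b)
            x = np.zeros(n) if x0 is None else x0.astype(float)
            D = np.diag(A)               # diagonal part of A
            R = A - np.diagflat(D)       # off-diagonal remainder
            for _ in range(max_iter):
                x_new = (b - R @ x) / D  # one Jacobi update
                if np.linalg.norm(x_new - x, np.inf) < tol:
                    return x_new         # converged to the desired tolerance
                x = x_new
            return x

        # Hypothetical diagonally dominant system, for which Jacobi converges.
        A = np.array([[4.0, 1.0], [2.0, 5.0]])
        b = np.array([1.0, 2.0])
        print(jacobi(A, b))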

    A convolutional autoencoder approach for mining features in cellular electron cryo-tomograms and weakly supervised coarse segmentation

    Cellular electron cryo-tomography enables the 3D visualization of cellular organization in the near-native state and at submolecular resolution. However, the contents of cellular tomograms are often complex, making it difficult to automatically isolate different in situ cellular components. In this paper, we propose a convolutional autoencoder-based unsupervised approach to provide a coarse grouping of 3D small subvolumes extracted from tomograms. We demonstrate that the autoencoder can be used for efficient and coarse characterization of features of macromolecular complexes and surfaces, such as membranes. In addition, the autoencoder can be used to detect non-cellular features related to sample preparation and data collection, such as carbon edges from the grid and tomogram boundaries. The autoencoder is also able to detect patterns that may indicate spatial interactions between cellular components. Furthermore, we demonstrate that our autoencoder can be used for weakly supervised semantic segmentation of cellular components, requiring a very small amount of manual annotation. Comment: Accepted by the Journal of Structural Biology.
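
    The following is a minimal PyTorch sketch (an assumed architecture, not the authors' implementation) of a 3D convolutional autoencoder for small tomogram subvolumes; the bottleneck features it learns could then be clustered, for example with k-means, to obtain the kind of coarse unsupervised grouping described above.

        import torch
        import torch.nn as nn

        class SubvolumeAutoencoder(nn.Module):
            def __init__(self):
                super().__init__()
                # Encoder: 32^3 subvolume -> compact latent feature map
                self.encoder = nn.Sequential(
                    nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
                    nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
                    nn.Conv3d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 8 -> 4
                )
                # Decoder mirrors the encoder to reconstruct the input
                self.decoder = nn.Sequential(
                    nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                    nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                    nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),
                )

            def forward(self, x):
                z = self.encoder(x)
                return self.decoder(z), z.flatten(1)  # reconstruction, feature vector

        model = SubvolumeAutoencoder()
        subvolumes = torch.randn(8, 1, 32, 32, 32)        # hypothetical subvolume batch
        recon, features = model(subvolumes)
        loss = nn.functional.mse_loss(recon, subvolumes)  # reconstruction objective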

    Incorporating Fresnel-Propagation into Electron Holographic Tomography: A possible way towards three-dimensional atomic resolution

    Tomographic electron holography combines tomography, the reconstruction of three-dimensionally resolved data from multiple measurements at different specimen orientations, with electron holography, an interferometric method for measuring the complex wave function inside a transmission electron microscope (TEM). Due to multiple scattering and free-wave propagation, conventional ray-projection-based tomography performs badly when approaching atomic resolution. This is remedied by incorporating propagation effects into the projection while maintaining linearity in the object potential. Using the Rytov approach, an approximation is derived in which the logarithm of the complex wave is linear in the potential. The ray projection then becomes a convolution with a Fresnel propagation kernel, which is considerably more computationally expensive. A framework for such calculations has been implemented in Python, along with a multislice electron scattering algorithm optimised for large fields of view and high numbers of atoms, for simulations of scattering at nanoparticles. The Rytov approximation gives a remarkable increase in resolution and signal quality over the conventional approach in the tested system of a tungsten disulfide nanotube. The response to noise appears similar to that of conventional tomography, and thus rather benign. The downside is a much longer calculation time per iteration.
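
    To make the central idea concrete, here is a small NumPy sketch (with assumed, illustrative parameters; not the thesis code) of free-space Fresnel propagation of a complex wave, implemented as multiplication with the Fresnel transfer function in Fourier space. This is the operation that replaces the straight-ray projection in the Rytov-based reconstruction.

        import numpy as np

        def fresnel_propagate(wave, dz, wavelength, pixel_size):
            """Propagate a 2D complex wave over a distance dz (paraxial approximation)."""
            ny, nx = wave.shape
            fx = np.fft.fftfreq(nx, d=pixel_size)   # spatial frequencies along x
            fy = np.fft.fftfreq(ny, d=pixel_size)   # spatial frequencies along y
            FX, FY = np.meshgrid(fx, fy)
            # Fresnel transfer function: exp(-i * pi * lambda * dz * (fx^2 + fy^2))
            H = np.exp(-1j * np.pi * wavelength * dz * (FX**2 + FY**2))
            return np.fft.ifft2(np.fft.fft2(wave) * H)

        # Hypothetical values: 300 kV electrons (wavelength ~1.97 pm),
        # 0.05 nm pixels, propagation over 5 nm (all lengths in metres).
        wave = np.ones((256, 256), dtype=complex)
        out = fresnel_propagate(wave, dz=5e-9, wavelength=1.97e-12, pixel_size=0.05e-9)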

    Quantitative Image Simulation and Analysis of Nanoparticles


    New computational methods toward atomic resolution in single particle cryo-electron microscopy

    Unpublished doctoral thesis, read at the Universidad Autónoma de Madrid, Escuela Politécnica Superior, Departamento de Ingeniería Informática. Defense date: 22-06-2016. Structural information of macromolecular complexes provides key insights into the way they carry out their biological functions. In turn, electron microscopy (EM) is an essential tool to study the structure and function of biological macromolecules at medium-high resolution. In this context, Single-Particle Analysis (SPA), as an EM modality, is able to yield three-dimensional (3-D) structural information for large biological complexes at near-atomic resolution by combining many thousands of projection images. However, these views suffer from low Signal-to-Noise Ratios (SNRs), since an extremely low total electron dose is used during exposure to reduce radiation damage and preserve the functional structure of macromolecules. In recent years, the emergence of Direct Detection Devices (DDDs) has opened up the possibility of obtaining images with higher SNRs. These detectors provide a set of frames instead of just one micrograph, which makes it possible to study the behavior of frozen hydrated specimens as a function of electron dose and rate. In this way, it has become apparent that biological specimens embedded in a solid matrix of amorphous ice move during imaging, resulting in Beam-Induced Motion (BIM). Therefore, alignment of frames should be added to the classical standard data-processing workflow of single-particle reconstruction, which includes: particle selection, particle alignment, particle classification, 3-D reconstruction, and model refinement. In this thesis, we propose new algorithms and improvements for three important steps of this workflow: movie alignment, particle selection, and 3-D reconstruction. For movie alignment, a methodology based on a noise-robust optical flow approach is proposed that can efficiently correct for local movements and provide quantitative analysis of the BIM pattern. We then introduce a method for automatic particle selection in micrographs that uses new image features to train two classifiers to learn from the user the kind of particles they are interested in. Finally, for 3-D reconstruction, we introduce a gridding-based direct Fourier method that uses a weighting technique to compute a uniformly sampled Fourier transform. The algorithms are fully implemented in the open-source Xmipp package (http://xmipp.cnb.csic.es).
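
    As an illustration of the movie-alignment step, the following sketch (using OpenCV's Farneback optical flow as a stand-in for the noise-robust optical flow method implemented in Xmipp) estimates per-pixel shifts between each movie frame and a running average, warps the frames back, and sums them into a motion-corrected micrograph.

        import numpy as np
        import cv2

        def align_movie(frames):
            """frames: list of 2D float32 arrays (one per dose fraction)."""
            aligned = [frames[0]]
            reference = frames[0].copy()
            h, w = frames[0].shape
            grid_y, grid_x = np.mgrid[0:h, 0:w].astype(np.float32)
            for frame in frames[1:]:
                # Farneback flow expects 8-bit images; rescale for estimation only.
                ref8 = cv2.normalize(reference, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
                frm8 = cv2.normalize(frame, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
                flow = cv2.calcOpticalFlowFarneback(ref8, frm8, None,
                                                    0.5, 3, 15, 3, 5, 1.2, 0)
                # Warp the frame back along the estimated flow field.
                warped = cv2.remap(frame, grid_x + flow[..., 0], grid_y + flow[..., 1],
                                   cv2.INTER_LINEAR)
                aligned.append(warped)
                reference = np.mean(aligned, axis=0).astype(np.float32)  # update reference
            return np.sum(aligned, axis=0)  # motion-corrected average micrograph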

    Computer vision and optimization methods applied to the measurements of in-plane deformations


    Novel computational methods for in vitro and in situ cryo-electron microscopy

    Over the past decade, advances in microscope hardware and image data processing algorithms have made cryo-electron microscopy (cryo-EM) a dominant technique for protein structure determination. Near-atomic resolution can now be obtained for many challenging in vitro samples using single-particle analysis (SPA), while sub-tomogram averaging (STA) can obtain sub-nanometer resolution for large protein complexes in a crowded cellular environment. Reaching high resolution requires large amounts of image data. Modern transmission electron microscopes (TEMs) automate the acquisition process and can acquire thousands of micrographs or hundreds of tomographic tilt series over several days without intervention. In a first step, the data must be pre-processed: micrographs acquired as movies are corrected for stage and beam-induced motion. For tilt series, additional alignment of all micrographs in 3D is performed using gold- or patch-based fiducials. Parameters of the contrast-transfer function (CTF) are estimated to enable its reversal during SPA refinement. Finally, individual protein particles must be located and extracted from the aligned micrographs. Current pre-processing algorithms, especially those for particle picking, are not robust enough to enable fully unsupervised operation. Thus, pre-processing is started after data collection, and takes several days due to the amount of supervision required. Pre-processing the data in parallel to acquisition with more robust algorithms would save time and allow bad samples and microscope settings to be discovered early on. Warp is a new software package for cryo-EM data pre-processing. It implements new algorithms for motion correction, CTF estimation, and tomogram reconstruction, as well as deep learning-based approaches to particle picking and image denoising. The algorithms are more accurate and robust, enabling unsupervised operation. Warp integrates all pre-processing steps into a pipeline that is executed on-the-fly during data collection. Integrated with SPA tools, the pipeline can produce 2D and 3D classes less than an hour into data collection for favorable samples. Here I describe the implementation of the new algorithms, and evaluate them on various movie and tilt series data sets. I show that unsupervised pre-processing of a tilted influenza hemagglutinin trimer sample with Warp and refinement in cryoSPARC can improve the previously published resolution from 3.9 Å to 3.2 Å. Warp's algorithms operate in a reference-free manner to improve image resolution at the pre-processing stage, when no high-resolution maps are available for the particles yet. Once 3D maps have been refined, they can be used to go back to the raw data and perform reference-based refinement of sample motion and CTF in movies and tilt series. M is a new tool I developed to solve this task in a multi-particle framework. Instead of following the SPA assumption that every particle is single and independent, M models all particles in a field of view as parts of a large, physically connected multi-particle system. This allows M to optimize hyper-parameters of the system, such as sample motion and deformation, or higher-order aberrations in the CTF. Because M models these effects accurately and optimizes all hyper-parameters simultaneously with particle alignments, it can surpass previous reference-based frame and tilt series alignment tools. Here I describe the implementation of M, evaluate it on several data sets, and demonstrate that the new algorithms achieve equally high resolution with movie and tilt series data of the same sample. Most strikingly, the combination of Warp, RELION and M can resolve 70S ribosomes bound to an antibiotic at 3.5 Å inside vitrified Mycoplasma pneumoniae cells, marking a major advance in resolution for in situ imaging.
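
    For context, the sketch below shows the standard textbook CTF model (not Warp's implementation; the defocus, wavelength, and Cs values are hypothetical) whose parameters pre-processing tools estimate per micrograph so that the CTF can later be reversed during refinement.

        import numpy as np

        def ctf_1d(freq, defocus, wavelength, cs, amplitude_contrast=0.07):
            """Radially averaged CTF; freq in 1/A, defocus/wavelength/cs in A."""
            # Phase aberration: chi(k) = pi*lambda*defocus*k^2 - (pi/2)*Cs*lambda^3*k^4
            chi = (np.pi * wavelength * defocus * freq**2
                   - 0.5 * np.pi * cs * wavelength**3 * freq**4)
            phase = np.sqrt(1.0 - amplitude_contrast**2)
            return -(phase * np.sin(chi) + amplitude_contrast * np.cos(chi))

        freq = np.linspace(0.0, 0.5, 500)        # up to Nyquist at 1 A/pixel
        ctf = ctf_1d(freq,
                     defocus=15000.0,            # 1.5 um underfocus (hypothetical)
                     wavelength=0.0197,          # ~300 kV electrons, in Angstrom
                     cs=2.7e7)                   # Cs = 2.7 mm, expressed in Angstrom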

    ROBUST DEEP LEARNING METHODS FOR SOLVING INVERSE PROBLEMS IN MEDICAL IMAGING

    The medical imaging field has a long history of incorporating machine learning algorithms to address inverse problems in image acquisition and analysis. With the impressive successes of deep neural networks on natural images, we seek to answer the obvious question: do these successes also transfer to the medical image domain? The answer may seem straightforward on the surface. Tasks like image-to-image transformation, segmentation, detection, etc., have direct applications for medical images. For example, metal artifact reduction for Computed Tomography (CT) and reconstruction from undersampled k-space signal for Magnetic Resonance (MR) imaging can be formulated as image-to-image transformations; lesion/tumor detection and segmentation are obvious applications for higher-level vision tasks. While these tasks may be similar in formulation, many practical constraints and requirements exist in solving them for medical images. Patient data is highly sensitive and usually only accessible from individual institutions. This creates constraints on the available ground truth, dataset size, and computational resources these institutions have to train performant models. Due to the mission-critical nature of healthcare applications, requirements such as performance robustness and speed are also stringent. As such, the big-data, dense-computation, supervised learning paradigm in mainstream deep learning is often insufficient to address these situations. In this dissertation, we investigate ways to benefit from the powerful representational capacity of deep neural networks while still satisfying the above-mentioned constraints and requirements. The first part of this dissertation focuses on adapting supervised learning to account for variations such as different medical image modality, image quality, architecture designs, tasks, etc. The second part focuses on improving model robustness on unseen data through domain adaptation, which ameliorates performance degradation due to distribution shifts. The last part focuses on self-supervised learning and learning from synthetic data, with a focus on tomographic imaging; this is essential in many situations where the desired ground truth may not be accessible.
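
    As a minimal illustration of the supervised image-to-image formulation mentioned above (illustrative only; the network and data here are toy stand-ins), a small convolutional network can be trained to map a corrupted image, such as an artifact-laden CT slice or an undersampled MR reconstruction, to its clean ground-truth counterpart.

        import torch
        import torch.nn as nn

        model = nn.Sequential(                        # toy stand-in for a U-Net
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

        corrupted = torch.randn(4, 1, 64, 64)         # hypothetical corrupted inputs
        clean = torch.randn(4, 1, 64, 64)             # matching ground-truth targets

        for step in range(100):                       # standard supervised training loop
            prediction = model(corrupted)
            loss = nn.functional.mse_loss(prediction, clean)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()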