
    Parallel bio-inspired methods for model optimization and pattern recognition

    Nature-based computational models are usually inherently parallel. The collaborative intelligence in these models emerges from the simultaneous processing performed by simple, independent units (neurons, ants, swarm members, etc.). This dissertation investigates the benefits of such parallel models in terms of efficiency and accuracy. First, the viability of a parallel implementation of bio-inspired metaheuristics for function optimization on consumer-level graphics cards is studied in detail. Then, in an effort to expose those parallel methods to the research community, the metaheuristic implementations were abstracted and grouped into an open-source parameter/function optimization library, libCudaOptimize. The library was verified against a well-known benchmark for mathematical function minimization and showed significant gains in both execution time and minimization accuracy. Moving further toward the application side, a parallel model of the human neocortex was developed. This model is able to detect, classify, and predict patterns in time-series data in an unsupervised way. Finally, libCudaOptimize was used to find the best parameters for this neocortex model, adapting it to gesture recognition on publicly available datasets.
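    The abstract above hinges on the population-parallel structure of bio-inspired metaheuristics: every member of the population can be evaluated at the same time. As a rough illustration of that structure only (not the libCudaOptimize API, which is a CUDA library; all function and parameter names below are invented for this sketch), the following Python/NumPy snippet runs a particle swarm optimizer on the Rastrigin benchmark, evaluating the whole swarm in a single vectorized operation per iteration.

```python
# Hypothetical sketch: vectorized particle swarm optimization of the Rastrigin benchmark.
# The whole swarm is evaluated at once, mimicking the population-level parallelism the
# dissertation maps onto GPUs. This is NOT the libCudaOptimize API.
import numpy as np

def rastrigin(x):
    # x has shape (n_particles, dim); returns one fitness value per particle
    return 10 * x.shape[1] + np.sum(x**2 - 10 * np.cos(2 * np.pi * x), axis=1)

def pso(fitness, dim=10, n_particles=256, iters=500,
        w=0.7, c1=1.5, c2=1.5, bound=5.12, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-bound, bound, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), fitness(pos)
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, -bound, bound)
        val = fitness(pos)                      # whole swarm evaluated in one call
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], val[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

best_x, best_f = pso(rastrigin)
print(f"best fitness: {best_f:.4f}")
```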

    Using Automatic Differentiation as a General Framework for Ptychographic Reconstruction

    Coherent diffraction imaging methods enable imaging beyond lens-imposed resolution limits. In these methods, the object can be recovered by minimizing an error metric that quantifies the difference between the observed diffraction patterns and those calculated from a current guess of the object. Efficient minimization methods require analytical calculation of the derivatives of the error metric, which is not always straightforward. This limits our ability to explore variations of basic imaging approaches. In this paper, we propose to substitute analytical derivative expressions with automatic differentiation, whereby object reconstruction can be achieved by specifying only the physics-based experimental forward model. We demonstrate the generality of the proposed method through straightforward object reconstruction for a variety of complex ptychographic experimental models.
    Comment: 23 pages (including references and supplemental material), 19 externally generated figure files
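    A minimal sketch of the idea, under simplifying assumptions: a toy single-position far-field forward model (probe times object, Fourier transform, squared magnitude) and an amplitude error metric are written in PyTorch, and the gradients used to update the object guess come entirely from automatic differentiation rather than hand-derived expressions. The variable names and the toy model are assumptions for illustration, not the paper's code.

```python
# Hypothetical sketch: object reconstruction where only the forward model and the
# error metric are specified; torch.autograd supplies all derivatives.
import torch

torch.manual_seed(0)
N = 64
probe = torch.ones(N, N, dtype=torch.complex64)              # known illumination (toy)
true_obj = torch.exp(1j * torch.rand(N, N))                  # ground-truth phase object (toy)
measured = torch.abs(torch.fft.fft2(probe * true_obj)) ** 2  # recorded far-field intensity

# Current guess of the object, parameterized by real and imaginary parts.
obj_re = torch.ones(N, N, requires_grad=True)
obj_im = torch.zeros(N, N, requires_grad=True)
opt = torch.optim.Adam([obj_re, obj_im], lr=0.05)

for step in range(300):
    opt.zero_grad()
    obj = torch.complex(obj_re, obj_im)
    model_intensity = torch.abs(torch.fft.fft2(probe * obj)) ** 2   # forward model
    # Amplitude-based error metric; its derivatives come from automatic differentiation.
    loss = torch.mean((torch.sqrt(model_intensity + 1e-12)
                       - torch.sqrt(measured + 1e-12)) ** 2)
    loss.backward()
    opt.step()

print(f"final error metric: {loss.item():.3e}")
```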

    Real-time sparse-sampled ptychographic imaging through deep neural networks

    Ptychography has rapidly grown in the fields of X-ray and electron imaging for its unprecedented ability to achieve nano- or atomic-scale resolution while simultaneously retrieving chemical or magnetic information from a sample. A ptychographic reconstruction is achieved by solving a complex inverse problem that imposes constraints on both the acquisition and the analysis of the data, which typically precludes real-time imaging due to the computational cost of solving this inverse problem. In this work we propose PtychoNN, a novel approach to solving the ptychographic reconstruction problem based on deep convolutional neural networks. We demonstrate how the proposed method can be used to predict real-space structure and phase at each scan point solely from the corresponding far-field diffraction data. The presented results show that PtychoNN can be used effectively on experimental data, generating high-quality reconstructions of a sample up to hundreds of times faster than state-of-the-art ptychography reconstruction solutions once trained. By surpassing the typical constraints of iterative model-based methods, we can significantly relax the data acquisition sampling conditions and produce equally satisfactory reconstructions. Besides drastically accelerating acquisition and analysis, this capability can enable new imaging scenarios that were not possible before, for dose-sensitive, dynamic, and extremely voluminous samples.
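    The sketch below illustrates the general idea of mapping a far-field diffraction pattern directly to real-space amplitude and phase with a convolutional encoder and two decoder heads. The layer sizes, class name, and untrained usage are invented placeholders; this is not the published PtychoNN architecture or its training pipeline.

```python
# Hypothetical sketch: a tiny encoder-decoder CNN with separate amplitude and phase heads,
# mapping a diffraction pattern to real-space quantities. Not the published PtychoNN model.
import torch
import torch.nn as nn

class TinyDiffractionToRealSpace(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
        )
        def head():
            return nn.Sequential(
                nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 32
                nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),              # 32 -> 64
            )
        self.amplitude_head = head()
        self.phase_head = head()

    def forward(self, diffraction):
        z = self.encoder(diffraction)
        return self.amplitude_head(z), self.phase_head(z)

model = TinyDiffractionToRealSpace()
batch = torch.rand(8, 1, 64, 64)          # stand-in diffraction patterns
amp, phase = model(batch)                 # predicted real-space amplitude and phase
print(amp.shape, phase.shape)             # torch.Size([8, 1, 64, 64]) each
```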

    Differentiable Simulation of a Liquid Argon Time Projection Chamber

    Liquid argon time projection chambers (LArTPCs) are widely used in particle detection for their tracking and calorimetric capabilities. The particle physics community actively builds and improves high-quality simulators for such detectors in order to develop physics analyses in a realistic setting. The fidelity of these simulators relative to real, measured data is limited by the modeling of the physical detectors used for data collection. This modeling can be improved by performing dedicated calibration measurements. Conventional approaches calibrate individual detector parameters or processes one at a time. However, the impacts of detector processes are entangled, making this a poor description of the underlying physics. We introduce a differentiable simulator that enables gradient-based optimization, allowing for the first time a simultaneous calibration of all detector parameters. We describe the procedure of making a differentiable simulator, highlighting the challenges of retaining the physics quality of the standard, non-differentiable version while providing meaningful gradient information. We further discuss the advantages and drawbacks of using our differentiable simulator for calibration. Finally, we provide a starting point for extensions to our approach, including applications of the differentiable simulator to physics analysis pipelines.
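    As a hedged illustration of gradient-based simultaneous calibration (not the actual LArTPC simulator described above), the sketch below defines a toy differentiable "detector" whose two parameters, a drift velocity and an electron lifetime, both shape the observables, and fits them at once by minimizing a loss against synthetic data. All names, units, and values are stand-ins.

```python
# Hypothetical sketch: simultaneous calibration of two coupled parameters of a toy
# differentiable detector model via gradient descent. Not the paper's simulator.
import torch

torch.manual_seed(0)
depth = torch.linspace(0.1, 2.0, 200)        # drift distance of each energy deposit [m]
deposited = torch.rand(200) + 0.5            # deposited charge (arbitrary units)

def simulate(depth, deposited, drift_velocity, lifetime):
    """Toy forward model: returns arrival time and attenuated charge, both differentiable."""
    drift_time = depth / drift_velocity
    charge = deposited * torch.exp(-drift_time / lifetime)
    return drift_time, charge

true_v, true_tau = 1.6e3, 3.0e-3             # "true" detector parameters (stand-ins)
data_t, data_q = simulate(depth, deposited, true_v, true_tau)

# Calibrate both parameters at once; log-parameterization keeps them positive.
log_v = torch.tensor(7.0, requires_grad=True)
log_tau = torch.tensor(-6.5, requires_grad=True)
opt = torch.optim.Adam([log_v, log_tau], lr=0.05)

for step in range(2000):
    opt.zero_grad()
    pred_t, pred_q = simulate(depth, deposited, torch.exp(log_v), torch.exp(log_tau))
    loss = (torch.mean(((pred_t - data_t) / data_t) ** 2)
            + torch.mean(((pred_q - data_q) / data_q) ** 2))
    loss.backward()                          # gradients w.r.t. both parameters simultaneously
    opt.step()

# Fitted values should move toward roughly 1600 and 0.003.
print(torch.exp(log_v).item(), torch.exp(log_tau).item())
```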

    Heterogeneous reconstruction of deformable atomic models in Cryo-EM

    Cryogenic electron microscopy (cryo-EM) provides a unique opportunity to study the structural heterogeneity of biomolecules. Being able to explain this heterogeneity with atomic models would help our understanding of their functional mechanisms, but the size and ruggedness of the structural space (the space of atomic 3D Cartesian coordinates) presents an immense challenge. Here, we describe a heterogeneous reconstruction method based on an atomistic representation whose deformation is reduced to a handful of collective motions through normal mode analysis. Our implementation uses an autoencoder. The encoder jointly estimates the amplitude of motion along the normal modes and the 2D shift between the center of the image and the center of the molecule. The physics-based decoder aggregates a representation of the heterogeneity readily interpretable at the atomic level. We illustrate our method on three synthetic datasets corresponding to different distributions along a simulated trajectory of adenylate kinase transitioning from its open to its closed structure. We show for each distribution that our approach is able to recapitulate the intermediate atomic models with atomic-level accuracy.
    Comment: 8 pages, 1 figure
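    The sketch below illustrates the encoder/decoder split described in the abstract, under heavy simplification: a small encoder predicts normal-mode amplitudes and a 2D shift from a particle image, and a physics-based decoder deforms a reference atomic model along precomputed modes. The random modes, layer sizes, and names are placeholders rather than the paper's implementation.

```python
# Hypothetical sketch: encoder predicts normal-mode amplitudes + 2D shift; decoder deforms
# a reference atomic model along precomputed modes. Placeholder data and architecture.
import torch
import torch.nn as nn

n_atoms, n_modes, img_size = 500, 4, 64
reference_coords = torch.randn(n_atoms, 3)       # reference atomic model (stand-in)
normal_modes = torch.randn(n_modes, n_atoms, 3)  # precomputed NMA modes (stand-in)

class Encoder(nn.Module):
    """Maps a 2D particle image to normal-mode amplitudes and an in-plane shift."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(img_size * img_size, 128), nn.ReLU(),
            nn.Linear(128, n_modes + 2),         # n_modes amplitudes + (dx, dy)
        )

    def forward(self, images):
        out = self.net(images)
        return out[:, :n_modes], out[:, n_modes:]

def decode(amplitudes):
    """Physics-based decoder: deform the reference model along the normal modes."""
    # (batch, n_modes) x (n_modes, n_atoms, 3) -> (batch, n_atoms, 3)
    displacement = torch.einsum('bm,mad->bad', amplitudes, normal_modes)
    return reference_coords.unsqueeze(0) + displacement

images = torch.rand(8, img_size, img_size)
amps, shifts = Encoder()(images)
coords = decode(amps)                            # per-image deformed atomic models
print(coords.shape, shifts.shape)                # torch.Size([8, 500, 3]) torch.Size([8, 2])
```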