
    Machine learning-accelerated gradient-based Markov Chain Monte Carlo inversion applied to electrical resistivity tomography

    Expensive forward model evaluations and the curse of dimensionality usually hinder applications of Markov chain Monte Carlo algorithms to geophysical inverse problems. Another challenge of these methods is the definition of an appropriate proposal distribution, which should be both inexpensive to manipulate and a good approximation of the posterior density. Here we present a gradient-based Markov chain Monte Carlo inversion algorithm that casts electrical resistivity tomography into a probabilistic framework. The sampling is accelerated by exploiting the Hessian and gradient of the negative log-posterior to define a proposal that is a local Gaussian approximation of the target posterior probability. On the one hand, the computing time for the many forward evaluations needed for the data likelihood and for the Hessian and gradient computation is decreased by training a residual neural network to predict the forward mapping between the resistivity model and the apparent resistivity values. On the other hand, the curse of dimensionality and the computational effort of manipulating the Hessian and gradient are reduced by compressing the data and model spaces through a discrete cosine transform. A non-parametric distribution is assumed as the prior probability density function. The method is first demonstrated on synthetic data and then applied to field measurements. The outcomes of the presented approach are benchmarked against those obtained when a computationally expensive finite-element code is employed for forward modelling, against the results of a gradient-free Markov chain Monte Carlo inversion, and against the predictions of a deterministic inversion. The implemented approach not only yields uncertainty assessments and model predictions comparable with those achieved by more standard inversion strategies, but also drastically decreases the computational cost of the probabilistic inversion, making it similar to that of a deterministic inversion.
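
    To make the proposal mechanism concrete, the sketch below implements one Metropolis-Hastings step whose proposal is a local Gaussian built from the gradient and Hessian of the negative log-posterior (a generic "stochastic Newton" move). It is a minimal illustration, not the paper's implementation: the neg_log_post, grad and hess callables are assumed to be supplied by the user and, in the paper's setting, would be evaluated through the neural-network surrogate in the compressed DCT domain.

```python
import numpy as np

def stochastic_newton_step(m, neg_log_post, grad, hess, rng):
    """One Metropolis-Hastings step whose proposal is a local Gaussian
    N(m - H^-1 g, H^-1) built from the gradient g and Hessian H of the
    negative log-posterior (illustrative sketch, not the paper's code)."""
    def log_q(x, x_from):
        # log density (up to a constant) of the proposal centred at x_from
        g, H = grad(x_from), hess(x_from)
        mu = x_from - np.linalg.solve(H, g)
        r = x - mu
        _, logdet = np.linalg.slogdet(H)
        return 0.5 * logdet - 0.5 * r @ H @ r

    g, H = grad(m), hess(m)
    L = np.linalg.cholesky(H)                 # assumes H is positive definite
    mean = m - np.linalg.solve(H, g)          # Newton step towards the local mode
    z = rng.standard_normal(m.size)
    m_prop = mean + np.linalg.solve(L.T, z)   # sample from N(mean, H^-1)

    # Metropolis-Hastings acceptance with the asymmetric proposal correction
    log_alpha = (neg_log_post(m) - neg_log_post(m_prop)
                 + log_q(m, m_prop) - log_q(m_prop, m))
    return m_prop if np.log(rng.uniform()) < log_alpha else m
```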

    Quantum Tomography via Compressed Sensing: Error Bounds, Sample Complexity, and Efficient Estimators

    Intuitively, if a density operator has small rank, then it should be easier to estimate from experimental data, since in this case only a few eigenvectors need to be learned. We prove two complementary results that confirm this intuition. First, we show that a low-rank density matrix can be estimated using fewer copies of the state, i.e., the sample complexity of tomography decreases with the rank. Second, we show that unknown low-rank states can be reconstructed from an incomplete set of measurements, using techniques from compressed sensing and matrix completion. These techniques use simple Pauli measurements, and their output can be certified without making any assumptions about the unknown state. We give a new theoretical analysis of compressed tomography, based on the restricted isometry property (RIP) for low-rank matrices. Using these tools, we obtain near-optimal error bounds for the realistic situation where the data contain noise due to finite statistics and the density matrix is full-rank with decaying eigenvalues. We also obtain upper bounds on the sample complexity of compressed tomography, and almost-matching lower bounds on the sample complexity of any procedure using adaptive sequences of Pauli measurements. Using numerical simulations, we compare the performance of two compressed sensing estimators with standard maximum-likelihood estimation (MLE). We find that, given comparable experimental resources, the compressed sensing estimators consistently produce higher-fidelity state reconstructions than MLE. In addition, the use of an incomplete set of measurements leads to faster classical processing with no loss of accuracy. Finally, we show how to certify the accuracy of a low-rank estimate using direct fidelity estimation, and we describe a method for compressed quantum process tomography that works for processes with small Kraus rank. Comment: 16 pages, 3 figures; Matlab code included with the source file.
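
    To make the measurement model concrete: each datum is (an estimate of) a Pauli expectation value Tr(P_i rho), and the state is recovered from an incomplete subset of these. The sketch below fits a density matrix to such data by projected gradient descent with a crude clip-and-renormalise projection onto physical states. It only illustrates the setting; the paper analyses dedicated compressed-sensing estimators with recovery guarantees, and all function names here are our own.

```python
import numpy as np

# single-qubit Paulis; n-qubit Paulis are tensor products of these
PAULIS = {"I": np.eye(2), "X": np.array([[0, 1], [1, 0]]),
          "Y": np.array([[0, -1j], [1j, 0]]), "Z": np.array([[1, 0], [0, -1]])}

def pauli(label):
    """Tensor-product Pauli operator for a label such as 'XZY'."""
    op = np.array([[1.0 + 0j]])
    for c in label:
        op = np.kron(op, PAULIS[c])
    return op

def project_to_state(rho):
    """Map a Hermitian matrix to a density matrix by clipping negative
    eigenvalues and renormalising the trace (a simple heuristic projection)."""
    rho = (rho + rho.conj().T) / 2
    w, v = np.linalg.eigh(rho)
    w = np.clip(w, 0, None)
    w = w / w.sum() if w.sum() > 0 else np.ones_like(w) / len(w)
    return (v * w) @ v.conj().T

def fit_state(labels, expectations, n_qubits, iters=500, lr=0.1):
    """Least-squares fit of a density matrix to an *incomplete* set of Pauli
    expectation values, projecting onto physical states at every step."""
    d = 2 ** n_qubits
    ops = [pauli(l) for l in labels]
    rho = np.eye(d, dtype=complex) / d
    for _ in range(iters):
        grad = sum((np.trace(P @ rho).real - t) * P
                   for P, t in zip(ops, expectations))
        rho = project_to_state(rho - lr * grad / len(ops))
    return rho
```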

    Semi-device-dependent blind quantum tomography

    Extracting tomographic information about quantum states is a crucial task in the quest towards devising high-precision quantum devices. Current schemes typically require measurement devices for tomography that are a priori calibrated to high precision. Ironically, the accuracy of the measurement calibration is fundamentally limited by the accuracy of state preparation, establishing a vicious cycle. Here, we prove that this cycle can be broken and the fundamental dependence on the measurement devices significantly relaxed. We show that exploiting the natural low-rank structure of quantum states of interest suffices to arrive at a highly scalable blind tomography scheme with a classically efficient post-processing algorithm. We further improve the efficiency of our scheme by making use of the sparse structure of the calibrations. This is achieved by relaxing the blind quantum tomography problem to the task of de-mixing a sparse sum of low-rank quantum states. Building on techniques from model-based compressed sensing, we prove that the proposed algorithm recovers a low-rank quantum state and the calibration provided that the measurement model exhibits a restricted isometry property. For generic measurements, we show that our algorithm requires a close-to-optimal number of measurement settings for solving the blind tomography task. Complementing these conceptual and mathematical insights, we numerically demonstrate, using constrained alternating optimization, that blind quantum tomography is possible by exploiting low-rank assumptions in a practical setting inspired by an implementation with trapped ions. Comment: 22 pages, 8 figures.
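
    A rough way to picture the constrained alternating optimization is a bilinear model in which each outcome depends linearly on a low-rank state and on a sparse calibration vector, here assumed to take the form y_k = sum_j c_j Tr(A[k][j] rho). The sketch below alternates projected gradient steps: eigenvalue truncation enforces low rank on the state, and hard thresholding enforces sparsity on the calibration. This is an illustrative stand-in, not the paper's algorithm, and the measurement model is our own simplification.

```python
import numpy as np

def rank_r_project(M, r):
    """Project a Hermitian matrix onto PSD rank-r matrices by eigenvalue truncation."""
    M = (M + M.conj().T) / 2
    w, v = np.linalg.eigh(M)
    idx = np.argsort(w)[::-1][:r]
    w_top = np.clip(w[idx], 0, None)
    return (v[:, idx] * w_top) @ v[:, idx].conj().T

def hard_threshold(c, s):
    """Keep only the s largest-magnitude entries of the calibration vector."""
    out = np.zeros_like(c)
    idx = np.argsort(np.abs(c))[::-1][:s]
    out[idx] = c[idx]
    return out

def blind_fit(y, A, r, s, iters=200, lr=0.05):
    """Alternating projected gradient descent for the assumed bilinear model
    y[k] ~ sum_j c[j] * Tr(A[k][j] @ rho), with rank-r rho and s-sparse c."""
    K, J = len(A), len(A[0])
    d = A[0][0].shape[0]
    rho = np.eye(d, dtype=complex) / d
    c = np.ones(J) / J
    for _ in range(iters):
        # residuals of the current fit
        pred = np.array([sum(c[j] * np.trace(A[k][j] @ rho).real
                             for j in range(J)) for k in range(K)])
        res = pred - y
        # gradient step in rho, then rank-r projection
        g_rho = sum(res[k] * c[j] * A[k][j] for k in range(K) for j in range(J))
        rho = rank_r_project(rho - lr * g_rho, r)
        # gradient step in c, then s-sparse hard thresholding
        g_c = np.array([sum(res[k] * np.trace(A[k][j] @ rho).real
                            for k in range(K)) for j in range(J)])
        c = hard_threshold(c - lr * g_c, s)
    return rho, c
```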

    Electrical Resistance Tomography for sewage flow measurements


    The development and application of a real-time electrical resistance tomography system.

    Thesis (M.Sc.Eng.), University of KwaZulu-Natal, Durban, 2012. This dissertation focuses on the application of tomography in the sugar milling process, specifically within the vacuum pan. The research aims to improve the efficiency and throughput of a sugar mill by producing real-time images of the boiling dynamics in the pan, which can then be used as a diagnostic tool. The real-time tomography system is a combination of ruggedized data-collecting hardware, a switching circuit and software algorithms. The system described in this dissertation uses 16 electrodes and estimates images based on the distinct differences in conductivity found in the vacuum pan, i.e. between a conductive syrup-like fluid (massecuite) and bubbles. There is a direct correlation between the bubbles produced during the boiling process and heat transfer in the pan, and from this correlation one can determine how well the pan is operating. The system has been developed in order to monitor specific parts of a pan for optimal boiling. A binary reconstructed image identifies either massecuite or water vapour. Each image is reconstructed using a modified neighbourhood data collection method and a back projection algorithm. Data collection and image reconstruction take place simultaneously, making it possible to generate images in real time, at approximately 1.1 frames per second. Most of the system was developed in LabVIEW, with some added external drive electronics, and functions seamlessly. The tomography system is LAN-enabled, so measurements are initiated from a remote PC on the same network and the reconstructed images are streamed to the user. The laboratory results demonstrate that it is possible to generate tomographic images of bubbles versus massecuite, tap water and deionized water in real time.
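
    To illustrate the reconstruction chain described above (in Python rather than LabVIEW), the sketch below enumerates the adjacent-electrode "neighbourhood" measurement pattern for 16 electrodes and performs a simple linear back projection, assuming a precomputed sensitivity matrix that maps image pixels to boundary measurements. It is an illustrative sketch, not the dissertation's code.

```python
import numpy as np

def neighbourhood_protocol(n_elec=16):
    """Enumerate adjacent-pair current injections and, for each injection,
    the adjacent voltage pairs measured (skipping the driven electrodes)."""
    frames = []
    for i in range(n_elec):
        drive = (i, (i + 1) % n_elec)
        meas = [(j, (j + 1) % n_elec) for j in range(n_elec)
                if j not in drive and (j + 1) % n_elec not in drive]
        frames.append((drive, meas))
    return frames

def back_project(voltages, sensitivity, threshold=0.5):
    """Linear back projection: weight each measurement by its precomputed
    sensitivity map, sum over measurements, then binarise the image.
    `sensitivity` has shape (n_measurements, n_pixels); illustrative only."""
    image = sensitivity.T @ voltages
    image -= image.min()
    if image.max() > 0:
        image /= image.max()
    return (image > threshold).astype(np.uint8)   # 1 = vapour, 0 = massecuite
```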

    The use of the LANDSAT data collection system and imagery in reservoir management and operation

    The author has identified the following significant results. An increase in the data collection system's (DCS) ability to function in the flood control mission with no additional manpower was demonstrated during the storms that struck New England in April and May 1975 and in August 1976. It was found that, for this watershed, creditable flood hydrographs could be generated from DCS data. It was concluded that an ideal DCS for reservoir regulation would draw features from both LANDSAT and GOES. An MSS grayscale computer printout and a USGS topographic map were compared, yielding an optimum computer classification map of the wetland areas of the Merrimack River estuary. A classification accuracy of 75% was obtained for the wetlands unit, taking into account the misclassified and the unclassified pixels. The MSS band 7 grayscale printouts of the Franklin Falls reservoir showed good agreement with USGS topographic maps in the total area of water depicted at the low-water reservoir stage and at the maximum inundation level. Preliminary analysis of the LANDSAT digital data using the GISS computer algorithms showed that the radiance of snow cover/vegetation varied from approximately 20 mW/(cm²·sr) in nonvegetated areas to less than 4 mW/(cm²·sr) for densely forested areas.

    Efficient Sparse Coding in Early Sensory Processing: Lessons from Signal Recovery

    Sensory representations are not only sparse, but often overcomplete: coding units significantly outnumber the input units. For models of neural coding this overcompleteness poses a computational challenge for shaping the signal processing channels as well as for using the large and sparse representations in an efficient way. We argue that higher-level overcompleteness becomes computationally tractable by imposing sparsity on synaptic activity, and we also show that such structural sparsity can be facilitated by a statistics-based decomposition of the stimuli into typical and atypical parts prior to sparse coding. Typical parts represent large-scale correlations, so they can be significantly compressed. Atypical parts, on the other hand, represent local features and are the subject of the actual sparse coding. When applied to natural images, our decomposition-based sparse coding model can efficiently form overcomplete codes, and both center-surround and oriented filters are obtained, similar to those observed in the retina and the primary visual cortex, respectively. We therefore hypothesize that the proposed computational architecture can be seen as a coherent functional model of the first stages of sensory coding in early vision.
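
    The two-stage idea, compressing the "typical", highly correlated part of a stimulus and sparse-coding only the "atypical" residual, can be sketched as a PCA split followed by an L1-regularised sparse coder. The snippet below is a minimal stand-in under those assumptions (PCA for the typical part, ISTA for the sparse code); it is not the authors' model, and the overcomplete dictionary D is assumed to be given.

```python
import numpy as np

def decompose(patches, n_typical):
    """Split patches (rows) into a 'typical' part (projection onto the top
    principal components, cheap to compress) and an 'atypical' residual."""
    mean = patches.mean(axis=0)
    X = patches - mean
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    P = Vt[:n_typical]                      # top principal directions
    typical = X @ P.T @ P + mean
    atypical = patches - typical
    return typical, atypical

def ista_sparse_code(x, D, lam=0.1, iters=200):
    """Sparse-code one residual vector x over an overcomplete dictionary D
    (columns = atoms) with ISTA, minimising 0.5*||x - D a||^2 + lam*||a||_1."""
    L = np.linalg.norm(D, 2) ** 2           # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        a = a + D.T @ (x - D @ a) / L       # gradient step on the squared error
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0)  # soft threshold
    return a
```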

    Real-time quality visualization of medical models on commodity and mobile devices

    This thesis concerns the visualization of medical models on commodity and mobile devices. Mechanisms for medical image acquisition such as MRI, CT and micro-CT scanners are continuously evolving, to the point of producing volume datasets of large resolution (> 512^3). As these datasets grow in resolution, their treatment and visualization become increasingly expensive due to their computational requirements. For this reason, special techniques such as data pre-processing (filtering, construction of multi-resolution structures, etc.) and sophisticated algorithms have to be introduced at different points of the visualization pipeline to achieve the best visual quality without compromising performance. The problem of managing big datasets stems from limited computational resources. Not long ago, the only physicians rendering volumes were radiologists. Nowadays, the outcome of diagnosis is the data itself, and medical doctors need to render it on commodity PCs (even patients may want to render the data, and the DVDs commonly come with DICOM viewer software). Furthermore, with the increasing use of technology in daily clinical tasks, small devices such as mobile phones and tablets can fit the needs of medical doctors in some specific areas. Visualizing diagnostic images of patients becomes more challenging on these devices than on desktop computers, as they generally have more restrictive hardware specifications. The goal of this Ph.D. thesis is the real-time, quality visualization of medium to large medical volume datasets (resolutions >= 512^3 voxels) on mobile phones and commodity devices. To address this problem, we use multiresolution techniques that downsample the full-resolution datasets to produce coarser representations which are easier to handle. We have focused our efforts on the application of volume visualization in clinical practice, so we have a particular interest in creating solutions that require short pre-processing times and quickly provide specialists with the data, maximize the preservation of features and the visual quality of the final images, achieve high frame rates that allow interactive visualization, and make efficient use of the computational resources. The contributions of this thesis comprise improvements in several stages of the visualization pipeline: multi-resolution generation, transfer function design and the GPU ray casting algorithm itself.
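
    As a concrete illustration of the multiresolution stage, the sketch below builds a pyramid of coarser volumes by repeated 2x average downsampling, so a renderer can pick the level that fits the device's memory and performance budget. It is a minimal sketch of the general idea, not the downsampling scheme proposed in the thesis.

```python
import numpy as np

def build_pyramid(volume, min_size=64):
    """Build a multiresolution pyramid from a 3D scalar volume by repeated
    2x average downsampling (each 2x2x2 block becomes one coarser voxel)."""
    levels = [volume]
    v = volume
    while min(v.shape) > min_size and all(s % 2 == 0 for s in v.shape):
        v = v.reshape(v.shape[0] // 2, 2, v.shape[1] // 2, 2,
                      v.shape[2] // 2, 2).mean(axis=(1, 3, 5))
        levels.append(v)
    return levels  # levels[0] is full resolution, later entries are coarser
```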