
    Supporting Quantitative Visual Analysis in Medicine and Biology in the Presence of Data Uncertainty


    Background-Source separation in astronomical images with Bayesian Probability Theory


    A multiscale approach to state estimation with applications in process operability analysis and model predictive control

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Chemical Engineering, 2000. Includes bibliographical references. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.

    This thesis explores the application of multiscale ideas to state estimation and control. The work represents a significant departure from the traditional representations in the time and frequency domains, and provides a novel framework that leads to fast, efficient, and modular estimation algorithms. Multiscale methods were rediscovered through wavelet theory in the mid-eighties as a tool for the geophysics community. Like Fourier theory, wavelet theory provides a more instructive representation of data than the time series alone, by decomposing it into a different set of orthonormal basis functions. Multiscale models and data sets live on multiscale trees of nodes. Each node is a placeholder corresponding to a time point in a time series, and the nodes of a tree form a structure that may contain measurements, states, inputs, outputs, and uncertainties. Each level of the tree represents the data at a given level of resolution. This dual localization in time and frequency has benefits for the storage of information, since irrelevant data and pure noise can be identified and discarded, and it preserves time and frequency information in a way that Fourier theory cannot. Grouping and condensing of important information follows naturally, which facilitates making decisions at a level of detail relevant to the question being asked.

    Multiscale systems theory is a general approach to multiscale model construction on a tree. This thesis derives the multiscale models corresponding to the Haar transform, which produces a modified hat transform for the input data. Autoregressive models, commonly used in time series analysis, give rise to multiscale models on the tree. These allow the construction of numerical algorithms that are efficient and parallelizable and that scale logarithmically with the number of data points, rather than linearly as is typical for comparable time-series algorithms. The multiscale systems theory generalizes easily to other wavelet bases.

    Multiscale models of the underlying physics and of the measurement process can be combined into a cost function that estimates the underlying physical states from a set of measurements. The resulting set of normal equations is sparse and has a specialized structure, leading to a highly efficient solution strategy. A modified multiscale state estimation algorithm incorporates prior estimates, consistent with the Kalman filter, to which it is linked. A constrained multiscale state estimator handles constraints on the states and on linear combinations of the states. All incarnations of the multiscale state estimator provide a framework for the optimal fusion of multiple sets of measurements, including measurements taken at different levels of resolution. This is particularly useful in estimation and control problems where measurement data and control strategies occur at multiple rates. The arbitrary size of the state allows for the use of higher-order underlying physical models without modification of the estimation algorithm. Finally, the algorithm accommodates an arbitrary specification of the uncertainty estimates at any combination of time points or levels of resolution.

    The structure of the solution algorithm is flexible enough to use the same intermediate variables for all of these modifications, leading to considerable reusability both of code and of prior calculations; the multiscale state estimation algorithm is therefore modular and parallelizable. An uncertainty analysis of the algorithm expresses the state estimation error in terms of the underlying model and measurement uncertainties. Depending on the size of the problem, different techniques should be used to construct the probability distribution functions of the error estimates; this thesis demonstrates direct integration, propagation of the moments of the measurement and model errors, polynomial chaos expansions, and an approximation using Gaussian quadrature and Monte Carlo simulation. A sample of smaller case studies shows the range of uses of the algorithm, and three larger case studies demonstrate the multiscale state estimator in realistic chemical engineering examples. The terephthalic acid plant case study successfully incorporates a non-linear model of the first continuously stirred tank reactor into the multiscale state estimator. The paper-rolling case study compares the multiscale state estimator to the Karhunen-Loève transform as a means of state estimation. Finally, the heavy oil fractionator of the Shell Control Problem demonstrates the multiscale state estimator in a control setting.

    by Matthew Simon Dyer. Ph.D.
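    The multiscale tree described above is built from the Haar transform of a time series. Purely as an illustrative sketch (not the thesis's estimator), the Python fragment below decomposes a signal onto such a dyadic tree: each level holds the detail coefficients at one resolution, and the root node holds the coarsest average.

```python
import numpy as np

def haar_step(signal):
    """One level of the Haar transform: pairwise scaled averages (the coarser
    scale) and pairwise scaled differences (the detail coefficients)."""
    x = np.asarray(signal, dtype=float)
    if x.size % 2 != 0:
        raise ValueError("signal length must be even at every level")
    coarse = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return coarse, detail

def haar_tree(signal):
    """Full dyadic decomposition: each list entry holds the detail
    coefficients at one resolution level; the final entry is the root
    (coarsest) node of the multiscale tree."""
    levels = []
    coarse = np.asarray(signal, dtype=float)
    while coarse.size > 1:
        coarse, detail = haar_step(coarse)
        levels.append(detail)
    levels.append(coarse)
    return levels

if __name__ == "__main__":
    y = np.arange(1.0, 9.0)  # a length-8 example "time series"
    for depth, coeffs in enumerate(haar_tree(y)):
        print(f"level {depth}: {coeffs}")
```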

    ISCR Annual Report: Fiscal Year 2004


    Strategies for neural networks in ballistocardiography with a view towards hardware implementation

    A thesis submitted for the degree of Doctor of Philosophy at the University of Luton.

    The work described in this thesis is based on the results of a clinical trial conducted by the research team at the Medical Informatics Unit of the University of Cambridge, which show that the ballistocardiogram (BCG) has prognostic value in detecting impaired left ventricular function before it becomes clinically overt as myocardial infarction leading to sudden death. The objective of this study is to develop and demonstrate a framework for realising an on-line BCG signal classification model in a portable device that would have the potential to find pathological signs as early as possible for home health care.

    Two new on-line automatic BCG classification models for time-domain BCG classification are proposed. Both systems are based on a two-stage process: input feature extraction followed by a neural classifier. One system uses a principal component analysis neural network, and the other a discrete wavelet transform, to reduce the input dimensionality. Results of the classification, dimensionality reduction, and comparison are presented. They indicate that the combined wavelet transform and MLP system performs more reliably than the combined neural networks system in situations where the data available to determine the network parameters is limited. Moreover, the wavelet transform requires no prior knowledge of the statistical distribution of the data samples, and the computational complexity and training time are reduced. Overall, a methodology for realising an automatic BCG classification system for a portable instrument is presented.

    A fully parallel neural network design for a low-cost platform using field programmable gate arrays (Xilinx's XC4000 series) is explored. This addresses the potential speed requirements in the biomedical signal processing field. It also demonstrates a flexible hardware design approach so that an instrument's parameters can be updated as data expands with time. To reduce the hardware design complexity and to increase the system performance, a hybrid learning algorithm using random optimisation and the backpropagation rule is developed to achieve an efficient weight update mechanism with low weight precision. The simulation results show that the hybrid learning algorithm is effective in solving the network paralysis problem and that convergence is much faster than with the standard backpropagation rule. The hidden and output layer nodes have been mapped onto Xilinx FPGAs with automatic placement and routing tools. The static timing analysis results suggest that the proposed network implementation could deliver a performance of 2.7 billion connections per second.
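    The two-stage pipeline above (wavelet-based feature reduction followed by a neural classifier) can be illustrated roughly as follows. This is a hedged sketch only: the wavelet choice ("db4"), the network size, and the synthetic "beats" are assumptions made for the example, and the scikit-learn MLP trained in software stands in for the thesis's classifier and FPGA design.

```python
import numpy as np
import pywt  # PyWavelets
from sklearn.neural_network import MLPClassifier

def wavelet_features(beat, wavelet="db4", level=4):
    """Reduce one BCG beat to a low-dimensional feature vector by keeping
    only the coarse approximation coefficients of a discrete wavelet
    transform."""
    coeffs = pywt.wavedec(beat, wavelet, level=level)
    return coeffs[0]  # approximation coefficients at the coarsest level

# Synthetic placeholder data: rows stand in for BCG beats, labels for the
# two clinical classes. Real beats and labels would come from the trial data.
rng = np.random.default_rng(0)
beats = rng.normal(size=(200, 256))
labels = rng.integers(0, 2, size=200)

features = np.array([wavelet_features(b) for b in beats])
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
clf.fit(features, labels)
print("training accuracy:", clf.score(features, labels))
```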

    General Dynamic Surface Reconstruction: Application to the 3D Segmentation of the Left Ventricle

    This thesis describes our contribution to the three-dimensional reconstruction of the internal and external surfaces of the human left ventricle. The reconstruction is the first process in a complete Virtual Reality application designed as an important diagnosis tool for hospitals. Starting from the reconstructed surfaces, the application provides the expert with interactive real-time manipulation of the model, together with volume computations and other parameters of interest. The surface recovery process is characterised by its speed of convergence, the smoothness of the final meshes, and its precision with respect to the recovered data. Since the diagnosis of heart disease requires experience, time, and professional knowledge, simulation is a key process that improves efficiency.

    The algorithms and implementations have been applied to both synthetic and real datasets with differing amounts of missing data, a situation that arises in pathological and abnormal cases. The datasets include single acquisitions and complete cardiac cycles. The quality of the reconstruction system has been evaluated with medical parameters in order to compare our results with those obtained from typical software used by physicians.

    Besides the direct application to medical diagnosis, our methodology is suitable for generic reconstructions in the field of 3D computer graphics. Our reconstructions can produce three-dimensional models at low cost in terms of the manual interaction and CPU computation required. Furthermore, our method can be viewed as a robust tessellation algorithm that builds surfaces from point clouds, which can be obtained from laser scanners or magnetic sensors, among other available hardware.
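    The abstract does not spell out the reconstruction algorithm; as a loose sketch of the general deformable-surface idea only (iteratively attracting a mesh toward a point cloud while keeping it smooth), the fragment below performs one relaxation step. The connectivity, weights, and data are invented for illustration and do not reproduce the thesis's method.

```python
import numpy as np
from scipy.spatial import cKDTree

def deform_step(vertices, neighbors, points, pull=0.5, smooth=0.25):
    """One relaxation step of a toy deformable surface: each mesh vertex is
    pulled toward its nearest data point, then blended with the average of
    its mesh neighbours so the surface stays smooth."""
    tree = cKDTree(points)
    _, nearest = tree.query(vertices)        # nearest data point per vertex
    pulled = vertices + pull * (points[nearest] - vertices)
    averaged = np.array([pulled[idx].mean(axis=0) for idx in neighbors])
    return (1.0 - smooth) * pulled + smooth * averaged

# Toy usage: a 4-vertex ring fitted to target points lying slightly above it.
verts = np.array([[0., 0., 0.], [1., 0., 0.], [1., 1., 0.], [0., 1., 0.]])
neigh = [[1, 3], [0, 2], [1, 3], [0, 2]]     # ring connectivity
targets = verts + np.array([0., 0., 0.2])
for _ in range(10):
    verts = deform_step(verts, neigh, targets)
print(verts)
```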

    Computational imaging and automated identification for aqueous environments

    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy at the Massachusetts Institute of Technology and the Woods Hole Oceanographic Institution, June 2011.

    Sampling the vast volumes of the ocean requires tools capable of observing from a distance while retaining the detail necessary for biology and ecology, a task well suited to optical methods. Algorithms that work with existing SeaBED AUV imagery are developed, including habitat classification with bag-of-words models and multi-stage boosting for rockfish detection. Methods for extracting images of fish from videos of longline operations are demonstrated. A prototype digital holographic imaging device is designed and tested for quantitative in situ microscale imaging. Theory to support the device is developed, including particle noise and the effects of motion. A Wigner-domain model provides optimal settings and optical limits for spherical and planar holographic references. Algorithms to extract the information from real-world digital holograms are created. Focus metrics are discussed, including a novel focus detector using local Zernike moments. Two methods for estimating lateral positions of objects in holograms without reconstruction are presented, by extending a summation kernel to spherical references and by using a local frequency signature from a Riesz transform. A new metric for quickly estimating object depths without reconstruction is proposed and tested. An example application, quantifying oil droplet size distributions in an underwater plume, demonstrates the efficacy of the prototype and algorithms.

    Funding was provided by NOAA Grant #5710002014, NOAA NMFS Grant #NA17RJ1223, NSF Grant #OCE-0925284, and NOAA Grant #NA10OAR417008.
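    Numerically refocusing a recorded hologram is the usual setting in which focus metrics like those mentioned above are applied. The sketch below uses the textbook angular-spectrum propagator together with a plain intensity-variance sharpness score; the variance metric is a generic stand-in for the thesis's local-Zernike-moment detector, and the wavelength, pixel pitch, and random "hologram" are assumptions made for the example.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Numerically refocus a sampled complex field by a distance z using the
    angular spectrum method (plane-wave reference assumed)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2.0 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * z) * (arg > 0)  # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

def focus_metric(intensity):
    """Generic sharpness score (intensity variance); a stand-in for the
    local-Zernike-moment focus detector developed in the thesis."""
    return np.var(intensity)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    hologram = rng.standard_normal((256, 256))  # placeholder recorded hologram
    field = angular_spectrum_propagate(hologram.astype(complex),
                                       wavelength=532e-9, dx=3.45e-6, z=5e-3)
    print("sharpness at z = 5 mm:", focus_metric(np.abs(field) ** 2))
```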