    Regularisation methods for imaging from electrical measurements

    In Electrical Impedance Tomography (EIT) the conductivity of an object is estimated from boundary measurements. An array of electrodes is attached to the surface of the object and current stimuli are applied via these electrodes. The resulting voltages are measured. The process of estimating the conductivity as a function of space inside the object from voltage measurements at the surface is called reconstruction. Mathematically, EIT reconstruction is a nonlinear inverse problem, the stable solution of which requires regularisation methods. Most common regularisation methods impose that the reconstructed image should be smooth. Such methods confer stability to the reconstruction process, but limit the capability of describing sharp variations in the sought parameter. In this thesis two new methods of regularisation are proposed. The first method, Gaussian anisotropic regularisation, enhances the reconstruction of sharp conductivity changes occurring at the interface between a contrasting object and the background. As such changes are step changes, reconstruction with traditional smoothing regularisation techniques is unsatisfactory. The Gaussian anisotropic filtering works by incorporating prior structural information. The approximate knowledge of the shapes of contrasts allows the smoothness to be relaxed in the direction normal to the expected boundary. The construction of Gaussian regularisation filters that express such directional properties on the basis of the structural information is discussed, and the results of numerical experiments are analysed. The method gives good results when the actual conductivity distribution is in accordance with the prior information. When the conductivity distribution violates the prior information the method is still capable of properly locating the regions of contrast. The second part of the thesis is concerned with regularisation via the total variation functional. This functional allows the reconstruction of discontinuous parameters. The properties of the functional are briefly introduced, and an application to an inverse problem in image denoising is shown. As the functional is non-differentiable, numerical difficulties are encountered in its use. The aim is therefore to propose an efficient numerical implementation for application in EIT. Several well known optimisation methods are analysed, as possible candidates, by theoretical considerations and by numerical experiments. Such methods are shown to be inefficient. The application of recent optimisation methods called primal-dual interior point methods is analysed by theoretical considerations and by numerical experiments, and an efficient and stable algorithm is developed. Numerical experiments demonstrate the capability of the algorithm in reconstructing sharp conductivity profiles.
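
    For orientation, the generic variational form behind both approaches can be sketched as follows; the notation (forward map F, weight alpha, domain Omega) is standard convention rather than anything quoted from the thesis:

```latex
% Generic regularised EIT reconstruction: F maps conductivity \sigma to
% predicted boundary voltages, \alpha > 0 weights the penalty R.
\min_{\sigma}\; \| V_{\mathrm{meas}} - F(\sigma) \|_2^2 + \alpha\, R(\sigma)

% Smoothness prior, typical of standard methods (L a differential operator):
R_{\mathrm{smooth}}(\sigma) = \| L\sigma \|_2^2

% Total variation prior: admits discontinuous \sigma, but is
% non-differentiable wherever \nabla\sigma vanishes:
R_{\mathrm{TV}}(\sigma) = \int_{\Omega} \lvert \nabla\sigma \rvert \, dx
```

    The non-differentiability of the total variation term wherever the gradient vanishes is what defeats standard smooth optimisers and motivates the primal-dual interior point treatment developed in the thesis.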

    Proceedings of the Augmented Visual Display (AVID) Research Workshop

    These papers, abstracts, and presentations were delivered at a three-day workshop focused on sensor modeling and simulation and on image enhancement, processing, and fusion. The technical sessions emphasized how sensor technology can be used to create visual imagery adequate for aircraft control and operations. Participants from industry, government, and academic laboratories contributed to panels on Sensor Systems, Sensor Modeling, Sensor Fusion, Image Processing (Computer and Human Vision), and Image Evaluation and Metrics.

    Nonlinear probabilistic estimation of 3-D geometry from images

    Thesis (Ph.D.), Massachusetts Institute of Technology, Program in Media Arts & Sciences, 1997. Includes bibliographical references (p. 159-164). By Ali Jerome Azarbayejani. Ph.D.

    Glosarium Matematika (Mathematics Glossary)

    273 p.; 24 cm

    Glottal excitation extraction of voiced speech - jointly parametric and nonparametric approaches

    The goal of this dissertation is to develop methods to recover glottal flow pulses, which contain biometric information about the speaker. The excitation information estimated from an observed speech utterance is modeled as the source of an inverse problem. Windowed linear prediction analysis and inverse filtering are first used to deconvolve the speech signal and obtain a rough estimate of the glottal flow pulses. Linear prediction and its inverse filtering can largely eliminate the vocal-tract response, which is usually modeled as an infinite impulse response filter. Some remaining vocal-tract components that reside in the estimate after inverse filtering are next removed by maximum-phase and minimum-phase decomposition, which is implemented by applying the complex cepstrum to the initial estimate of the glottal pulses. The additive and residual errors from inverse filtering can be suppressed by higher-order statistics, which is the method used to calculate the cepstrum representations. Some features directly provided by the glottal source's cepstrum representation, as well as fitting parameters for the estimated pulses, are used to form feature patterns that are applied to a minimum-distance classifier to realize a speaker identification system with a very limited number of subjects.
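
    As a sketch of the first stage (windowed linear prediction followed by inverse filtering), a minimal Python fragment for the general technique might look like the following; the Hamming window, model order, and synthetic test frame are illustrative assumptions, not settings from the dissertation:

```python
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def lpc_coefficients(frame, order):
    # Autocorrelation method: solve the Toeplitz normal equations R a = r.
    # Window choice is an illustrative assumption.
    windowed = frame * np.hamming(len(frame))
    r = np.correlate(windowed, windowed, mode="full")[len(windowed) - 1:]
    a = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])
    return a  # predictor coefficients a_1 ... a_p

def glottal_residual(frame, order=24):
    # Inverse filter with A(z) = 1 - sum_k a_k z^{-k}; the residual is a
    # rough estimate of the glottal excitation (vocal tract largely removed).
    a = lpc_coefficients(frame, order)
    inverse_filter = np.concatenate(([1.0], -a))
    return lfilter(inverse_filter, [1.0], frame)

# Illustrative use on a synthetic "voiced" frame (8 kHz sampling assumed):
t = np.arange(400) / 8000.0
frame = np.sin(2 * np.pi * 120 * t) + 0.3 * np.sin(2 * np.pi * 240 * t)
pulses = glottal_residual(frame, order=12)
```

    The later cepstral maximum-phase/minimum-phase decomposition described in the abstract would then operate on this residual; it is omitted here.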

    Robust simulation and optimization methods for natural gas liquefaction processes

    Thesis: Ph.D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2018. Cataloged from the PDF version of the thesis. Includes bibliographical references (pages 313-324). Natural gas is one of the world's leading sources of fuel in terms of both global production and consumption. The abundance of reserves that may be developed at relatively low cost, paired with escalating societal and regulatory pressures to harness low carbon fuels, situates natural gas in a position of growing importance to the global energy landscape. However, the nonuniform distribution of readily-developable natural gas sources around the world necessitates the existence of an international gas market that can serve those regions without reasonable access to reserves. International transmission of natural gas via pipeline is generally cost-prohibitive beyond around two thousand miles, and so suppliers instead turn to the production of liquefied natural gas (LNG) to yield a tradable commodity. While the production of LNG is by no means a new technology, it has not occupied a dominant role in the gas trade to date. However, significant growth in LNG exports has been observed within the last few years, and this trend is expected to continue as major new liquefaction operations come online worldwide. Liquefaction of natural gas is an energy-intensive process requiring specialized cryogenic equipment, and is therefore expensive in terms of both operating and capital costs. However, optimization of liquefaction processes is greatly complicated by the inherently complex thermodynamic behavior of process streams that simultaneously change phase and exchange heat at closely-matched cryogenic temperatures. The optimal conditions determined for a given process will also generally not transfer between LNG plants, as both the specifics of design (e.g. heat exchanger size and configuration) and of operation (e.g. source gas composition) may vary significantly between sites. Rigorous evaluation of process concepts for new production facilities is also challenging to perform, as economic objectives must be optimized in the presence of constraints involving equipment size and safety precautions even in the initial design phase. The absence of reliable and versatile software to perform such tasks was the impetus for this thesis project. To address these challenging problems, the aim of this thesis was to develop new models, methods and algorithms for robust liquefaction process simulation and optimization, and to synthesize these advances into reliable and versatile software. Recent advances in the sensitivity analysis of nondifferentiable functions provided an advantageous foundation for the development of physically-informed yet compact process models that could be embedded in established simulation and optimization algorithms with strong convergence properties. Within this framework, a nonsmooth model for the core unit operation in all industrially-relevant liquefaction processes, the multi-stream heat exchanger, was first formulated. The initial multistream heat exchanger model was then augmented to detect and handle internal phase transitions, and an extension of a classic vapor-liquid equilibrium model was proposed to account for the potential existence of solutions in single-phase regimes, all through the use of additional nonsmooth equations.
    While these initial advances enabled the simulation of liquefaction processes under the conditions of simple, idealized thermodynamic models, it became apparent that these methods would be unable to handle calculations involving nonideal thermophysical property models reliably. To this end, robust nonsmooth extensions of the celebrated inside-out algorithms were developed. These algorithms allow challenging phase equilibrium calculations to be performed successfully even in the absence of knowledge about the phase regime of the solution, as is the case when model parameters are chosen by a simulation or optimization algorithm. However, this still was not enough to equip realistic liquefaction process models with a completely reliable thermodynamics package, and so new nonsmooth algorithms were designed for the reasonable extrapolation of density from an equation of state under conditions where a given phase does not exist. This procedure greatly enhanced the ability of the nonsmooth inside-out algorithms to converge to physical solutions for mixtures at very high temperature and pressure. These models and submodels were then integrated into a flowsheeting framework to perform realistic simulations of natural gas liquefaction processes robustly, efficiently and with extremely high accuracy. A reliable optimization strategy using an interior-point method and the nonsmooth process models was then developed for complex problem formulations that rigorously minimize thermodynamic irreversibilities. This approach significantly outperforms other strategies proposed in the literature or implemented in commercial software in terms of ease of initialization, convergence rate and quality of solutions found. The performance observed and results obtained suggest that modeling and optimizing such processes using nondifferentiable models and appropriate sensitivity analysis techniques is a promising new approach to these challenging problems. Indeed, while liquefaction processes motivated this thesis, the majority of the methods described herein are applicable in general to processes with complex thermodynamic or heat transfer considerations embedded. It is conceivable that these models and algorithms could therefore inform a new, robust generation of process simulation and optimization software. By Harry Alexander James Watson. Ph.D.
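
    For a flavor of what "nonsmooth equations" means here, the following is a minimal sketch of a pinch-style feasibility residual in the spirit of the Duran-Grossmann operator on which such multi-stream heat exchanger models build; the stream tuples and dt_min value are illustrative assumptions, and the actual thesis model involves far more (phase detection, equations of state):

```python
import numpy as np

def heat_above(T_start, T_end, mcp, t):
    # Heat exchanged by a stream above temperature t (Duran-Grossmann style):
    # positive for a hot stream cooling from T_start down to T_end.
    return mcp * (max(T_start - t, 0.0) - max(T_end - t, 0.0))

def mhex_residual(hot, cold, dt_min=3.0):
    """Nonsmooth pinch residual for a multi-stream heat exchanger.

    hot:  list of (T_in, T_out, mCp) with T_in > T_out
    cold: list of (T_in, T_out, mCp) with T_in < T_out
    Candidate pinch points are the hot inlets and the shifted cold inlets;
    the min over candidates is the nonsmooth part of the model.
    """
    candidates = [T_in for T_in, _, _ in hot] + \
                 [T_in + dt_min for T_in, _, _ in cold]
    residuals = []
    for p in candidates:
        q_hot = sum(heat_above(Ti, To, m, p) for Ti, To, m in hot)
        q_cold = sum(heat_above(To, Ti, m, p - dt_min) for Ti, To, m in cold)
        residuals.append(q_hot - q_cold)
    return min(residuals)

# Illustrative balanced exchanger (made-up stream data): a residual of 0
# indicates feasibility; a negative value flags a dt_min violation.
hot_streams = [(300.0, 250.0, 2.0)]    # (T_in, T_out, mCp)
cold_streams = [(240.0, 290.0, 2.0)]
print(mhex_residual(hot_streams, cold_streams))  # -> 0.0
```

    Imposing an equation of this min-type form alongside the overall energy balance closes the exchanger model while keeping it solvable by Newton-type methods equipped with nonsmooth sensitivity information, which is the approach the abstract describes.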

    A system for image-based modeling and photo editing

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Architecture, 2002. Includes bibliographical references (p. 169-178). Traditionally in computer graphics, a scene is represented by geometric primitives composed of various materials and a collection of lights. Recently, techniques for modeling and rendering scenes from a set of pre-acquired images have emerged as an alternative approach, known as image-based modeling and rendering. Much of the research in this field has focused on reconstructing and rerendering from a set of photographs, while little work has been done to address the problem of editing and modifying these scenes. On the other hand, photo-editing systems, such as Adobe Photoshop, provide a powerful, intuitive, and practical means to edit images. However, these systems are limited by their two-dimensional nature. In this thesis, we present a system that extends photo editing to 3D. Starting from a single input image, the system enables the user to reconstruct a 3D representation of the captured scene and edit it with the ease and versatility of 2D photo editing. The scene is represented as layers of images with depth, where each layer is an image that encodes both color and depth. A suite of user-assisted tools, based on a painting metaphor, is employed to extract layers and assign depths. The system enables editing from different viewpoints, extracting and grouping of image-based objects, and modifying the shape, color, and illumination of these objects. As part of the system, we introduce three powerful new editing tools. These include two new clone brushing tools: the non-distorted clone brush and the structure-preserving clone brush. They permit copying of parts of an image to another via a brush interface, but alleviate distortions due to perspective foreshortening and object geometry. The non-distorted clone brush works on arbitrary 3D geometry, while the structure-preserving clone brush, a 2D version, assumes a planar surface, but has the added advantage of working directly in 2D photo-editing systems that lack depth information. The third tool, a texture-illuminance decoupling filter, discounts the effect of illumination on uniformly textured areas by decoupling large- and small-scale features via bilateral filtering. This tool is crucial for relighting and changing the materials of the scene. There are many applications for such a system, for example architectural, lighting and landscape design, entertainment and special effects, games, and virtual TV sets. The system allows the user to superimpose scaled architectural models into real environments, or to quickly paint a desired lighting scheme of an interior, while being able to navigate within the scene for a fully immersive 3D experience. We present examples and results of complex architectural scenes, 360-degree panoramas, and even paintings, where the user can change viewpoints, edit the geometry and materials, and relight the environment. By Byong Mok Oh. Ph.D.
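
    The texture-illuminance decoupling step can be approximated in a few lines; the sketch below uses OpenCV's bilateral filter with illustrative parameter values and is a hedged approximation of the general technique, not the system's actual filter:

```python
import numpy as np
import cv2

def decouple_texture_illuminance(image, sigma_space=16, sigma_range=0.4):
    # Bilateral-filter the log intensity: the smoothed result captures
    # large-scale illuminance while preserving sharp boundaries, and the
    # residual is treated as small-scale texture. Parameter values are
    # illustrative assumptions, not those of the thesis system.
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    log_i = np.log(gray + 1e-4)
    illuminance = cv2.bilateralFilter(log_i, d=-1,
                                      sigmaColor=sigma_range,
                                      sigmaSpace=sigma_space)
    texture = log_i - illuminance
    return np.exp(illuminance), np.exp(texture)

# usage (hypothetical file name):
# img = cv2.imread("facade.png")
# illum, tex = decouple_texture_illuminance(img)
```

    Relighting then amounts to editing the illuminance layer and recombining it with the untouched texture layer, which is why the decoupling is described as crucial for changing lighting and materials.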

    View generated database

    This document represents the final report for the View Generated Database (VGD) project, NAS7-1066. It documents the work done on the project up to the point at which all project work was terminated due to lack of project funds. The VGD was to provide the capability to accurately represent any real-world object or scene as a computer model. Such models include both an accurate spatial/geometric representation of the surfaces of the object or scene and any surface detail present on the object. Applications of such models are numerous, including acquisition and maintenance of work models for tele-autonomous systems, generation of accurate 3-D geometric/photometric models for various 3-D vision systems, and graphical models for realistic rendering of 3-D scenes via computer graphics.