
    Regularisation methods for imaging from electrical measurements

    In Electrical Impedance Tomography (EIT) the conductivity of an object is estimated from boundary measurements. An array of electrodes is attached to the surface of the object, current stimuli are applied via these electrodes, and the resulting voltages are measured. The process of estimating the conductivity as a function of space inside the object from voltage measurements at the surface is called reconstruction. Mathematically, EIT reconstruction is a non-linear inverse problem whose stable solution requires regularisation methods. Most common regularisation methods impose that the reconstructed image should be smooth. Such methods confer stability to the reconstruction process, but limit the capability of describing sharp variations in the sought parameter. In this thesis two new methods of regularisation are proposed. The first method, Gaussian anisotropic regularisation, enhances the reconstruction of sharp conductivity changes occurring at the interface between a contrasting object and the background. As such changes are step changes, reconstruction with traditional smoothing regularisation techniques is unsatisfactory. Gaussian anisotropic filtering works by incorporating prior structural information: approximate knowledge of the shapes of contrasts allows the smoothness constraint to be relaxed in the direction normal to the expected boundary. The construction of Gaussian regularisation filters that express such directional properties on the basis of the structural information is discussed, and the results of numerical experiments are analysed. The method gives good results when the actual conductivity distribution is in accordance with the prior information; when the conductivity distribution violates the prior information, the method is still capable of properly locating the regions of contrast. The second part of the thesis is concerned with regularisation via the total variation functional.
    This functional allows the reconstruction of discontinuous parameters. The properties of the functional are briefly introduced, and an application to image denoising is shown. As the functional is non-differentiable, numerical difficulties are encountered in its use. The aim is therefore to propose an efficient numerical implementation for application in EIT. Several well-known optimisation methods are analysed as possible candidates, by theoretical considerations and by numerical experiments, and are shown to be inefficient. The application of recent optimisation methods called primal-dual interior point methods is then analysed, likewise by theoretical considerations and by numerical experiments, and an efficient and stable algorithm is developed. Numerical experiments demonstrate the capability of the algorithm in reconstructing sharp conductivity profiles.
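As a rough illustration of the total variation idea in this abstract, the following is a minimal 1-D denoising sketch that runs gradient descent on a smoothed (differentiable) surrogate of the TV term. It is not the primal-dual interior point algorithm of the thesis; the function name and all parameter values (`lam`, `beta`, the step size) are illustrative assumptions.

```python
import numpy as np

def tv_denoise_1d(y, lam=0.5, beta=0.05, step=0.02, iters=2000):
    """Minimise 0.5*||x - y||^2 + lam * sum_i sqrt((x[i+1]-x[i])^2 + beta^2).

    beta smooths the non-differentiable |dx| term; the thesis instead
    handles the non-differentiability with primal-dual interior point
    methods, which this sketch does not attempt.
    """
    x = y.copy()
    for _ in range(iters):
        d = np.diff(x)                    # forward differences
        w = d / np.sqrt(d**2 + beta**2)   # smoothed gradient of |d|
        grad = x - y
        grad[:-1] -= lam * w              # adjoint of the difference operator
        grad[1:] += lam * w
        x -= step * grad
    return x

# A noisy step signal: TV regularisation preserves the sharp jump that a
# quadratic smoothness penalty would blur away.
rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(50), np.ones(50)])
noisy = clean + 0.1 * rng.standard_normal(100)
denoised = tv_denoise_1d(noisy)
```

The key property, visible even in this crude version, is that the recovered signal stays flat on each plateau while keeping the discontinuity essentially intact.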

    Video preprocessing based on human perception for telesurgery

    Video transmission plays a critical role in robotic telesurgery because of its high bandwidth and high quality requirements. The goal of this dissertation is to find a preprocessing method based on human visual perception for telesurgical video, so that when preprocessed image sequences are passed to the video encoder, bandwidth can be reallocated from non-essential surrounding regions to the region of interest, ensuring excellent image quality in critical regions (e.g. the surgical region). It can also be considered a quality control scheme that gracefully degrades video quality in the presence of network congestion. The proposed preprocessing method has two major parts. First, we propose a time-varying attention map whose value is highest at the gazing point and falls off progressively towards the periphery. Second, we propose adaptive spatial filtering whose parameters are adjusted according to the attention map. By adding visual adaptation to the spatial filtering, telesurgical video data can be compressed efficiently, because our algorithm removes a high degree of visual redundancy. Our experimental results show that with the proposed preprocessing method, bandwidth can be reduced by over half with no significant visual effect for the observer. We have also developed an optimal parameter selection algorithm, so that when network bandwidth is limited, the overall visual distortion after preprocessing is minimized.
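The two-part scheme (attention map plus adaptive spatial filtering) can be sketched as follows: an attention map peaking at the gaze point controls a per-pixel blend between the sharp frame and a blurred copy, so detail is removed only in the periphery. The function name, Gaussian falloff model and parameters are illustrative assumptions, not taken from the dissertation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def foveated_preprocess(frame, gaze, max_sigma=6.0, falloff=0.25):
    """Blend a sharp and a blurred copy of `frame` according to an
    attention map that is 1 at the gaze point and decays towards the
    periphery (a hypothetical stand-in for the dissertation's map)."""
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(ys - gaze[0], xs - gaze[1]) / max(h, w)
    attention = np.exp(-((dist / falloff) ** 2))   # 1 at gaze, ~0 far away
    blurred = gaussian_filter(frame, sigma=max_sigma)
    return attention * frame + (1 - attention) * blurred

rng = np.random.default_rng(1)
frame = rng.random((64, 64))       # stand-in for one video frame
out = foveated_preprocess(frame, gaze=(32, 32))
```

Because the periphery of `out` varies much less than the original, a downstream encoder spends fewer bits there, which is the bandwidth reallocation effect the abstract describes.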

    A direct approach to computer modelling of fluids

    Conventional approaches to Computational Fluid Dynamics (CFD) are highly mathematical in content and presentation, and physical interpretation of the algorithms can often be obscure. This is believed to inhibit advances in the CFD field and the importance of such advances for Naval Architecture, as a particular application, is discussed. As a possible alternative to conventional methods, a "direct" approach to computer modelling of fluids is proposed where all the algorithms involved are "physically transparent" in that they avoid intermediate mathematical interpretations. Rules for the development of such a model are formulated, and a programming strategy, which advocates modularising the algorithms to reflect the cause and effect mechanisms in real fluids, is outlined. The principles of the direct modelling approach are demonstrated in the development of a computer program for 2-dimensional, incompressible, inviscid flows. The technique requires that the total pressure in a flow is decomposed into two principal components, the temporal pressure and the convective pressure, associated respectively with the temporal and convective accelerations of the fluid. The model incorporates a numerically "explicit" pressure spreading algorithm for determining the temporal pressure and acceleration responses to external disturbances. The actual compressibility of the "incompressible" fluid is modelled via the bulk modulus. Convective pressure is synthesised as flow develops by accounting for the small spatial variations in the fluid's density associated with the temporal pressure field. Simple internal flows, and the acceleration of bodies at or near a free-surface, have been modelled successfully. Flows with a finite free-surface distortion or system geometry change will require the incorporation of grid re-generation algorithms for the spatial discretisation. Routes for future developments, including viscous modelling, are discussed. 
Apart from its potential advantages for CFD, the direct approach should also benefit general fluid dynamics education, since the concepts involved promote a better understanding of fluid behaviour.
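The temporal-pressure mechanism described above, in which the "incompressible" fluid's actual compressibility enters through the bulk modulus, can be sketched in one dimension: pressure responds explicitly to velocity divergence via the bulk modulus, and velocity responds to the pressure gradient. Everything here (the grid, constants, and explicit update) is an illustrative assumption, not the thesis's program.

```python
import numpy as np

# 1-D explicit sketch of pressure spreading from an external disturbance.
# K is the bulk modulus, rho the density; the wave speed sqrt(K/rho) = 10
# keeps the explicit scheme stable at this time step (CFL number 0.1).
n, dx, dt = 100, 1.0, 0.01
K, rho = 100.0, 1.0
p = np.zeros(n)          # cell-centred pressures
u = np.zeros(n + 1)      # staggered face velocities
u[0] = 1.0               # external disturbance: inflow at the left boundary

for _ in range(200):
    p -= dt * K * np.diff(u) / dx            # dp/dt = -K * div(u)
    u[1:-1] -= dt * np.diff(p) / (rho * dx)  # du/dt = -grad(p) / rho
    u[0] = 1.0                               # maintain the disturbance
```

After the loop, pressure has built up near the inlet and propagated a finite distance downstream, while cells far beyond the wavefront remain undisturbed, mirroring the "temporal pressure and acceleration responses to external disturbances" in the abstract.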

    A Parallel Implementation of the Newton's Method in Solving Steady State Navier-Stokes Equations for Hypersonic Viscous Flows. alpha-GMRES: A New Parallelisable Iterative Solver for Large Sparse Non-Symmetric Linear Systems

    The motivation for this thesis is to develop a parallelisable, fully implicit numerical Navier-Stokes solver for hypersonic viscous flows. The existence of strong shock waves, thin shear layers and strong flow interactions in hypersonic viscous flows requires the use of a high order, high resolution scheme for the discretisation of the Navier-Stokes equations in order to achieve an accurate numerical simulation. However, high order high resolution schemes usually involve a more complicated formulation, and thus longer computation time, than the simpler central differencing scheme. Therefore, accelerating the convergence of high order high resolution schemes becomes an increasingly important issue. For steady state solutions of the Navier-Stokes equations a time dependent approach is usually followed using the unsteady governing equations, which can be discretised in time by an explicit or an implicit method. Using an implicit method, unconditional stability can be achieved, and as the time step approaches infinity the method approaches Newton's method, which is equivalent to directly applying Newton's method to the N-dimensional non-linear algebraic system arising from the spatial discretisation of the steady governing equations in the global flowfield. Quadratic convergence may be achieved by using Newton's method. However, one main drawback of Newton's method is that it is memory intensive, since the Jacobian matrix of the non-linear algebraic system generally needs to be stored. It is therefore necessary to use a parallel computing environment in order to tackle substantial problems. In the thesis, the hypersonic laminar flow over a sharp cone at high angle of attack provides the test cases. The flow is adequately modelled by the steady state locally conical Navier-Stokes (LCNS) equations. A structured grid is used, since otherwise there are difficulties in generating the Jacobian matrix.
    A conservative, cell centred finite volume formulation is used for the spatial discretisation. The fluxes on the cell boundaries are evaluated with Osher's flux difference splitting scheme, which has continuous first partial derivatives, together with the third order MUSCL (Monotone Upwind Schemes for Conservation Laws) scheme for the convective fluxes, and with the second order central difference scheme for the diffusive fluxes. In developing the Newton method, a simplified approximate procedure is proposed for the generation of the numerically approximate Jacobian matrix, which speeds up the computation and reduces the number of cells whose discretised physical state variables are needed in generating the matrix elements. For solving the large sparse non-symmetric linear system in each Newton iterative step, the alpha-GMRES linear solver has been developed, which is a robust and efficient scheme in sequential computation. Since the linear solver is designed for generality, it is hoped that the method can be applied to similar large sparse non-symmetric linear systems arising in other research areas. Writing code for this linear solver is also found to be easy. The parallel computation assigns the computational task of the global domain to multiple processors. It is based on a new decomposition method for the Nth order Jacobian matrix, in which each processor stores the non-zero elements in a certain number of columns of the matrix. The data is stored without overlap, and it provides the main storage of the present algorithm. Corresponding to the matrix decomposition method, any N-dimensional vector decomposition can be carried out. From the parallel computation point of view, the new procedure for generating the numerically approximate Jacobian matrix decreases the memory required in each processor. The alpha-GMRES linear solver is also parallelisable without any sequential bottleneck, and has a high parallel efficiency.
    This linear solver plays a key role in the parallelisation of an implicit numerical algorithm. The overall numerical algorithm has been implemented on both sequential and parallel computers, using the sequential algorithm version and its parallel counterpart respectively. Since the parallel numerical algorithm operates on the global domain and does not change any solution procedure compared with its sequential counterpart, the convergence and the accuracy of the implementation on a single sequential computer are maintained. The computers used are an IBM RISC System/6000 320H workstation and a Meiko Computing Surface composed of T800 transputers.
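The Newton/GMRES coupling this abstract describes can be sketched in a few lines for a small nonlinear system. This sketch is Jacobian-free: the Jacobian-vector product is approximated by finite differences, loosely echoing the numerically approximate Jacobian above, whereas the thesis assembles and stores the sparse matrix explicitly and uses its own alpha-GMRES solver rather than SciPy's GMRES. The test system and all tolerances are illustrative assumptions.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def newton_gmres(F, x0, tol=1e-10, max_newton=20, eps=1e-7):
    """Newton's method with a GMRES inner solve for each linear system
    J(x) dx = -F(x).  J(x) v is approximated by finite differences."""
    x = x0.astype(float)
    for _ in range(max_newton):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        # Matrix-free Jacobian action:  J(x) v  ~=  (F(x + eps v) - F(x)) / eps
        Jv = LinearOperator((len(x), len(x)),
                            matvec=lambda v: (F(x + eps * v) - r) / eps)
        dx, _ = gmres(Jv, -r)
        x = x + dx
    return x

# Illustrative 2x2 nonlinear system with root (1, 1).
F = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0,
                        x[0] - x[1]])
root = newton_gmres(F, np.array([2.0, 0.5]))
```

In the thesis the same outer/inner structure is applied to the much larger system arising from the LCNS discretisation, with the columns of the stored Jacobian distributed across processors.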

    A Functional Data Analysis Approach to Looking at Handwriting Data

    Honors (Bachelor's), Statistics, University of Michigan. http://deepblue.lib.umich.edu/bitstream/2027.42/120606/1/mtianwen.pd

    Acta Cybernetica: Volume 20, Number 1.


    Automatic constraint-based synthesis of non-uniform rational B-spline surfaces

    In this dissertation a technique is presented for the synthesis of sculptured surface models subject to several constraints based on design and manufacturability requirements. A design environment is specified as a collection of polyhedral models which represent components in the vicinity of the surface to be designed, or regions which the surface should avoid. Non-uniform rational B-splines (NURBS) are used for surface representation, and the control point locations are the design variables; for some problems the NURBS surface knots and/or weights are included as additional design variables. The primary functional constraint is a proximity metric which induces the surface to avoid a tolerance envelope around each component. Other functional constraints include: an area/arc-length constraint to counteract the expansion effect of the proximity constraint; orthogonality and parametric flow constraints, to maintain consistent surface topology and improve machinability of the surface; and local constraints on surface derivatives to exploit part symmetry. In addition, constraints based on surface curvatures may be incorporated to enhance machinability and induce the synthesis of developable surfaces. The surface synthesis problem is formulated as an optimization problem. Traditional optimization techniques such as quasi-Newton, Nelder-Mead simplex and conjugate gradient yield only locally good surface models. Consequently, simulated annealing (SA), a global optimization technique, is implemented. SA successfully synthesizes several highly multimodal surface models where the traditional optimization methods failed. Results indicate that this technique has potential applications as a conceptual design tool supporting concurrent product and process development methods.
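The reason simulated annealing succeeds here where quasi-Newton or simplex methods stall is the Metropolis acceptance rule, which takes uphill moves with probability exp(-delta/T) and so can escape local minima of a multimodal cost. A generic sketch (not the dissertation's implementation; the cooling schedule, proposal scale and test cost are illustrative assumptions):

```python
import numpy as np

def simulated_annealing(cost, x0, t0=1.0, cooling=0.995, steps=4000, seed=0):
    """Generic simulated-annealing minimiser with a Metropolis rule:
    downhill moves are always accepted, uphill moves with probability
    exp(-(fc - fx) / T), and the temperature T decays geometrically."""
    rng = np.random.default_rng(seed)
    x, fx, t = x0.copy(), cost(x0), t0
    best, fbest = x.copy(), fx
    for _ in range(steps):
        cand = x + rng.normal(scale=0.1, size=x.shape)
        fc = cost(cand)
        if fc < fx or rng.random() < np.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x.copy(), fx
        t *= cooling
    return best, fbest

# Highly multimodal stand-in cost over four "control point" offsets
# (a Rastrigin-like function, global minimum 0 at the origin).
cost = lambda x: np.sum(x**2 - np.cos(4 * np.pi * x)) + len(x)
best, fbest = simulated_annealing(cost, x0=np.full(4, 1.5))
```

A gradient-based method started at the same point would descend into the nearest basin and stop; the annealer keeps exploring while the temperature is high and only then settles.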

    Proceedings of the NASA Workshop on Density Estimation and Function Smoothing

    Statistical model identification techniques being developed to provide workable solutions to problems in density estimation and function smoothing are examined.
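As a concrete instance of the density estimation problems these proceedings address, here is a textbook kernel density estimate: the density is approximated by averaging a Gaussian bump centred on each sample. This is a generic sketch, not a method from the workshop; the bandwidth value is an illustrative assumption.

```python
import numpy as np

def gaussian_kde(samples, grid, bandwidth=0.3):
    """Kernel density estimate on `grid`: the mean of Gaussian kernels
    centred at each sample, scaled so the estimate integrates to one."""
    diffs = (grid[:, None] - samples[None, :]) / bandwidth
    kernels = np.exp(-0.5 * diffs**2) / np.sqrt(2 * np.pi)
    return kernels.mean(axis=1) / bandwidth

rng = np.random.default_rng(2)
samples = rng.normal(size=2000)          # draws from a standard normal
grid = np.linspace(-4, 4, 201)
density = gaussian_kde(samples, grid)
```

The bandwidth plays the role of the smoothing parameter whose data-driven selection is the model identification question.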

    Mesh generation for implicit geometries

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Mathematics, 2005. Includes bibliographical references (p. 119-126). By Per-Olof Persson.
    We present new techniques for generation of unstructured meshes for geometries specified by implicit functions. An initial mesh is iteratively improved by solving for a force equilibrium in the element edges, and the boundary nodes are projected using the implicit geometry definition. Our algorithm generalizes to any dimension and it typically produces meshes of very high quality. We show a simplified version of the method in just one page of MATLAB code, and we describe how to improve and extend our implementation. Prior to generating the mesh we compute a mesh size function to specify the desired size of the elements. We have developed algorithms for automatic generation of size functions, adapted to the curvature and the feature size of the geometry. We propose a new method for limiting the gradients in the size function by solving a non-linear partial differential equation. We show that the solution to our gradient limiting equation is optimal for convex geometries, and we discuss efficient methods to solve it numerically. The iterative nature of the algorithm makes it particularly useful for moving meshes, and we show how to combine it with the level set method for applications in fluid dynamics, shape optimization, and structural deformations. It is also appropriate for numerical adaptation, where the previous mesh is used to represent the size function and as the initial mesh for the refinements. Finally, we show how to generate meshes for regions in images by using implicit representations.
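The force-equilibrium idea can be sketched for the unit circle with signed distance d(p) = |p| - 1: retriangulate, apply repulsive-only spring forces along edges, move the points, and project escaped points back onto the boundary. This is a drastically reduced version of the thesis's method (uniform size function, retriangulation every step, no quality safeguards), with illustrative constants throughout.

```python
import numpy as np
from scipy.spatial import Delaunay

def mesh_circle(h=0.25, iters=60):
    """Force-equilibrium mesh sketch for the unit disc, which is convex,
    so every Delaunay triangle of the points lies inside the geometry."""
    d = lambda p: np.hypot(p[:, 0], p[:, 1]) - 1.0   # signed distance
    g = np.arange(-1.0, 1.0 + h, h)
    p = np.array([[x, y] for x in g for y in g])
    p = p[d(p) < 0.5 * h]                            # clip initial grid
    for _ in range(iters):
        tri = Delaunay(p).simplices
        edges = np.unique(np.sort(np.concatenate(
            [tri[:, [0, 1]], tri[:, [1, 2]], tri[:, [0, 2]]]), axis=1), axis=0)
        vec = p[edges[:, 0]] - p[edges[:, 1]]
        L = np.hypot(vec[:, 0], vec[:, 1])
        # repulsive-only spring force toward a desired edge length 1.2*h
        F = np.maximum(1.2 * h - L, 0) / L
        force = np.zeros_like(p)
        np.add.at(force, edges[:, 0], F[:, None] * vec)
        np.add.at(force, edges[:, 1], -F[:, None] * vec)
        p += 0.2 * force
        # project points that escaped back onto the boundary d(p) = 0
        out = d(p) > 0
        r = np.hypot(p[out, 0], p[out, 1])[:, None]
        p[out] /= r
    return p

pts = mesh_circle()
```

The slightly oversized desired edge length (1.2*h) gives the internal pressure that spreads points evenly, while the projection step enforces the implicit geometry, the two mechanisms named in the abstract.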