Regularisation methods for imaging from electrical measurements
In Electrical Impedance Tomography the conductivity of an object is estimated from
boundary measurements. An array of electrodes is attached to the surface of the object
and current stimuli are applied via these electrodes. The resulting voltages are measured.
The process of estimating the conductivity as a function of space inside the object from
voltage measurements at the surface is called reconstruction. Mathematically the EIT
reconstruction is a non-linear inverse problem, the stable solution of which requires regularisation
methods. Most common regularisation methods impose that the reconstructed image should
be smooth. Such methods confer stability to the reconstruction process, but limit the
capability of describing sharp variations in the sought parameter.
In this thesis two new methods of regularisation are proposed. The first method, Gaussian
anisotropic regularisation, enhances the reconstruction of sharp conductivity changes
occurring at the interface between a contrasting object and the background. As such
changes are step changes, reconstruction with traditional smoothing regularisation techniques
is unsatisfactory. The Gaussian anisotropic filtering works by incorporating prior
structural information. The approximate knowledge of the shapes of contrasts allows us
to relax the smoothness in the direction normal to the expected boundary. The construction
of Gaussian regularisation filters that express such directional properties on the basis
of the structural information is discussed, and the results of numerical experiments are
analysed. The method gives good results when the actual conductivity distribution is in
accordance with the prior information. When the conductivity distribution violates the
prior information the method is still capable of properly locating the regions of contrast.
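The directional relaxation described above can be illustrated with a small sketch: a Gaussian correlation kernel whose length scale along the expected boundary tangent is much longer than across it, so the regulariser permits sharp jumps normal to the boundary. The function name and length scales here are illustrative, not taken from the thesis:

```python
import numpy as np

def anisotropic_kernel(p, q, tangent, len_t=0.3, len_n=0.05):
    """Gaussian smoothing kernel with a long correlation length along the
    expected boundary tangent and a short one normal to it. With len_n
    much smaller than len_t, smoothness is relaxed across the boundary,
    so a step change there is only weakly penalised."""
    d = np.asarray(p, dtype=float) - np.asarray(q, dtype=float)
    t = np.asarray(tangent, dtype=float)
    t = t / np.linalg.norm(t)
    n = np.array([-t[1], t[0]])          # unit normal to the tangent
    return np.exp(-0.5 * ((d @ t / len_t) ** 2 + (d @ n / len_n) ** 2))
```

For two points the same distance apart, the correlation is much higher along the tangent than across it, which is exactly the directional prior the method encodes.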
The second part of the thesis is concerned with regularisation via the total variation
functional. This functional allows the reconstruction of discontinuous parameters. The
properties of the functional are briefly introduced, and an application in inverse problems
in image denoising is shown. As the functional is non-differentiable, numerical difficulties
are encountered in its use. The aim is therefore to propose an efficient numerical implementation
for application in EIT. Several well-known optimisation methods are analysed
as possible candidates, by theoretical considerations and by numerical experiments. Such
methods are shown to be inefficient. The application of recent optimisation methods
called primal-dual interior point methods is analysed by theoretical considerations and
by numerical experiments, and an efficient and stable algorithm is developed. Numerical
experiments demonstrate the capability of the algorithm in reconstructing sharp conductivity profiles.
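The numerical difficulty with the non-differentiable total variation term can be seen in a minimal 1-D sketch. Below, a smoothed TV denoising problem is solved by plain gradient descent; the smoothing parameter `beta` is precisely the crutch that primal-dual interior point methods avoid. All parameter values are illustrative, not from the thesis:

```python
import numpy as np

def tv_denoise_1d(y, lam=0.2, beta=1e-3, step=0.05, iters=1000):
    """Minimise 0.5*||u - y||^2 + lam * sum_i sqrt((u[i+1]-u[i])^2 + beta)
    by gradient descent. beta > 0 smooths the non-differentiable |Du|
    term; without it the gradient is undefined wherever Du = 0."""
    u = y.astype(float).copy()
    for _ in range(iters):
        du = np.diff(u)                       # forward differences Du
        w = du / np.sqrt(du ** 2 + beta)      # smoothed d|Du|/d(Du)
        grad = u - y                          # data-fidelity gradient
        grad[:-1] -= lam * w                  # adjoint of the difference
        grad[1:] += lam * w                   #   operator applied to w
        u -= step * grad
    return u
```

On a noisy step signal this recovers a nearly piecewise-constant estimate, illustrating why the TV functional suits discontinuous parameters.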
Sample-path solutions for simulation optimization problems and stochastic variational inequalities
Keywords: inequality; simulation; optimization
Custom Integrated Circuits
Contains reports on ten research projects. Sponsors: Analog Devices, Inc.; IBM Corporation; National Science Foundation/Defense Advanced Research Projects Agency Grant MIP 88-14612; Analog Devices Career Development Assistant Professorship; U.S. Navy - Office of Naval Research Contract N0014-87-K-0825; AT&T; Digital Equipment Corporation; National Science Foundation Grant MIP 88-5876
Variable Selection with False Discovery Control
Technological advances that allow routine identification of high-dimensional risk factors have led to high demand for statistical techniques that enable full utilization of these rich sources of information for genome-wide association studies (GWAS). Variable selection for censored outcome data as well as control of false discoveries (i.e. inclusion of irrelevant variables) in the presence of high-dimensional predictors present serious challenges. In the context of survival analysis with high-dimensional covariates, this paper develops a computationally feasible method for building general risk prediction models, while controlling false discoveries. We have proposed a high-dimensional variable selection method by incorporating stability selection to control false discoveries. Comparisons between the proposed method and the commonly used univariate and Lasso approaches for variable selection reveal that the proposed method yields fewer false discoveries. The proposed method is applied to study the associations of 2,339 common single-nucleotide polymorphisms (SNPs) with overall survival among cutaneous melanoma (CM) patients. The results have confirmed that BRCA2 pathway SNPs are likely to be associated with overall survival, as reported by previous literature. Moreover, we have identified several new Fanconi anemia (FA) pathway SNPs that are likely to modulate survival of CM patients.
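The stability-selection idea can be illustrated with a small sketch, not the paper's survival-model implementation: repeatedly apply a base selector to half-subsamples of the data and keep only variables selected in a large fraction of them. The base selector here (top-k absolute correlation) is a stand-in, and all settings are illustrative:

```python
import numpy as np

def stability_selection(X, y, top_k=5, n_subsamples=100, threshold=0.7, seed=0):
    """Return variables selected in >= `threshold` fraction of random
    half-subsamples, together with the selection frequencies. Variables
    that are only spuriously correlated with y rarely survive many
    subsamples, which is how false discoveries are suppressed."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_subsamples):
        idx = rng.choice(n, size=n // 2, replace=False)
        score = np.abs(X[idx].T @ (y[idx] - y[idx].mean()))  # base selector
        counts[np.argsort(score)[-top_k:]] += 1              # top-k by score
    freq = counts / n_subsamples
    return np.flatnonzero(freq >= threshold), freq
```

Swapping the base selector for a penalised survival-model fit recovers the structure of the method the abstract describes.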
A Review on Different Image De-noising Methods
Image de-noising is a classical yet fundamental problem in low-level vision, as well as an ideal test bed to evaluate various statistical image modeling methods. The restoration of a blurry or noisy image is commonly performed with a MAP estimator, which maximizes a posterior probability to reconstruct a clean image from a degraded image. A MAP estimator, when used with a sparse gradient image prior, reconstructs piecewise smooth images and typically removes textures that are important for visual realism. One of the most challenging problems in image de-noising is how to preserve the fine-scale texture structures while removing noise. Various natural image priors, such as gradient-based priors, nonlocal self-similarity priors, and sparsity priors, have been extensively exploited for noise removal. The de-noising algorithms based on these priors, however, tend to smooth the detailed image textures, degrading the image visual quality. To address this problem, we propose a texture enhanced image de-noising (TEID) method by enforcing the gradient distribution of the de-noised image to be close to the estimated gradient distribution of the original image. Another method is an alternative de-convolution method called iterative distribution reweighting (IDR), which imposes a global constraint on gradients so that a reconstructed image should have a gradient distribution similar to a reference distribution.
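The global gradient-distribution constraint can be sketched in 1-D as histogram specification on the gradients: remap a signal's gradients so their empirical distribution matches a reference distribution, then reintegrate. This is a toy analogue of the IDR/TEID idea, with illustrative names, not the papers' algorithms:

```python
import numpy as np

def match_gradient_distribution(u, ref_grad):
    """Quantile-match the gradients of 1-D signal `u` to the empirical
    distribution of `ref_grad`, then reintegrate from u[0]. After the
    remap, the sorted gradients of the output equal the sampled, sorted
    reference gradients."""
    g = np.diff(u)
    ranks = np.argsort(np.argsort(g))            # rank of each gradient
    target = np.sort(ref_grad)[
        np.linspace(0, len(ref_grad) - 1, len(g)).astype(int)
    ]                                            # reference quantiles
    g_matched = target[ranks]                    # smallest g -> smallest quantile
    return np.concatenate([[u[0]], u[0] + np.cumsum(g_matched)])
```

In 2-D the reintegration step is a constrained optimisation rather than a cumulative sum, but the distribution-matching principle is the same.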
Simulating Galaxy Formation
A review on numerical simulations of galaxy formation is given. Different
numerical methods to solve collisionless and gas dynamical systems are outlined
and one particular simulation technique, Smoothed Particle Hydrodynamics, is
discussed in some detail. After a short discussion of the most relevant
physical processes which affect the dynamics of the gas, the success and
shortcomings of state of the art simulations are discussed via the example of
the formation of disk galaxies.

Comment: 24 pages, uuencoded postscript file, 5 figures, 2 figures included.
Proc. ``International School of Physics Enrico Fermi'', Course CXXXII: Dark
Matter in the Universe, Varenna 1995, eds.: S. Bonometto, J. Primack, A.
Provenzale, IOP, to appear; complete version available at
http://www.mpa-garching.mpg.de/Galaxien/prep.htm
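The core of Smoothed Particle Hydrodynamics, kernel-weighted summation over neighbouring particles, can be sketched in 1-D with the standard cubic-spline kernel. This is a minimal illustration of the density estimate, not a full simulation code:

```python
import numpy as np

def w_cubic(r, h):
    """Standard cubic-spline SPH kernel in 1-D with smoothing length h
    (normalisation 2/(3h); compact support |r| < 2h)."""
    q = np.abs(r) / h
    w = np.where(q < 1, 1 - 1.5 * q ** 2 + 0.75 * q ** 3,
        np.where(q < 2, 0.25 * (2 - q) ** 3, 0.0))
    return w * 2.0 / (3.0 * h)

def sph_density(x_eval, x_part, m, h):
    """SPH density estimate: rho(x) = sum_j m_j W(x - x_j, h)."""
    return np.sum(m * w_cubic(x_eval - x_part, h))
```

For equally spaced equal-mass particles the estimate reproduces the uniform density at the particle positions, a basic consistency check on the kernel normalisation.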
Efficient upwind algorithms for solution of the Euler and Navier-stokes equations
An efficient three-dimensional structured solver for the Euler and
Navier-Stokes equations is developed based on a finite volume upwind algorithm
using Roe fluxes. Multigrid and optimal smoothing multi-stage time stepping accelerate convergence. The accuracy of the new solver is demonstrated for inviscid
flows in the range 0.675 ≤ M ≤ 25. A comparative grid convergence study for
transonic turbulent flow about a wing is conducted with the present solver and
a scalar dissipation central difference industrial design solver. The upwind solver
demonstrates faster grid convergence than the central scheme, producing more
consistent estimates of lift, drag and boundary layer parameters. In transonic
viscous computations, the upwind scheme with convergence acceleration is over
20 times more efficient than without it. The ability of the upwind solver to compute
viscous flows of comparable accuracy to scalar dissipation central schemes
on grids of one-quarter the density makes it a more accurate, cost-effective alternative.
In addition, an original convergence acceleration method termed shock
acceleration is proposed. The method is designed to reduce the errors caused by
the shock wave singularity M → 1, based on a localized treatment of discontinuities.
Acceleration models are formulated for an inhomogeneous PDE in one
variable. Results for the Roe and Engquist-Osher schemes demonstrate an order
of magnitude improvement in the rate of convergence. One of the acceleration
models is extended to the quasi one-dimensional Euler equations for duct flow.
Results for this case demonstrate a marked increase in convergence with negligible
loss in accuracy when the acceleration procedure is applied after the shock
has settled in its final cell. Typically, the method saves up to 60% in computational
expense. Significantly, the performance gain is entirely at the expense of
the error modes associated with discrete shock structure. In view of the success
achieved, further development of the method is proposed.
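The flavour of an upwind Roe-type flux can be shown on the scalar Burgers equation, where the Roe average of the wave speed reduces to the arithmetic mean. This 1-D toy stands in for, and is much simpler than, the three-dimensional Euler solver of the thesis:

```python
import numpy as np

def roe_flux_burgers(ul, ur):
    """Roe (upwind) numerical flux for Burgers' equation f(u) = u^2/2.
    The Roe-averaged speed a = (ul + ur)/2 picks the upwind side."""
    a = 0.5 * (ul + ur)                          # Roe average of f'(u) = u
    return np.where(a >= 0, 0.5 * ul ** 2, 0.5 * ur ** 2)

def step(u, dt, dx):
    """One explicit conservative finite-volume update, outflow boundaries."""
    up = np.concatenate([u[:1], u, u[-1:]])      # ghost cells
    f = roe_flux_burgers(up[:-1], up[1:])        # fluxes at cell interfaces
    return u - dt / dx * (f[1:] - f[:-1])
```

For a right-moving shock (speed 1/2 by the Rankine-Hugoniot condition) the scheme propagates the discontinuity crisply over only a few cells, which is the sharp-capture property the abstract contrasts with central dissipation schemes.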
Meta-Learning Evolutionary Artificial Neural Networks
In this paper, we present MLEANN (Meta-Learning Evolutionary Artificial
Neural Network), an automatic computational framework for the adaptive
optimization of artificial neural networks wherein the neural network
architecture, activation function, connection weights, learning algorithm and
its parameters are adapted according to the problem. We explored the
performance of MLEANN and conventionally designed artificial neural networks
for function approximation problems. To evaluate the comparative performance,
we used three different well-known chaotic time series. We also present the
state of the art popular neural network learning algorithms and some
experimentation results related to convergence speed and generalization
performance. We explored the performance of the backpropagation,
conjugate gradient, quasi-Newton and Levenberg-Marquardt algorithms
for the three chaotic time series. Performances of the different
learning algorithms were evaluated when the activation functions and
architecture were changed. We further present the theoretical background,
algorithm and design strategy, and demonstrate the effectiveness of the proposed
MLEANN framework in designing neural networks that are smaller, faster and
generalize better.
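One ingredient of the approach, evolving connection weights, can be sketched as an elitist evolutionary search over the weights of a tiny fixed MLP. MLEANN itself also adapts architectures, activation functions and learning parameters; all names and settings below are illustrative:

```python
import numpy as np

def mlp(params, x, hidden=8):
    """Tiny 1-hidden-1 MLP with tanh activation; params is a flat vector
    of length 3*hidden + 1 holding W1, b1, W2 and b2."""
    W1 = params[:hidden].reshape(hidden, 1)
    b1 = params[hidden:2 * hidden]
    W2 = params[2 * hidden:3 * hidden].reshape(1, hidden)
    b2 = params[3 * hidden]
    h = np.tanh(x @ W1.T + b1)
    return h @ W2.T + b2

def evolve(x, y, pop=40, gens=300, sigma=0.1, seed=0, hidden=8):
    """Elitist (1+lambda)-style evolution: mutate the best weight vector
    with Gaussian noise, keep any candidate that lowers the MSE."""
    rng = np.random.default_rng(seed)
    dim = 3 * hidden + 1
    best = rng.standard_normal(dim)
    best_err = np.mean((mlp(best, x, hidden) - y) ** 2)
    for _ in range(gens):
        cand = best + sigma * rng.standard_normal((pop, dim))
        errs = [np.mean((mlp(c, x, hidden) - y) ** 2) for c in cand]
        i = int(np.argmin(errs))
        if errs[i] < best_err:
            best, best_err = cand[i], errs[i]
    return best, best_err
```

Gradient-free search like this is slower per step than backpropagation but needs no differentiability, which is what lets frameworks of this kind evolve discrete choices such as the architecture alongside the weights.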