Refraction-corrected ray-based inversion for three-dimensional ultrasound tomography of the breast
Ultrasound tomography (UST) has seen a revival of interest in the past decade,
especially for breast imaging, due to improvements in both ultrasound and
computing hardware. In particular, three-dimensional ultrasound tomography, a
fully tomographic method in which the medium to be imaged is surrounded by
ultrasound transducers, has become feasible. In this paper, a comprehensive
derivation and study of a robust framework for large-scale bent-ray ultrasound
tomography in 3D for a hemispherical detector array is presented. Two
ray-tracing approaches are derived and compared. More significantly, the
problem of linking the rays between emitters and receivers, which is
challenging in 3D due to the high number of degrees of freedom for the
trajectory of rays, is analysed both as a minimisation and as a root-finding
problem. The ray-linking problem is parameterised for a convex detection
surface and three robust, accurate, and efficient ray-linking algorithms are
formulated and demonstrated. To stabilise these methods, novel
adaptive-smoothing approaches are proposed that control the conditioning of the
update matrices to ensure accurate linking. The nonlinear UST problem of
estimating the sound speed is recast as a series of linearised subproblems,
each solved using the above ray-linking algorithms within a steepest-descent scheme.
The whole imaging algorithm is demonstrated to be robust and accurate on
realistic data simulated using a full-wave acoustic model and an anatomical
breast phantom, incorporating the errors due to time-of-flight picking that
would be present with measured data. This method can be used to provide
low-artefact, quantitatively accurate 3D sound speed maps. In addition to
being useful in their own right, such maps can be used to initialise
full-wave inversion methods, or as an input to photoacoustic tomography
reconstructions.
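
To make the ray-linking idea concrete, the following is a minimal 2D sketch that poses linking as a root-finding problem over the launch angle, in the spirit of the shooting approach described above. The sound-speed map, the Gaussian inclusion, and the circular detection ring are illustrative assumptions, not the paper's setup; in the hemispherical 3D case the residual is two-dimensional and Newton-type updates with adaptive smoothing are required.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import root_scalar

R = 0.1  # radius of the circular detection ring [m] (assumed geometry)

def c(x, y):
    """Water background plus a smooth high-speed inclusion (assumed phantom)."""
    return 1500.0 + 50.0 * np.exp(-((x - 0.02)**2 + y**2) / 0.02**2)

def grad_inv_c(x, y, h=1e-5):
    """Finite-difference gradient of the slowness 1/c."""
    gx = (1.0 / c(x + h, y) - 1.0 / c(x - h, y)) / (2 * h)
    gy = (1.0 / c(x, y + h) - 1.0 / c(x, y - h)) / (2 * h)
    return gx, gy

def ray_rhs(s, u):
    """Kinematic ray equations dx/ds = c*p, dp/ds = grad(1/c), arclength s."""
    x, y, px, py = u
    gx, gy = grad_inv_c(x, y)
    return [c(x, y) * px, c(x, y) * py, gx, gy]

def hit_ring(s, u):
    return np.hypot(u[0], u[1]) - R
hit_ring.terminal = True
hit_ring.direction = 1.0  # trigger only when the ray crosses outwards

def landing_angle(emitter_angle, shoot_angle):
    """Trace a ray from the ring and return the angle at which it lands."""
    x0, y0 = R * np.cos(emitter_angle), R * np.sin(emitter_angle)
    u0 = [x0, y0, np.cos(shoot_angle) / c(x0, y0), np.sin(shoot_angle) / c(x0, y0)]
    sol = solve_ivp(ray_rhs, [0.0, 4.0 * R], u0, events=hit_ring, max_step=R / 200)
    xe, ye = sol.y_events[0][0][:2]
    return np.arctan2(ye, xe)

# Link the emitter at angle 0 to a receiver at angle 2.5 rad: find the shooting
# angle whose traced ray lands on the receiver. In 2D this is one-dimensional
# root finding; in 3D the residual lives on the detection surface and has two
# degrees of freedom.
emitter, receiver = 0.0, 2.5
residual = lambda a: landing_angle(emitter, a) - receiver
sol = root_scalar(residual, bracket=[np.pi / 2 + 0.1, np.pi - 0.2])
print("linked shooting angle [rad]:", sol.root)
```
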
Regularization of Limited Memory Quasi-Newton Methods for Large-Scale Nonconvex Minimization
This paper deals with regularized Newton methods, a flexible class of
unconstrained optimization algorithms that is competitive with line search and
trust region methods and potentially combines attractive elements of both. The
particular focus is on combining regularization with limited memory
quasi-Newton methods by exploiting the special structure of limited memory
algorithms. Global convergence of the regularization methods is shown under
mild assumptions, and the details of the regularized limited memory
quasi-Newton updates are discussed, including their compact representations.
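
As a rough illustration of the regularization idea, the sketch below runs a regularized quasi-Newton iteration on a toy problem, solving (B_k + sigma*I)p = -g and adapting sigma with a trust-region-style ratio test. It forms B_k densely from the stored pairs purely for clarity; the constants and update rules are assumptions, and the point of the paper is precisely to avoid this dense construction by exploiting the compact representation.

```python
import numpy as np

def rosenbrock(x):
    """Chained Rosenbrock function and its gradient (toy test problem)."""
    f = np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (1.0 - x[:-1])**2)
    g = np.zeros_like(x)
    g[:-1] = -400.0 * x[:-1] * (x[1:] - x[:-1]**2) - 2.0 * (1.0 - x[:-1])
    g[1:] += 200.0 * (x[1:] - x[:-1]**2)
    return f, g

def bfgs_matrix(pairs, n):
    """Build B_k densely from the stored (s, y) pairs (illustration only)."""
    if pairs:
        s, y = pairs[-1]
        gamma = (y @ y) / (s @ y)  # standard initial scaling
    else:
        gamma = 1.0
    B = gamma * np.eye(n)
    for s, y in pairs:
        Bs = B @ s
        B += np.outer(y, y) / (y @ s) - np.outer(Bs, Bs) / (s @ Bs)
    return B

def reg_lbfgs(x, m=5, sigma=1.0, tol=1e-6, max_iter=500):
    f, g = rosenbrock(x)
    pairs = []
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        B = bfgs_matrix(pairs, x.size)
        # Regularized step: minimize g'p + 0.5 p'(B + sigma*I)p.
        p = np.linalg.solve(B + sigma * np.eye(x.size), -g)
        f_new, g_new = rosenbrock(x + p)
        pred = -(g @ p + 0.5 * p @ (B @ p))  # predicted reduction
        rho = (f - f_new) / max(pred, 1e-16)
        if rho > 1e-4:            # accept step; relax the regularization
            s, y = p, g_new - g
            if s @ y > 1e-12:     # curvature check keeps B positive definite
                pairs.append((s, y))
                pairs = pairs[-m:]
            x, f, g = x + p, f_new, g_new
            sigma = max(1e-8, 0.5 * sigma)
        else:                     # reject step; regularize more strongly
            sigma *= 4.0
    return x, f

x_opt, f_opt = reg_lbfgs(np.full(10, -1.0))
print("f* =", f_opt)
```
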
Numerical results using all large-scale test problems from the CUTEst
collection indicate that our regularized version of L-BFGS is competitive with
state-of-the-art line search and trust-region L-BFGS algorithms and previous
attempts at combining L-BFGS with regularization, while potentially
outperforming some of them, especially when nonmonotonicity is involved.
A second derivative SQP method: local convergence
In [19], we gave global convergence results for a second-derivative SQP method for minimizing the exact ℓ1-merit function for a fixed value of the penalty parameter. To establish this result, we used the properties of the so-called Cauchy step, which was itself computed from the so-called predictor step. In addition, we allowed for the computation of a variety of (optional) SQP steps that were intended to improve the efficiency of the algorithm.

Although we established global convergence of the algorithm, we did not discuss certain aspects that are critical when developing software capable of solving general optimization problems. In particular, we must have strategies for updating the penalty parameter and better techniques for defining the positive-definite matrix Bk used in computing the predictor step. In this paper we address both of these issues. We consider two techniques for defining the positive-definite matrix Bk: a simple diagonal approximation and a more sophisticated limited-memory BFGS update. We also analyze a strategy for updating the penalty parameter based on approximately minimizing the ℓ1-penalty function over a sequence of increasing values of the penalty parameter.

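A minimal sketch of this penalty-parameter strategy: approximately minimize the ℓ1-penalty function for an increasing sequence of penalty parameter values until the constraint violation is acceptable. The toy problem, tolerances, and the use of a derivative-free scipy solver (to sidestep the nonsmoothness of the ℓ1 term) are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np
from scipy.optimize import minimize

def f(x):       # objective of an assumed toy problem
    return (x[0] - 2.0)**2 + (x[1] - 1.0)**2

def c(x):       # equality constraint c(x) = 0: the unit circle
    return np.array([x[0]**2 + x[1]**2 - 1.0])

def l1_penalty(x, nu):
    """Exact l1-penalty function f(x) + nu * ||c(x)||_1."""
    return f(x) + nu * np.sum(np.abs(c(x)))

x, nu = np.array([0.0, 0.0]), 1.0
for _ in range(10):
    # Approximate minimization of the (nonsmooth) penalty function.
    res = minimize(l1_penalty, x, args=(nu,), method="Nelder-Mead",
                   options={"xatol": 1e-8, "fatol": 1e-8})
    x = res.x
    if np.sum(np.abs(c(x))) < 1e-6:   # feasible enough: stop increasing nu
        break
    nu *= 10.0                        # otherwise raise the penalty parameter
print("x =", x, "nu =", nu)
```
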
Algorithms based on exact penalty functions have certain desirable properties. To be practical, however, these algorithms must be guaranteed to avoid the so-called Maratos effect. We show that a nonmonotone variant of our algorithm avoids this phenomenon and, therefore, results in asymptotically superlinear local convergence; this is verified by preliminary numerical results on the Hock and Schittkowski test set.
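
To illustrate how a nonmonotone acceptance rule can sidestep the Maratos effect, the sketch below implements a Grippo-Lampariello-Lucidi-style test that compares a trial merit value against the worst of the last M values rather than the current one, so the full superlinear step can be accepted even when the merit function rises momentarily. The window length, constants, and test form are assumptions, not the paper's exact rule.

```python
from collections import deque

def nonmonotone_accept(history, merit_new, pred_decrease, eta=1e-4):
    """Accept if the new merit beats the worst recent value by a fraction
    of the predicted decrease (GLL-style nonmonotone test, assumed form)."""
    reference = max(history)
    return merit_new <= reference - eta * pred_decrease

# Usage: keep a short window of merit values along the iteration.
history = deque(maxlen=5)          # M = 5 most recent merit values
history.append(10.0)
history.append(9.2)
history.append(9.6)
# A trial step that increases the current merit (9.6 -> 9.8) but stays well
# below the window maximum (10.0) is still accepted; a monotone test would
# reject it and trigger the Maratos effect's step truncation.
print(nonmonotone_accept(history, merit_new=9.8, pred_decrease=0.3))  # True
```
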