
    Targeted Excited State Algorithms

    To overcome the limitations of the traditional state-averaging approaches to excited state calculations, in which one solves for and represents all states between the ground state and the excited state of interest, we have investigated a number of new excited state algorithms. Building on the work of van der Vorst and Sleijpen (SIAM J. Matrix Anal. Appl., 17, 401 (1996)), we have implemented Harmonic Davidson and State-Averaged Harmonic Davidson algorithms within the context of the Density Matrix Renormalization Group (DMRG). We have assessed their accuracy and stability of convergence in complete active space DMRG calculations on the low-lying excited states of the acenes ranging from naphthalene to pentacene. We find that both algorithms offer increased accuracy over the traditional State-Averaged Davidson approach; in particular, the State-Averaged Harmonic Davidson algorithm offers an optimal combination of accuracy and stability of convergence.
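
    The harmonic Davidson idea can be illustrated outside the DMRG setting. The sketch below is a minimal dense-matrix version in Python, assuming a symmetric matrix and a simple diagonal preconditioner: it extracts harmonic Ritz pairs from the test space (A - sigma*I)V to target the eigenvalue closest to an interior shift sigma, which is the mechanism underlying the algorithms in the abstract. The toy problem and all names are illustrative, not the authors' DMRG implementation.

```python
import numpy as np

def harmonic_davidson(A, sigma, v0, max_iter=50, tol=1e-10):
    """Harmonic Davidson iteration for the eigenvalue of symmetric A
    closest to the shift sigma (toy dense implementation)."""
    n = A.shape[0]
    V = (v0 / np.linalg.norm(v0)).reshape(n, 1)
    diag = np.diag(A)
    theta, u = sigma, V[:, 0]
    for _ in range(max_iter):
        W = A @ V - sigma * V                   # W = (A - sigma I) V
        WtW = W.T @ W
        WtV = W.T @ V
        # harmonic Ritz pairs: (W^T W) y = (theta - sigma) (W^T V) y
        mu, Y = np.linalg.eig(np.linalg.solve(WtV, WtW))
        i = np.argmin(np.abs(mu))               # harmonic value closest to the shift
        theta = sigma + mu[i].real
        u = V @ Y[:, i].real
        u /= np.linalg.norm(u)
        r = A @ u - theta * u                   # residual
        if np.linalg.norm(r) < tol:
            break
        t = r / (diag - theta)                  # diagonal (Davidson) preconditioner
        for _ in range(2):                      # re-orthogonalise against V
            t -= V @ (V.T @ t)
        nt = np.linalg.norm(t)
        if nt < 1e-12:
            break
        V = np.hstack([V, (t / nt).reshape(n, 1)])
    return theta, u

# toy problem: diagonally dominant symmetric matrix with an interior target
rng = np.random.default_rng(0)
n = 100
A = np.diag(np.arange(1.0, n + 1))
B = 0.01 * rng.standard_normal((n, n))
A += B + B.T
sigma = 50.3
theta, u = harmonic_davidson(A, sigma, np.eye(n)[:, 49])
```

    For a well-conditioned, diagonally dominant matrix like this one, the iteration locks onto the eigenvalue nearest the shift without computing any of the states below it, which is exactly the appeal over state-averaged approaches.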

    Maximum-a-posteriori estimation with Bayesian confidence regions

    Solutions to inverse problems that are ill-conditioned or ill-posed may have significant intrinsic uncertainty. Unfortunately, analysing and quantifying this uncertainty is very challenging, particularly in high-dimensional problems. As a result, while most modern mathematical imaging methods produce impressive point estimation results, they are generally unable to quantify the uncertainty in the solutions delivered. This paper presents a new general methodology for approximating Bayesian high-posterior-density credibility regions in inverse problems that are convex and potentially very high-dimensional. The approximations are derived by using recent concentration of measure results related to information theory for log-concave random vectors. A remarkable property of the approximations is that they can be computed very efficiently, even in large-scale problems, by using standard convex optimisation techniques. In particular, they are available as a by-product in problems solved by maximum-a-posteriori estimation. The approximations also have favourable theoretical properties, namely they outer-bound the true high-posterior-density credibility regions, and they are stable with respect to model dimension. The proposed methodology is illustrated on two high-dimensional imaging inverse problems related to tomographic reconstruction and sparse deconvolution, where the approximations are used to perform Bayesian hypothesis tests and explore the uncertainty about the solutions, and where proximal Markov chain Monte Carlo algorithms are used as a benchmark to compute exact credible regions and measure the approximation error.
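
    The "by-product of MAP estimation" property can be sketched on a toy Gaussian denoising problem, where the MAP estimate has a closed form and the exact posterior can be sampled for comparison. The approximate region is the sublevel set {x : f(x) <= gamma}, with gamma obtained by adding a dimension-dependent offset to the MAP objective value; the specific constants below (tau = sqrt(16*log(3/alpha))) are an assumption on our part and should be checked against the paper itself, but the structure of the construction is the point.

```python
import numpy as np

def neg_log_posterior(x, y, lam):
    """Convex objective f(x) = ||y - x||^2/2 + lam*||x||^2/2
    (Gaussian likelihood with unit noise, Gaussian prior)."""
    return 0.5 * np.sum((y - x) ** 2) + 0.5 * lam * np.sum(x ** 2)

def hpd_threshold(f_map, n, alpha):
    """Conservative HPD threshold built from the MAP objective value alone.
    The constant 16*log(3/alpha) is an assumed form, not quoted from the paper."""
    tau = np.sqrt(16.0 * np.log(3.0 / alpha))
    return f_map + np.sqrt(n) * tau + n

rng = np.random.default_rng(1)
n, lam, alpha = 100, 1.0, 0.1
x_true = rng.standard_normal(n)
y = x_true + rng.standard_normal(n)
x_map = y / (1.0 + lam)                    # closed-form MAP for this model
gamma = hpd_threshold(neg_log_posterior(x_map, y, lam), n, alpha)

# empirical check of the outer-bound property: sample the exact Gaussian
# posterior N(x_map, (1+lam)^{-1} I) and measure the mass inside the region
samples = x_map + rng.standard_normal((5000, n)) / np.sqrt(1.0 + lam)
fvals = np.array([neg_log_posterior(s, y, lam) for s in samples])
coverage = np.mean(fvals <= gamma)
```

    No sampling is needed to build the region itself: only the MAP estimate and one evaluation of the objective, which is what makes the approach tractable at imaging scale.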

    Project FATIMA Final Report: Part 2

    The final report of project FATIMA is presented in two parts. Part 1 contains a summary of the FATIMA method and sets out the key recommendations in terms of policies and optimisation methodology from both project OPTIMA and project FATIMA. Part 1 is thus directed particularly towards policy makers. Part 2 contains the details of the methodology, including the formulation of the objective functions, the optimisation process, the resulting optimal strategies under the various objective function regimes, and a summary of the feasibility and acceptability of the optimal strategies based on consultations with the city authorities. This part is thus mainly aimed at professionals in transport planning and modelling.

    Minimum Density Hyperplanes

    Associating distinct groups of objects (clusters) with contiguous regions of high probability density (high-density clusters) is central to many statistical and machine learning approaches to the classification of unlabelled data. We propose a novel hyperplane classifier for clustering and semi-supervised classification which is motivated by this objective. The proposed minimum density hyperplane minimises the integral of the empirical probability density function along it, thereby avoiding intersection with high-density clusters. We show that the minimum density and the maximum margin hyperplanes are asymptotically equivalent, thus linking this approach to maximum margin clustering and semi-supervised support vector classifiers. We propose a projection pursuit formulation of the associated optimisation problem which allows us to find minimum density hyperplanes efficiently in practice, and evaluate its performance on a range of benchmark datasets. The proposed approach is found to be very competitive with state-of-the-art methods for clustering and semi-supervised classification.
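
    The projection pursuit view can be made concrete with a small two-dimensional sketch: for each candidate direction v, estimate the one-dimensional density of the projected data and find the split point b of lowest density, then keep the (v, b) pair with the smallest density overall. The brute-force grid search below is an illustrative stand-in for the paper's optimisation procedure, assuming a fixed kernel bandwidth and splits restricted to the interior of the projected sample.

```python
import numpy as np

def min_density_hyperplane(X, bandwidth=0.5, n_angles=60, n_splits=81):
    """Grid-search projection pursuit (2-D toy): the hyperplane is
    {x : v.x = b}, chosen to minimise the projected KDE at the split."""
    best = (np.inf, None, None)
    for angle in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        v = np.array([np.cos(angle), np.sin(angle)])
        p = X @ v
        # candidate splits restricted to the interior of the projection,
        # so the minimum cannot escape to the empty tails
        bs = np.quantile(p, np.linspace(0.1, 0.9, n_splits))
        z = (bs[:, None] - p[None, :]) / bandwidth
        dens = np.exp(-0.5 * z ** 2).mean(axis=1) / (bandwidth * np.sqrt(2 * np.pi))
        j = np.argmin(dens)
        if dens[j] < best[0]:
            best = (dens[j], v, bs[j])
    return best[1], best[2]

# two well-separated Gaussian clusters
rng = np.random.default_rng(2)
X = np.vstack([rng.standard_normal((200, 2)) + [-3, 0],
               rng.standard_normal((200, 2)) + [3, 0]])
labels = np.repeat([0, 1], 200)
v, b = min_density_hyperplane(X)
pred = (X @ v > b).astype(int)
accuracy = max(np.mean(pred == labels), np.mean(pred != labels))
```

    Because the valley between the two clusters has far lower projected density than any interior split of a mixed projection, the search settles on a separating hyperplane without ever seeing the labels.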

    Full Wave Form Inversion for Seismic Data

    In seismic wave inversion, seismic waves are sent into the ground and then observed at many receiving points, with the aim of producing high-resolution images of the geological underground details. The challenge presented by Saudi Aramco is to solve the inverse problem for multiple point sources on the full elastic wave equation, taking into account all frequencies for the best resolution. The state-of-the-art methods use optimisation to find the seismic properties of the rocks, such that when used as the coefficients of the equations of a model, the measurements are reproduced as closely as possible. This process requires regularisation if one is to avoid instability. The approach can produce a realistic image but does not account for uncertainty arising, in general, from the existence of many different patterns of properties that also reproduce the measurements. In the Study Group a formulation of the problem was developed, based upon the principles of Bayesian statistics. First, the state-of-the-art optimisation method was shown to be a special case of the Bayesian formulation. This result immediately provides insight into the most appropriate regularisation methods. Then a practical implementation of a sequential sampling algorithm, using forms of the Ensemble Kalman Filter, was devised and explored.
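
    A single Ensemble Kalman Filter analysis step, of the kind such a sequential sampler builds on, can be sketched on a linear toy forward model. The update below uses the standard perturbed-observation EnKF with ensemble cross-covariances; the forward operator and all dimensions are illustrative stand-ins, not the Study Group's elastic-wave model.

```python
import numpy as np

def enkf_update(M, d_obs, forward, R, rng):
    """One EnKF analysis step for parameter estimation.
    M: (n_params, n_ens) ensemble; R: observation noise covariance."""
    D_pred = forward(M)                                  # predicted data, (n_obs, n_ens)
    n_ens = M.shape[1]
    Am = M - M.mean(axis=1, keepdims=True)               # parameter anomalies
    Ad = D_pred - D_pred.mean(axis=1, keepdims=True)     # data anomalies
    C_md = Am @ Ad.T / (n_ens - 1)                       # cross-covariance
    C_dd = Ad @ Ad.T / (n_ens - 1)                       # predicted-data covariance
    K = np.linalg.solve(C_dd + R, C_md.T).T              # ensemble Kalman gain
    E = rng.multivariate_normal(np.zeros(len(d_obs)), R, size=n_ens).T
    return M + K @ (d_obs[:, None] + E - D_pred)         # perturbed-obs update

rng = np.random.default_rng(3)
n_par, n_obs, n_ens = 5, 20, 200
G = rng.standard_normal((n_obs, n_par))                  # toy linear forward operator
m_true = rng.standard_normal(n_par)
noise_std = 0.1
R = noise_std ** 2 * np.eye(n_obs)
d_obs = G @ m_true + noise_std * rng.standard_normal(n_obs)

M_prior = rng.standard_normal((n_par, n_ens))            # prior ensemble N(0, I)
M_post = enkf_update(M_prior, d_obs, lambda M: G @ M, R, rng)
prior_err = np.linalg.norm(M_prior.mean(axis=1) - m_true)
post_err = np.linalg.norm(M_post.mean(axis=1) - m_true)
```

    Unlike a single optimised image, the updated ensemble carries a spread, so the spread of the posterior ensemble is precisely the uncertainty information that the optimisation-only approach discards.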

    On accuracy of PDF divergence estimators and their applicability to representative data sampling

    Generalisation error estimation is an important issue in machine learning. Cross-validation, traditionally used for this purpose, requires building multiple models and repeating the whole procedure many times in order to produce reliable error estimates. It is however possible to accurately estimate the error using only a single model, if the training and test data are chosen appropriately. This paper investigates the possibility of using various probability density function divergence measures for the purpose of representative data sampling. As it turned out, the first difficulty one needs to deal with is the estimation of the divergence itself. In contrast to other publications on this subject, the experimental results provided in this study show that in many cases this is not possible unless samples consisting of thousands of instances are used. Exhaustive experiments on divergence-guided representative data sampling have been performed using 26 publicly available benchmark datasets and 70 PDF divergence estimators, and their results have been analysed and discussed.
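
    The sample-size difficulty the abstract points at is easy to reproduce with one common divergence estimator. The sketch below is a k-nearest-neighbour estimate of KL(P||Q) from two samples, of the Wang–Kulkarni–Verdú type; it is one illustrative member of the estimator families studied, not the paper's full battery of 70, and the brute-force distance computation is only suitable for small samples.

```python
import numpy as np

def knn_kl_divergence(X, Y, k=5):
    """k-NN estimate of KL(P || Q) from samples X ~ P and Y ~ Q.
    Uses the ratio of k-th neighbour distances within X and into Y."""
    X = X.reshape(len(X), -1)
    Y = Y.reshape(len(Y), -1)
    n, d = X.shape
    m = Y.shape[0]
    dXX = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    dXY = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)
    rho = np.sort(dXX, axis=1)[:, k]        # k-NN distance within X (index 0 is self)
    nu = np.sort(dXY, axis=1)[:, k - 1]     # k-NN distance from X into Y
    return d * np.mean(np.log(nu / rho)) + np.log(m / (n - 1.0))

rng = np.random.default_rng(4)
n = 2000
p1 = rng.standard_normal(n)                 # P = N(0, 1)
p2 = rng.standard_normal(n)                 # independent second sample from P
q = rng.standard_normal(n) + 1.0            # Q = N(1, 1); true KL(P||Q) = 0.5
kl_same = knn_kl_divergence(p1, p2)         # should be near zero
kl_diff = knn_kl_divergence(p1, q)          # should be near 0.5
```

    Even in this easy one-dimensional Gaussian case, thousands of instances are needed before the estimate between identical distributions settles near zero, which is consistent with the abstract's finding that divergence estimation itself is the first obstacle to divergence-guided sampling.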