
    A Bayesian approach to discrete object detection in astronomical datasets

    A Bayesian approach is presented for detecting and characterising the signal from discrete objects embedded in a diffuse background. The approach centres around the evaluation of the posterior distribution for the parameters of the discrete objects, given the observed data, and defines the theoretically optimal procedure for parametrised object detection. Two alternative strategies are investigated: the simultaneous detection of all the discrete objects in the dataset, and the iterative detection of objects. In both cases, the parameter space characterising the object(s) is explored using Markov-Chain Monte-Carlo sampling. For the iterative detection of objects, another approach is to locate the global maximum of the posterior at each iteration using a simulated annealing downhill simplex algorithm. The techniques are applied to a two-dimensional toy problem consisting of Gaussian objects embedded in uncorrelated pixel noise. A cosmological illustration of the iterative approach is also presented, in which the thermal and kinetic Sunyaev-Zel'dovich effects from clusters of galaxies are detected in microwave maps dominated by emission from primordial cosmic microwave background anisotropies.
    Comment: 20 pages, 12 figures, accepted by MNRAS; contains some additional material in response to referee's comments
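
    The toy problem above lends itself to a compact illustration. Below is a minimal sketch, assuming a single Gaussian object with parameters (amplitude, centre, width) in uncorrelated pixel noise and flat priors with hard bounds; a Metropolis-Hastings sampler explores the posterior as in the strategies described, though every name and tuning value here is an illustrative assumption, not the authors' code.

```python
# A minimal sketch, assuming a single Gaussian object with parameters
# (amplitude A, centre x0, y0, width R) in uncorrelated pixel noise and
# flat priors with hard bounds. Illustrative only, not the authors' code.
import numpy as np

rng = np.random.default_rng(0)
N, sigma_pix = 32, 1.0                       # image size and pixel noise level
yy, xx = np.mgrid[0:N, 0:N]

def model(A, x0, y0, R):
    return A * np.exp(-((xx - x0)**2 + (yy - y0)**2) / (2 * R**2))

truth = (5.0, 14.0, 18.0, 3.0)
data = model(*truth) + rng.normal(0, sigma_pix, (N, N))

def log_post(theta):
    A, x0, y0, R = theta
    if not (0 < A < 20 and 0 <= x0 < N and 0 <= y0 < N and 0.5 < R < 10):
        return -np.inf                       # outside the flat-prior box
    resid = data - model(A, x0, y0, R)
    return -0.5 * np.sum(resid**2) / sigma_pix**2   # Gaussian log-likelihood

theta = np.array([3.0, N / 2, N / 2, 2.0])   # crude starting point
lp = log_post(theta)
step = np.array([0.3, 0.3, 0.3, 0.1])        # proposal widths (tuning assumed)
samples = []
for _ in range(20000):
    prop = theta + step * rng.normal(size=4)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:  # Metropolis accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta.copy())

print("posterior mean:", np.mean(samples[5000:], axis=0))  # discard burn-in
```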

    Parallel Deterministic and Stochastic Global Minimization of Functions with Very Many Minima

    The optimization of three problems with high dimensionality and many local minima is investigated under five different optimization algorithms: DIRECT, simulated annealing, Spall’s SPSA algorithm, the KNITRO package, and QNSTOP, a new algorithm developed at Indiana University.
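
    As an illustration of the kind of search these algorithms perform, here is a minimal simulated-annealing sketch on the Rastrigin function, a standard benchmark with very many local minima; the cooling schedule, step size, and dimension are illustrative assumptions, not settings from the study.

```python
# A minimal simulated-annealing sketch on the Rastrigin function, a standard
# benchmark with very many local minima. The cooling schedule, step size, and
# dimension are illustrative assumptions, not settings from the study.
import numpy as np

rng = np.random.default_rng(1)

def rastrigin(x):
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

dim = 10
x = rng.uniform(-5.12, 5.12, dim)
f = rastrigin(x)
best_x, best_f = x.copy(), f
T = 10.0
for _ in range(200000):
    cand = np.clip(x + rng.normal(0, 0.1 * max(T, 0.01), dim), -5.12, 5.12)
    fc = rastrigin(cand)
    # accept downhill moves always, uphill moves with Boltzmann probability
    if fc < f or rng.random() < np.exp(-(fc - f) / T):
        x, f = cand, fc
        if f < best_f:
            best_x, best_f = x.copy(), f
    T *= 0.99997                             # slow geometric cooling
print("best value", round(best_f, 4), "at", np.round(best_x, 2))
```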

    Improved decision support for engine-in-the-loop experimental design optimization

    Experimental optimization with hardware in the loop is a common procedure in engineering and has been the subject of intense development, particularly when it is applied to relatively complex combinatorial systems that are not completely understood, or where accurate modelling is not possible owing to the dimensions of the search space. A common source of difficulty arises from the level of noise associated with experimental measurements, a combination of limited instrument precision and extraneous factors. When a series of experiments is conducted to search for a combination of input parameters that results in a minimum or maximum response, the imposition of noise can make the underlying shape of the function being optimized very difficult to discern, or even lose it altogether. A common methodology to support the experimental search for optimal or suboptimal values is to use one of the many gradient descent methods. However, even sophisticated and proven methodologies, such as simulated annealing, can be significantly challenged in the presence of noise, since approximating the gradient at any point becomes highly unreliable. Often, experiments that should be rejected are accepted as a result of random noise, and vice versa. This is also true for other sampling techniques, including tabu search and evolutionary algorithms. After the general introduction, this paper is divided into two main sections (sections 2 and 3), which are followed by the conclusion. Section 2 introduces a decision support methodology based upon response surfaces, which supplements experimental management based on a variable neighbourhood search and is shown to be highly effective in directing experiments in the presence of significant noise and complex combinatorial functions. The methodology is developed on a three-dimensional surface with multiple local minima, a large basin of attraction, and a low signal-to-noise ratio. In section 3, the methodology is applied to an automotive combinatorial search in the laboratory, on a real-time engine-in-the-loop application. In this application, it is desired to find the maximum power output of an experimental single-cylinder spark ignition engine operating under a quasi-constant-volume operating regime. Under this regime, the piston is slowed at top dead centre to achieve combustion in close to constant-volume conditions. As part of the further development of the engine to incorporate a linear generator to investigate free-piston operation, it is necessary to perform a series of experiments with combinatorial parameters. The objective is to identify the maximum power point in the least number of experiments in order to minimize costs. This test programme provides peak power data in order to achieve an optimal electrical machine design. The decision support methodology is combined with standard optimization and search methods – namely gradient descent and simulated annealing – in order to study the reductions possible in experimental iterations. It is shown that the decision support methodology significantly reduces the number of experiments necessary to find the maximum power solution and thus offers a potentially significant cost saving to hardware-in-the-loop experimentation.
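
    To make the response-surface idea concrete, the sketch below fits a quadratic surface to a batch of noisy "experiments" by least squares and steps toward the fitted minimum, the same smoothing role the decision support layer plays. The test function, noise level, and design are illustrative assumptions; the paper's objective is a physical engine measurement, not a formula.

```python
# A minimal sketch of response-surface decision support under measurement
# noise: fit a quadratic surface to a batch of noisy "experiments" by least
# squares and move to its fitted minimum. The test function, noise level, and
# design are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)

def experiment(p):                           # stand-in for one noisy test run
    x, y = p
    return (x - 1)**2 + (y + 0.5)**2 + rng.normal(0, 0.5)

centre = np.array([3.0, 3.0])
for _ in range(8):
    pts = centre + rng.uniform(-1, 1, (15, 2))      # small design around centre
    z = np.array([experiment(p) for p in pts])
    # design matrix for z ~ a + b*x + c*y + d*x^2 + e*y^2 + f*x*y
    X = np.column_stack([np.ones(len(pts)), pts, pts**2, pts[:, 0] * pts[:, 1]])
    a, b, c, d, e, f = np.linalg.lstsq(X, z, rcond=None)[0]
    H = np.array([[2 * d, f], [f, 2 * e]])          # Hessian of the fitted surface
    if np.all(np.linalg.eigvals(H) > 0):            # step only toward a minimum
        centre = np.linalg.solve(H, -np.array([b, c]))
print("estimated optimum near:", np.round(centre, 2))
```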

    Catching Super Massive Black Hole Binaries Without a Net

    The gravitational wave signals from coalescing Supermassive Black Hole Binaries are prime targets for the Laser Interferometer Space Antenna (LISA). With optimal data processing techniques, the LISA observatory should be able to detect black hole mergers anywhere in the Universe. The challenge is to find ways to dig the signals out of a combination of instrument noise and the large foreground from stellar mass binaries in our own galaxy. The standard procedure of matched filtering against a grid of templates can be computationally prohibitive, especially when the black holes are spinning or the mass ratio is large. Here we develop an alternative approach based on Metropolis-Hastings sampling and simulated annealing that is orders of magnitude cheaper than a grid search. We demonstrate our approach on simulated LISA data streams that contain the signals from binary systems of Schwarzschild Black Holes, embedded in instrument noise and a foreground containing 26 million galactic binaries. The search algorithm is able to accurately recover the 9 parameters that describe the black hole binary without first having to remove any of the bright foreground sources, even when the black hole system has a low signal-to-noise ratio.
    Comment: 4 pages, 3 figures. Refined search algorithm; added low SNR example
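
    A minimal sketch of the tempered Metropolis-Hastings idea follows: the log-likelihood is divided by a temperature T that anneals down toward 1, so early proposals roam widely before the chain locks onto the signal. A sinusoid in Gaussian noise stands in for the inspiral waveform; every setting here is an illustrative assumption.

```python
# Metropolis-Hastings search with simulated annealing: the log-likelihood is
# tempered by 1/T, and T anneals down toward 1. A sinusoid in Gaussian noise
# stands in for the inspiral waveform; all settings are assumptions.
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 10, 2000)

def waveform(amp, freq):
    return amp * np.sin(2 * np.pi * freq * t)

data = waveform(1.0, 2.3) + rng.normal(0, 1.0, t.size)   # buried signal

def log_like(theta):
    return -0.5 * np.sum((data - waveform(*theta))**2)

theta = np.array([0.5, 1.0])                 # (amplitude, frequency) guess
ll = log_like(theta)
T = 100.0
for _ in range(30000):
    prop = theta + rng.normal(0, [0.05, 0.02])
    if prop[0] <= 0 or not (0.1 < prop[1] < 5.0):
        continue                             # reject out-of-range proposals
    ll_prop = log_like(prop)
    if np.log(rng.random()) < (ll_prop - ll) / T:        # tempered acceptance
        theta, ll = prop, ll_prop
    T = max(1.0, T * 0.9997)                 # anneal down to T = 1
print("recovered (amp, freq):", np.round(theta, 3))
```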

    Quantum-Assisted Learning of Hardware-Embedded Probabilistic Graphical Models

    Mainstream machine-learning techniques such as deep learning and probabilistic programming rely heavily on sampling from generally intractable probability distributions. There is increasing interest in the potential advantages of using quantum computing technologies as sampling engines to speed up these tasks or to make them more effective. However, some pressing challenges in state-of-the-art quantum annealers have to be overcome before we can assess their actual performance. The sparse connectivity, resulting from the local interaction between quantum bits in physical hardware implementations, is considered the most severe limitation on the construction of powerful generative unsupervised machine-learning models. Here we use embedding techniques to add redundancy to data sets, allowing us to increase the modeling capacity of quantum annealers. We illustrate our findings by training hardware-embedded graphical models on a binarized data set of handwritten digits and two synthetic data sets, in experiments with up to 940 quantum bits. Our model can be trained in quantum hardware without full knowledge of the effective parameters specifying the corresponding quantum Gibbs-like distribution; therefore, this approach avoids the need to infer the effective temperature at each iteration, speeding up learning. It also mitigates the effect of noise in the control parameters, making the method robust to deviations from the reference Gibbs distribution. Our approach demonstrates the feasibility of using quantum annealers for implementing generative models, and it provides a suitable framework for benchmarking these quantum technologies on machine-learning-related tasks.
    Comment: 17 pages, 8 figures. Minor further revisions. As published in Phys. Rev.
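
    The training loop such a quantum sampler would accelerate can be sketched classically. Below, a fully visible Boltzmann machine on ±1 units is trained with the gradient ⟨s_i s_j⟩_data − ⟨s_i s_j⟩_model, with classical Gibbs sampling standing in for the annealer's Gibbs-like samples; sizes, rates, and the toy data are illustrative assumptions.

```python
# A minimal classical sketch of the learning loop a quantum sampler would
# accelerate: a fully visible Boltzmann machine on +/-1 units, trained with
# the gradient <s_i s_j>_data - <s_i s_j>_model. Gibbs sampling stands in for
# the annealer's Gibbs-like samples; sizes, rates, and the toy data (pairs of
# perfectly correlated units) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
n, lr = 8, 0.05
base = rng.choice([-1, 1], size=(200, n // 2))
data = np.repeat(base, 2, axis=1)            # adjacent units always agree

W = np.zeros((n, n))

def gibbs_samples(W, n_samples=200, sweeps=20):
    s = rng.choice([-1, 1], size=(n_samples, n))
    for _ in range(sweeps):
        for i in range(n):
            h = s @ W[:, i]                  # local field on unit i
            p = 1 / (1 + np.exp(-2 * h))     # P(s_i = +1 | rest)
            s[:, i] = np.where(rng.random(n_samples) < p, 1, -1)
    return s

for epoch in range(50):
    model_s = gibbs_samples(W)
    grad = data.T @ data / len(data) - model_s.T @ model_s / len(model_s)
    np.fill_diagonal(grad, 0)                # no self-couplings
    W += lr * grad

print(np.round(W, 2))                        # strong couplings on the pairs
```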