
    Singular value decomposition applied to compact binary coalescence gravitational-wave signals

    We investigate the application of the singular value decomposition to compact binary gravitational-wave data analysis. We find that the truncated singular value decomposition reduces the number of filters required to analyze a given region of the parameter space of compact binary coalescence waveforms by an order of magnitude while retaining high reconstruction accuracy. We also compute an analytic expression for the expected signal loss due to the singular value decomposition truncation. Comment: 4 figures, 6 pages.
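    Below is a minimal NumPy sketch of the truncated-SVD idea: stack a bank of template filters as rows of a matrix, keep only the leading singular vectors, and reconstruct the originals from a small set of basis filters. The template bank here is a random stand-in (real compact-binary banks are highly correlated, which is what makes the truncation effective); the names and power threshold are illustrative, not the paper's implementation.

        import numpy as np

        rng = np.random.default_rng(0)
        templates = rng.standard_normal((256, 4096))   # stand-in for a bank of whitened template filters

        # Rows of Vt are orthonormal basis filters; s are the singular values.
        U, s, Vt = np.linalg.svd(templates, full_matrices=False)

        # Truncate so the retained singular values carry all but a tiny fraction of the total power.
        frac = np.cumsum(s**2) / np.sum(s**2)
        k = int(np.searchsorted(frac, 1.0 - 1e-6)) + 1

        basis_filters = Vt[:k]            # reduced set of filters to correlate against the data
        coeffs = U[:, :k] * s[:k]         # coefficients that rebuild each original template

        reconstruction = coeffs @ basis_filters
        print(k, np.max(np.abs(reconstruction - templates)))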

    ZAP -- Enhanced PCA Sky Subtraction for Integral Field Spectroscopy

    We introduce Zurich Atmosphere Purge (ZAP), an approach to sky subtraction based on principal component analysis (PCA) that we have developed for the Multi Unit Spectroscopic Explorer (MUSE) integral field spectrograph. ZAP employs filtering and data segmentation to enhance the inherent capabilities of PCA for sky subtraction. Extensive testing shows that ZAP reduces sky emission residuals while robustly preserving the flux and line shapes of astronomical sources. The method works in a variety of observational situations, from sparse fields with a low density of sources to filled fields in which the target source fills the field of view. Covering both of these situations makes the method generally applicable to many different science cases, and it should also be useful for other instrumentation. ZAP is available for download at http://muse-vlt.eu/science/tools. Comment: 12 pages, 7 figures, 1 table. Accepted to MNRAS.
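    As a rough illustration of PCA-based sky subtraction (not the actual ZAP pipeline), the sketch below applies a crude per-spaxel continuum filter, fits principal components across spaxels to obtain sky eigenspectra, and subtracts the reconstructed sky model; the array shapes and component count are assumptions.

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(1)
        spectra = rng.standard_normal((500, 300))    # stand-in (num_spaxels x num_wavelengths) data

        # Crude per-spaxel continuum filter so the PCA mostly sees line-dominated residuals.
        continuum = np.median(spectra, axis=1, keepdims=True)
        residual = spectra - continuum

        pca = PCA(n_components=20)                   # number of sky eigenspectra retained
        scores = pca.fit_transform(residual)
        sky_model = pca.inverse_transform(scores)    # residual variance attributed to sky

        sky_subtracted = spectra - sky_model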

    A comparative study on global wavelet and polynomial models for nonlinear regime-switching systems

    A comparative study of wavelet and polynomial models for non-linear Regime-Switching (RS) systems is carried out. RS systems, as considered in this study, are a class of severely non-linear systems that exhibit abrupt changes or dramatic breaks in behaviour due to regime switching triggered by associated events. Both wavelet and polynomial models are used to describe discontinuous dynamical systems, where it is assumed that no a priori information about the inherent model structure or the regime switches of the underlying dynamics is available, only observed input-output data. An Orthogonal Least Squares (OLS) algorithm, guided by an Error Reduction Ratio (ERR) index and regularised by an Approximate Minimum Description Length (AMDL) criterion, is used to construct parsimonious wavelet and polynomial models. The performance of the resulting wavelet models is compared with that of the corresponding polynomial models by inspecting the predictive capability of the associated representations. Numerical results show that wavelet models are superior to polynomial models, in terms of generalisation properties, for describing severely non-linear RS systems.
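    The following is a simplified sketch of forward OLS term selection driven by the Error Reduction Ratio, without the AMDL regularisation used in the paper; the stopping tolerance and toy data are assumptions.

        import numpy as np

        def ols_err(candidates, y, err_tol=1e-3):
            """Greedily select columns of `candidates` by their error reduction ratio."""
            selected, basis = [], []
            total_energy = y @ y
            for _ in range(candidates.shape[1]):
                best_err, best_idx, best_w = 0.0, None, None
                for j in range(candidates.shape[1]):
                    if j in selected:
                        continue
                    w = candidates[:, j].astype(float).copy()
                    for q in basis:                          # orthogonalise against chosen terms
                        w -= (q @ candidates[:, j]) / (q @ q) * q
                    if w @ w < 1e-12:                        # candidate is linearly dependent
                        continue
                    err = (w @ y) ** 2 / ((w @ w) * total_energy)
                    if err > best_err:
                        best_err, best_idx, best_w = err, j, w
                if best_idx is None or best_err < err_tol:
                    break
                selected.append(best_idx)
                basis.append(best_w)
            return selected

        # Toy check: y depends only on candidate regressors 0 and 2.
        rng = np.random.default_rng(4)
        candidates = rng.standard_normal((200, 10))
        y = 2.0 * candidates[:, 0] - 1.5 * candidates[:, 2] + 0.01 * rng.standard_normal(200)
        print(ols_err(candidates, y))    # expect [0, 2] (in some order)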

    A machine learning approach to portfolio pricing and risk management for high-dimensional problems

    We present a general framework for portfolio risk management in discrete time, based on a replicating martingale. This martingale is learned from a finite sample in a supervised setting. The model learns the features necessary for an effective low-dimensional representation, overcoming the curse of dimensionality common to function approximation in high-dimensional spaces. We show results based on polynomial and neural network bases. Both offer superior results to naive Monte Carlo methods and to other existing methods such as least-squares Monte Carlo and replicating portfolios. Comment: 30 pages (main), 10 pages (appendix), 3 figures, 22 tables.
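    A toy sketch of the supervised-regression idea is given below: simulated discounted payoffs are regressed on polynomial features of the risk factors to approximate a conditional expectation, in the spirit of least-squares Monte Carlo rather than the paper's replicating-martingale construction; the factor dynamics and payoff are entirely hypothetical.

        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import PolynomialFeatures

        rng = np.random.default_rng(2)
        n_paths, n_factors = 10_000, 5

        x_t = rng.standard_normal((n_paths, n_factors))                # risk factors at time t
        x_T = x_t + 0.3 * rng.standard_normal((n_paths, n_factors))    # factors at maturity
        payoff = np.maximum(x_T.sum(axis=1), 0.0)                      # toy terminal payoff

        # Polynomial basis + ridge regression approximates E[payoff | x_t].
        model = make_pipeline(PolynomialFeatures(degree=3), Ridge(alpha=1e-3))
        model.fit(x_t, payoff)
        value_t = model.predict(x_t)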

    Wavelet methods in speech recognition

    In this thesis, novel wavelet techniques are developed to improve the parametrization of speech signals prior to classification. It is shown that non-linear operations carried out in the wavelet domain improve the performance of a speech classifier and consistently outperform classical Fourier methods. This is because of the localised nature of the wavelet, which captures correspondingly well-localised time-frequency features within the speech signal. Furthermore, by taking advantage of the approximation ability of wavelets, an efficient representation of the non-stationarity inherent in speech can be achieved in a relatively small number of expansion coefficients. This is an attractive option when faced with the so-called 'Curse of Dimensionality' affecting multivariate classifiers such as Linear Discriminant Analysis (LDA) or Artificial Neural Networks (ANNs). Conventional time-frequency analysis methods such as the Discrete Fourier Transform either miss irregular signal structures and transients due to spectral smearing or require a large number of coefficients to represent such characteristics efficiently. Wavelet theory offers an alternative insight into the representation of these types of signals. As an extension to the standard wavelet transform, adaptive libraries of wavelet and cosine packets are introduced, which increase the flexibility of the transform. This approach is observed to be even better suited to the highly variable nature of speech signals, in that it results in a time-frequency sampling grid that is well adapted to irregularities and transients, and it yields a corresponding reduction in the misclassification rate of the recognition system. However, this is necessarily at the expense of added computing time. Finally, a framework based on adaptive time-frequency libraries is developed which lets the final classifier choose the time-frequency resolution for a given classification problem. The classifier then performs dimensionality reduction on the transformed signal by choosing the top few features based on their discriminant power. This approach is compared and contrasted with an existing discriminant wavelet feature extractor. The overall conclusions of the thesis are that wavelets and their relatives are capable of extracting useful features for speech classification problems, and that adaptive wavelet transforms provide the flexibility within which powerful feature extractors can be designed for these types of application.
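    A hedged sketch of one such feature extractor follows: wavelet-packet sub-band log-energies computed with PyWavelets, fed to an LDA classifier. The frame length, wavelet, decomposition level, and synthetic data are all assumptions, not the thesis's exact configuration.

        import numpy as np
        import pywt
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(3)
        frames = rng.standard_normal((200, 256))     # stand-in fixed-length speech frames
        labels = rng.integers(0, 2, size=200)        # stand-in two-class phoneme labels

        def wp_energy_features(frame, wavelet="db4", level=4):
            """Sub-band log-energies from a wavelet packet decomposition of one frame."""
            wp = pywt.WaveletPacket(frame, wavelet=wavelet, maxlevel=level)
            nodes = wp.get_level(level, order="freq")
            return np.log([np.sum(n.data ** 2) + 1e-12 for n in nodes])

        features = np.array([wp_energy_features(f) for f in frames])
        clf = LinearDiscriminantAnalysis().fit(features, labels)
        print(clf.score(features, labels))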

    Effective-one-body waveforms for binary neutron stars using surrogate models

    Gravitational-wave observations of binary neutron star systems can provide information about the masses, spins, and structure of neutron stars. However, this requires accurate and computationally efficient waveform models that take <1s to evaluate for use in Bayesian parameter estimation codes that perform 10^7 - 10^8 waveform evaluations. We present a surrogate model of a nonspinning effective-one-body waveform model with l = 2, 3, and 4 tidal multipole moments that reproduces waveforms of binary neutron star numerical simulations up to merger. The surrogate is built from compact sets of effective-one-body waveform amplitude and phase data that each form a reduced basis. We find that 12 amplitude and 7 phase basis elements are sufficient to reconstruct any binary neutron star waveform with a starting frequency of 10Hz. The surrogate has maximum errors of 3.8% in amplitude (0.04% excluding the last 100M before merger) and 0.043 radians in phase. The version implemented in the LIGO Algorithm Library takes ~0.07s to evaluate for a starting frequency of 30Hz and ~0.8s for a starting frequency of 10Hz, resulting in a speed-up factor of ~10^3 - 10^4 relative to the original Matlab code. This allows parameter estimation codes to run in days to weeks rather than years, and we demonstrate this with a Nested Sampling run that recovers the masses and tidal parameters of a simulated binary neutron star system. Comment: 17 pages, 11 figures, submitted to PR
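    The reduced-basis construction can be illustrated schematically: build a basis by SVD over a training set of waveforms, then interpolate the projection coefficients across a (here one-dimensional, hypothetical) parameter so new waveforms are cheap to evaluate. The waveform family, basis size, and spline interpolation below are assumptions for illustration, not the paper's effective-one-body surrogate.

        import numpy as np
        from scipy.interpolate import CubicSpline

        params = np.linspace(1.0, 2.0, 50)                    # stand-in intrinsic parameter
        t = np.linspace(0.0, 1.0, 1024)
        training = np.array([np.sin(2 * np.pi * (10 + 5 * p) * t) * (1 + p * t)
                             for p in params])                # stand-in waveform training set

        U, s, Vt = np.linalg.svd(training, full_matrices=False)
        k = 12                                                # basis elements retained
        basis = Vt[:k]
        coeffs = training @ basis.T                           # projection coefficients per waveform

        # Fit the variation of each coefficient across the parameter with a spline.
        coeff_splines = [CubicSpline(params, coeffs[:, j]) for j in range(k)]

        def surrogate(p):
            """Evaluate the surrogate waveform at a new parameter value p."""
            c = np.array([spl(p) for spl in coeff_splines])
            return c @ basis

        exact = np.sin(2 * np.pi * (10 + 5 * 1.37) * t) * (1 + 1.37 * t)
        print(np.max(np.abs(surrogate(1.37) - exact)))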