
    Training Strategies for Deep Learning Gravitational-Wave Searches

    Compact binary systems emit gravitational radiation that is potentially detectable by current Earth-bound detectors. Extracting these signals from the instruments' background noise is a complex problem, and the computational cost of most current searches depends on the complexity of the source model. Deep learning may be capable of finding signals where current algorithms hit computational limits. Here we restrict our analysis to signals from non-spinning binary black holes and systematically test different strategies by which training data are presented to the networks. To assess the impact of the training strategies, we re-analyze the first published networks and directly compare them to an equivalent matched-filter search. We find that the deep learning algorithms can generalize from low signal-to-noise ratio (SNR) signals to high-SNR ones, but not vice versa. As such, it is not beneficial to provide high-SNR signals during training, and the fastest convergence is achieved when low-SNR samples are provided early on. During testing we found that the networks are sometimes unable to recover any signals when a false-alarm probability <10^{-3} is required. We resolve this restriction by applying a modification we call unbounded Softmax replacement (USR) after training. With this alteration we find that the machine learning search retains ≥97.5% of the sensitivity of the matched-filter search down to a false-alarm rate of 1 per month.
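The saturation problem motivating USR can be illustrated with a minimal sketch. The variable names and toy logit values below are hypothetical, not taken from the paper; the point is only that a softmax output saturates at 1.0 for confident detections, so events can no longer be ranked at very low false-alarm probabilities, while an unbounded statistic built from the raw logits preserves their ordering.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Two-class logits (signal, noise) for two very confident detections.
logits = np.array([[40.0, 0.0],
                   [45.0, 0.0]])

# Softmax saturates: both events map to probabilities indistinguishable
# from 1.0 in floating point, so thresholds below ~1e-16 in false-alarm
# probability cannot separate them.
p_signal = softmax(logits)[:, 0]

# An unbounded statistic (here, the logit difference) keeps the events
# ranked: 45 - 0 is still louder than 40 - 0.
usr_stat = logits[:, 0] - logits[:, 1]
```

This is only a schematic of the effect; the paper applies its replacement to trained networks rather than toy arrays.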

    Improvement of the realisation of the mass scale

    The project 19RPT02 “Improvement of the realisation of the mass scale” (EMPIR [1] Call 2019 – Energy, Environment, Normative and Research Potential) has just started. Its aim is to improve the quality of one of the most important tasks in mass metrology, the realisation of the mass scale. Following the new definition of the kilogram, this technique is becoming even more important.

    Energy levels in polarization superlattices: a comparison of continuum strain models

    A theoretical model for the energy levels in polarization superlattices is presented. The model includes the effect of strain on the local polarization-induced electric fields and the subsequent effect on the energy levels. Two continuum strain models are contrasted. One is the standard strain model derived from Hooke's law that is typically used to calculate energy levels in polarization superlattices and quantum wells. The other is a fully-coupled strain model derived from the thermodynamic equation of state for piezoelectric materials. The latter is more complete and applicable to strongly piezoelectric materials where corrections to the standard model are significant. The underlying theory has been applied to AlGaN/GaN superlattices and quantum wells. It is found that the fully-coupled strain model yields very different electric fields from the standard model. The calculated intersubband transition energies are shifted by approximately 5–19 meV, depending on the structure. Thus, from a device standpoint, applying the fully-coupled model produces a very measurable shift in the peak wavelength. This result has implications for the design of AlGaN/GaN optical switches.
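For reference, a fully-coupled treatment of this kind is typically built on the standard linear piezoelectric constitutive relations (strain-charge form). This is a sketch in conventional notation; the paper's exact symbol choices and sign conventions may differ:

```latex
% Linear piezoelectric equations of state (strain-charge form):
%   S: strain, T: stress, E: electric field, D: electric displacement,
%   s^E: elastic compliance at constant field, d: piezoelectric moduli,
%   \varepsilon^T: permittivity at constant stress.
S_{ij} = s^{E}_{ijkl}\, T_{kl} + d_{kij}\, E_k
D_i    = d_{ikl}\, T_{kl} + \varepsilon^{T}_{ik}\, E_k
```

The standard model keeps only the mechanical (Hooke's-law) part of the first relation, while the fully-coupled model retains the piezoelectric cross-terms in both, which is what modifies the fields and energy levels in strongly piezoelectric materials such as AlGaN/GaN.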

    Bridging Nano and Micro-scale X-ray Tomography for Battery Research by Leveraging Artificial Intelligence

    X-ray Computed Tomography (X-ray CT) is a well-known non-destructive imaging technique where contrast originates from the materials' absorption coefficients. Novel battery characterization studies on increasingly challenging samples have been enabled by the rapid development of both synchrotron and laboratory-scale imaging systems as well as innovative analysis techniques. Furthermore, the recent development of laboratory nano-scale CT (NanoCT) systems has pushed the limits of battery material imaging towards voxel sizes previously achievable only using synchrotron facilities. Such systems are now able to reach spatial resolutions down to 50 nm. Given the non-destructive nature of CT, in-situ and operando studies have emerged as powerful methods to quantify morphological parameters, such as tortuosity factor, porosity, surface area, and volume expansion during battery operation or cycling. Combined with powerful Artificial Intelligence (AI)/Machine Learning (ML) analysis techniques, extracted 3D tomograms and battery-specific morphological parameters enable the development of predictive physics-based models that can provide valuable insights for battery engineering. These models can predict the impact of the electrode microstructure on cell performance or analyze the influence of material heterogeneities on electrochemical responses. In this work, we review the increasing role of X-ray CT experimentation in the battery field, discuss the incorporation of AI/ML in analysis, and provide a perspective on how the combination of multi-scale CT imaging techniques can expand the development of predictive multi-scale battery behavioral models.
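Two of the morphological parameters mentioned above, porosity and surface area, can be estimated directly from a segmented tomogram by voxel counting. The array below is a randomly generated stand-in for a real segmented NanoCT volume, and the 50 nm voxel size is taken from the resolution quoted above; everything else is an illustrative assumption.

```python
import numpy as np

# Hypothetical segmented NanoCT volume: 1 = pore, 0 = solid phase.
rng = np.random.default_rng(0)
tomogram = (rng.random((64, 64, 64)) < 0.3).astype(np.uint8)

# Porosity: fraction of pore voxels in the volume.
porosity = tomogram.mean()

# Crude surface-area estimate: count voxel faces where the phase label
# changes along each axis (a simple voxel-counting approximation that
# overestimates smooth interfaces).
faces = sum(np.count_nonzero(np.diff(tomogram, axis=a)) for a in range(3))
voxel_size = 50e-9  # metres per voxel edge (50 nm)
surface_area = faces * voxel_size**2  # in m^2
```

Production analyses use more sophisticated estimators (marching cubes for surfaces, flux-based solvers for tortuosity factor), but the voxel-counting version shows where these numbers come from.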

    MLGWSC-1: The first Machine Learning Gravitational-Wave Search Mock Data Challenge

    We present the results of the first Machine Learning Gravitational-Wave Search Mock Data Challenge (MLGWSC-1). For this challenge, participating groups had to identify gravitational-wave signals from binary black hole mergers of increasing complexity and duration embedded in progressively more realistic noise. The final of the 4 provided datasets contained real noise from the O3a observing run and signals up to a duration of 20 seconds with the inclusion of precession effects and higher order modes. We present the average sensitivity distance and runtime for the 6 entered algorithms derived from 1 month of test data unknown to the participants prior to submission. Of these, 4 are machine learning algorithms. We find that the best machine learning based algorithms are able to achieve up to 95% of the sensitive distance of matched-filtering based production analyses for simulated Gaussian noise at a false-alarm rate (FAR) of one per month. In contrast, for real noise, the leading machine learning search achieved 70%. For higher FARs the differences in sensitive distance shrink to the point where select machine learning submissions outperform traditional search algorithms at FARs ≥200 per month on some datasets. Our results show that current machine learning search algorithms may already be sensitive enough in limited parameter regions to be useful for some production settings. To improve the state-of-the-art, machine learning algorithms need to reduce the false-alarm rates at which they are capable of detecting signals and extend their validity to regions of parameter space where modeled searches are computationally expensive to run. Based on our findings we compile a list of research areas that we believe are the most important to elevate machine learning searches to an invaluable tool in gravitational-wave signal detection.
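The "sensitivity at a FAR of one per month" comparison above follows a standard recipe: set the detection threshold from signal-free background data, then measure what fraction of injected signals exceed it. The Gaussian toy statistics below are purely illustrative; real challenge submissions produce ranking statistics from detector strain, and sensitive distance additionally folds in the injected source distribution.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ranking statistics: one month of signal-free background
# triggers, and the statistics assigned to a set of simulated injections.
background = rng.normal(0.0, 1.0, size=100_000)
injections = rng.normal(4.0, 1.0, size=1_000)

# Threshold for a FAR of 1 per month over a 1-month background stretch:
# the loudest background event defines the cut.
threshold = background.max()

# Detection efficiency at that FAR: fraction of injections recovered.
# Sensitive distance is then derived from this efficiency together with
# the distances at which the signals were injected.
efficiency = np.mean(injections >= threshold)
```

Lowering the target FAR pushes the threshold up into the tail of the background distribution, which is exactly where the bounded outputs of some networks fail to separate events.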
