    Danae++: A smart approach for denoising underwater attitude estimation

    One of the main challenges in the navigation of underwater robots is accurate vehicle positioning, which depends heavily on the orientation estimation phase. The systems employed to this end are affected by several types of noise, arising mainly from the sensors and from the irregular noise of the underwater environment. Filtering algorithms can reduce these effects if properly configured, but the configuration usually requires sophisticated techniques and time. This paper presents DANAE++, an improved denoising autoencoder based on DANAE (deep Denoising AutoeNcoder for Attitude Estimation), which is able to recover Kalman Filter (KF) IMU/AHRS orientation estimates from any kind of noise, independently of its nature. The original deep learning-based architecture already proved robust and reliable, and the enhanced implementation achieves significant improvements in both accuracy and performance. In particular, DANAE++ denoises the three attitude angles simultaneously, a result also verified on estimates produced by an extended KF. Further tests could make this method suitable for real-time navigation tasks.
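
    As a rough illustration of the approach described above, a minimal denoising autoencoder for the three attitude angles might look like the PyTorch sketch below. The window length, layer widths, and training loop are illustrative assumptions, not the DANAE++ configuration.

        import torch
        import torch.nn as nn

        class AttitudeDAE(nn.Module):
            """Denoising autoencoder over windows of noisy (roll, pitch, yaw) estimates."""
            def __init__(self):
                super().__init__()
                # Encoder compresses a (3, window) block of noisy KF angle estimates.
                self.encoder = nn.Sequential(
                    nn.Conv1d(3, 16, kernel_size=5, padding=2), nn.ReLU(),
                    nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
                )
                # Decoder reconstructs the clean angles at the same resolution.
                self.decoder = nn.Sequential(
                    nn.Conv1d(32, 16, kernel_size=5, padding=2), nn.ReLU(),
                    nn.Conv1d(16, 3, kernel_size=5, padding=2),
                )

            def forward(self, x):  # x: (batch, 3, window)
                return self.decoder(self.encoder(x))

        model = AttitudeDAE()
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

        # Placeholder tensors: 'noisy' stands in for KF/EKF attitude estimates,
        # 'clean' for ground-truth angles from a reference system.
        noisy, clean = torch.randn(8, 3, 64), torch.randn(8, 3, 64)
        loss = nn.MSELoss()(model(noisy), clean)
        loss.backward()
        optimizer.step()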

    Detecting and Denoising Gravitational Wave Signals from Binary Black Holes using Deep Learning

    We present a convolutional neural network, designed in the auto-encoder configuration, that can detect and denoise astrophysical gravitational waves from merging black hole binaries orders of magnitude faster than the conventional matched-filtering detection currently employed at advanced LIGO (aLIGO). The architecture learns from a sparse representation of the data in the time-frequency domain and constructs a non-linear mapping from this representation into two separate masks, one for signal and one for noise, facilitating the separation of the two from the raw data. This approach is the first of its kind to apply machine-learning-based gravitational wave detection/denoising to a 2D representation of gravitational wave data. We applied our formalism to the first gravitational wave event detected, GW150914, successfully recovering the signal at all three phases of coalescence at both detectors. The method is further tested on gravitational wave data from the second observing run (O2) of aLIGO, reproducing all binary black hole mergers detected in O2 at both aLIGO detectors. The Neural-Net appears to have uncovered a pattern of 'ringing' after the ringdown phase of the coalescence, a feature not present in the conventional binary merger templates. The method can also interpolate and extrapolate between modeled templates and explore gravitational waves that are unmodeled and hence absent from the template bank used in the matched-filtering detection pipelines. Faster and more efficient detection schemes such as this will be instrumental as ground-based detectors reach their design sensitivity, which is likely to yield several hundred potential detections within a few months of observing runs.
    Comment: 15 pages, 11 figures
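
    The masking idea is simple to sketch: a network maps a time-frequency image to two per-pixel masks whose products with the input give the signal and noise components. The layer sizes and input shape below are illustrative assumptions, not the paper's architecture.

        import torch
        import torch.nn as nn

        class MaskNet(nn.Module):
            """Maps a spectrogram to two masks separating signal from noise."""
            def __init__(self):
                super().__init__()
                self.body = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 2, 3, padding=1),  # two mask logits per pixel
                )

            def forward(self, spec):  # spec: (batch, 1, freq, time)
                masks = torch.softmax(self.body(spec), dim=1)  # masks sum to 1 per pixel
                signal = masks[:, :1] * spec  # estimated signal component
                noise = masks[:, 1:] * spec   # estimated noise component
                return signal, noise

        spec = torch.randn(4, 1, 128, 128)  # placeholder spectrogram batch
        signal_est, noise_est = MaskNet()(spec)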

    Dawning of a New Era in Gravitational Wave Data Analysis: Unveiling Cosmic Mysteries via Artificial Intelligence -- A Systematic Review

    Background: Artificial intelligence (AI), with its vast capabilities, has become an integral part of our daily interactions, particularly with the rise of sophisticated models such as Large Language Models. These advancements have not only transformed human-machine interaction but have also paved the way for significant breakthroughs in various scientific domains. Aim of review: This review elucidates the profound impact of AI, especially deep learning, on gravitational wave data analysis (GWDA). We highlight the challenges faced by traditional GWDA methodologies and show how AI promises enhanced accuracy, real-time processing, and adaptability. Key scientific concepts of review: Gravitational wave (GW) waveform modeling is a cornerstone of GW research, providing a method to simulate and interpret the intricate patterns and signatures of these cosmic phenomena and a deep understanding of the astrophysical events that produce them. Next is GW signal detection, which meticulously combs through extensive datasets to distinguish genuine gravitational wave signals from background noise; this detection process is pivotal in ensuring the authenticity of observed events. Complementing this is GW parameter estimation, which decodes the detected signals and extracts the parameters that reveal the properties and origins of the waves. Lastly, the integration of AI into GW science has emerged as a transformative force: AI methodologies harness vast computational power and advanced algorithms to enhance the efficiency, accuracy, and adaptability of data analysis in GW research, heralding a new era of innovation and discovery in the field.
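
    For context on the matched-filtering baseline that the reviewed AI methods aim to accelerate, a toy NumPy illustration follows. The chirp template, noise level, and normalization are illustrative only; real pipelines whiten the data and weight by the detector's noise power spectral density.

        import numpy as np

        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 1.0, 400)
        template = np.sin(2 * np.pi * (20 * t + 15 * t ** 2))  # toy chirp template

        data = rng.normal(size=4096)        # placeholder noise-only strain
        data[2000:2400] += 0.5 * template   # bury a weak signal in the noise

        # Slide the template across the data; a peak marks a candidate event.
        stat = np.correlate(data, template, mode="valid") / np.sqrt(np.sum(template ** 2))
        print("peak detection statistic at sample", int(np.argmax(np.abs(stat))))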

    Multioutput regression of noisy time series using convolutional neural networks with applications to gravitational waves

    In this thesis I implement a deep learning algorithm to perform multioutput regression. The dataset is a collection of one-dimensional time series arrays, corresponding to simulated gravitational waveforms emitted by a black hole binary and labelled by the masses of the two black holes. White Gaussian noise is added to the arrays to simulate signal detection in the presence of noise. A convolutional neural network is trained to infer the output labels in the presence of noise, and the resulting model generalizes over many orders of magnitude in the noise level. From the results I argue that the hidden layers of the model successfully denoise the signals before the inference step. The entire code is implemented as a Python module, and the neural network is written in PyTorch. Training is accelerated on a single GPU, and I report on efforts to improve the scaling of the training time with the size of the training sample.
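
    A minimal sketch of this setup in PyTorch follows: a 1D CNN regressing two labels (the component masses) from a noisy waveform. The sequence length, layer sizes, and noise level are illustrative assumptions, not the thesis configuration.

        import torch
        import torch.nn as nn

        class MassRegressor(nn.Module):
            """1D CNN mapping a noisy waveform to two regression targets (m1, m2)."""
            def __init__(self, length=1024):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
                    nn.MaxPool1d(4),
                    nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
                    nn.MaxPool1d(4),
                )
                self.head = nn.Linear(32 * (length // 16), 2)  # outputs (m1, m2)

            def forward(self, x):  # x: (batch, 1, length)
                return self.head(self.features(x).flatten(1))

        model = MassRegressor()
        waveforms = torch.randn(8, 1, 1024)   # placeholder simulated waveforms
        masses = 10 + 40 * torch.rand(8, 2)   # placeholder mass labels
        noisy = waveforms + 0.1 * torch.randn_like(waveforms)  # additive white noise

        loss = nn.MSELoss()(model(noisy), masses)
        loss.backward()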