
    Capturing Topology in Graph Pattern Matching

    Graph pattern matching is often defined in terms of subgraph isomorphism, an NP-complete problem. To lower its complexity, various extensions of graph simulation have been considered instead. These extensions allow pattern matching to be conducted in cubic time. However, they fall short of capturing the topology of data graphs: matches may have a structure drastically different from the pattern graphs they match, and the matches found are often too large to understand and analyze. To rectify these problems, this paper proposes a notion of strong simulation, a revision of graph simulation, for graph pattern matching. (1) We identify a set of criteria for preserving the topology of graphs matched. We show that strong simulation preserves the topology of data graphs and finds a bounded number of matches. (2) We show that strong simulation retains the same complexity as earlier extensions of simulation, by providing a cubic-time algorithm for computing it. (3) We present the locality property of strong simulation, which allows us to conduct pattern matching effectively on distributed graphs. (4) We experimentally verify the effectiveness and efficiency of these algorithms, using real-life and synthetic data.
    Comment: VLDB201
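Graph simulation, the relation that strong simulation revises, can be computed by a simple fixpoint refinement: start from all label-compatible candidates and repeatedly discard data nodes that cannot match some pattern edge. A minimal sketch of that classic cubic-time scheme (illustrative only, not the paper's strong-simulation algorithm; `pattern` and `data` are adjacency maps, `label_p`/`label_d` give node labels):

```python
def graph_simulation(pattern, data, label_p, label_d):
    """Compute the graph simulation relation sim: pattern node -> set of
    data nodes, by iterative refinement to a fixpoint."""
    # Initialize candidates by label match.
    sim = {u: {v for v in data if label_d[v] == label_p[u]} for u in pattern}
    changed = True
    while changed:
        changed = False
        for u in pattern:
            for u_child in pattern[u]:
                # v can simulate u only if some successor of v
                # simulates the child u_child of u.
                keep = {v for v in sim[u]
                        if any(w in sim[u_child] for w in data[v])}
                if keep != sim[u]:
                    sim[u] = keep
                    changed = True
    return sim
```

Because a data node is only ever removed from a candidate set (never re-added), the refinement terminates, and the result is the unique maximal simulation relation.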

    DECODE: DilatEd COnvolutional neural network for Detecting Extreme-mass-ratio inspirals

    The detection of extreme-mass-ratio inspirals (EMRIs) is intricate due to their complex waveforms, extended duration, and low signal-to-noise ratio (SNR), making them more challenging to identify than compact binary coalescences. While matched-filtering-based techniques are known for their computational demands, existing deep-learning-based methods primarily handle time-domain data and are often constrained by data duration and SNR. In addition, most existing work ignores time-delay interferometry (TDI) and applies the long-wavelength approximation in detector-response calculations, limiting its ability to handle laser frequency noise. In this study, we introduce DECODE, an end-to-end model focusing on EMRI signal detection by sequence modeling in the frequency domain. Centered around a dilated causal convolutional neural network trained on synthetic data with the TDI-1.5 detector response, DECODE can efficiently process a year's worth of multichannel TDI data with an SNR of around 50. We evaluate our model on one-year data with accumulated SNR ranging from 50 to 120 and achieve a true positive rate of 96.3% at a false positive rate of 1%, with an inference time of less than 0.01 seconds. With the visualization of three showcased EMRI signals for interpretability and generalization, DECODE exhibits strong potential for future space-based gravitational wave data analysis.
    Comment: 13 pages, 5 figures, and 2 tables
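The dilated causal convolution at the core of a model like DECODE keeps each output strictly causal (depending only on current and past samples) while letting the receptive field grow with the dilation factor. A minimal single-layer sketch in plain Python (illustrative only; the actual network stacks many such layers with learned kernels):

```python
def dilated_causal_conv1d(x, kernel, dilation):
    """Causal 1-D convolution with dilation: output[t] depends only on
    x[t], x[t - d], x[t - 2d], ... with zero left-padding, so no future
    samples leak into the output."""
    k = len(kernel)
    pad = (k - 1) * dilation          # left padding for causality
    xp = [0.0] * pad + list(x)
    # y[t] = sum_j kernel[j] * x[t - j*dilation]
    return [sum(kernel[j] * xp[t + pad - j * dilation] for j in range(k))
            for t in range(len(x))]
```

Stacking layers with dilations 1, 2, 4, ..., 2^(L-1) and kernel size k gives a receptive field of 1 + (k - 1)(2^L - 1) samples, which is the standard trick for covering very long sequences, such as a year of TDI data, at modest cost.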

    Dawning of a New Era in Gravitational Wave Data Analysis: Unveiling Cosmic Mysteries via Artificial Intelligence -- A Systematic Review

    Background: Artificial intelligence (AI), with its vast capabilities, has become an integral part of our daily interactions, particularly with the rise of sophisticated models like Large Language Models. These advancements have not only transformed human-machine interactions but have also paved the way for significant breakthroughs in various scientific domains.
    Aim of review: This review is centered on elucidating the profound impact of AI, especially deep learning, in the field of gravitational wave data analysis (GWDA). We aim to highlight the challenges faced by traditional GWDA methodologies and how AI emerges as a beacon of hope, promising enhanced accuracy, real-time processing, and adaptability.
    Key scientific concepts of review: Gravitational wave (GW) waveform modeling stands as a cornerstone in the realm of GW research, serving as a sophisticated method to simulate and interpret the intricate patterns and signatures of these cosmic phenomena. This modeling provides a deep understanding of the astrophysical events that produce gravitational waves. Next in line is GW signal detection, a refined technique that meticulously combs through extensive datasets, distinguishing genuine gravitational wave signals from the cacophony of background noise. This detection process is pivotal in ensuring the authenticity of observed events. Complementing this is GW parameter estimation, a method intricately designed to decode the detected signals, extracting crucial parameters that offer insights into the properties and origins of the waves. Lastly, the integration of AI for GW science has emerged as a transformative force. AI methodologies harness vast computational power and advanced algorithms to enhance the efficiency, accuracy, and adaptability of data analysis in GW research, heralding a new era of innovation and discovery in the field.

    Compact Binary Systems Waveform Generation with Generative Pre-trained Transformer

    Space-based gravitational wave detection is one of the most anticipated gravitational wave (GW) detection projects of the next decade and will detect abundant compact binary systems. However, the precise prediction of space GW waveforms remains unexplored. To address the data-processing difficulties posed by the increasing waveform complexity caused by the detectors' response and second-generation time-delay interferometry (TDI 2.0), an interpretable pre-trained large model named CBS-GPT (Compact Binary Systems Waveform Generation with Generative Pre-trained Transformer) is proposed. For compact binary system waveforms, three models were trained to predict the waveforms of massive black hole binaries (MBHB), extreme-mass-ratio inspirals (EMRIs), and galactic binaries (GB), achieving prediction accuracies of 98%, 91%, and 99%, respectively. The CBS-GPT model exhibits notable interpretability: its hidden parameters effectively capture the intricate information of waveforms, even under a complex instrument response and across a wide parameter range. Our research demonstrates the potential of large pre-trained models in gravitational wave data processing, opening up new opportunities for future tasks such as gap completion, GW signal detection, and signal noise reduction.
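The GPT framing of waveform generation amounts to autoregressive continuation: the network repeatedly predicts the next waveform sample (or patch) from everything produced so far. A schematic loop, where `model` is a stand-in callable for the trained network (a hypothetical interface for illustration, not the CBS-GPT API):

```python
def generate_autoregressive(model, prompt, n_steps):
    """Autoregressive waveform continuation: feed the sequence so far
    into the model and append its prediction for the next sample.
    'model' maps a history list to the next value."""
    seq = list(prompt)
    for _ in range(n_steps):
        seq.append(model(seq))  # next-sample prediction from full history
    return seq
```

For example, a toy "model" doing linear extrapolation, `lambda s: 2 * s[-1] - s[-2]`, continues `[0.0, 1.0]` into a straight line; the real network replaces that lambda with a transformer conditioned on the full multichannel TDI history.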

    Rotor retaining sleeve design for a 1.12-MW high-speed PM machine

    Permanent-magnet (PM) synchronous machines (PMSMs) can provide excellent performance in terms of torque density, energy efficiency, and controllability. However, PMs on the rotor are subject to large centrifugal forces, which may break their physical integrity, particularly at high-speed operation. Typically, PMs are bound with carbon fiber or retained by alloy sleeves on the rotor surface. This paper is concerned with the design of a rotor retaining sleeve for a 1.12-MW, 18-kr/min PM machine; its electromagnetic performance is investigated by the 2-D finite-element method (FEM). Theoretical and numerical analyses of the rotor stress are carried out. For the carbon-fiber protective measure, the stresses of three PM configurations and three pole-filler materials are compared in terms of operating temperature, rotor speed, retaining-sleeve thickness, and interference fit. Then, a new hybrid protective measure is proposed and analyzed by the 2-D FEM for operational speeds up to 22 kr/min (1.2 times the rated speed). The rotor losses and machine temperatures with the carbon-fiber retaining sleeve and the hybrid retaining sleeve are compared, and the sleeve design is refined. Two rotors using both designs are prototyped and experimentally tested to validate the effectiveness of the developed techniques for PM machines. The developed retaining sleeve makes it possible to operate megawatt PM machines at high speeds of 22 kr/min, opening doors for many high-power, high-speed applications such as turbo-generators, aerospace, and submarine motor drives.
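The centrifugal loading that motivates the retaining sleeve can be estimated to first order with the textbook thin-ring hoop-stress formula, sigma = rho * omega^2 * r^2. A quick sketch (the density and radius in the usage note are illustrative assumptions, not values from the paper, whose analysis uses a full 2-D FEM of the sleeve/magnet geometry):

```python
import math

def rim_hoop_stress(density, speed_rpm, radius):
    """First-cut tangential (hoop) stress in a thin rotating ring,
    sigma = rho * omega^2 * r^2, in pascals.
    density in kg/m^3, speed in r/min, radius in m."""
    omega = speed_rpm * 2.0 * math.pi / 60.0  # angular speed, rad/s
    return density * omega ** 2 * radius ** 2
```

With assumed values of 7500 kg/m^3 and a 60-mm rim radius at the rated 18 kr/min, this gives a stress on the order of 10^8 Pa (roughly 100 MPa), which illustrates why sintered magnets, weak in tension, need a carbon-fiber or alloy sleeve at such speeds.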