9 research outputs found

    MLGWSC-1: The first Machine Learning Gravitational-Wave Search Mock Data Challenge

    We present the results of the first Machine Learning Gravitational-Wave Search Mock Data Challenge (MLGWSC-1). For this challenge, participating groups had to identify gravitational-wave signals from binary black hole mergers of increasing complexity and duration, embedded in progressively more realistic noise. The final of the four provided datasets contained real noise from the O3a observing run and signals up to 20 seconds in duration, including precession effects and higher-order modes. We present the average sensitive distance and runtime for the six entered algorithms, four of which are machine learning algorithms, derived from one month of test data unknown to the participants prior to submission. We find that the best machine learning based algorithms are able to achieve up to 95% of the sensitive distance of matched-filtering based production analyses for simulated Gaussian noise at a false-alarm rate (FAR) of one per month. In contrast, for real noise, the leading machine learning search achieved 70%. For higher FARs the differences in sensitive distance shrink, to the point where select machine learning submissions outperform traditional search algorithms at FARs ≥ 200 per month on some datasets. Our results show that current machine learning search algorithms may already be sensitive enough in limited parameter regions to be useful for some production settings. To improve the state of the art, machine learning algorithms need to reduce the false-alarm rates at which they are capable of detecting signals and extend their validity to regions of parameter space where modeled searches are computationally expensive to run. Based on our findings, we compile a list of research areas that we believe are the most important to elevate machine learning searches to an invaluable tool in gravitational-wave signal detection.
    Comment: 25 pages, 6 figures, 4 tables; additional material available at https://github.com/gwastro/ml-mock-data-challenge-
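
    The comparison of searches at a fixed false-alarm rate described above can be sketched numerically. Everything below is an illustrative toy under assumed inputs (synthetic ranking statistics, synthetic injection distances, and a crude sensitive-volume proxy), not the challenge's actual evaluation code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions, not challenge data): ranking statistics
# from signal-free background noise, ranking statistics at injections,
# and the injections' luminosity distances in Mpc.
background = rng.normal(0.0, 1.0, size=100_000)      # one "month" of noise triggers
foreground = rng.normal(3.0, 1.5, size=1_000)        # statistics at injections
distances = rng.uniform(100.0, 2000.0, size=1_000)   # injection distances, Mpc

def threshold_at_far(background, far_per_month, duration_months):
    """Ranking-statistic threshold at the requested false-alarm rate.

    Exactly n_allowed background triggers exceed the returned value
    (assuming distinct statistics)."""
    n_allowed = int(far_per_month * duration_months)
    stats = np.sort(background)[::-1]  # loudest first
    return stats[n_allowed]

def sensitive_distance(foreground, distances, threshold):
    """Volume-weighted distance of recovered injections (toy proxy)."""
    found = foreground > threshold
    if not found.any():
        return 0.0
    # Crude proxy: mean of distance^3 over found injections, converted
    # back to a distance. Real analyses integrate the found fraction
    # as a function of distance.
    return np.mean(found * distances**3) ** (1.0 / 3.0)

thr = threshold_at_far(background, far_per_month=1, duration_months=1.0)
print(f"threshold at FAR = 1/month: {thr:.2f}")
print(f"toy sensitive distance: {sensitive_distance(foreground, distances, thr):.0f} Mpc")
```

    A real evaluation would estimate the background from signal-free or time-shifted data and integrate the recovery fraction over distance; this sketch only shows the thresholding logic.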

    OpenDR: An Open Toolkit for Enabling High Performance, Low Footprint Deep Learning for Robotics

    Existing Deep Learning (DL) frameworks typically do not provide ready-to-use solutions for robotics, where very specific learning, reasoning, and embodiment problems exist. Their relatively steep learning curve, the different methodologies DL employs compared to traditional approaches, and the high complexity of DL models, which often requires specialized hardware accelerators, further increase the effort and cost of employing DL in robotics. In addition, most existing DL methods follow a static inference paradigm inherited from traditional computer vision pipelines, ignoring active perception, which can be used to interact with the environment in order to increase perception accuracy. In this paper, we present the Open Deep Learning Toolkit for Robotics (OpenDR). OpenDR aims to develop an open, non-proprietary, efficient, and modular toolkit that robotics companies and research institutions can easily use to develop and deploy AI and cognition technologies in robotics applications, providing a solid step towards addressing the aforementioned challenges. We also detail the design choices, along with an abstract interface created to overcome these challenges. This interface can describe various robotic tasks, spanning beyond the traditional DL cognition and inference known from existing frameworks, and incorporates openness, homogeneity, and robotics-oriented perception (e.g., through active perception) as its core design principles.
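
    The kind of unified abstract interface the abstract describes can be sketched as follows. The class and method names here are assumptions for illustration, not OpenDR's actual API:

```python
from abc import ABC, abstractmethod

# Hypothetical unified learner interface: every robotic-task model
# exposes the same train/inference entry points, plus an optional
# hook for exporting a low-footprint optimized model.
class Learner(ABC):
    @abstractmethod
    def fit(self, dataset):
        """Train on a dataset and return self."""

    @abstractmethod
    def infer(self, data):
        """Run inference on new data."""

    def optimize(self, target="onnx"):
        """Optional: export an optimized model for embedded deployment."""
        raise NotImplementedError

class ThresholdDetector(Learner):
    """Trivial concrete learner: flags readings above the training mean."""
    def __init__(self):
        self.mean = 0.0

    def fit(self, dataset):
        self.mean = sum(dataset) / len(dataset)
        return self

    def infer(self, data):
        return [x > self.mean for x in data]

detector = ThresholdDetector().fit([1.0, 2.0, 3.0])
print(detector.infer([0.5, 2.5]))  # prints [False, True]
```

    The point of such an interface is that perception, learning, and deployment tooling can be written once against `Learner` rather than per model family.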

    Unveiling PET Hydrolase Surface Dynamics through Fluorescence Microscopy

    PET hydrolases are an emerging class of enzymes being heavily researched for their use in bioprocessing polyethylene terephthalate (PET). While work has been done on the binding of PET oligomers to the active site of these enzymes, the dynamics of PET hydrolases binding to a bulk PET surface is an unexplored area. Here, methods were developed for total internal reflection fluorescence (TIRF) microscopy and fluorescence recovery after photobleaching (FRAP) microscopy to study the adsorption and desorption dynamics of these proteins on a PET surface. TIRF microscopy was employed to measure both on and off rates of two of the most commonly studied PET hydrolases, PHL7 and LCC, on a PET surface. These proteins were found to have much slower off rates, on the order of 10⁻³ s⁻¹, comparable to non-productive binding in enzymes such as cellulases. In combination with FRAP microscopy, a dynamic model is proposed in which adsorption and desorption dominate over lateral diffusion across the surface. The results of this study could have implications for the future engineering of PET hydrolases, either to target them to a PET surface or to modulate their interaction with their substrate.
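
    An off rate like the one quoted above is typically extracted by fitting a single-exponential decay, N(t) = N₀·exp(−k_off·t), to the surface-bound signal. The sketch below recovers k_off from simulated data of that form; the numbers are illustrative assumptions, not the paper's measurements:

```python
import numpy as np

# Simulate a surface-bound fluorescence decay with a first-order
# desorption rate at the order of magnitude quoted in the abstract.
k_off_true = 1e-3                      # s^-1 (assumed, for illustration)
t = np.linspace(0, 3000, 50)           # seconds
counts = 1000.0 * np.exp(-k_off_true * t)

# Linearize: log N(t) = log N0 - k_off * t, then a least-squares fit
# of a straight line gives -k_off as the slope.
slope, intercept = np.polyfit(t, np.log(counts), 1)
k_off_fit = -slope
print(f"fitted k_off = {k_off_fit:.2e} s^-1")
```

    With real TIRF data one would fit noisy counts (often with a nonlinear least-squares fit and an offset term) rather than noiseless values, but the linearized fit shows the kinetic model.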

    Autoencoder-driven spiral representation learning for gravitational wave surrogate modelling

    Recently, artificial neural networks have been gaining momentum in the field of gravitational wave astronomy, for example in surrogate modelling of computationally expensive waveform models for binary black hole inspiral and merger. Surrogate modelling yields fast and accurate approximations of gravitational waves, and neural networks have been used in the final step of interpolating the coefficients of the surrogate model for arbitrary waveforms outside the training sample. We investigate the existence of underlying structures in the empirical interpolation coefficients using autoencoders. We demonstrate that when the coefficient space is compressed to only two dimensions, a spiral structure appears, wherein the spiral angle is linearly related to the mass ratio. Based on this finding, we design a spiral module with learnable parameters that is used as the first layer in a neural network, which learns to map the input space to the coefficients. The spiral module is evaluated on multiple neural network architectures and consistently achieves a better speed-accuracy trade-off than baseline models. A thorough experimental study is conducted, and the final result is a surrogate model which can evaluate millions of input parameters in a single forward pass in under 1 ms on a desktop GPU, while the mismatch between the generated waveforms and the ground-truth waveforms is better than that of the compared baseline methods. We anticipate analogous underlying structures, and corresponding computational gains, in the case of spinning black hole binaries.
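
    The core idea of the spiral module can be sketched as a fixed-form mapping from a scalar parameter (standing in for the mass ratio) to a 2-D spiral whose angle is linear in that parameter. The parameterization and parameter names below are assumptions for illustration; in the paper's module these quantities are learnable:

```python
import numpy as np

def spiral_features(q, a=1.0, b=0.5, omega=2.0 * np.pi, phi=0.0):
    """Map parameter q to a point on an Archimedean-style spiral.

    The angle is linear in q (matching the abstract's finding) and the
    radius grows linearly with the angle. a, b, omega, phi play the
    role of the module's learnable parameters.
    """
    theta = omega * q + phi          # spiral angle, linear in q
    r = a + b * theta                # radius
    return np.stack([r * np.cos(theta), r * np.sin(theta)], axis=-1)

q = np.linspace(0.0, 1.0, 5)
print(spiral_features(q))  # 2-D embedding fed to the rest of the network
```

    In a trained network this 2-D embedding would be the first layer's output, with subsequent dense layers mapping it to the empirical interpolation coefficients.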

    The development and validation of a health-related quality of life questionnaire for pre-school children with a chronic heart disease

    PURPOSE: Heart diseases are often associated with residual injuries, persisting functional restrictions, and long-term sequelae for psychosocial development. Currently, there are no disease-specific instruments to assess the health-related quality of life (HrQoL) of pre-school children. The aims of this study were to develop a parent-proxy instrument to measure the HrQoL of children aged 3-7 years with a heart disease and to confirm its validity and reliability. METHODS: Items for the Preschool Pediatric Cardiac Quality of Life Inventory (P-PCQLI) were generated through focus groups of caregivers. In a pilot study, comprehensibility and feasibility were tested. Five subdimensions were defined theoretically. Psychometric properties were analysed within a multicentre study with 167 parental caregivers. RESULTS: The final 52-item instrument yields a total score covering five moderately inter-correlated dimensions. The total score of the questionnaire showed very high internal consistency (Cronbach's α = 0.95). Test-retest correlation was r_tt = 0.96. External validity was indicated by higher correlations (r = 0.24-0.68) with a generic paediatric quality of life questionnaire (KINDL) compared to the Strengths and Difficulties Questionnaire (r = 0.17-0.59). Low P-PCQLI total scores were significantly associated with inpatient as opposed to outpatient treatment (t = 6.04, p < .001), with at least moderate disease severity (t = 5.05, p < .001; NYHA classification), and with poorer prognosis as estimated by the physician (t = 5.53, p < .001). CONCLUSIONS: The P-PCQLI is reliable and valid for pre-school children with a heart disease. It could be used as a screening instrument in routine care and for evaluation of HrQoL outcomes in clinical trials and intervention research.
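
    The internal-consistency statistic reported above, Cronbach's α = k/(k−1) · (1 − Σσᵢ²/σ_total²), can be computed as follows. The data below are synthetic, generated purely to exercise the formula, not study data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the total score
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Synthetic respondents: a shared latent trait plus item-level noise,
# so the 8 items are strongly inter-correlated and alpha is high.
rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 1))
items = latent + 0.5 * rng.normal(size=(200, 8))
print(f"alpha = {cronbach_alpha(items):.2f}")
```

    With 52 inter-correlated items, as in the P-PCQLI total score, an alpha near 0.95 indicates that the items largely measure a common construct.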