485 research outputs found

    Machine learning-based anomaly detection for radio telescopes

    Radio telescopes are getting bigger and are generating increasing amounts of data to improve their sensitivity and resolution. The growing system size and resulting complexity increase the likelihood of unexpected events, producing datasets containing anomalies. These events include failures in instrument electronics, miscalibrated observations, environmental and astronomical effects such as lightning and solar storms, and problems in data processing systems, among many more. Currently, efforts to diagnose and mitigate these events are performed by human operators, who manually inspect intermediate data products to determine the success or failure of a given observation. The accelerating data rates, coupled with the lack of automation, make operator-based data quality inspection increasingly infeasible. This thesis focuses on applying machine learning-based anomaly detection to spectrograms obtained from the LOFAR telescope for the purpose of System Health Management (SHM). It does this across several chapters, each focusing on a different aspect of SHM in radio telescopes. We provide an overview of the data processing systems in LOFAR so as to create a workflow for SHM that could effectively be integrated into the scientific data processing pipeline.
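    A common framing of this kind of anomaly detection (the abstract does not specify the thesis's exact model) is to learn a low-dimensional representation of nominal data and flag observations that reconstruct poorly. The sketch below illustrates the idea with a toy PCA-based scorer on synthetic "spectrogram patches"; all names and the rank-4 subspace are assumptions for illustration only:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy stand-in for LOFAR spectrograms: each row is a flattened
    # time-frequency patch. Nominal data lie near a low-rank subspace.
    basis = rng.normal(size=(4, 64))
    normal = rng.normal(size=(500, 4)) @ basis + 0.05 * rng.normal(size=(500, 64))

    # "Train": estimate the nominal subspace with a truncated SVD (PCA).
    mean = normal.mean(axis=0)
    _, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
    components = vt[:4]                      # top principal directions

    def anomaly_score(x):
        """Reconstruction error of a patch projected onto the nominal subspace."""
        centered = x - mean
        recon = centered @ components.T @ components
        return np.linalg.norm(centered - recon)

    nominal_patch = rng.normal(size=4) @ basis
    anomalous_patch = nominal_patch + rng.normal(size=64)  # RFI-like corruption

    s_nominal = anomaly_score(nominal_patch)
    s_anomalous = anomaly_score(anomalous_patch)
    ```

    Thresholding such a score would let an automated pipeline triage observations before any operator inspection; a deep autoencoder plays the same role as the PCA projection here for nonlinear structure.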

    Pushing the Boundaries of Biomolecule Characterization through Deep Learning

    The importance of studying biological molecules in living organisms can hardly be overstated, as they regulate crucial processes in living matter of all kinds. Their ubiquitous nature makes them relevant for disease diagnosis, drug development, and our fundamental understanding of the complex systems of biology. However, due to their small size, they scatter too little light on their own to be directly visible and available for study. Thus, it is necessary to develop characterization methods that enable their elucidation even in the regime of very faint signals. Optical systems, utilizing the relatively low intrusiveness of visible light, constitute one such approach to characterization. However, the optical systems currently capable of analyzing single molecules in the nano-sized regime either require the species of interest to be tagged with visible labels such as fluorescent markers, or to be chemically restrained on a surface for analysis. Ergo, there are effectively no methods for characterizing very small biomolecules under naturally relevant conditions through unobtrusive probing. Nanofluidic Scattering Microscopy, a method introduced in this thesis, bridges this gap by enabling real-time, label-free size-and-weight determination of freely diffusing molecules directly in nano-sized channels. However, the molecule signals are so faint, and the background noise so complex, with high spatial and temporal variation, that standard methods of data analysis are incapable of elucidating the molecules' properties of relevance in any but the least challenging conditions. To remedy the weak signal, and realize the method's full potential, this thesis focuses on the development of a versatile deep-learning-based computer-vision platform to overcome the bottleneck of data analysis.
We find that this platform offers considerably improved speed, accuracy, precision, and limit of detection compared to standard methods, achieving a detection limit lower than that of any other method of label-free optical characterization currently available. In this regime, hitherto elusive species of biomolecules become accessible for study, potentially opening up entirely new avenues of biological research. These results, along with many others in the context of deep learning for optical microscopy in biological applications, suggest that deep learning is likely to be pivotal in solving the complex image analysis problems of the present and enabling new regimes of study within microscopy-based research in the near future.

    Investigation into the applications of genetic algorithms to control engineering

    Bibliography: pages 117-120. This thesis presents the results of a study carried out to determine possible applications of genetic algorithms to problems in control engineering. The thesis reviews the literature on genetics and genetic algorithms and applies the algorithms to the problems of system parameter identification and PID controller tuning. More specifically, the study had the following objectives: to investigate possible uses of genetic algorithms for system identification and PID controller tuning; to carry out an in-depth comparison of the proposed uses with traditional engineering approaches based on mathematical optimisation and empirical studies; and to draw conclusions and present the findings in the form of a thesis. Genetic algorithms are a class of artificial intelligence methods inspired by the Darwinian principles of natural selection and survival of the fittest. The algorithm encodes potential solutions as chromosome-like data structures that are evolved using genetic operators to determine the optimal solution to the problem. Fundamentally, the evolutionary nature of the algorithm is introduced through the operators called crossover and mutation. Crossover takes two strings, selects a crossing point at random, and swaps the segments of the strings on either side of the crossover point to create two new individuals. Three variations of crossover were considered in this thesis: single-point crossover, two-point crossover, and uniform crossover. These were given careful consideration, since much of the outcome of the algorithm is influenced by both the choice of operators and the rate at which they are applied.
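    The single-point crossover operator described above can be sketched as follows (a minimal illustration on bit strings; the function and variable names are assumptions, not taken from the thesis):

    ```python
    import random

    def single_point_crossover(parent_a, parent_b):
        """Swap the tails of two equal-length strings at a random cut point."""
        assert len(parent_a) == len(parent_b)
        # Choose a crossing point strictly inside the string so that both
        # swapped segments are non-empty.
        point = random.randint(1, len(parent_a) - 1)
        child_a = parent_a[:point] + parent_b[point:]
        child_b = parent_b[:point] + parent_a[point:]
        return child_a, child_b

    # Example: two 8-bit parent chromosomes.
    a, b = "11111111", "00000000"
    c1, c2 = single_point_crossover(a, b)
    ```

    Two-point crossover swaps only the segment between two random cut points, and uniform crossover decides independently for each position which parent contributes the gene.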

    Sparse Bayesian mass-mapping using trans-dimensional MCMC

    Uncertainty quantification is a crucial step of cosmological mass-mapping that is often ignored. The methods that have been suggested are typically only approximate or make strong assumptions about the Gaussianity of the shear field. Probabilistic sampling methods, such as Markov chain Monte Carlo (MCMC), draw samples from a probability distribution, allowing for full and flexible uncertainty quantification; however, these methods are notoriously slow and struggle in the high-dimensional parameter spaces of imaging problems. In this work we use, for the first time, a trans-dimensional MCMC sampler for mass-mapping, promoting sparsity in a wavelet basis. This sampler gradually grows the parameter space as required by the data, exploiting the extremely sparse nature of mass maps in wavelet space. The wavelet coefficients are arranged in a tree-like structure, which adds finer-scale detail as the parameter space grows. We demonstrate the trans-dimensional sampler on galaxy cluster-scale images where the planar modelling approximation is valid. In high-resolution experiments, this method produces naturally parsimonious solutions, requiring less than 1% of the potential maximum number of wavelet coefficients while still producing a good fit to the observed data. In the presence of noisy data, trans-dimensional MCMC produces a better reconstruction of mass maps than the standard smoothed Kaiser-Squires method, with the addition that uncertainties are fully quantified. This opens up the possibility of new mass maps and inferences about the nature of dark matter using high-resolution data from upcoming weak lensing surveys such as Euclid.
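    The core mechanic of a trans-dimensional sampler, growing the active parameter set via birth and death moves, can be sketched on a toy problem. This is only an illustration of the birth/death mechanics, not the paper's sampler: the forward model, tree-structured prior, and the move-selection terms of a full reversible-jump acceptance ratio are all omitted, and birth values are drawn from their prior so that the prior and proposal densities cancel:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stand-in for a mass map: a sparse coefficient vector observed
    # directly with Gaussian noise (a real sampler would use wavelet
    # coefficients and a lensing forward model).
    n, sigma, tau = 64, 0.1, 1.0
    truth = np.zeros(n)
    truth[[3, 17, 40]] = [1.0, -0.8, 0.5]
    y = truth + sigma * rng.normal(size=n)

    def log_like(theta):
        return -0.5 * np.sum((y - theta) ** 2) / sigma ** 2

    def vec(active):
        theta = np.zeros(n)
        for i, v in active.items():
            theta[i] = v
        return theta

    active = {}  # index -> value; starts empty and grows as the data demand
    for _ in range(20000):
        birth = rng.random() < 0.5 and len(active) < n
        proposal = dict(active)
        if birth:
            i = int(rng.choice([k for k in range(n) if k not in active]))
            proposal[i] = tau * rng.normal()  # draw new value from its prior
        elif active:
            i = int(rng.choice(list(active)))
            del proposal[i]
        else:
            continue
        # Simplified acceptance: with birth values drawn from the prior,
        # the ratio reduces to a likelihood ratio (move-selection factors
        # and within-model update moves are omitted in this sketch).
        log_alpha = log_like(vec(proposal)) - log_like(vec(active))
        if np.log(rng.random()) < log_alpha:
            active = proposal
    ```

    Because deaths of well-fitting coefficients are overwhelmingly rejected while poorly-fitting births rarely survive, the chain settles on a parsimonious active set, mirroring the paper's observation that far fewer than the maximum number of coefficients are needed.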

    A machine learning approach to parameter inference in gravitational-wave signal analysis

    Gravitational Wave (GW) physics is now in its golden age thanks to modern interferometers. The fourth observing run is now ongoing with two of the four second-generation detectors, collecting GW signals coming from Compact Binary Coalescences (CBCs). These systems are formed by black holes and/or neutron stars, which lose energy and angular momentum to GW emission, spiraling toward each other until they merge. The characteristic waveform has a chirping behaviour, with a frequency that increases with time. These GW signals are gold mines of physical information about the emitting system. The data analysis of these signals has two main aspects: detection and parameter estimation. For detection, two approaches are currently used: matched filtering, which compares numerical waveforms with the raw interferometer output to highlight the signal, and burst searches, which highlight the coherence of arbitrary signals across different detectors. Both techniques need to be fast enough to allow for electromagnetic follow-up with a relatively short delay. The offline parameter inference process is based on Bayesian techniques and is rather lengthy (individual Markov Chain Monte Carlo runs can take a month or more). My thesis has the goal of introducing fast parameter estimation for unmodeled (burst) methods, which produce only phenomenological, de-noised waveforms with, at best, a rough estimate of only a few parameters. An approach for fast parameter inference in this unmodeled analysis, taking the reconstructed waveform as input, could be extremely useful for multimessenger observations. In this context, Keith et al. (2021a) proposed the use of Physics-Informed Neural Networks (PINNs) in GW data analysis. PINNs are a machine learning approach that includes physical prior information in the algorithm itself. Taking a clean chirping waveform as input, the algorithm of Keith et al.
(2021a) demonstrated a successful application of this concept and was able to reconstruct the compact objects' orbits before coalescence in great detail, starting only from a parameterized post-Newtonian model. The PINN environment could become a key tool for inferring parameters from GW signals with a simple physical ansatz. As part of my thesis work, I reviewed GW physics and the PINN framework in detail and updated the algorithm described in Keith et al. (2021a). Their ground-breaking work introduces PINNs for the first time in the analysis of GW signals; however, it does so without considering some important details. In particular, I noted that the algorithm of Keith et al. (2021a) spans a very constrained parameter space. In this thesis I introduce some of the missing details and recode the algorithm from scratch. My implementation includes learning the phenomenological differential equation that describes the frequency evolution over time of the chirping GW, within a different but more physical parameter space. As a test, starting from a waveform as training data and from the Newtonian approximation of the GW chirp, I infer the chirp mass, the GW phase, and the frequency exponent in the differential equation. The resulting algorithm is robust and uses realistic physical conditions. This is a necessary first step toward parameter inference with PINNs on real gravitational-wave data.
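    For reference, the Newtonian (leading-order) chirp equation referred to above relates the GW frequency evolution to the chirp mass; in the parameterization described, the frequency exponent (here 11/3) is itself treated as an inferred quantity. This is the standard textbook form, not necessarily the thesis's exact notation:

    ```latex
    \frac{\mathrm{d}f}{\mathrm{d}t}
      = \frac{96}{5}\,\pi^{8/3}
        \left(\frac{G\mathcal{M}_c}{c^{3}}\right)^{5/3} f^{11/3},
    \qquad
    \mathcal{M}_c = \frac{(m_1 m_2)^{3/5}}{(m_1 + m_2)^{1/5}}
    ```

    Inferring the chirp mass \(\mathcal{M}_c\) and the exponent then amounts to fitting the parameters of this ordinary differential equation, which is exactly the kind of physics-informed constraint a PINN encodes in its loss function.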