    Laying the foundation of the effective-one-body waveform models SEOBNRv5: improved accuracy and efficiency for spinning non-precessing binary black holes

    We present SEOBNRv5HM, a more accurate and faster inspiral-merger-ringdown gravitational waveform model for quasi-circular, spinning, nonprecessing binary black holes within the effective-one-body (EOB) formalism. Compared to its predecessor, SEOBNRv4HM, the waveform model i) incorporates recent high-order post-Newtonian results in the inspiral, with improved resummations, ii) includes the gravitational modes (l, |m|) = (3, 2), (4, 3), in addition to the (2, 2), (3, 3), (2, 1), (4, 4), (5, 5) modes already implemented in SEOBNRv4HM, iii) is calibrated to larger mass ratios and spins using a catalog of 442 numerical-relativity (NR) simulations and 13 additional waveforms from black-hole perturbation theory, and iv) incorporates information from second-order gravitational self-force (2GSF) in the nonspinning modes and radiation-reaction force. Computing the unfaithfulness against NR simulations, we find that for the dominant (2, 2) mode the maximum unfaithfulness in the total mass range 10-300 M_⊙ is below 10^-3 for 90% of the cases (38% for SEOBNRv4HM). When including all modes up to l = 5 we find 98% (49%) of the cases with unfaithfulness below 10^-2 (10^-3), while these numbers reduce to 88% (5%) when using SEOBNRv4HM. Furthermore, the model shows improved agreement with NR in other dynamical quantities (e.g., the angular momentum flux and binding energy), providing a powerful check of its physical robustness. We implemented the waveform model in a high-performance Python package (pySEOBNR), which leads to evaluation times faster than SEOBNRv4HM by a factor of 10 to 50, depending on the configuration, and provides the flexibility to easily include spin-precession and eccentric effects, thus making it the starting point for a new generation of EOBNR waveform models (SEOBNRv5) to be employed for upcoming observing runs of the LIGO-Virgo-KAGRA detectors.
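    The unfaithfulness quoted above is one minus the overlap between two waveforms, maximised over relative time and phase shifts and weighted by the detector noise spectrum. As a rough illustration only (not the pySEOBNR implementation), here is a minimal numpy sketch assuming a flat noise PSD and a toy signal in place of actual SEOBNRv5/NR waveforms:

```python
import numpy as np

def mismatch(h1, h2):
    # 1 - overlap, maximised over relative time and phase shifts,
    # assuming a flat (white) noise PSD; a real analysis weights each
    # frequency bin by the detector noise PSD instead.
    H1, H2 = np.fft.fft(h1), np.fft.fft(h2)
    norm = np.sqrt(np.sum(np.abs(H1) ** 2) * np.sum(np.abs(H2) ** 2))
    # ifft of the cross-spectrum gives the overlap at every circular
    # time shift; the modulus maximises over a constant phase offset
    # (a standard approximation for real-valued signals).
    overlaps = np.abs(np.fft.ifft(np.conj(H1) * H2))
    return 1 - overlaps.max() * len(h1) / norm

# Toy check: a time-shifted copy of a signal should give mismatch ~ 0
t = np.linspace(0, 1, 4096)
h_a = np.exp(-((t - 0.5) / 0.2) ** 2) * np.sin(2 * np.pi * 50 * t)
print(mismatch(h_a, np.roll(h_a, 37)))
```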

    AISIMAM - An artificial immune system based intelligent multiagent model

    The goal of this thesis is to develop a biological model for multiagent systems. The thesis explores artificial immune systems, a novel evolutionary paradigm based on immunological principles. Artificial immune systems (AIS) have proven powerful for solving complex computational tasks. The main focus of the thesis is to develop a generic mathematical model that uses the principles of the human immune system in multiagent systems (MAS). The components and properties of the human immune system are studied. Building on the concepts of AIS, a literature survey of multiagent systems is performed to understand and compare multiagent concepts with AIS concepts. An analogy between immune system parameters and agent theory is derived. Then, an intelligent multiagent model named AISIMAM is derived. It exploits several properties and features of the immune system in multiagent systems. In other words, the model combines the intelligence of the immune system in killing antigens with the characteristics of agents. The model is expressed in terms of mathematical expressions and is applied to a specific application, namely mine detection and defusion. The simulations were done in MATLAB on a PC. The experimental results of AISIMAM applied to the mine detection problem are discussed. The results are successful and show that AISIMAM could be an alternative solution to agent-based problems. Artificial immune systems are also applied to a pattern recognition problem: a color image classification problem useful in a real-time industrial application. The images are of wooden components that need to be classified according to the color and type of wood. To solve the classification task, a simple negative selection and genetic algorithm based AIS algorithm was developed and simulated. The results are compared with a radial basis function approach applied to the same set of input images.
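    As a rough illustration of the negative-selection idea used in the classification experiment (binary feature strings, r-contiguous matching and the "self" patterns below are assumptions of this sketch, not necessarily the thesis's encoding):

```python
import random

def matches(detector, sample, r):
    # r-contiguous-bits rule: the detector fires if it agrees with the
    # sample on at least r consecutive positions
    run = 0
    for d, s in zip(detector, sample):
        run = run + 1 if d == s else 0
        if run >= r:
            return True
    return False

def generate_detectors(self_set, n_detectors, length, r):
    # Negative selection: keep only random candidates that match
    # no "self" sample, so surviving detectors flag non-self patterns
    detectors = []
    while len(detectors) < n_detectors:
        candidate = [random.randint(0, 1) for _ in range(length)]
        if not any(matches(candidate, s, r) for s in self_set):
            detectors.append(candidate)
    return detectors

# Toy usage: anything a detector fires on is flagged as non-self
self_set = [[0] * 8, [0, 0, 0, 0, 0, 0, 0, 1]]
detectors = generate_detectors(self_set, n_detectors=5, length=8, r=4)
sample = [1, 1, 1, 1, 0, 0, 0, 0]
print(any(matches(d, sample, 4) for d in detectors))
```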

    A framework to support automation in manufacturing through the study of process variability

    In manufacturing, automation has replaced many dangerous, mundane, arduous and routine manual operations, for example the transportation of heavy parts, stamping of large parts, repetitive welding and bolt fastening. However, skilled operators still carry out critical manual processes in industries such as aerospace, automotive and heavy machinery. As automation technology progresses towards more flexible and intelligent systems, the potential for these processes to be automated increases. However, the decision to undertake automation is a complex one, involving many factors such as return on investment, health and safety, life-cycle impact, competitive advantage, and the availability of resources and technology. A key challenge to manufacturing automation is the ability to adapt to process variability. In manufacturing processes, human operators apply their skills to adapt to variability in order to meet product and process specifications or requirements. This thesis is focussed on understanding the variability involved in these manual processes, and how it may influence the automation solution. Two manual industrial processes, polishing and de-burring of high-value components, were observed to evaluate the extent of the variability and how operators applied their skills to overcome it. Based on the findings from the literature and the process studies, a framework was developed to categorise variability in manual manufacturing processes and to suggest a level of automation for the tasks in the process, based on scores and weights given to the parameters by the user. The novelty of this research lies in the creation of a framework to categorise and evaluate process variability and suggest an appropriate level of automation. The framework uses five attributes of processes (inputs, outputs, strategy, time and requirements) and twelve parameters (quantity, range or interval of variability, interdependency, diversification, number of alternatives, number of actions, patterned actions, concurrency, time restriction, sensorial domain, cognitive requisites and physical requisites) to evaluate the variability inherent in the process. The suggested level of automation is obtained through a system of scores and weights for each parameter; the weights were calculated using the Analytic Hierarchy Process (AHP) with the help of three experts in manufacturing processes. Finally, the framework was validated through its application to two processes: a lab-based peg-in-a-hole manual process and an industrial welding process. In addition, the framework was applied to three further processes (two industrial and one simulated in the laboratory) by two subjects per process to verify the consistency of the results. The results suggest that the framework is robust when applied by different subjects, producing highly similar outputs. Moreover, the framework was found to be effective in characterising the variability present in the processes to which it was applied. The framework was developed and tested in the manufacturing of high-value components, and has high potential to be applied to processes in other industries, for instance automotive, heavy machinery, pharmaceutical or electronic components, although this would need further investigation. Future work would therefore include applying the framework to processes in other industries, enhancing its robustness and widening its scope of applicability. Additionally, a database would be created to assess the correlation between process variability and the level of automation.
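    A minimal sketch of how AHP-style weights can be derived from a pairwise comparison matrix; the three parameters chosen and all comparison values below are invented for illustration:

```python
import numpy as np

# Illustrative 3x3 pairwise comparison matrix over three of the twelve
# parameters (values invented): A[i, j] says how much more important
# parameter i is than parameter j on Saaty's 1-9 scale.
params = ["interdependency", "time restriction", "sensorial domain"]
A = np.array([
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
])

# AHP priority vector: principal eigenvector of A, normalised to sum to 1
eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()

# Weighted score: the user scores each parameter (say 1-5); the weighted
# sum is then mapped onto a level-of-automation scale
scores = np.array([4, 2, 3])
print(dict(zip(params, np.round(w, 3))), float(w @ scores))
```

    The full framework would build the comparison matrix over all twelve parameters and check its consistency ratio before accepting the weights.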

    AMS-02 antiprotons and dark matter: Trimmed hints and robust bounds

    Based on 4 years of AMS-02 antiproton data, we present bounds on the dark matter (DM) annihilation cross section vs. mass for some representative final-state channels. We use recent cosmic-ray propagation models, a realistic treatment of experimental and theoretical errors, and an updated calculation of input antiproton spectra based on a recent release of the PYTHIA code. We find that reported hints of a DM signal are statistically insignificant; an adequate treatment of errors is crucial for credible conclusions. Antiproton bounds on DM annihilation are among the most stringent ones, probing thermal DM up to the TeV scale. The dependence of the bounds upon propagation models and the DM halo profile is also quantified. A preliminary estimate reaches similar conclusions when applied to the 7-year AMS-02 dataset, but also suggests extra caution as to possible future claims of DM excesses.
    Comment: v2: 33 pages, 6 figures (two of which in two panels); clarifications and a couple of references added, conclusions unchanged
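    Schematically, such bounds are extracted by scanning the DM signal normalisation and reading off where the chi-square crosses the 95% CL threshold. All numbers below are invented; the actual analysis also fits propagation and nuisance parameters simultaneously:

```python
import numpy as np

# Toy antiproton flux in three energy bins (all numbers invented)
data = np.array([5.2, 3.1, 1.4])
sigma = np.array([0.4, 0.3, 0.2])          # total experimental errors
background = np.array([5.0, 3.0, 1.5])     # secondary astrophysical flux
signal_shape = np.array([0.5, 0.8, 0.3])   # DM spectrum per unit normalisation

def chi2(norm):
    model = background + norm * signal_shape
    return np.sum(((data - model) / sigma) ** 2)

norms = np.linspace(0.0, 5.0, 501)
chi2_vals = np.array([chi2(n) for n in norms])
delta = chi2_vals - chi2_vals.min()

# One-sided 95% CL limit for one parameter: Delta chi2 = 2.71
imin = int(chi2_vals.argmin())
above = np.where(delta[imin:] >= 2.71)[0]
limit = norms[imin + above[0]] if above.size else np.inf
print(f"best-fit normalisation {norms[imin]:.2f}, 95% CL limit {limit:.2f}")
```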

    Soft computing applied to optimization, computer vision and medicine

    Artificial intelligence has permeated almost every area of life in modern society, and its significance continues to grow. As a result, in recent years, Soft Computing has emerged as a powerful set of methodologies that propose innovative and robust solutions to a variety of complex problems. Soft Computing methods, because of their broad range of application, have the potential to significantly improve human living conditions. The motivation for the present research emerged from this background. This research aims to accomplish two main objectives: on the one hand, it endeavors to bridge the gap between Soft Computing techniques and their application to intricate problems; on the other hand, it explores the potential benefits of Soft Computing methodologies as novel, effective tools for such problems. This thesis synthesizes the results of extensive research on Soft Computing methods and their applications to optimization, Computer Vision, and medicine. The work is composed of several individual projects, which employ classical and new optimization algorithms. The manuscript presented here intends to provide an overview of the different aspects of Soft Computing methods in order to enable the reader to reach a global understanding of the field. Therefore, this document is assembled as a monograph that summarizes the outcomes of these projects across 12 chapters, structured so that they can be read independently. The key focus of this work is the application and design of Soft Computing approaches for solving problems in the following areas: Block Matching, Pattern Detection, Thresholding, Corner Detection, Template Matching, Circle Detection, Color Segmentation, Leukocyte Detection, and Breast Thermogram Analysis. One of the outcomes presented in this thesis is the development of two evolutionary approaches for global optimization. These were tested on complex benchmark datasets and showed promising results, thus opening the debate for future applications. Moreover, the applications to Computer Vision and medicine presented in this work have highlighted the utility of different Soft Computing methodologies in solving problems in those fields. A milestone in this area is the translation of Computer Vision and medical problems into optimization problems. Additionally, this work strives to provide tools for combating public health issues by extending these concepts to automated detection and diagnosis aids for pathologies such as Leukemia and breast cancer. The application of Soft Computing techniques in this field has attracted great interest worldwide due to the growing incidence of these diseases. Lastly, the use of Fuzzy Logic, Artificial Neural Networks, and Expert Systems in many everyday domestic appliances, such as washing machines, cookers, and refrigerators, is now a reality. Many other industrial and commercial applications of Soft Computing have also been integrated into everyday use, and this is expected to increase within the next decade. The research conducted here therefore contributes an important piece toward expanding these developments. The applications presented in this work are intended to serve as technological tools that can then be used in the development of new devices.
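    As a generic illustration of the evolutionary-optimization pattern underlying this work (not one of the thesis's own algorithms), a minimal differential-evolution loop on the standard Rastrigin benchmark:

```python
import numpy as np

def rastrigin(x):
    # Standard multimodal benchmark; global minimum 0 at x = 0
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

rng = np.random.default_rng(0)
dim, pop_size, F, CR = 5, 30, 0.8, 0.9
pop = rng.uniform(-5.12, 5.12, (pop_size, dim))
fitness = np.array([rastrigin(p) for p in pop])

for _ in range(300):  # differential evolution: mutate, crossover, select
    for i in range(pop_size):
        a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
        mutant = np.clip(a + F * (b - c), -5.12, 5.12)
        cross = rng.random(dim) < CR
        trial = np.where(cross, mutant, pop[i])
        f = rastrigin(trial)
        if f < fitness[i]:  # greedy replacement keeps the better vector
            pop[i], fitness[i] = trial, f

print("best value found:", fitness.min())
```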

    Reduced Order and Surrogate Models for Gravitational Waves

    We present an introduction to some of the state of the art in reduced order and surrogate modeling in gravitational wave (GW) science. Approaches that we cover include Principal Component Analysis, Proper Orthogonal Decomposition, the Reduced Basis approach, the Empirical Interpolation Method, Reduced Order Quadratures, and Compressed Likelihood evaluations. We divide the review into three parts: representation/compression of known data, predictive models, and data analysis. The targeted audience is that of practitioners in GW science, a field in which building predictive models and data analysis tools that are both accurate and fast to evaluate, especially when dealing with large amounts of data and intensive computations, is necessary yet can be challenging. As such, practical presentations and, sometimes, heuristic approaches are preferred here over rigor when the latter is not available. This review aims to be self-contained, within reasonable page limits, requiring little previous knowledge (at the undergraduate level) of mathematics, scientific computing, and other disciplines. Emphasis is placed on optimality, as well as on the curse of dimensionality and approaches that might have the promise of beating it. We also review most of the state of the art of GW surrogates. Some numerical algorithms, conditioning details, scalability, parallelization and other practical points are discussed. The approaches presented are to a large extent non-intrusive and data-driven and can therefore be applicable to other disciplines. We close with open challenges in high-dimension surrogates, which are not unique to GW science.
    Comment: Invited article for Living Reviews in Relativity. 93 pages
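    A minimal numpy sketch of the greedy reduced-basis idea covered in the review: iteratively add the training waveform worst represented by the current basis. Toy damped sinusoids over a one-dimensional parameter stand in for real waveforms:

```python
import numpy as np

# Toy training set: damped sinusoids parameterised by frequency
t = np.linspace(0, 1, 500)
freqs = np.linspace(5, 15, 100)
training = np.array([np.exp(-3 * t) * np.sin(2 * np.pi * f * t) for f in freqs])
training /= np.linalg.norm(training, axis=1, keepdims=True)

def greedy_basis(train, tol=1e-6):
    basis, errors = [], np.ones(len(train))
    while errors.max() > tol:
        # Pick the worst-represented waveform and orthonormalise it
        # against the current basis (Gram-Schmidt)
        h = train[errors.argmax()].copy()
        for e in basis:
            h -= np.dot(e, h) * e
        basis.append(h / np.linalg.norm(h))
        B = np.array(basis)
        proj = train @ B.T @ B  # projection onto span(basis)
        errors = np.linalg.norm(train - proj, axis=1) ** 2
    return np.array(basis)

basis = greedy_basis(training)
print(f"{len(training)} waveforms compressed to {len(basis)} basis elements")
```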

    Visual navigation in ants

    Navigating efficiently in the outside world requires many cognitive abilities, such as extracting, memorising and processing information. The remarkable navigational abilities of insects are an existence proof of how small brains can produce exquisitely efficient, robust behaviour in complex environments. During their foraging trips, insects like ants or bees are known to rely on both path integration and learnt visual cues to recapitulate a route or to reach familiar places like the nest. The strategy of path integration is well understood, but much less is known about how insects acquire and use visual information.
    Field studies give good descriptions of visually guided routes, but our understanding of the underlying mechanisms comes mainly from simplified laboratory conditions using artificial, geometrically simple landmarks. My thesis proposes an integrative approach that combines 1- field and lab experiments on two visually guided ant species (Melophorus bagoti and Gigantiops destructor) and 2- an analysis of panoramic pictures recorded along the animal's route. The use of panoramic pictures allows an objective quantification of the visual information available to the animal. Results from both species, in the lab and the field, converged, showing that ants do not segregate their visual world into objects, such as landmarks or discrete features, as a human observer might assume. Instead, efficient navigation seems to arise from the use of cues spread across the ants' panoramic visual field, encompassing both proximal and distal objects together. Such relatively unprocessed panoramic views, even at low resolution, provide remarkably unambiguous spatial information in natural environments. Using such a simple but efficient panoramic visual input, rather than focusing on isolated landmarks, seems an appropriate strategy to cope with the complexity of natural scenes and the poor resolution of insects' eyes. Also, panoramic pictures can serve as a basis for running analytical models of navigation. The predictions of these models can be directly compared with the actual behaviour of real ants, allowing the iterative tuning and testing of different hypotheses. This integrative approach led me to the conclusion that ants do not rely on a single navigational technique, but may switch between strategies according to whether they are on or off familiar terrain. For example, ants can robustly recapitulate a familiar route by simply aligning their body so that the current view best matches their memory. However, this strategy becomes ineffective when they are displaced away from the familiar route. In such a case, ants appear instead to head towards the regions where the skyline appears lower than the height recorded in their memory, which generally leads them closer to a familiar location. How ants choose between strategies at a given time might simply be based on the degree of familiarity of the panoramic scene currently perceived. Finally, this thesis raises questions about the nature of ant memories. Past studies proposed that ants memorise a succession of discrete 2D 'snapshots' of their surroundings. In contrast, results obtained here show that knowledge from the end of a foraging route (15 m) impacts strongly on behaviour at the beginning of the route, suggesting that the visual knowledge of a whole foraging route may be compacted into a single holistic memory. Accordingly, repetitive training on the exact same route clearly affects the ants' behaviour, suggesting that the memorised information is processed and not obtained at once. While ants navigate along their familiar route, their visual system is continually stimulated by a slowly evolving scene, and learning a general pattern of stimulation, rather than storing independent but very similar snapshots, appears a reasonable hypothesis to explain navigation on a natural scale; such learning works remarkably well with neural networks. Nonetheless, the precise nature of ants' visual memories, and how elaborate they are, remain wide open questions.
    Overall, my thesis tackles the nature of ants' perception and memory, as well as how both are processed together to output an appropriate navigational response. These results are discussed in the light of comparative cognition. Both vertebrates and insects have solved the same problem of navigating efficiently in the world. In light of Darwin's theory of evolution, there is no a priori reason to think that there is a clear division between the cognitive mechanisms of different species. The gap between insect and vertebrate cognitive sciences may result more from different approaches than from real differences. Research on insect navigation has been approached with a bottom-up philosophy, one that examines how simple mechanisms can produce seemingly complex behaviour. Such parsimonious solutions, like the ones explored in the present thesis, can provide useful baseline hypotheses for navigation in other larger-brained animals, and thus contribute to a more truly comparative cognition.
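    The view-alignment strategy described above is commonly modelled with a rotational image difference function. A minimal sketch, using synthetic grayscale panoramas as numpy arrays (one pixel column standing in for a fixed angular step; the images are random stand-ins, not real ant views):

```python
import numpy as np

def rotational_image_difference(current, memory):
    # Mean squared pixel difference between the memorised view and the
    # current panorama at every horizontal rotation; the minimum marks
    # the best-matching heading.
    width = current.shape[1]
    return np.array([np.mean((np.roll(current, s, axis=1) - memory) ** 2)
                     for s in range(width)])

rng = np.random.default_rng(0)
memory = rng.random((30, 180))           # memorised panoramic view
current = np.roll(memory, 42, axis=1)    # same place, body rotated 42 columns
diffs = rotational_image_difference(current, memory)
best = int(diffs.argmin())               # shift that realigns the view
print("recovered body rotation:", (-best) % 180, "pixel columns")
```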

    Advancing the search for gravitational waves using machine learning

    Over 100 years ago Einstein formulated his now famous theory of General Relativity. In it he lays out a set of equations which led to the beginning of a brand-new astronomical field: gravitational-wave (GW) astronomy. The aim of the LIGO-Virgo-KAGRA Collaboration (LVK) is the detection of GW events from some of the most violent and cataclysmic events in the known universe. The LVK detectors are large-scale Michelson interferometers able to detect GWs from a range of sources including binary black holes (BBHs), binary neutron stars (BNSs), neutron star-black hole binaries (NSBHs), supernovae and stochastic GW backgrounds. Although these events release an incredible amount of energy, the amplitudes of the GWs they produce are incredibly small. The LVK uses sophisticated techniques such as matched filtering and Bayesian inference in order to both detect GW events and infer their source parameters. Although optimal under many circumstances, these standard methods are computationally expensive. Given that the number of GW detections by the LVK is expected to be of order hundreds in the coming years, there is an urgent need for less computationally expensive detection and parameter-inference techniques. A possible route to reducing this computational expense is the exciting field of machine learning (ML). In the first chapter of this thesis, GWs are introduced and it is explained how they are detected by the LVK. The sources of GWs are described, as well as methodologies for detecting the various source types, such as matched filtering. The methods for estimating the parameters of detected GW signals (i.e. Bayesian inference) are also described. In the second chapter several machine learning algorithms are introduced, including perceptrons, convolutional neural networks (CNNs), autoencoders (AEs), variational autoencoders (VAEs) and conditional variational autoencoders (CVAEs), along with practical advice on training and data augmentation. In the third chapter, a survey of ML techniques applied to a variety of GW problems is presented. In this thesis, ML and statistical techniques such as CVAEs and CNNs were deployed in two first-of-their-kind proof-of-principle studies. In the fourth chapter it is described how a CNN may be used to match the sensitivity of matched filtering, the standard technique used by the LVK for detecting GWs. A CNN was trained on simulated BBH waveforms buried in Gaussian noise and on Gaussian noise alone, and its classification predictions were compared to matched-filtering results on the same testing data. Receiver operating characteristic and efficiency curves demonstrate that the ML approach achieves the same sensitivity as matched filtering, while generating predictions with low latency: given approximately 25000 GW time series, the CNN produces classification predictions for all of them in 1 s. In the fifth and sixth chapters, it is shown how CVAEs may be used to perform Bayesian inference. A CVAE was trained using simulated BBH waveforms in Gaussian noise, together with the source-parameter values of those waveforms. At test time, the CVAE is supplied only the BBH waveform and produces samples from the Bayesian posterior.
    Results were compared to those of several standard Bayesian samplers used by the LVK, including Dynesty, ptemcee, emcee and CPnest. It is shown that, when properly trained, the CVAE method produces Bayesian posteriors consistent with those of the other samplers. Results are quantified using a variety of figures of merit, such as probability-probability (p-p) plots, which check that the one-dimensional marginalised posteriors from all approaches are self-consistent in the frequentist sense. The Jensen-Shannon (JS) divergence was also employed to quantify the similarity between the posterior distributions produced by the different approaches. It was also demonstrated that the CVAE model is able to produce posteriors with 8000 samples in under a second, representing a six-order-of-magnitude speed-up over traditional sampling methods.
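    For orientation, a minimal numpy sketch of the matched-filtering baseline that the CNN is compared against, assuming unit-variance white noise and a toy template (the thesis's studies use realistic detector noise and BBH waveforms):

```python
import numpy as np

rng = np.random.default_rng(1)
fs, T = 1024, 4.0                          # sample rate (Hz), duration (s)
t = np.arange(0, T, 1 / fs)

# Toy chirp-like template standing in for a BBH waveform, unit-normalised
template = np.sin(2 * np.pi * (30 + 20 * t) * t) * np.exp(-((t - 2) / 0.5) ** 2)
template /= np.linalg.norm(template)

# Data: unit-variance white Gaussian noise with the template injected
A = 6.0
data = rng.normal(0, 1, t.size) + A * template

# Matched filter for white noise: circular cross-correlation via FFT.
# For a unit-norm template in unit-variance noise, the output at each
# lag is an SNR time series; its peak marks the best-fit arrival time.
snr = np.fft.irfft(np.fft.rfft(data) * np.conj(np.fft.rfft(template)))
peak = int(np.argmax(snr))
print(f"peak SNR {snr[peak]:.1f} at lag t = {t[peak]:.3f} s")
```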