
    Identifying electrons with deep learning methods

    This thesis is about applying the tools of machine learning to an important problem in experimental particle physics: identifying signal electrons produced in proton-proton collisions at the Large Hadron Collider. In Chapter 1, we provide some background on the Large Hadron Collider and explain why it was built. We then give further details about ATLAS, one of the largest detectors at the Large Hadron Collider. Next, we define the electron identification task and explain the importance of solving it. Finally, we give detailed information about the dataset we use to address this task. In Chapter 2, we give a brief introduction to the fundamental principles of machine learning. Starting with the definition and types of learning tasks, we discuss various ways to represent inputs. We then present what to learn from the inputs and how to do it. Finally, we look at the problems that arise when learning is “overdone”, i.e., overfitting. In Chapter 3, we motivate the choice of architecture for solving our task, especially for the parts that take sequential images as inputs. We then present the results of our experiments and show that our model performs much better than the algorithms the ATLAS collaboration currently uses. Finally, we discuss future directions for further improving our results. In Chapter 4, we discuss two concepts: out-of-distribution generalization and the flatness of the loss surface. We claim that algorithms that bring a model into a wide, flat minimum of its training loss surface also generalize better on out-of-distribution tasks. We present the results of applying two such algorithms to our dataset and show that they support our claim. We end with our conclusions.
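The flatness claim in Chapter 4 can be made concrete with a flatness-seeking update rule. The sketch below implements a sharpness-aware minimization (SAM) style step on a toy quadratic loss; SAM is one well-known optimizer of this family, used here purely as an illustration, since the abstract does not name the two algorithms actually applied in the thesis.

```python
import numpy as np

# A minimal SAM-style update on a toy quadratic loss. This is an
# illustrative assumption, not the thesis's actual algorithm.
def loss(w):
    return 0.5 * float(np.dot(w, w))

def grad(w):
    return w  # gradient of the quadratic loss above

def sam_step(w, lr=0.1, rho=0.05):
    """Ascend to the locally worst point within an L2 ball of radius rho,
    then descend using the gradient evaluated there."""
    g = grad(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # worst-case perturbation
    g_adv = grad(w + eps)                        # gradient at perturbed point
    return w - lr * g_adv

w = np.array([1.0, -2.0])
for _ in range(100):
    w = sam_step(w)
# w is now close to the minimum at the origin
```

Because the descent direction is taken at the worst nearby point, the update is biased away from sharp minima, which is the property the abstract associates with better out-of-distribution generalization.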

    French Roadmap for Complex Systems 2008-2009

    This second issue of the French Complex Systems Roadmap is the outcome of the Entretiens de Cargese 2008, an interdisciplinary brainstorming session organized over one week in 2008, jointly by RNSC, ISC-PIF and IXXI. It capitalizes on the first roadmap and gathers contributions of more than 70 scientists from major French institutions. The aim of this roadmap is to foster the coordination of the complex systems community on focused topics and questions, as well as to present contributions and challenges in the complex systems sciences and complexity science to the public, political and industrial spheres.

    A comparison of the CAR and DAGAR spatial random effects models with an application to diabetics rate estimation in Belgium

    When hierarchically modelling an epidemiological phenomenon on a finite collection of sites in space, one must take a latent spatial effect into account in order to capture the correlation structure that links the phenomenon to the territory. In this work, we compare two autoregressive spatial models that can be used for this purpose: the classical CAR model and the more recent DAGAR model. Unlike the former, the latter has a desirable property: its ρ parameter can be naturally interpreted as the average neighbor pair correlation and, in addition, this parameter can be directly estimated when the effect is modelled using a DAGAR rather than a CAR structure. As an application, we model the diabetics rate in Belgium in 2014 and show the adequacy of these models in predicting the response variable when no covariates are available.
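The CAR structure mentioned above can be sketched concretely: in a proper CAR model, the latent spatial effect has precision matrix Q = τ(D − ρW), where W is the adjacency matrix of the sites and D the diagonal matrix of neighbor counts. The toy four-site adjacency below is an assumption for illustration only, not the Belgian territory graph used in the paper.

```python
import numpy as np

# Hypothetical 4-site adjacency matrix (a simple path graph), used only
# to illustrate the CAR precision structure.
W = np.array([
    [0., 1., 0., 0.],
    [1., 0., 1., 0.],
    [0., 1., 0., 1.],
    [0., 0., 1., 0.],
])

def car_precision(W, rho, tau=1.0):
    """Precision matrix of a proper CAR model, Q = tau * (D - rho * W),
    where D is the diagonal matrix of neighbor counts; |rho| < 1 keeps
    Q positive definite on this graph."""
    D = np.diag(W.sum(axis=1))
    return tau * (D - rho * W)

Q = car_precision(W, rho=0.9)
Sigma = np.linalg.inv(Q)   # implied covariance of the latent spatial effect
# Induced correlation between neighboring sites 0 and 1:
corr01 = Sigma[0, 1] / np.sqrt(Sigma[0, 0] * Sigma[1, 1])
```

Note that the correlation between neighbors is an indirect function of ρ here; the DAGAR model's appeal, per the abstract, is that its ρ directly equals the average neighbor pair correlation.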

    A Statistical Approach to the Alignment of fMRI Data

    Multi-subject functional Magnetic Resonance Imaging (fMRI) studies are critical. Since anatomical and functional structure varies across subjects, image alignment is necessary. We define a probabilistic model to describe functional alignment. By imposing a prior distribution, such as the matrix von Mises-Fisher distribution, on the orthogonal transformation parameter, anatomical information is embedded in the estimation of the parameters, i.e., combinations of spatially distant voxels are penalized. Real applications show an improvement in classification and in the interpretability of the results compared to various functional alignment methods.
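The orthogonal-transformation step underlying functional alignment can be sketched as a classical orthogonal Procrustes problem. The closed-form SVD solution below is the standard unpenalized version; the model described above instead regularizes the transformation with a matrix von Mises-Fisher style prior. The function names, dimensions, and synthetic data are illustrative assumptions.

```python
import numpy as np

def procrustes_align(X, Y):
    """Orthogonal R minimizing ||X @ R - Y||_F, obtained in closed form
    from the SVD of X^T Y (unpenalized Procrustes solution)."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))        # time points x voxels, subject 1
R_true, _ = np.linalg.qr(rng.standard_normal((5, 5)))
Y = X @ R_true                           # subject 2: rotated copy of subject 1
R_hat = procrustes_align(X, Y)
err = np.linalg.norm(X @ R_hat - Y)      # near zero: alignment recovered
```

In this noiseless toy setting the rotation is recovered exactly; the probabilistic model in the abstract matters precisely when data are noisy and anatomical structure should constrain the solution.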