253,896 research outputs found

    Anomaly Detection for Resonant New Physics with Machine Learning

    Full text link
    Despite extensive theoretical motivation for physics beyond the Standard Model (BSM) of particle physics, searches at the Large Hadron Collider (LHC) have found no significant evidence for BSM physics. Therefore, it is essential to broaden the sensitivity of the search program to include unexpected scenarios. We present a new model-agnostic anomaly detection technique that naturally benefits from modern machine learning algorithms. The only requirement on the signal for this new procedure is that it is localized in at least one known direction in phase space. Any other directions of phase space that are uncorrelated with the localized one can be used to search for unexpected features. This new method is applied to the dijet resonance search to show that it can turn a modest 2 sigma excess into a 7 sigma excess for a model with an intermediate BSM particle that is not currently targeted by a dedicated search. Comment: Replaced with short PRL version. 7 pages, 2 figures. Revised long version will be submitted separately.
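    As a rough illustration of the kind of procedure the abstract describes, the sketch below (not the authors' code) trains a classifier to separate a window in one localized variable, here a hypothetical dijet mass mjj, from its sidebands using auxiliary features assumed to be uncorrelated with mjj, then keeps only the most signal-region-like events before comparing yields. The data, feature names, window edges, and thresholds are all invented for the example; a real analysis would fit a smooth background shape rather than count events.

```python
# Minimal sketch of a resonant, model-agnostic anomaly search (illustrative only).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in data: smoothly falling background in mjj plus a small
# localized "signal" whose auxiliary features differ from the background.
n_bkg, n_sig = 50_000, 300
mjj = np.concatenate([rng.exponential(500.0, n_bkg) + 1000.0,
                      rng.normal(3000.0, 50.0, n_sig)])
aux = np.concatenate([rng.normal(0.0, 1.0, (n_bkg, 2)),
                      rng.normal(1.5, 1.0, (n_sig, 2))])

# Label events only by region in the localized variable (no truth labels):
# signal region = a window around the mass hypothesis, sidebands = neighbours.
sr = (mjj > 2900) & (mjj < 3100)
sb = ((mjj > 2600) & (mjj < 2900)) | ((mjj > 3100) & (mjj < 3400))

X = np.vstack([aux[sr], aux[sb]])
y = np.concatenate([np.ones(sr.sum()), np.zeros(sb.sum())])

# Train a classifier on the auxiliary features to separate SR from sidebands.
clf = GradientBoostingClassifier().fit(X, y)

# Keep the most signal-region-like events and compare SR vs sideband yields.
score = clf.predict_proba(aux)[:, 1]
sel = score > np.quantile(score, 0.99)
print("selected SR events:", int((sel & sr).sum()),
      "selected sideband events:", int((sel & sb).sum()))
```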

    A Deep Dive into Machine Learning Density Functional Theory for Materials Science and Chemistry

    Full text link
    With the growth of computational resources, the scope of electronic structure simulations has increased greatly. Artificial intelligence and robust data analysis hold the promise of accelerating large-scale simulations and their analysis to hitherto unattainable scales. Machine learning is a rapidly growing field for the processing of such complex datasets. It has recently gained traction in the domain of electronic structure simulations, where density functional theory (DFT) takes the prominent role of the most widely used electronic structure method. Thus, DFT calculations represent one of the largest loads on academic high-performance computing systems across the world. Accelerating these with machine learning can reduce the resources required and enable simulations of larger systems. Hence, the combination of density functional theory and machine learning has the potential to rapidly advance electronic structure applications such as in-silico materials discovery and the search for new chemical reaction pathways. We provide the theoretical background of both density functional theory and machine learning at a generally accessible level. This serves as the basis of our comprehensive review, which includes research articles up to December 2020 in chemistry and materials science that employ machine-learning techniques. In our analysis, we categorize the body of research into main threads and extract impactful results. We conclude our review with an outlook on exciting research directions, informed by a citation analysis.
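    To make the surrogate-model idea concrete, here is a minimal, purely illustrative sketch: a kernel ridge regressor is fit to map fixed-length structure descriptors to a DFT-style target quantity, so that new candidate structures could be screened without running a full calculation each time. The descriptors, "energies", and hyperparameters are synthetic placeholders, not taken from the review.

```python
# Minimal sketch of an ML surrogate for a DFT-computed quantity (illustrative only).
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Stand-in data: each row is a fixed-length descriptor vector for one structure
# (in practice e.g. SOAP or Coulomb-matrix features), with a synthetic "energy".
descriptors = rng.normal(size=(500, 16))
energies = descriptors @ rng.normal(size=16) + 0.1 * rng.normal(size=500)

X_train, X_test, y_train, y_test = train_test_split(
    descriptors, energies, test_size=0.2, random_state=0)

# Kernel ridge regression is a common baseline for such energy surrogates.
model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=0.1).fit(X_train, y_train)
rmse = np.sqrt(np.mean((model.predict(X_test) - y_test) ** 2))
print(f"test RMSE on synthetic energies: {rmse:.3f}")
```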

    Chapter 19 Unsupervised Methods

    Get PDF
    The Handbook of Computational Social Science is a comprehensive reference source for scholars across multiple disciplines. It outlines key debates in the field, showcases novel statistical modeling and machine learning methods, and draws on specific case studies to demonstrate the opportunities and challenges of CSS approaches. The Handbook is divided into two volumes written by outstanding, internationally renowned scholars in the field. This second volume focuses on foundations and advances in data science, statistical modeling, and machine learning. It covers a range of key issues, including the management of big data in terms of record linkage, streaming, and missing data. Machine learning, agent-based and statistical modeling, data quality in relation to digital trace and textual data, and probability, non-probability, and crowdsourced samples represent further foci. The volume not only makes major contributions to the consolidation of this growing research field, but also encourages growth in new directions. With its broad coverage of perspectives (theoretical, methodological, computational), international scope, and interdisciplinary approach, this important resource is integral reading for advanced undergraduates, postgraduates, and researchers engaging with computational methods across the social sciences, as well as those within the scientific and engineering sectors.

    A Survey on Few-Shot Class-Incremental Learning

    Full text link
    Large deep learning models are impressive, but they struggle when real-time data is not available. Few-shot class-incremental learning (FSCIL) poses a significant challenge for deep neural networks: learning new tasks from just a few labeled samples without forgetting the previously learned ones. This setup easily leads to catastrophic forgetting and overfitting, severely affecting model performance. Studying FSCIL helps overcome deep learning models' limitations on data volume and acquisition time, while improving the practicality and adaptability of machine learning models. This paper provides a comprehensive survey of FSCIL. Unlike previous surveys, we aim to synthesize few-shot learning and incremental learning, focusing on introducing FSCIL from two perspectives, while reviewing over 30 theoretical research studies and more than 20 applied research studies. From the theoretical perspective, we provide a novel categorization that divides the field into five subcategories: traditional machine learning methods, meta-learning-based methods, feature- and feature-space-based methods, replay-based methods, and dynamic network structure-based methods. We also evaluate the performance of recent theoretical research on FSCIL benchmark datasets. From the application perspective, FSCIL has achieved impressive results in various fields of computer vision, such as image classification, object detection, and image segmentation, as well as in natural language processing and graph learning. We summarize the important applications. Finally, we point out potential future research directions, including applications, problem setups, and theory development. Overall, this paper offers a comprehensive analysis of the latest advances in FSCIL from a methodological, performance, and application perspective.
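    The FSCIL setup described above can be illustrated with a small sketch of one simple feature-space baseline (not taken from the survey): a frozen embedding plus per-class mean prototypes, where each incremental session adds new prototypes from only a few samples and previously stored prototypes are left untouched, so old classes are not overwritten. All data, class counts, and the extract_features stub are hypothetical.

```python
# Minimal sketch of the few-shot class-incremental setup with a prototype baseline.
import numpy as np

rng = np.random.default_rng(1)

def extract_features(x):
    # Hypothetical frozen embedding; a real system would use a pretrained CNN.
    return x  # here the "images" are already feature vectors

def make_class_data(class_id, n, dim=8):
    # Synthetic samples for one class, clustered around the class id.
    return rng.normal(loc=class_id, scale=1.0, size=(n, dim))

prototypes = {}  # class_id -> mean embedding

# Base session: many labelled samples per class.
for c in range(5):
    feats = extract_features(make_class_data(c, n=200))
    prototypes[c] = feats.mean(axis=0)

# Incremental sessions: two new classes per session, only 5 shots each.
for new_classes in [(5, 6), (7, 8)]:
    for c in new_classes:
        feats = extract_features(make_class_data(c, n=5))
        prototypes[c] = feats.mean(axis=0)

# Classification: nearest prototype over all classes seen so far.
def predict(x):
    classes = sorted(prototypes)
    dists = [np.linalg.norm(extract_features(x) - prototypes[c]) for c in classes]
    return classes[int(np.argmin(dists))]

query = make_class_data(6, n=1)[0]
print("predicted class for a sample drawn from class 6:", predict(query))
```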
