39 research outputs found

    Making use of fuzzy cognitive maps in agent-based modeling

    One of the main challenges in Agent-Based Modeling (ABM) is to model agents' preferences and behavioral rules so that they reflect the knowledge and decision-making processes of real-life stakeholders. To tackle this challenge, we demonstrate the potential of a participatory method, Fuzzy Cognitive Mapping (FCM), that aggregates agents' qualitative knowledge (i.e., knowledge co-production). In our proposed approach, the outcome of the FCM serves as a basis for designing agents' preferences and behavioral rules in the ABM. We apply this method to a social-ecological system of a farming community facing water scarcity.
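
    To make the FCM ingredient concrete, here is a minimal sketch of fuzzy-cognitive-map inference: concepts carry activation levels in [0, 1], signed weights encode stakeholder-elicited causal beliefs, and the map is iterated to a fixed point that can then inform agents' behavioral rules. The concept names, weights, and the sigmoid update convention below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def fcm_step(activations, weights, lam=1.0):
    """One inference step of a fuzzy cognitive map.

    activations: vector a_t of concept activation levels in [0, 1].
    weights:     matrix W where W[j, i] is the causal weight of
                 concept j on concept i, in [-1, 1].
    Uses the common "self-memory" convention a_{t+1} = f(a_t W + a_t)
    with a sigmoid squashing function (one of several conventions).
    """
    raw = activations @ weights + activations
    return 1.0 / (1.0 + np.exp(-lam * raw))

def fcm_run(a0, weights, tol=1e-6, max_iter=500):
    """Iterate until the map settles into a fixed point (or max_iter)."""
    a = np.asarray(a0, dtype=float)
    for _ in range(max_iter):
        a_next = fcm_step(a, weights)
        if np.max(np.abs(a_next - a)) < tol:
            return a_next
        a = a_next
    return a

# Toy map: water availability (0), crop yield (1), irrigation demand (2).
# Weights are invented: water helps yield, yield raises demand,
# demand depletes water, water lowers demand.
W = np.array([[ 0.0, 0.7, -0.4],
              [ 0.0, 0.0,  0.3],
              [-0.6, 0.0,  0.0]])
print(fcm_run([0.5, 0.5, 0.5], W))
```

    The steady-state activations from such a run are what the proposed approach would translate into agent preferences and decision rules.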

    A Formal Proof of PAC Learnability for Decision Stumps

    We present a formal proof in Lean of probably approximately correct (PAC) learnability of the concept class of decision stumps. This classic result in machine learning theory derives a bound on error probabilities for a simple type of classifier. Though such a proof appears simple on paper, analytic and measure-theoretic subtleties arise when carrying it out fully formally. Our proof is structured so as to separate reasoning about deterministic properties of a learning function from proofs of measurability and analysis of probabilities. (Comment: 13 pages; appeared in Certified Programs and Proofs (CPP) 2021.)
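
    For context, the on-paper argument being formalized runs roughly as follows (a standard textbook sketch for one-dimensional threshold stumps, not the paper's formal statement). The learner returns the tightest threshold consistent with the sample, so its error region is an interval adjacent to the true threshold; it errs by more than $\epsilon$ only if all $m$ samples miss the adjacent interval of probability mass $\epsilon$:

    $$\Pr\big[\mathrm{err}(\hat{h}_m) > \epsilon\big] \;\le\; (1-\epsilon)^m \;\le\; e^{-\epsilon m} \;\le\; \delta \quad\text{whenever}\quad m \ge \frac{1}{\epsilon}\ln\frac{1}{\delta}.$$

    Making this rigorous requires, among other things, showing that $\mathrm{err}(\hat{h}_m)$ is a measurable function of the sample, which is where the formal subtleties mentioned in the abstract arise.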

    Inclusive Jet and Hadron Suppression in a Multi-Stage Approach

    We present a new study of jet interactions in the Quark-Gluon Plasma created in high-energy heavy-ion collisions, using a multi-stage event generator within the JETSCAPE framework. We focus on medium-induced modifications in the rate of inclusive jets and high transverse momentum (high-$p_\mathrm{T}$) hadrons. Scattering-induced jet energy loss is calculated in two stages: a high-virtuality stage based on the MATTER model, in which scattering of highly virtual partons modifies the vacuum radiation pattern, and a second stage at lower jet virtuality based on the LBT model, in which leading partons gain and lose virtuality through scattering and radiation. Coherence effects that reduce the medium-induced emission rate in the MATTER phase are also included. The TRENTo model is used for initial conditions, and the (2+1)D VISHNU model for viscous hydrodynamic evolution. Jet interactions with the medium are modeled via 2-to-2 scattering with Debye-screened potentials, in which the recoiling partons are tracked, hadronized, and included in the jet clustering. Holes left in the medium are also tracked and subtracted to conserve transverse momentum. Calculations of the nuclear modification factor ($R_\mathrm{AA}$) for inclusive jets and high-$p_\mathrm{T}$ hadrons are compared to experimental measurements at RHIC and the LHC. Within this framework, we find that two energy-loss parameters, the coupling in the medium and the transition scale between the stages of jet modification, suffice to describe these data at all energies, for central and semi-central collisions, without re-scaling the jet transport coefficient $\hat{q}$. (Comment: 33 pages, 23 figures.)
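
    For readers unfamiliar with the observable: $R_\mathrm{AA}$ compares a heavy-ion spectrum to a binary-collision-scaled proton-proton reference, with $R_\mathrm{AA} < 1$ signaling suppression. The sketch below shows the arithmetic; the spectra and $\langle N_\mathrm{coll} \rangle$ value are invented for illustration, not JETSCAPE output:

```python
import numpy as np

def nuclear_modification_factor(spec_aa, spec_pp, n_coll):
    """R_AA(pT) = (1 / <N_coll>) * (dN_AA/dpT) / (dN_pp/dpT).

    spec_aa, spec_pp: per-event yields dN/dpT in matching pT bins.
    n_coll:           average number of binary nucleon-nucleon
                      collisions for the chosen centrality class.
    """
    spec_aa = np.asarray(spec_aa, dtype=float)
    spec_pp = np.asarray(spec_pp, dtype=float)
    return spec_aa / (n_coll * spec_pp)

# Toy spectra: a quenched AA spectrum gives R_AA < 1 at high pT.
pt = np.array([10.0, 20.0, 40.0, 80.0])   # GeV
pp = 1.0 / pt**6                          # steeply falling pp reference
aa = 500.0 * 0.4 / pt**6                  # suppressed AA yield, <N_coll> = 500
print(nuclear_modification_factor(aa, pp, n_coll=500.0))  # ~0.4 in every bin
```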

    Multi-scale evolution of charmed particles in a nuclear medium

    Parton energy-momentum exchange with the quark-gluon plasma (QGP) is a multi-scale problem. In this work, we calculate the interaction of charm quarks with the QGP within the higher-twist formalism at high virtuality and high energy using the MATTER model, while the low-virtuality, high-energy portion is treated via a (linearized) Boltzmann transport (LBT) formalism. Coherence effects that reduce the medium-induced emission rate in the MATTER model are also taken into account. The interplay between these two formalisms is studied in detail and used to produce a good description of the D-meson and charged-hadron nuclear modification factor $R_\mathrm{AA}$ across multiple centralities. All calculations were carried out using the JETSCAPE framework.
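
    To make the hand-off between the two stages concrete, here is a deliberately toy sketch of virtuality-ordered evolution with a transition scale, in the spirit of the MATTER-to-LBT interplay described above. The step functions, numerical values, and switching scale are all invented for illustration and bear no relation to the actual JETSCAPE interface:

```python
import random

Q_SWITCH = 2.0  # GeV; illustrative transition scale between the two stages

def matter_like_step(energy, virtuality):
    """Toy stand-in for a high-virtuality (MATTER-like) emission:
    each splitting sheds virtuality and a little energy."""
    return energy * random.uniform(0.8, 0.95), virtuality * random.uniform(0.3, 0.7)

def lbt_like_step(energy):
    """Toy stand-in for low-virtuality (LBT-like) transport:
    a small elastic energy loss per scattering in the medium."""
    return energy - random.uniform(0.1, 0.3)

def evolve(energy, virtuality, path_steps=10):
    # Stage 1: shower while the parton is still highly virtual.
    while virtuality > Q_SWITCH:
        energy, virtuality = matter_like_step(energy, virtuality)
    # Stage 2: near-on-shell transport through the medium.
    for _ in range(path_steps):
        energy = lbt_like_step(energy)
    return energy

print(evolve(energy=100.0, virtuality=50.0))  # charm energy after the medium
```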

    An Expanded Evaluation of Protein Function Prediction Methods Shows an Improvement in Accuracy

    Background: A major bottleneck in our understanding of the molecular underpinnings of life is the assignment of function to proteins. While molecular experiments provide the most reliable annotation of proteins, their relatively low throughput and restricted purview have led to an increasing role for computational function prediction. However, assessing methods for protein function prediction and tracking progress in the field remain challenging. Results: We conducted the second critical assessment of functional annotation (CAFA), a timed challenge to assess computational methods that automatically assign protein function. We evaluated 126 methods from 56 research groups on their ability to predict biological functions using the Gene Ontology and gene-disease associations using the Human Phenotype Ontology, on a set of 3681 proteins from 18 species. CAFA2 featured expanded analysis compared with CAFA1 with regard to data set size, variety, and assessment metrics. To review progress in the field, the analysis compared the best methods from CAFA1 to those of CAFA2. Conclusions: The top-performing methods in CAFA2 outperformed those from CAFA1. This increased accuracy can be attributed to a combination of the growing number of experimental annotations and improved methods for function prediction. The assessment also revealed that the definition of a top-performing algorithm is ontology-specific, that different performance metrics can be used to probe the nature of accurate predictions, and that predictions are relatively diverse in the biological process and human phenotype ontologies. While there was methodological improvement between CAFA1 and CAFA2, the interpretation of results and the usefulness of individual methods remain context-dependent.
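
    Among the assessment metrics, CAFA's headline protein-centric measure is $F_\mathrm{max}$: the best harmonic mean of threshold-averaged precision and recall over all decision thresholds. Below is a simplified sketch of that computation; it omits ontology propagation and other details of the official evaluation, and the proteins and GO terms are made up:

```python
import numpy as np

def f_max(pred_scores, true_terms, thresholds=np.linspace(0.01, 1.0, 100)):
    """Protein-centric Fmax (simplified sketch of the CAFA metric).

    pred_scores: dict protein -> dict {GO term: confidence in (0, 1]}
    true_terms:  dict protein -> set of experimentally annotated GO terms
    For each threshold tau, precision is averaged over proteins with at
    least one prediction above tau; recall over all benchmark proteins.
    """
    best = 0.0
    for tau in thresholds:
        precisions, recalls = [], []
        for prot, truth in true_terms.items():
            pred = {t for t, s in pred_scores.get(prot, {}).items() if s >= tau}
            if pred:
                precisions.append(len(pred & truth) / len(pred))
            recalls.append(len(pred & truth) / len(truth) if truth else 0.0)
        if precisions:
            pr, rc = np.mean(precisions), np.mean(recalls)
            if pr + rc > 0:
                best = max(best, 2 * pr * rc / (pr + rc))
    return best

# Toy example with two proteins and made-up term identifiers.
preds = {"P1": {"GO:1": 0.9, "GO:2": 0.4}, "P2": {"GO:3": 0.8}}
truth = {"P1": {"GO:1"}, "P2": {"GO:3", "GO:4"}}
print(round(f_max(preds, truth), 3))  # ~0.857, reached at tau in (0.4, 0.8]
```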

    Advances and Open Problems in Federated Learning

    Federated learning (FL) is a machine learning setting where many clients (e.g., mobile devices or whole organizations) collaboratively train a model under the orchestration of a central server (e.g., a service provider), while keeping the training data decentralized. FL embodies the principles of focused data collection and minimization, and can mitigate many of the systemic privacy risks and costs resulting from traditional, centralized machine learning and data science approaches. Motivated by the explosive growth in FL research, this paper discusses recent advances and presents an extensive collection of open problems and challenges. (Comment: Published in Foundations and Trends in Machine Learning Vol 4 Issue 1. See: https://www.nowpublishers.com/article/Details/MAL-08)
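
    As a concrete anchor for the setting described above, here is a minimal sketch of Federated Averaging (FedAvg, McMahan et al., 2017), the baseline FL training algorithm, on a toy linear-regression task. The client data, learning rate, and round counts are illustrative only:

```python
import numpy as np

def fed_avg_round(global_w, client_data, lr=0.1, local_epochs=5):
    """One round of Federated Averaging on a linear-regression toy problem.

    Each client runs local gradient descent from the current global
    weights; the server then averages the resulting weights, weighted
    by local dataset size. Raw data never leaves the clients.
    """
    updates, sizes = [], []
    for X, y in client_data:
        w = global_w.copy()
        for _ in range(local_epochs):
            grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient
            w -= lr * grad
        updates.append(w)
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.asarray(sizes, dtype=float))

# Toy: three clients holding differently skewed samples of y = 3x.
rng = np.random.default_rng(0)
clients = []
for shift in (0.0, 1.0, 2.0):
    X = rng.uniform(shift, shift + 1, size=(50, 1))
    clients.append((X, (X * 3.0).ravel() + rng.normal(0, 0.01, 50)))

w = np.zeros(1)
for _ in range(30):
    w = fed_avg_round(w, clients)
print(w)  # approaches [3.0]
```

    Much of the survey concerns what this sketch leaves out: client sampling, communication compression, robustness to stragglers and adversaries, and formal privacy guarantees for the exchanged updates.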