Making use of fuzzy cognitive maps in agent-based modeling
One of the main challenges in Agent-Based Modeling (ABM) is to model agents' preferences and behavioral rules such that they reflect the knowledge and decision-making processes of real-life stakeholders. To tackle this challenge, we demonstrate the potential use of a participatory method, Fuzzy Cognitive Mapping (FCM), that aggregates agents' qualitative knowledge (i.e., knowledge co-production). In our proposed approach, the outcome of FCM serves as a basis for designing agents' preferences and behavioral rules in ABM. We apply this method to a social-ecological system of a farming community facing water scarcity.
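As a rough sketch of how an FCM produced through such a participatory process might drive agent behavior, the Python snippet below iterates a small concept map to a near steady state and derives a simple decision rule from the converged activations. The concept names, influence weights, and decision threshold are illustrative assumptions, not values from the study.

```python
import numpy as np

def fcm_step(state, weights, squash=lambda x: 1 / (1 + np.exp(-x))):
    """One synchronous FCM update: each concept aggregates its weighted inputs."""
    return squash(weights.T @ state)

# Hypothetical concepts elicited from stakeholders (e.g., a farming community).
concepts = ["water_scarcity", "crop_yield", "irrigation_use", "income"]

# Hypothetical signed influence weights: W[i, j] is the effect of concept i on concept j.
W = np.array([
    [0.0, -0.7,  0.6,  0.0],   # water scarcity lowers yield, raises irrigation use
    [0.0,  0.0,  0.0,  0.8],   # yield raises income
    [0.4,  0.3,  0.0,  0.0],   # irrigation depletes water but supports yield
    [0.0,  0.0,  0.0,  0.0],
])

state = np.array([0.9, 0.5, 0.5, 0.5])  # initial activation levels in [0, 1]
for _ in range(50):                      # iterate until approximately stable
    state = fcm_step(state, W)

# An agent rule in the ABM could then read the converged activations, e.g.:
reduce_irrigation = state[concepts.index("income")] < 0.5   # hypothetical threshold
print(dict(zip(concepts, state.round(2))), reduce_irrigation)
```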
Combining macula clinical signs and patient characteristics for age-related macular degeneration diagnosis: a machine learning approach
Background: To investigate machine learning methods, ranging from simpler interpretable techniques to complex (non-linear) "black-box" approaches, for automated diagnosis of Age-related Macular Degeneration (AMD).
Methods: Data from healthy subjects and patients diagnosed with AMD or other retinal diseases were collected during routine visits via an Electronic Health Record (EHR) system. Patients' attributes included demographics and, for each eye, the presence/absence of major AMD-related clinical signs (soft drusen, retinal pigment epithelium defects/pigment mottling, depigmentation area, subretinal haemorrhage, subretinal fluid, macula thickness, macular scar, subretinal fibrosis). Interpretable ("white-box") techniques, including logistic regression and decision trees, as well as less interpretable ("black-box") techniques, such as support vector machines (SVM), random forests and AdaBoost, were used to develop models (trained and validated on unseen data) to diagnose AMD. The gold standard was confirmed diagnosis of AMD by physicians. Sensitivity, specificity and area under the receiver operating characteristic curve (AUC) were used to assess performance.
Results: The study population included 487 patients (912 eyes). In terms of AUC, random forests, logistic regression and AdaBoost showed a mean performance of 0.92, followed by SVM and decision trees (0.90). All machine learning models identified soft drusen and age as the most discriminating variables in clinicians' decision pathways to diagnose AMD.
Conclusions: Both black-box and white-box methods performed well in identifying diagnoses of AMD and their decision pathways. Machine learning models developed through the proposed approach, relying on clinical signs identified by retinal specialists, could be embedded into EHR systems to provide physicians with real-time (interpretable) support.
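A minimal sketch of the kind of white-box/black-box model comparison described above, using scikit-learn and AUC under cross-validation; the file name, feature columns, and label column are hypothetical placeholders, not the study's actual EHR schema.

```python
import pandas as pd
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Placeholder data: one row per eye, binary clinical signs plus age (not the real EHR extract).
df = pd.read_csv("amd_eyes.csv")                                 # hypothetical file
X = df[["age", "soft_drusen", "depigmentation_area", "subretinal_fluid"]]
y = df["amd_diagnosis"]                                          # physician-confirmed gold standard

models = {
    "logistic_regression": make_pipeline(StandardScaler(), LogisticRegression()),
    "decision_tree": DecisionTreeClassifier(max_depth=4),
    "svm": make_pipeline(StandardScaler(), SVC(probability=True)),
    "random_forest": RandomForestClassifier(n_estimators=200),
    "adaboost": AdaBoostClassifier(),
}

# Compare white-box and black-box models on held-out folds by area under the ROC curve.
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.2f}")
```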
A Formal Proof of PAC Learnability for Decision Stumps
We present a formal proof in Lean of probably approximately correct (PAC)
learnability of the concept class of decision stumps. This classic result in
machine learning theory derives a bound on error probabilities for a simple
type of classifier. Though such a proof appears simple on paper, analytic and
measure-theoretic subtleties arise when carrying it out fully formally. Our
proof is structured so as to separate reasoning about deterministic properties
of a learning function from proofs of measurability and analysis of
probabilities.
Comment: 13 pages, appeared in Certified Programs and Proofs (CPP) 202
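For context, the guarantee such a formalization targets is the standard PAC learnability condition, stated below in a generic textbook form (not the paper's exact Lean statement).

```latex
% A concept class \mathcal{C} over a domain X is PAC learnable if there exist a learner A and a
% sample-size bound m(1/\varepsilon, 1/\delta), polynomial in its arguments, such that for every
% distribution D on X, every target c \in \mathcal{C}, and all \varepsilon, \delta \in (0,1),
% drawing m \ge m(1/\varepsilon, 1/\delta) labeled examples S \sim D^{m} guarantees
\Pr_{S \sim D^{m}}\!\left[\operatorname{err}_{D}\!\left(A(S)\right) \le \varepsilon\right] \ge 1 - \delta,
\qquad \text{where } \operatorname{err}_{D}(h) = \Pr_{x \sim D}\!\left[h(x) \ne c(x)\right].
```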
New metric improving Bayesian calibration of a multistage approach studying hadron and inclusive jet suppression
We study parton energy-momentum exchange with the quark gluon plasma (QGP) within a multistage approach composed of in-medium Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) evolution at high virtuality and a (linearized) Boltzmann transport formalism at lower virtuality. This multistage simulation is then calibrated against the nuclear modification factors of high-pT charged hadrons, D mesons, and inclusive jets, using Bayesian model-to-data comparison, to extract the virtuality-dependent transverse momentum broadening transport coefficient q̂. To facilitate this undertaking, we develop a quantitative metric for validating the Bayesian workflow, which is used to analyze the sensitivity of various model parameters to individual observables. This new metric is shown to improve Bayesian model emulation and should prove valuable for future analyses of this kind.
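As a minimal illustration of emulator-based Bayesian calibration in the spirit of the workflow above (not the JETSCAPE analysis itself), the sketch below fits a Gaussian-process surrogate to hypothetical model runs and evaluates a one-parameter posterior on a grid; the design points, observable, uncertainties, and flat prior are all placeholder assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical design: full-model runs at sampled values of one parameter
# (e.g., an in-medium coupling), each producing one predicted observable.
design_params = np.linspace(0.1, 0.5, 20).reshape(-1, 1)    # placeholder design points
model_output = np.exp(-5.0 * design_params).ravel()          # placeholder model predictions

# Train a Gaussian-process emulator as a fast surrogate for the expensive simulation.
emulator = GaussianProcessRegressor(kernel=RBF(length_scale=0.1), normalize_y=True)
emulator.fit(design_params, model_output)

# Bayesian model-to-data comparison on a parameter grid (flat prior, Gaussian likelihood),
# folding the emulator's own uncertainty into the total variance.
data, data_err = 0.3, 0.05                                    # placeholder measurement
grid = np.linspace(0.1, 0.5, 200).reshape(-1, 1)
pred, pred_std = emulator.predict(grid, return_std=True)
total_var = data_err**2 + pred_std**2
log_post = -0.5 * (pred - data) ** 2 / total_var - 0.5 * np.log(2 * np.pi * total_var)
posterior = np.exp(log_post - log_post.max())
posterior /= posterior.sum()                                  # normalize over the grid
print("posterior mode:", grid.ravel()[posterior.argmax()])
```

A validation metric of the kind described above would then probe, for example, how sharply this posterior responds when individual observables or parameters are varied.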
Inclusive Jet and Hadron Suppression in a Multi-Stage Approach
We present a new study of jet interactions in the Quark-Gluon Plasma created
in high-energy heavy-ion collisions, using a multi-stage event generator within
the JETSCAPE framework. We focus on medium-induced modifications in the rate of
inclusive jets and high transverse momentum (high-pT) hadrons.
Scattering-induced jet energy loss is calculated in two stages: A high
virtuality stage based on the MATTER model, in which scattering of highly
virtual partons modifies the vacuum radiation pattern, and a second stage at
lower jet virtuality based on the LBT model, in which leading partons gain and
lose virtuality by scattering and radiation. Coherence effects that reduce the
medium-induced emission rate in the MATTER phase are also included. The
TRENTo model is used for initial conditions, and the (2+1)D VISHNU model is
used for viscous hydrodynamic evolution. Jet interactions with the medium are
modeled via 2-to-2 scattering with Debye screened potentials, in which the
recoiling partons are tracked, hadronized, and included in the jet clustering.
Holes left in the medium are also tracked and subtracted to conserve transverse
momentum. Calculations of the nuclear modification factor (RAA)
for inclusive jets and high-pT hadrons are compared to
experimental measurements at RHIC and the LHC. Within this framework, we find
that two parameters for energy-loss, the coupling in the medium and the
transition scale between the stages of jet modification, suffice to
successfully describe these data at all energies, for central and semi-central
collisions, without re-scaling the jet transport coefficient q̂.
Comment: 33 pages, 23 figures
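For reference, the nuclear modification factor compared to RHIC and LHC data here is conventionally defined as follows (a standard definition, with notation not taken from this paper):

```latex
R_{AA}(p_T) \;=\; \frac{\mathrm{d}N_{\mathrm{AA}}/\mathrm{d}p_T}
                       {\langle N_{\mathrm{coll}} \rangle \,\mathrm{d}N_{pp}/\mathrm{d}p_T},
% where \langle N_{\mathrm{coll}} \rangle is the average number of binary nucleon-nucleon
% collisions in the centrality class, so R_{AA} = 1 in the absence of nuclear effects.
```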
Multi-scale evolution of charmed particles in a nuclear medium
Parton energy-momentum exchange with the quark gluon plasma (QGP) is a
multi-scale problem. In this work, we calculate the interaction of charm quarks
with the QGP within the higher twist formalism at high virtuality and high
energy using the MATTER model, while the low virtuality and high energy portion
is treated via a (linearized) Boltzmann Transport (LBT) formalism. Coherence
effects that reduce the medium-induced emission rate in the MATTER model are
also taken into account. The interplay between these two formalisms is studied
in detail and used to produce a good description of the D-meson and charged
hadron nuclear modification factor RAA across multiple centralities. All
calculations were carried out using the JETSCAPE framework.
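The hand-off between the two stages can be pictured with the toy sketch below: a parton evolves in a high-virtuality (MATTER-like) stage until its virtuality drops below a switching scale, after which a transport (LBT-like) stage takes over. The switching scale, step sizes, and evolution rules are invented placeholders, not the actual JETSCAPE implementation.

```python
from dataclasses import dataclass

Q_SWITCH = 2.0  # hypothetical virtuality switching scale in GeV (a fit parameter in such analyses)

@dataclass
class Parton:
    energy: float       # GeV
    virtuality: float   # GeV

def evolve_high_virtuality(p: Parton) -> Parton:
    """Placeholder for a MATTER-like stage: radiation degrades virtuality and some energy."""
    return Parton(p.energy * 0.95, p.virtuality * 0.5)

def evolve_transport(p: Parton) -> Parton:
    """Placeholder for an LBT-like stage: scattering in the medium drains energy."""
    return Parton(p.energy - 0.2, p.virtuality)

def evolve(p: Parton, steps: int = 20) -> Parton:
    # Hand the parton from the high-virtuality stage to the transport stage
    # once its virtuality drops below the switching scale.
    for _ in range(steps):
        p = evolve_high_virtuality(p) if p.virtuality > Q_SWITCH else evolve_transport(p)
    return p

print(evolve(Parton(energy=50.0, virtuality=25.0)))
```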
Decision-support tools to build climate resilience against emerging infectious diseases in Europe and beyond
Climate change is one of several drivers of recurrent outbreaks and geographical range expansion of infectious diseases in Europe. We propose a framework for the co-production of policy-relevant indicators and decision-support tools that track past, present, and future climate-induced disease risks across hazard, exposure, and vulnerability domains at the animal, human, and environmental interface. This entails the co-development of early warning and response systems and tools to assess the costs and benefits of climate change adaptation and mitigation measures across sectors, to increase health system resilience at regional and local levels and reveal novel policy entry points and opportunities. Our approach involves multi-level engagement, innovative methodologies, and novel data streams. We take advantage of intelligence generated locally and empirically to quantify effects in areas experiencing rapid urban transformation and heterogeneous climate-induced disease threats. Our goal is to reduce the knowledge-to-action gap by developing an integrated One Health–Climate Risk framework.
An Expanded Evaluation of Protein Function Prediction Methods Shows an Improvement In Accuracy
Background: A major bottleneck in our understanding of the molecular underpinnings of life is the assignment of function to proteins. While molecular experiments provide the most reliable annotation of proteins, their relatively low throughput and restricted purview have led to an increasing role for computational function prediction. However, assessing methods for protein function prediction and tracking progress in the field remain challenging.
Results: We conducted the second critical assessment of functional annotation (CAFA), a timed challenge to assess computational methods that automatically assign protein function. We evaluated 126 methods from 56 research groups for their ability to predict biological functions using Gene Ontology and gene-disease associations using Human Phenotype Ontology on a set of 3681 proteins from 18 species. CAFA2 featured expanded analysis compared with CAFA1 with regard to data set size, variety, and assessment metrics. To review progress in the field, the analysis compared the best methods from CAFA1 to those of CAFA2.
Conclusions: The top-performing methods in CAFA2 outperformed those from CAFA1. This increased accuracy can be attributed to a combination of the growing number of experimental annotations and improved methods for function prediction. The assessment also revealed that the definition of top-performing algorithms is ontology-specific, that different performance metrics can be used to probe the nature of accurate predictions, and that predictions in the biological process and human phenotype ontologies remain relatively diverse. While there was methodological improvement between CAFA1 and CAFA2, the interpretation of results and the usefulness of individual methods remain context-dependent.
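As an illustration of the protein-centric evaluation used in the CAFA assessments, the sketch below computes an Fmax-style score: maximizing, over decision thresholds, the harmonic mean of average precision (over proteins with at least one prediction) and average recall (over all benchmark proteins). The toy predictions and annotations are placeholders, and the official evaluation includes further details not shown here.

```python
import numpy as np

def f_max(predictions, truth, thresholds=np.linspace(0.01, 1.0, 100)):
    """Protein-centric Fmax over a set of proteins with GO-term scores and true annotations."""
    best = 0.0
    for t in thresholds:
        precisions, recalls = [], []
        for protein, true_terms in truth.items():
            scored = predictions.get(protein, {})
            pred_terms = {term for term, score in scored.items() if score >= t}
            if pred_terms:  # precision only counts proteins with predictions at this threshold
                precisions.append(len(pred_terms & true_terms) / len(pred_terms))
            recalls.append(len(pred_terms & true_terms) / len(true_terms))
        if precisions:
            pr, rc = np.mean(precisions), np.mean(recalls)
            if pr + rc > 0:
                best = max(best, 2 * pr * rc / (pr + rc))
    return best

# Toy example: predicted GO-term scores per protein vs. experimental annotations (placeholders).
preds = {"P1": {"GO:0001": 0.9, "GO:0002": 0.4}, "P2": {"GO:0001": 0.2}}
truth = {"P1": {"GO:0001"}, "P2": {"GO:0002"}}
print(f_max(preds, truth))
```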
Advances and Open Problems in Federated Learning
Federated learning (FL) is a machine learning setting where many clients
(e.g. mobile devices or whole organizations) collaboratively train a model
under the orchestration of a central server (e.g. service provider), while
keeping the training data decentralized. FL embodies the principles of focused
data collection and minimization, and can mitigate many of the systemic privacy
risks and costs resulting from traditional, centralized machine learning and
data science approaches. Motivated by the explosive growth in FL research, this
paper discusses recent advances and presents an extensive collection of open
problems and challenges.
Comment: Published in Foundations and Trends in Machine Learning, Vol. 4, Issue 1.
See: https://www.nowpublishers.com/article/Details/MAL-08
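As one concrete reference point for the setting described above, the sketch below implements the basic federated averaging (FedAvg) pattern on a toy linear-regression task: clients train locally on their own data shards, and a server aggregates the resulting models weighted by client data size. The data, model, and hyperparameters are placeholders, and the survey covers far more than this single algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's local training: a few epochs of gradient descent on squared loss."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Placeholder: each client holds its own decentralized shard of (features, labels).
true_w = np.array([1.0, -2.0, 0.5])
clients = []
for _ in range(4):
    X = rng.normal(size=(50, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

w_global = np.zeros(3)
for _ in range(20):                                   # communication rounds
    local_models = [local_update(w_global, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    # Server aggregation: average of client models, weighted by local data size.
    w_global = np.average(local_models, axis=0, weights=sizes)

print("estimated weights:", w_global.round(2))
```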