77 research outputs found

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    If interpretability is the answer, what is the question?

    Because it can model even complex dependencies, machine learning (ML) can be used to tackle a broad range of (high-stakes) prediction problems. The complexity of the resulting models comes at the cost of transparency, meaning that it is difficult to understand the model by inspecting its parameters. This opacity is considered problematic since it hampers the transfer of knowledge from the model, undermines the agency of individuals affected by algorithmic decisions, and makes it more challenging to expose non-robust or unethical behaviour. To tackle the opacity of ML models, the field of interpretable machine learning (IML) has emerged. The field is motivated by the idea that if we could understand the model's behaviour -- either by making the model itself interpretable or by inspecting post-hoc explanations -- we could also expose unethical and non-robust behaviour, learn about the data-generating process, and restore the agency of affected individuals. IML is not only a highly active area of research, but the developed techniques are also widely applied in both industry and the sciences. Despite the popularity of IML, the field faces fundamental criticism, questioning whether IML actually helps in tackling the aforementioned problems of ML and even whether it should be a field of research in the first place. First and foremost, IML is criticised for lacking a clear goal and, thus, a clear definition of what it means for a model to be interpretable. On a similar note, the meaning of existing methods is often unclear, so they may be misunderstood or even misused to hide unethical behaviour. Moreover, estimating conditional-sampling-based techniques poses a significant computational challenge. With the contributions included in this thesis, we tackle these three challenges for IML. We join a range of work in arguing that the field struggles to define and evaluate "interpretability" because incoherent interpretation goals are conflated. However, the different goals can be disentangled such that coherent requirements can inform the derivation of the respective target estimands. We demonstrate this with the examples of two interpretation contexts: recourse and scientific inference. To tackle the misinterpretation of IML methods, we suggest deriving formal interpretation rules that link explanations to aspects of the model and data. In our work, we specifically focus on interpreting feature importance. Furthermore, we collect interpretation pitfalls and communicate them to a broader audience. To efficiently estimate conditional-sampling-based interpretation techniques, we propose two methods that leverage the dependence structure in the data to simplify the estimation problems for Conditional Feature Importance (CFI) and SAGE. A causal perspective proved vital in tackling these challenges: first, because IML problems such as algorithmic recourse are inherently causal; second, because causality helps to disentangle the different aspects of model and data and, therefore, to distinguish the insights that different methods provide; and third, because algorithms developed for causal structure learning can be leveraged for the efficient estimation of conditional-sampling-based IML methods.
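
    The thesis's own estimators exploit the dependence structure in the data; as a simpler illustration of the underlying idea only, the sketch below computes a CFI score by replacing one feature with draws from an approximate conditional distribution and measuring the resulting loss increase. The linear-Gaussian conditional sampler and the MSE loss are illustrative assumptions, not the methods proposed in the thesis.

        # Minimal CFI sketch: importance of feature j is the loss increase
        # when X[:, j] is replaced by draws from an (approximate) conditional
        # distribution P(X_j | X_-j). The linear-Gaussian sampler below is an
        # illustrative simplification.
        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.metrics import mean_squared_error

        def conditional_feature_importance(model, X, y, j, n_repeats=10, seed=None):
            rng = np.random.default_rng(seed)
            base_loss = mean_squared_error(y, model.predict(X))

            # Approximate P(X_j | X_-j) with a linear model plus Gaussian noise.
            rest = np.delete(X, j, axis=1)
            sampler = LinearRegression().fit(rest, X[:, j])
            resid_sd = np.std(X[:, j] - sampler.predict(rest))

            losses = []
            for _ in range(n_repeats):
                X_tilde = X.copy()
                X_tilde[:, j] = sampler.predict(rest) + rng.normal(0.0, resid_sd, len(X))
                losses.append(mean_squared_error(y, model.predict(X_tilde)))
            return np.mean(losses) - base_loss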

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    Applications of Molecular Dynamics simulations for biomolecular systems and improvements to density-based clustering in the analysis

    Molecular Dynamics simulations provide a powerful tool to study biomolecular systems with atomistic detail. The key to a better understanding of the function and behaviour of these molecules can often be found in their structural variability. Simulations can help to expose this information, which is otherwise hard or impossible to attain experimentally. This work covers two application examples for which sampling and characterising the conformational ensemble could reveal the structural basis needed to answer a topical research question. For the fungal toxin phalloidin—a small bicyclic peptide—observed product ratios in different cyclisation reactions could be rationalised by assessing the conformational pre-organisation of precursor fragments. For the C-type lectin receptor langerin, conformational changes induced by different side-chain protonations could deliver an explanation of the pH dependency of the protein's calcium binding. The investigations were accompanied by the continued development of a density-based clustering protocol into a dedicated software package, which is broadly applicable to the task of extracting conformational states from Molecular Dynamics data.
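
    The extraction step can be sketched in a few lines. Below, scikit-learn's DBSCAN stands in for the dedicated density-based protocol developed in this work, and the dihedral-angle features, synthetic data, and thresholds are purely illustrative assumptions.

        # Sketch: density-based clustering of MD frames into conformational
        # states. DBSCAN is a stand-in for the protocol developed in the thesis.
        import numpy as np
        from sklearn.cluster import DBSCAN

        # Assume `dihedrals` holds backbone torsion angles in radians, one row
        # per trajectory frame (here: synthetic placeholder data).
        rng = np.random.default_rng(0)
        dihedrals = rng.uniform(-np.pi, np.pi, size=(5000, 4))

        # Embed angles on the unit circle so that -pi and pi are close.
        features = np.concatenate([np.sin(dihedrals), np.cos(dihedrals)], axis=1)

        labels = DBSCAN(eps=0.3, min_samples=20).fit_predict(features)
        # Each non-negative label is a conformational state; -1 marks noise frames.
        states, counts = np.unique(labels[labels >= 0], return_counts=True)
        print(dict(zip(states.tolist(), counts.tolist())))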

    Towards a New Ontology of Polling Inaccuracy: The Benefits of Conceiving of Elections as Heterogeneous Phenomena for the Study of Pre-election Polling Error

    A puzzle exists at the heart of pre-election polling. Despite continual methodological improvement and repeated attempts to identify and correct issues laid bare by misprediction, average polling accuracy has not notably improved since the conclusion of the Second World War. In this thesis, I contend that this is the result of a poll-level focus within the study of polling error that is incommensurate both with its evolution over time and with the nature of the elections that polls seek to predict. I hold that differences between elections stand as a plausible source of polling error and situate them within a novel four-level model of sources of polling error. By establishing the heterogeneous nature of elections as phenomena and its expected impact on polling error, I propose a new election-level ontology through which the inaccuracy of polls can be understood. I test the empirical validity of this new ontology by using a novel multi-level model to analyse error across the most expansive polling dataset assembled to date, encompassing 11,832 in-campaign polls conducted in 497 elections across 83 countries, finding that membership within different elections meaningfully impacts polling error variation. With the empirical validity of my proposed ontology established, I engage in an exploratory analysis of its benefits, finding electoral characteristics to be useful in the prediction of polling error. Ultimately, I conclude that the adoption of a new, multi-level ontology of polling error centred on the importance of electoral heterogeneity not only offers a more comprehensive theoretical account of its sources than current understandings, but is also more specifically tailored to the reality of pre-election polling than existing alternatives. I also contend that it offers pronounced practical benefits, illuminating those circumstances in which polling error is likely to vary.
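
    To make the multi-level idea concrete, the sketch below fits a random-intercept model in which polls are nested within elections, so that election membership can absorb part of the variance in polling error. The simulated data, column names, and error definition are illustrative assumptions, not the thesis's exact specification.

        # Sketch: poll-level errors with a random intercept per election.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        n_elections, polls_per = 40, 25
        election = np.repeat(np.arange(n_elections), polls_per)
        election_effect = rng.normal(0.0, 2.0, n_elections)[election]
        df = pd.DataFrame({
            "election": election,
            "days_to_election": rng.integers(1, 60, n_elections * polls_per),
            "abs_error": np.abs(election_effect
                                + rng.normal(0.0, 1.5, n_elections * polls_per)),
        })

        # Random intercept for each election; fixed effect for campaign timing.
        model = smf.mixedlm("abs_error ~ days_to_election", df, groups=df["election"])
        fit = model.fit()
        print(fit.summary())  # the group variance estimates the election-level share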

    LIPIcs, Volume 274, ESA 2023, Complete Volume

    The Fifteenth Marcel Grossmann Meeting

    The three volumes of the proceedings of MG15 give a broad view of all aspects of gravitational physics and astrophysics, from mathematical issues to recent observations and experiments. The scientific program of the meeting included 40 morning plenary talks over 6 days, 5 evening popular talks and nearly 100 parallel sessions on 71 topics spread over 4 afternoons. These proceedings are a representative sample of the very many oral and poster presentations made at the meeting. Part A contains plenary and review articles and the contributions from some parallel sessions, while Parts B and C consist of those from the remaining parallel sessions. The contents range from the mathematical foundations of classical and quantum gravitational theories including recent developments in string theory, to precision tests of general relativity including progress towards the detection of gravitational waves, and from supernova cosmology to relativistic astrophysics, including topics such as gamma ray bursts, black hole physics both in our galaxy and in active galactic nuclei in other galaxies, and neutron star, pulsar and white dwarf astrophysics. Parallel sessions touch on dark matter, neutrinos, X-ray sources, astrophysical black holes, neutron stars, white dwarfs, binary systems, radiative transfer, accretion disks, quasars, gamma ray bursts, supernovas, alternative gravitational theories, perturbations of collapsed objects, analog models, black hole thermodynamics, numerical relativity, gravitational lensing, large scale structure, observational cosmology, early universe models and cosmic microwave background anisotropies, inhomogeneous cosmology, inflation, global structure, singularities, chaos, Einstein-Maxwell systems, wormholes, exact solutions of Einstein's equations, gravitational waves, gravitational wave detectors and data analysis, precision gravitational measurements, quantum gravity and loop quantum gravity, quantum cosmology, strings and branes, self-gravitating systems, gamma ray astronomy, cosmic rays and the history of general relativity.

    Review of Particle Physics

    The Review summarizes much of particle physics and cosmology. Using data from previous editions, plus 2,143 new measurements from 709 papers, we list, evaluate, and average measured properties of gauge bosons and the recently discovered Higgs boson, leptons, quarks, mesons, and baryons. We summarize searches for hypothetical particles such as supersymmetric particles, heavy bosons, axions, dark photons, etc. Particle properties and search limits are listed in Summary Tables. We give numerous tables, figures, formulae, and reviews of topics such as Higgs Boson Physics, Supersymmetry, Grand Unified Theories, Neutrino Mixing, Dark Energy, Dark Matter, Cosmology, Particle Detectors, Colliders, Probability and Statistics. Among the 120 reviews are many that are new or heavily revised, including a new review on Machine Learning, and one on Spectroscopy of Light Meson Resonances. The Review is divided into two volumes. Volume 1 includes the Summary Tables and 97 review articles. Volume 2 consists of the Particle Listings and also contains 23 reviews that address specific aspects of the data presented in the Listings. The complete Review (both volumes) is published online on the website of the Particle Data Group (pdg.lbl.gov) and in a journal. Volume 1 is available in print as the PDG Book. A Particle Physics Booklet with the Summary Tables and essential tables, figures, and equations from selected review articles is available in print, as a web version optimized for use on phones, and as an Android app. Funding: United States Department of Energy (DOE) DE-AC02-05CH11231; Government of Japan (Ministry of Education, Culture, Sports, Science and Technology); Istituto Nazionale di Fisica Nucleare (INFN); Physical Society of Japan (JPS); European Laboratory for Particle Physics (CERN).
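
    The averaging the Review refers to is, in the standard case, an inverse-variance weighted mean whose uncertainty is inflated by the PDG scale factor S = sqrt(chi^2/(N-1)) when the N measurements disagree. A minimal worked sketch with made-up numbers:

        # Inverse-variance weighted average with the PDG scale factor
        # (measurements and uncertainties below are invented for illustration).
        import numpy as np

        x = np.array([1.197, 1.212, 1.204])      # hypothetical measurements
        sigma = np.array([0.006, 0.009, 0.005])  # their quoted uncertainties

        w = 1.0 / sigma**2
        mean = np.sum(w * x) / np.sum(w)              # weighted mean
        err = np.sqrt(1.0 / np.sum(w))                # its uncertainty
        chi2 = np.sum(w * (x - mean) ** 2)
        scale = max(1.0, np.sqrt(chi2 / (len(x) - 1)))  # PDG scale factor S

        print(f"average = {mean:.4f} +/- {err * scale:.4f} (S = {scale:.2f})")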

    Models, Simulations, and the Reduction of Complexity

    Modern science is a model-building activity. But how are models constructed? How are they related to theories and data? How do they explain complex scientific phenomena, and what role do computer simulations play? To address these questions, which are highly relevant to scientists as well as to philosophers of science, eight leading natural, engineering and social scientists reflect upon their modeling work, and eight philosophers provide a commentary.

    ECOS 2012

    The 8-volume set contains the Proceedings of the 25th ECOS International Conference (ECOS 2012), held in Perugia, Italy, June 26th to June 29th, 2012. ECOS is an acronym for Efficiency, Cost, Optimization and Simulation (of energy conversion systems and processes), summarizing the topics covered: Thermodynamics, Heat and Mass Transfer, Exergy and Second Law Analysis, Process Integration and Heat Exchanger Networks, Fluid Dynamics and Power Plant Components, Fuel Cells, Simulation of Energy Conversion Systems, Renewable Energies, Thermo-Economic Analysis and Optimisation, Combustion, Chemical Reactors, Carbon Capture and Sequestration, Building/Urban/Complex Energy Systems, Water Desalination and Use of Water Resources, Energy Systems - Environmental and Sustainability Issues, System Operation/Control/Diagnosis and Prognosis, and Industrial Ecology.