LIPIcs, Volume 251, ITCS 2023, Complete Volume
"Le présent est plein de l'avenir, et chargé du passé": Lectures of the XI. International Leibniz Congress, 31 July – 4 August 2023, Leibniz Universität Hannover, Germany. Volume 2
[No abstract available] Funding: Deutsche Forschungsgemeinschaft (DFG), project no. 517991912; VGH Versicherung; Niedersächsisches Ministerium für Wissenschaft und Kultur (MWK)
Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5
This fifth volume of Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different fields of application and in mathematics, and is available in open access. The contributions collected in this volume have either been published or presented in international conferences, seminars, workshops and journals since the dissemination of the fourth volume in 2015, or they are new. The contributions in each part of this volume are ordered chronologically.
The first part of this book presents theoretical advances on DSmT, dealing mainly with modified Proportional Conflict Redistribution (PCR) rules of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of the (quasi-)vacuous belief assignment in the fusion of sources of evidence, together with their MATLAB code.
Because more applications of DSmT have emerged in the years since the publication of the fourth DSmT book in 2015, the second part of this volume covers selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and networks for ship classification.
Finally, the third part presents contributions related to belief functions in general, published or presented over the years since 2015. These contributions concern decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of Bayes' theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, the negator of belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions.
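The PCR5 rule recurring throughout these contributions can be sketched for two sources: the conjunctive combination is computed first, and each partial conflict is then redistributed to the two conflicting focal elements in proportion to their masses. The frame, masses, and function names below are illustrative assumptions, not the book's MATLAB code.

```python
from itertools import product

def conjunctive(m1, m2):
    """Conjunctive combination: the mass of each pair of focal elements
    goes to their intersection (focal elements are frozensets)."""
    out = {}
    for (x, mx), (y, my) in product(m1.items(), m2.items()):
        z = x & y
        out[z] = out.get(z, 0.0) + mx * my
    return out

def pcr5(m1, m2):
    """PCR5 for two sources: keep the non-conflicting conjunctive masses and
    redistribute each partial conflict m1(X)*m2(Y), with X and Y disjoint,
    back to X and Y proportionally to m1(X) and m2(Y)."""
    out = {k: v for k, v in conjunctive(m1, m2).items() if k}  # drop empty set
    for (x, mx), (y, my) in product(m1.items(), m2.items()):
        if not (x & y) and mx + my > 0:
            out[x] = out.get(x, 0.0) + mx * mx * my / (mx + my)
            out[y] = out.get(y, 0.0) + my * mx * my / (mx + my)
    return out

A, B = frozenset({"A"}), frozenset({"B"})
AB = A | B
m1 = {A: 0.6, B: 0.3, AB: 0.1}
m2 = {A: 0.2, B: 0.7, AB: 0.1}
fused = pcr5(m1, m2)
print({tuple(sorted(k)): round(v, 4) for k, v in fused.items()})
print(round(sum(fused.values()), 10))  # 1.0: PCR5 preserves total mass
```

Unlike Dempster's rule, no mass is lost to normalization: the conflict 0.48 here is redistributed entirely to A and B.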
Automatic Generation of Personalized Recommendations in eCoaching
This thesis deals with eCoaching for personalized, real-time lifestyle support using information and communication technology. The challenge is to design, develop, and technically evaluate a prototype of an intelligent eCoach that automatically generates personalized, evidence-based recommendations for a healthier lifestyle. The developed solution focuses on improving physical activity. The prototype uses wearable medical activity sensors. The collected data are represented semantically, and artificial-intelligence algorithms automatically generate meaningful, personalized, context-based recommendations for reducing sedentary time. The thesis uses the well-established design-science research methodology to develop theoretical foundations and practical implementations. Overall, this research focuses on technological verification rather than clinical evaluation.
A Symbolic Language for Interpreting Decision Trees
The recent development of formal explainable AI has disputed the folklore
claim that "decision trees are readily interpretable models", showing different
interpretability queries that are computationally hard on decision trees, as
well as proposing different methods to deal with them in practice. Nonetheless,
no single explainability query or score works as a "silver bullet" that is
appropriate for every context and end-user. This naturally suggests the
possibility of "interpretability languages" in which a wide variety of queries
can be expressed, giving control to the end-user to tailor queries to their
particular needs. In this context, our work presents ExplainDT, a symbolic
language for interpreting decision trees. ExplainDT is rooted in a carefully
constructed fragment of first-order logic that we call StratiFOILed.
StratiFOILed balances expressiveness and complexity of evaluation, allowing for
the computation of many post-hoc explanations--both local (e.g., abductive and
contrastive explanations) and global ones (e.g., feature relevancy)--while
remaining in the Boolean Hierarchy over NP. Furthermore, StratiFOILed queries
can be written as a Boolean combination of NP-problems, thus allowing us to
evaluate them in practice with a constant number of calls to a SAT solver. On
the theoretical side, our main contribution is an in-depth analysis of the
expressiveness and complexity of StratiFOILed, while on the practical side, we
provide an optimized implementation for encoding StratiFOILed queries as
propositional formulas, together with an experimental study on its efficiency.
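As a concrete illustration of the kind of local query such a language can express, here is a brute-force sketch of checking whether a partial assignment is a sufficient reason (abductive explanation) for a class on a toy decision tree over binary features. The tree encoding and function names are assumptions for illustration, not ExplainDT's actual syntax or its SAT-based evaluation.

```python
from itertools import product

# A node is either a class label (int) or a tuple (feature, left_if_0, right_if_1).
TREE = ("x0",
        ("x1", 0, 1),   # x0 = 0: the class follows x1
        1)              # x0 = 1: class 1

def predict(tree, inst):
    """Follow the tree from the root using the instance's feature values."""
    while not isinstance(tree, int):
        feat, lo, hi = tree
        tree = hi if inst[feat] else lo
    return tree

def is_sufficient_reason(tree, partial, features, cls):
    """True iff every completion of `partial` is classified as `cls`.
    Brute force over the free features (exponential; this is what the
    SAT-based evaluation avoids)."""
    free = [f for f in features if f not in partial]
    for bits in product([0, 1], repeat=len(free)):
        inst = dict(partial, **dict(zip(free, bits)))
        if predict(tree, inst) != cls:
            return False
    return True

FEATURES = ["x0", "x1"]
print(is_sufficient_reason(TREE, {"x0": 1}, FEATURES, 1))   # True
print(is_sufficient_reason(TREE, {"x1": 1}, FEATURES, 1))   # True
print(is_sufficient_reason(TREE, {"x1": 0}, FEATURES, 0))   # False: x0=1 gives class 1
```

A query language would let the end-user combine such primitives (sufficient reasons, contrastive explanations, feature relevancy) rather than hard-coding one of them.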
Inconsistency Handling in Prioritized Databases with Universal Constraints: Complexity Analysis and Links with Active Integrity Constraints
This paper revisits the problem of repairing and querying inconsistent
databases equipped with universal constraints. We adopt symmetric difference
repairs, in which both deletions and additions of facts can be used to restore
consistency, and suppose that preferred repair actions are specified via a
binary priority relation over (negated) facts. Our first contribution is to
show how existing notions of optimal repairs, defined for simpler denial
constraints and repairs solely based on fact deletion, can be suitably extended
to our richer setting. We next study the computational properties of the
resulting repair notions, in particular, the data complexity of repair checking
and inconsistency-tolerant query answering. Finally, we clarify the
relationship between optimal repairs of prioritized databases and repair
notions introduced in the framework of active integrity constraints. In
particular, we show that Pareto-optimal repairs in our setting correspond to
founded, grounded and justified repairs w.r.t. the active integrity constraints
obtained by translating the prioritized database. Our study also yields useful
insights into the behavior of active integrity constraints.

Comment: This is an extended version of a paper appearing at the 20th International Conference on Principles of Knowledge Representation and Reasoning (KR 2023). 28 pages.
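The symmetric-difference repair notion described above can be made concrete on a toy instance: a repair is a consistent database whose symmetric difference with the original (deletions plus additions) is subset-minimal. The facts and constraints below are invented for illustration.

```python
from itertools import combinations

DB = {"Mgr(ann)", "Dept(ann,hr)"}     # inconsistent: ann is not an Emp
UNIVERSE = DB | {"Emp(ann)"}          # atoms that repairs may add

def consistent(db):
    # Universal constraints (illustrative): Mgr(x) -> Emp(x), Dept(x,d) -> Emp(x).
    return not (("Mgr(ann)" in db or "Dept(ann,hr)" in db)
                and "Emp(ann)" not in db)

def powerset(s):
    s = sorted(s)
    return (set(c) for r in range(len(s) + 1) for c in combinations(s, r))

# Candidate instances are subsets of UNIVERSE; keep the consistent ones,
# then keep those whose symmetric difference with DB is subset-minimal.
diffs = {frozenset(DB ^ inst) for inst in powerset(UNIVERSE) if consistent(inst)}
repairs = [d for d in diffs if not any(o < d for o in diffs)]
for d in sorted(repairs, key=len):
    print(sorted(d))
```

Here the two repairs are the addition {Emp(ann)} and the double deletion {Mgr(ann), Dept(ann,hr)}; a priority relation over (negated) facts would then single out the preferred one, e.g. Pareto-optimal repairs.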
Natural type inference
Recently, dynamic language users have started to recognize the value of types in their code. To fulfil this need, many popular dynamic languages have adopted extensions that support type annotations. A prominent example is that of TypeScript which offers a module system, classes, interfaces, and an optional type system on top of JavaScript.
However, providing usable (not too verbose, or complex) types via traditional type inference is more challenging in optional type systems. Motivated by this, we redefine the goal of type inference for optionally typed languages as: infer the maximally natural and sound type, instead of the most general one. By the maximally natural and sound type, we mean one that (1) is derivable in the type system, and (2) maximally reflects the intention of the programmer with respect to a learnt model.
We formally devise a type inference problem that aids the inference of the maximally natural type. Towards this goal, our problem asks to combine information derived from two sources: (1) from algorithmic type systems using deductive logic-based techniques; and (2) from the source code text using inductive machine learning techniques.
To tackle our formulated problem, we develop two frameworks that combine the two sources of information using mathematical optimization. In the first framework, we formulate the inference problem as a problem in numerical optimization. In the second framework, we map the inference problem into popular problems in discrete optimization: maximum satisfiability (MaxSAT) and Integer Linear Programming (ILP).
Both frameworks are built to be consistent with information derived from the different sources. Moreover, through formal proofs, we validate the soundness and completeness of the developed framework for a core lambda-calculus with named types.
To assess the efficacy of the developed frameworks, we implement them in a tool named Optyper that realizes natural type inference for TypeScript. We evaluate Optyper on TypeScript programs obtained from real-world projects. By evaluating our theoretical frameworks we show that, in practice, the combination of logical and natural constraints yields a large improvement in performance over either kind of information individually. Further, we demonstrate that our frameworks outperform state-of-the-art techniques in type inference to produce natural and sound types.
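The central idea of combining the two information sources can be sketched as a tiny optimization problem: maximize soft "naturalness" scores (a stand-in for a learnt model) subject to hard typing constraints. All scores, names, and constraints below are invented for illustration; a real system would solve this with MaxSAT or ILP rather than brute force.

```python
from itertools import product

TYPES = ["number", "string", "any"]

# Soft scores: how natural each type looks for each variable (higher = better),
# as a learnt model might predict from identifier names and usage.
score = {
    "x": {"number": 0.9, "string": 0.05, "any": 0.3},
    "y": {"number": 0.1, "string": 0.8, "any": 0.3},
}

def sound(assign):
    # Hard constraint from hypothetical code `return x + 1`: x must support
    # numeric addition (an illustrative rule, not TypeScript's actual one).
    return assign["x"] in ("number", "any")

# Enumerate all assignments; keep the sound ones; pick the most natural.
best = max(
    (dict(zip(score, tys)) for tys in product(TYPES, repeat=len(score))),
    key=lambda a: sum(score[v][a[v]] for v in a) if sound(a) else float("-inf"),
)
print(best)
```

The hard constraints guarantee soundness while the scores steer the solver away from overly general answers such as `any`.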
LIPIcs, Volume 261, ICALP 2023, Complete Volume
The Complexity of Some Geometric Proof Systems
In this thesis we investigate proof systems based on Integer Linear Programming. These methods inspect the solution space of an unsatisfiable propositional formula and prove that this space contains no integral points.
We begin by proving some size and depth lower bounds for a recent proof system, Stabbing Planes, and along the way introduce some novel methods for doing so.
We then turn to the complexity of propositional contradictions generated uniformly from first order sentences, in Stabbing Planes and Sum-Of-Squares.
We finish by investigating the complexity-theoretic impact of the choice of method used to generate these propositional contradictions in Sherali-Adams.
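The ILP view of refutation described above can be sketched on a small example: encode an unsatisfiable CNF as 0-1 linear inequalities (each clause becomes "sum of its literals >= 1", with a negated variable x contributing 1 - x) and observe that the feasible region contains no integral point, even though fractional points survive. The CNF below is an arbitrary small unsatisfiable example.

```python
from itertools import product

# Clauses over variables x0, x1, as lists of (variable index, polarity) pairs.
CLAUSES = [[(0, True), (1, True)],    # x0 or x1
           [(0, False), (1, True)],   # not x0 or x1
           [(0, True), (1, False)],   # x0 or not x1
           [(0, False), (1, False)]]  # not x0 or not x1

def satisfies(point):
    """A point satisfies the clause inequality iff its literal values sum to >= 1."""
    return all(
        sum(point[v] if pos else 1 - point[v] for v, pos in clause) >= 1
        for clause in CLAUSES
    )

integral_points = [p for p in product([0, 1], repeat=2) if satisfies(p)]
print(integral_points)   # []: the polytope contains no integral point

# The fractional point (1/2, 1/2) satisfies every inequality, which is why a
# proof system must do more than check LP feasibility to derive a refutation.
assert satisfies((0.5, 0.5))
```

Systems such as Stabbing Planes, Sum-of-Squares, and Sherali-Adams differ precisely in the reasoning steps they allow for certifying that such a polytope is integer-free.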