377 research outputs found
LIPIcs, Volume 251, ITCS 2023, Complete Volume
Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5
This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different fields of applications and in mathematics, and is available in open access. The contributions collected in this volume have either been published or presented in international conferences, seminars, workshops and journals since the dissemination of the fourth volume in 2015, or they are new. The contributions of each part of this volume are ordered chronologically.
The first part of this book presents theoretical advances on DSmT, dealing mainly with modified Proportional Conflict Redistribution (PCR) rules of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of (quasi-)vacuous belief assignments in the fusion of sources of evidence, together with their Matlab codes.
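As a concrete illustration of proportional conflict redistribution, below is a minimal Python sketch of the two-source PCR5 rule; the toy frame, the masses and the function names are illustrative choices of this summary, not taken from the book or its Matlab codes.

```python
# Hedged sketch: PCR5 combination of two basic belief assignments (BBAs)
# over a small frame of discernment. Sets are frozensets, masses are floats.
from itertools import product

def pcr5(m1, m2):
    """Combine two BBAs (dicts: frozenset -> mass) with the PCR5 rule."""
    combined = {}
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            # conjunctive (non-conflicting) part of the combination
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        elif ma + mb > 0:
            # conflicting mass ma*mb is redistributed back to a and b,
            # proportionally to the masses that generated the conflict
            combined[a] = combined.get(a, 0.0) + ma**2 * mb / (ma + mb)
            combined[b] = combined.get(b, 0.0) + mb**2 * ma / (ma + mb)
    return combined

# Toy example on the frame {A, B}
A, B, AB = frozenset({"A"}), frozenset({"B"}), frozenset({"A", "B"})
m1 = {A: 0.6, AB: 0.4}
m2 = {B: 0.3, AB: 0.7}
m12 = pcr5(m1, m2)
```

On this example the conflicting mass m1({A})·m2({B}) = 0.18 is split back onto {A} and {B} in proportion to the masses that generated it, so the combined masses still sum to one.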
Because more applications of DSmT have emerged since the appearance of the fourth volume in 2015, the second part of this volume covers selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and networks for ship classification.
Finally, the third part presents contributions related to belief functions in general, published or presented over the years since 2015. These contributions concern decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, the generalization of Bayes' theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, the negator of belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions.
Electron Thermal Runaway in Atmospheric Electrified Gases: a microscopic approach
Thesis elaborated from 2018 to 2023 at the Instituto de Astrofísica de Andalucía under the supervision of Alejandro Luque (Granada, Spain) and Nikolai Lehtinen (Bergen, Norway). This thesis presents a new database of atmospheric electron-molecule collision cross sections, which was published separately under the DOI:
With this new database and a new super-electron management algorithm, which significantly enhances high-energy electron statistics at previously unresolved ratios, the thesis explores general facets of the electron thermal runaway process relevant to atmospheric discharges under various conditions of temperature and gas composition, as encountered in the wake and formation of discharge channels.
LIPIcs, Volume 261, ICALP 2023, Complete Volume
Quantum XOR and Rabin oblivious transfer
Oblivious transfer is a cryptographic primitive involving two non-trusting communicating parties. Since it is a basic building block for any two-party computation, it is a powerful and important cryptographic functionality, and thus the topic of various research investigations in both the classical and the quantum setting. Unfortunately, it was shown that oblivious transfer cannot be achieved with information-theoretic security in either setting. However, in the quantum case it is possible to limit the cheating probabilities of unrestricted dishonest parties.
The most well-known variant is 1-out-of-2 oblivious transfer, where the sender sends two bits and the receiver receives one of them without the sender learning which one was received. While this has been the primary focus of investigations, there exist other, less studied variants of the protocol. This thesis focuses on two such variants, XOR oblivious transfer and Rabin oblivious transfer. Different quantum protocols for these two variants are presented and analysed for their security against cheating parties. By calculating the cheating probabilities in general for non-interactive XOR oblivious transfer with symmetric states, the optimality of the presented XOR oblivious transfer protocol is shown. Non-interactive means that there is only one state transmission from the sender to the receiver, who applies a measurement, with no further communication between the parties. We further extend the concept of XOR oblivious transfer to the sender sending not two but n bits, and analyse the effect of an increasing n on the participants' cheating probabilities.
The reversal of oblivious transfer is also examined; that is, implementing oblivious transfer in both directions even if only one of the two communicating parties can send a quantum state and the other one can only measure. We determine the reversed protocol versions of a 1-out-of-2 and an XOR oblivious transfer protocol and show that the protocols' cheating probabilities remain unchanged.
For Rabin oblivious transfer, both protocols using pure states and protocols using mixed states are investigated. Comparing them to each other, we determine under which circumstances the protocol with the pure states outperforms the protocol with the mixed states, and vice versa.
LIPIcs, Volume 274, ESA 2023, Complete Volume
Certified Knowledge Compilation with Application to Verified Model Counting
Computing many useful properties of Boolean formulas, such as their weighted or unweighted model count, is intractable on general representations. It can become tractable when formulas are expressed in a special form, such as the decision-decomposable negation normal form (dec-DNNF). Knowledge compilation is the process of converting a formula into such a form. Unfortunately, existing knowledge compilers provide no guarantee that their output correctly represents the original formula, and therefore they cannot validate a model count or any other computed value.
We present Partitioned-Operation Graphs (POGs), a form that can encode all of the representations used by existing knowledge compilers. We have designed CPOG, a framework that can express proofs of equivalence between a POG and a Boolean formula in conjunctive normal form (CNF).
We have developed a program that generates POG representations from dec-DNNF graphs produced by the state-of-the-art knowledge compiler D4, as well as checkable CPOG proofs certifying that the output POGs are equivalent to the input CNF formulas. Our toolchain for generating and verifying POGs scales to all but the largest graphs produced by D4 for formulas from a recent model counting competition. Additionally, we have developed a formally verified CPOG checker and model counter for POGs in the Lean 4 proof assistant. In doing so, we proved the soundness of our proof framework. These programs comprise the first formally verified toolchain for weighted and unweighted model counting.
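To illustrate why such compiled forms make counting tractable, here is a minimal Python sketch of unweighted model counting on a smooth decision-DNNF: decomposable AND nodes multiply counts and deterministic OR nodes add them. The tuple-based node encoding is invented for this example and is not the POG/CPOG format or D4's output format.

```python
# Hedged sketch: model counting on a smooth deterministic d-DNNF.
# Nodes: ("lit", v) leaf, ("and", children) decomposable conjunction
# (children mention disjoint variable sets), ("or", children)
# deterministic disjunction (children have disjoint model sets).

def count(node):
    kind = node[0]
    if kind == "lit":
        return 1                         # one model over the literal's variable
    if kind == "and":                    # decomposability => counts multiply
        c = 1
        for child in node[1]:
            c *= count(child)
        return c
    if kind == "or":                     # determinism + smoothness => counts add
        return sum(count(child) for child in node[1])
    raise ValueError(f"unknown node kind: {kind}")

# (x AND y) OR (NOT x AND (y OR NOT y)) over {x, y}, in smooth form;
# its models are {x=1,y=1}, {x=0,y=0}, {x=0,y=1}
f = ("or", [
    ("and", [("lit", "x"), ("lit", "y")]),
    ("and", [("lit", "-x"), ("or", [("lit", "y"), ("lit", "-y")])]),
])
```

Each node is visited once, so the count is computed in time linear in the graph size, whereas counting on an arbitrary CNF formula is #P-hard.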
Processes and diagrams: an integrated and multidisciplinary approach for the education of quantum information science
The background to this thesis is παιδεία, education. To educate is a dialectical process that moves from an abstract line of thought, through scientifically designed techniques, into concrete action; and vice versa. We believe that educating today means enabling teachers first, and their students second, to read and interpret the complexity of phenomena, and to teach them a model for observing this complexity, describing it, analyzing it and, finally, making it their own. In this thesis, we attempt to meet these needs by describing an integrated and multidisciplinary pathway whose diagrammatic language pushes towards the search for a universal approach to science.
An initial educational contribution is thus made to the understanding of the dialectic between disciplines: theoretical physics, experimental physics, computer science, mathematics and mathematical logic are presented in their mutual influence, in an attempt to clarify the informational viewpoint on modern physics. The search for this dialectic for educational purposes is, in our opinion, the most significant contribution of the present work.
To address this issue, we sought to build a community of practice on the topics of the second quantum revolution. Guided by the Model of Educational Reconstruction (MER), we built a first course for teacher professional development that would introduce teachers to quantum computation and quantum communication. The emergence and development of quantum technologies provides the impetus for a deep conceptual change: “a paradigm shift from quantum theory as a theory of microscopic matter to quantum theory as a framework for technological applications and information processing”. This shift is supported, theoretically, by the informational interpretation of the postulates of quantum mechanics: preparation, transformation and measurement are reinterpreted computationally as the encoding, processing and decoding of information; and vice versa. In this interpretation, what changes between classical and quantum theory? From a logical point of view, the transition from bit to qubit; from a physical point of view, the laws of composition of systems. We therefore present monoidal categories as a natural theoretical framework for the description of physical systems and processes for quantum and non-quantum computation and communication, demonstrating how this language is suitable for an integrated and multidisciplinary approach.
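The point about the laws of composition can be made concrete with a few lines of arithmetic: two-qubit states live in the tensor product of the single-qubit state spaces, and some of them (entangled states) are not products of single-qubit states. The small sketch below is an illustration added in this summary, not material from the thesis.

```python
# Hedged sketch: composing qubit systems via the tensor (Kronecker) product,
# using plain Python lists as state vectors in the computational basis.
import math

def kron(a, b):
    """Kronecker (tensor) product of two state vectors given as lists."""
    return [x * y for x in a for y in b]

zero, one = [1.0, 0.0], [0.0, 1.0]       # |0> and |1>

# Separable two-qubit state |01> = |0> (x) |1>
product_state = kron(zero, one)

# Bell state (|00> + |11>)/sqrt(2): a valid two-qubit state that cannot
# be written as kron(a, b) for any single-qubit states a and b
bell = [(x + y) / math.sqrt(2)
        for x, y in zip(kron(zero, zero), kron(one, one))]
```

A pair of classical bits is always just a pair; it is the existence of states like the Bell state that changes the composition law when moving from bit to qubit.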
The cultural impact of the proposal, the fruitful interaction between researchers in physics education and those in theoretical research, and the passion of some teachers made it possible to start a collaboration to build an educational sequence for students. The result of this collaboration is a teaching-learning sequence on quantum technologies for students, led by the MER and based on inquiry-based learning and modelling-based teaching. Supported by these methodological frameworks, we produced lessons and worksheets along the way that had the dual task of supporting teachers' work and students' learning. They also made it possible to verify experimentally the positive and critical effects of the proposal. The instructional materials constructed, the data analysis and the constant monitoring with the teachers involved led to the development of a second course for teacher professional development, inspired by the first and based entirely on research. We hope that this attempt at an integrated and multidisciplinary approach to the education of quantum information science, based on the concept of compositionality and the diagrammatic model, can be extended and provide inspiration for future educational paths in other disciplines as well.
Protecting Systems From Exploits Using Language-Theoretic Security
Any computer program processing input from the user or network must validate the input. Input-handling vulnerabilities occur in programs when the software component responsible for filtering malicious input---the parser---does not perform validation adequately. Consequently, parsers are among the most targeted components, since they defend the rest of the program from malicious input. This thesis adopts the Language-Theoretic Security (LangSec) principle to understand what tools and research are needed to prevent exploits that target parsers. LangSec proposes specifying the syntactic structure of the input format as a formal grammar. We then build a recognizer for this formal grammar to validate any input before the rest of the program acts on it. To ensure that these recognizers represent the data format, programmers often rely on parser generator or parser combinator tools to build the parsers. This thesis propels several sub-fields in LangSec by proposing new techniques to find bugs in implementations, novel categorizations of vulnerabilities, and new parsing algorithms and tools to handle practical data formats. To this end, this thesis comprises five parts that tackle various tenets of LangSec. First, I categorize various input-handling vulnerabilities and exploits using two frameworks. The first is the mismorphisms framework, which helps us reason about the root causes leading to various vulnerabilities. The second is a categorization framework built from various LangSec anti-patterns, such as parser differentials and insufficient input validation. Using these frameworks, we built a catalog of more than 30 popular vulnerabilities to demonstrate the categorizations. Second, I built parsers for various Internet of Things and power grid network protocols and for the iccMAX file format using parser combinator libraries.
The parsers I built for power grid protocols were deployed and tested on power grid substation networks as an intrusion detection tool. The parser I built for the iccMAX file format led to several corrections and modifications to the iccMAX specifications and reference implementations. Third, I present SPARTA, a novel tool I built that generates Rust code to type-check Portable Document Format (PDF) files. The type checker I helped build strictly enforces the constraints in the PDF specification to find deviations. Our checker has contributed to at least four significant clarifications and corrections to the PDF 2.0 specification and various open-source PDF tools. In addition to our checker, we also built a practical tool, PDFFixer, to dynamically patch type errors in PDF files. Fourth, I present ParseSmith, a tool to build verified parsers for real-world data formats. Most parsing tools available for data formats are insufficient to handle practical formats or have not been verified for their correctness. I built a verified parsing tool in Dafny that builds on ideas from attribute grammars, data-dependent grammars, and parsing expression grammars to tackle various constructs commonly seen in network formats. I prove that our parsers run in linear time and always terminate for well-formed grammars. Finally, I provide the earliest systematic comparison of various data description languages (DDLs) and their parser generation tools. DDLs are used to describe and parse commonly used data formats, such as image formats. I conducted an expert elicitation qualitative study to derive various metrics that I use to compare the DDLs. I also systematically compare these DDLs based on sample data descriptions available with the DDLs, checking for correctness and resilience.
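To make the LangSec recipe concrete (specify the input grammar, then recognize before processing), here is a toy parser-combinator sketch in Python; the combinators and the example grammar are illustrative and are not the tools or formats described in the thesis.

```python
# Hedged sketch: minimal parser combinators. Each parser is a function
# (input, position) -> (new_position, value) on success, or None on failure.

def char(c):
    """Match exactly the character c."""
    def parse(s, i):
        return (i + 1, c) if i < len(s) and s[i] == c else None
    return parse

def digit(s, i):
    """Match a single decimal digit."""
    return (i + 1, s[i]) if i < len(s) and s[i].isdigit() else None

def many1(p):
    """Match p one or more times."""
    def parse(s, i):
        out, r = [], p(s, i)
        while r is not None:
            i, v = r
            out.append(v)
            r = p(s, i)
        return (i, out) if out else None
    return parse

def seq(*ps):
    """Match the parsers ps in sequence."""
    def parse(s, i):
        out = []
        for p in ps:
            r = p(s, i)
            if r is None:
                return None
            i, v = r
            out.append(v)
        return (i, out)
    return parse

# Recognizer for a "<digits>-<digits>" field (e.g. a port range)
port_range = seq(many1(digit), char("-"), many1(digit))

def recognize(s):
    """LangSec discipline: accept only if the ENTIRE input matches the grammar."""
    r = port_range(s, 0)
    return r is not None and r[0] == len(s)
```

The key LangSec point is in the last line: a recognizer must consume the whole input, so trailing garbage is rejected rather than silently passed on to the rest of the program.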