Current and Future Challenges in Knowledge Representation and Reasoning
Knowledge Representation and Reasoning is a central, longstanding, and active
area of Artificial Intelligence. Over the years it has evolved significantly;
more recently it has been challenged and complemented by research in areas such
as machine learning and reasoning under uncertainty. In July 2022 a Dagstuhl
Perspectives workshop was held on Knowledge Representation and Reasoning. The
goal of the workshop was to describe the state of the art in the field,
including its relation with other areas, its shortcomings and strengths,
together with recommendations for future progress. We developed this manifesto
based on the presentations, panels, working groups, and discussions that took
place at the Dagstuhl Workshop. It is a declaration of our views on Knowledge
Representation: its origins, goals, milestones, and current foci; its relation
to other disciplines, especially to Artificial Intelligence; and its
challenges, along with key priorities for the next decade.
Layered controller synthesis for dynamic multi-agent systems
In this paper we present a layered approach to the multi-agent control problem,
decomposed into three stages, each building upon the results of the previous
one. First, a high-level plan for a coarse abstraction of the system is
computed, relying on parametric timed automata augmented with stopwatches, as
these allow the simplified dynamics of such systems to be modeled efficiently.
In the second stage, the high-level plan, which mainly handles the
combinatorial aspects of the problem, is refined through an SMT formulation
into a more dynamically accurate solution. These stages are collectively
referred to as the SWA-SMT solver. They are correct by construction but lack a
crucial feature: they cannot be executed in real time. To overcome this, we use
SWA-SMT solutions as the initial training dataset for our last stage, which
aims at obtaining a neural network control policy. We use reinforcement
learning to train the policy, and show that the initial dataset is crucial for
the overall success of the method.
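
The abstract gives no code; the sketch below illustrates the warm-start idea with a hypothetical behavior-cloning step: a small policy network is first trained in a supervised fashion on (state, action) pairs of the kind one could extract from SWA-SMT solutions, and only then fine-tuned with reinforcement learning. The network shape, dimensions, and stand-in data are assumptions, not the paper's setup.

```python
# Hypothetical warm start: behavior-clone a policy on solver-generated
# trajectories before RL fine-tuning. Dimensions and data are stand-ins.
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    def __init__(self, state_dim: int, action_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64),
            nn.ReLU(),
            nn.Linear(64, action_dim),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)  # logits over discrete control actions

def behavior_clone(policy, states, actions, epochs=50, lr=1e-3):
    """Supervised pre-training on (state, action) pairs extracted from
    solver trajectories, run before any reinforcement learning."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(policy(states), actions)
        loss.backward()
        opt.step()
    return policy

# Random stand-ins for the solver dataset: 8-dim states, 4 actions.
states = torch.randn(256, 8)
actions = torch.randint(0, 4, (256,))
policy = behavior_clone(PolicyNet(8, 4), states, actions)
# ...then fine-tune `policy` with an RL algorithm on the true dynamics.
```

The role of the warm start matches the abstract's claim: the solver-generated initial dataset is what makes the subsequent reinforcement learning succeed.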
LIPIcs, Volume 261, ICALP 2023, Complete Volume
Efficient Laconic Cryptography from Learning With Errors
Laconic cryptography is an emerging paradigm that enables cryptographic primitives with sublinear communication complexity in just two messages. In particular, a two-message protocol between Alice and Bob is called laconic if its communication and computation complexity are essentially independent of the size of Alice's input. This can be thought of as a dual notion of fully-homomorphic encryption, as it enables Bob-optimized protocols. This paradigm has led to tremendous progress in recent years. However, all existing constructions of laconic primitives are considered only of theoretical interest: They all rely on non-black-box cryptographic techniques, which are highly impractical.
This work shows that non-black-box techniques are not necessary for basic laconic cryptography primitives. We propose a completely algebraic construction of laconic encryption, a notion that we introduce in this work, which serves as the cornerstone of our framework. We prove that the scheme is secure under the standard Learning With Errors assumption (with polynomial modulus-to-noise ratio). We provide the first proof-of-concept implementations of laconic primitives, demonstrating that the construction is indeed practical: for the database sizes we consider, encryption and decryption are on the order of single-digit milliseconds.
Laconic encryption can be used as a black box to construct other laconic primitives. Specifically, we show how to construct:
- Laconic oblivious transfer
- Registration-based encryption scheme
- Laconic private-set intersection protocol
All of the above have essentially optimal parameters and similar practical efficiency. Furthermore, our laconic encryption can be preprocessed such that the online encryption step is entirely combinatorial and therefore much more efficient. Using similar techniques, we also obtain identity-based encryption with an unbounded identity space and a tight security proof (in the standard model).
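
To make the flow concrete, here is a functional mock of one common formulation of laconic encryption: Alice compresses her database into a short digest, Bob encrypts a message against the digest together with an index i and a bit b, and decryption succeeds exactly when the database holds b at position i. The mock has no security whatsoever, and its names (hash_db, enc, dec) and semantics are illustrative assumptions, not the paper's API; in the real construction security rests on Learning With Errors.

```python
# Insecure functional mock of a laconic-encryption interface (illustration
# only; names and semantics are assumptions, not the paper's scheme).
import hashlib

def hash_db(db: list[int]) -> bytes:
    # Stand-in digest; the real scheme uses a short, binding commitment.
    return hashlib.sha256(bytes(db)).digest()

def _pad(digest: bytes, i: int, b: int) -> bytes:
    return hashlib.sha256(digest + i.to_bytes(8, "big") + bytes([b])).digest()

def enc(digest: bytes, i: int, b: int, msg: bytes) -> bytes:
    # Bob's single short message: independent of the database size.
    return bytes(m ^ p for m, p in zip(msg, _pad(digest, i, b)))

def dec(db: list[int], i: int, b: int, ct: bytes) -> bytes | None:
    if db[i] != b:  # correctness condition: decrypts only if db[i] == b
        return None
    return bytes(c ^ p for c, p in zip(ct, _pad(hash_db(db), i, b)))

db = [0, 1, 1, 0]
ct = enc(hash_db(db), 2, 1, b"hi")
assert dec(db, 2, 1, ct) == b"hi"
```

The digest stays 32 bytes however large db grows, which is the "laconic" property the mock is meant to surface.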
Pre-deployment Analysis of Smart Contracts -- A Survey
Smart contracts are programs that execute transactions involving independent
parties and cryptocurrencies. As programs, smart contracts are susceptible to a
wide range of errors and vulnerabilities. Such vulnerabilities can result in
significant losses. Furthermore, by design, smart contract transactions are
irreversible. This creates a need for methods to ensure the correctness and
security of contracts pre-deployment. Recently, there has been substantial
research into such methods. The sheer volume of this research makes
articulating the state of the art a substantial undertaking. To address this
challenge, we present a systematic review of the literature. A key feature of
our presentation is to factor out the relationship between vulnerabilities and
methods through properties. Specifically, we enumerate and classify smart
contract vulnerabilities and methods by the properties they address. The
methods considered include static analysis as well as dynamic analysis methods
and machine learning algorithms that analyze smart contracts before deployment.
Several patterns about the strengths of the different methods emerge through
this classification process.
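
As a toy illustration of the pattern-based end of the static-analysis spectrum classified above, the heuristic below flags a classic reentrancy symptom: an external value-transfer call followed by a later state write. Real pre-deployment tools work on ASTs or bytecode with dataflow analysis; the function name and string matching here are purely illustrative.

```python
# Toy pre-deployment check (illustration only): flag a Solidity function
# body where an external call precedes a state update.
def flags_reentrancy(function_body: str) -> bool:
    """Heuristic: external value-transfer call, then a later state write."""
    lines = function_body.splitlines()
    call_line = next(
        (i for i, line in enumerate(lines) if ".call{value:" in line), None
    )
    if call_line is None:
        return False  # no external value-transfer call at all
    writes = ("-=", "+=", " = ")
    return any(any(w in line for w in writes) for line in lines[call_line + 1:])

vulnerable = """
function withdraw(uint amount) public {
    (bool ok, ) = msg.sender.call{value: amount}("");
    require(ok);
    balances[msg.sender] -= amount;  // state write after the external call
}
"""
assert flags_reentrancy(vulnerable)
```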
Software doping – Theory and detection
Software is doped if it contains a hidden functionality that is intentionally included by the manufacturer and is not in the interest of the user or society. This thesis complements this informal definition with a set of formal cleanness definitions that characterise the absence of software doping. These definitions reflect common expectations of clean software behaviour and are applicable to many types of software, from printers to cars to discriminatory AI systems. We use these definitions to propose white-box and black-box analysis techniques to detect software doping. In particular, we present a provably correct, model-based testing algorithm that is intertwined with a probabilistic-falsification-based test input selection technique. We identify the challenges that are specific to real-world software doping tests and analyses, and explain how to overcome them. The most prominent example of software doping in recent years is the Diesel Emissions Scandal. We demonstrate the strength of our cleanness definitions and analysis techniques by applying them to the emission cleaning systems of diesel cars. All our car-related research is unified in a Car Data Platform. The mobile app LolaDrives is one building block of this platform; it supports conducting real-driving emissions tests and provides feedback to the user on how far a trip satisfies the driving conditions defined by official regulations.
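
As a toy sketch of the probabilistic-falsification idea (a simplified formulation for illustration, not the thesis' algorithm), the code below searches for input pairs that minimize a quantitative robustness of a robust-cleanness property: inputs within distance delta must yield outputs within distance kappa. A negative robustness value witnesses doping-like behavior, here planted in a stand-in system that behaves cleanly only near a standardized test cycle.

```python
# Falsification-style test input search against a toy "doped" system.
# The system, property, and constants are illustrative assumptions.
import random

def system_under_test(u: float) -> float:
    # Stand-in doped behavior: clean near the standardized cycle u ~ 0.5,
    # much higher emissions everywhere else.
    return u if abs(u - 0.5) < 0.05 else 5.0 * u

def robustness(u1: float, u2: float, delta=0.1, kappa=0.5) -> float:
    """Positive: robust cleanness holds for this pair; negative: violated."""
    if abs(u1 - u2) > delta:
        return float("inf")  # premise fails, pair is vacuously clean
    return kappa - abs(system_under_test(u1) - system_under_test(u2))

def falsify(trials=10_000, seed=0):
    rng = random.Random(seed)
    best_rob, best_pair = float("inf"), None
    for _ in range(trials):
        u1 = rng.uniform(0.0, 1.0)
        u2 = u1 + rng.uniform(-0.1, 0.1)  # stay near the premise bound
        rob = robustness(u1, u2)
        if rob < best_rob:
            best_rob, best_pair = rob, (u1, u2)
    return best_rob, best_pair

rob, pair = falsify()
print(f"minimal robustness {rob:.2f} at {pair}")  # negative: doping exposed
```

In practice, falsification tools replace the uniform sampling above with an optimizer that follows the robustness landscape, so that few, expensive real-world executions suffice.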
Vision transformers with Inductive Bias introduced through self-attention regularization
In recent years, the Transformer has achieved remarkable results in computer vision tasks, matching or even surpassing those of convolutional neural networks (CNNs). However, unlike CNNs, vision transformers lack strong inductive biases and, to achieve state-of-the-art results, rely on large architectures and extensive pre-training on tens of millions of images. Introducing appropriate inductive biases to vision transformers can lead to better convergence and generalization in settings with less training data. This work presents a novel way to introduce inductive biases to vision transformers: self-attention regularization. Two different methods of self-attention regularization were devised. Furthermore, this work proposes ARViT, a novel vision transformer architecture in which both self-attention regularization methods are deployed. The experimental results demonstrate that self-attention regularization leads to better convergence and generalization, especially for models pre-trained on mid-size datasets.
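
The abstract does not spell out the two regularizers; as one generic way an inductive bias can be injected through self-attention, the sketch below penalizes attention mass placed on spatially distant patches, nudging the model toward CNN-like locality. The function and its exact form are assumptions for illustration, not ARViT's loss.

```python
# Illustrative locality regularizer on self-attention maps (an assumed
# example of attention regularization, not ARViT's exact method).
import torch

def locality_penalty(attn: torch.Tensor, coords: torch.Tensor) -> torch.Tensor:
    """attn: (batch, heads, tokens, tokens) softmaxed attention weights.
    coords: (tokens, 2) patch positions on the image grid.
    Returns the mean attention-weighted distance between patches, so
    minimizing it biases heads toward local, CNN-like receptive fields."""
    dist = torch.cdist(coords.float(), coords.float())  # (tokens, tokens)
    return (attn * dist).sum(dim=-1).mean()

# Toy usage: a 3x3 patch grid and random attention weights.
coords = torch.stack(
    torch.meshgrid(torch.arange(3), torch.arange(3), indexing="ij"), dim=-1
).reshape(-1, 2)
attn = torch.softmax(torch.randn(2, 4, 9, 9), dim=-1)  # batch=2, heads=4
reg = locality_penalty(attn, coords)
# total_loss = task_loss + lambda_reg * reg  (lambda_reg is a hyperparameter)
```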
- 
