
    Formal Synthesis of Control Strategies for Positive Monotone Systems

    We design controllers from formal specifications for positive discrete-time monotone systems that are subject to bounded disturbances. Such systems are widely used to model the dynamics of transportation and biological networks. The specifications are described using signal temporal logic (STL), which can express a broad range of temporal properties. We formulate the problem as a mixed-integer linear program (MILP) and show that, under the assumptions made in this paper, which are not restrictive for traffic applications, the existence of open-loop control policies is sufficient and almost necessary to ensure the satisfaction of STL formulas. We establish a relation between satisfaction of STL formulas in infinite time and set-invariance theories and provide an efficient method to compute robust control invariant sets in high dimensions. We also develop a robust model predictive framework to plan controls optimally while ensuring the satisfaction of the specification. Illustrative examples and a traffic management case study are included. Comment: To appear in IEEE Transactions on Automatic Control (TAC) (2018), 16 pages, double column.
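
    A minimal sketch of the setting, assuming a hypothetical two-state positive monotone system with additive, bounded, nonnegative disturbances: because trajectories are monotone in the disturbance, an STL-style "always below threshold" property of an open-loop control sequence can be checked against the single worst-case disturbance. The matrices, horizon, and threshold below are illustrative; the paper's MILP encoding and invariant-set computations are not reproduced here.

```python
import numpy as np

# Hypothetical positive monotone system: x[k+1] = A @ x[k] + B @ u[k] + w[k],
# with A, B elementwise nonnegative so the dynamics are monotone in x and w.
A = np.array([[0.6, 0.2],
              [0.1, 0.7]])
B = np.array([[0.5, 0.0],
              [0.0, 0.5]])
w_max = np.array([0.05, 0.05])   # bounded disturbance, 0 <= w <= w_max

def simulate(x0, controls, w):
    """Roll out an open-loop control sequence under a fixed disturbance."""
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for u in controls:
        x = A @ x + B @ u + w
        traj.append(x.copy())
    return np.array(traj)

def satisfies_always_below(x0, controls, threshold):
    """Check the STL-style property G (x <= threshold) over the horizon.

    For a monotone system with nonnegative additive disturbance, trajectories
    are largest at w = w_max, so one worst-case simulation suffices.
    """
    traj = simulate(x0, controls, w_max)
    return bool(np.all(traj <= threshold))

# Example: a constant open-loop policy over a horizon of 10 steps.
x0 = np.array([1.0, 1.0])
controls = [np.array([0.1, 0.1])] * 10
print(satisfies_always_below(x0, controls, threshold=2.0))
```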

    Face Liveness Detection for Biometric Antispoofing Applications using Color Texture and Distortion Analysis Features

    Face recognition is a widely used biometric approach. Face recognition technology has developed rapidly in recent years and is more direct, user friendly, and convenient than other methods. However, face recognition systems are vulnerable to spoof attacks made with non-real faces; they can easily be spoofed with facial pictures such as portrait photographs. A secure system therefore needs liveness detection to guard against such spoofing. In this work, face liveness detection approaches are categorized based on the types of techniques used for liveness detection. This categorization helps in understanding different spoof attack scenarios and their relation to the developed solutions. A review of the latest work on face liveness detection is presented. The main aim is to provide a simple path for the future development of novel and more secure face liveness detection approaches.
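
    One of the surveyed families of approaches relies on color texture cues. A small sketch, assuming OpenCV and scikit-image are available: it extracts uniform LBP histograms over the YCbCr channels of a face crop, a common color-texture feature that a liveness classifier (e.g., an SVM) could consume. The filename and parameters are illustrative, not taken from any specific reviewed method.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def color_texture_features(bgr_image, P=8, R=1.0):
    """LBP histograms per YCbCr channel, concatenated into one feature vector.

    Color texture differs between live skin and printed or replayed faces,
    which is the intuition behind color-texture based liveness detection.
    """
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    feats = []
    for ch in cv2.split(ycrcb):
        lbp = local_binary_pattern(ch, P, R, method="uniform")
        hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
        feats.append(hist)
    return np.concatenate(feats)

# Example: features from a face crop, to be fed to a liveness classifier.
face = cv2.imread("face_crop.jpg")              # hypothetical input image
if face is not None:
    print(color_texture_features(face).shape)   # (30,) for P=8
```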

    Scalable Synthesis and Verification: Towards Reliable Autonomy

    We have seen the growing deployment of autonomous systems in our daily life, ranging from safety-critical self-driving cars to dialogue agents. While impactful and impressive, these systems often come without guarantees and are not rigorously evaluated for failure cases. This is in part due to the limited scalability of the tools available for designing correct-by-construction systems or verifying them post hoc. Another key limitation is the lack of models of the complex environments with which autonomous systems often have to interact. Toward overcoming these bottlenecks to designing reliable autonomous systems, this thesis makes contributions along three fronts. First, we develop an approach for parallelized synthesis from linear-time temporal logic specifications corresponding to the generalized reactivity (1) fragment. We begin by identifying a special case corresponding to singleton liveness goals that allows for a decomposition of the synthesis problem, which facilitates parallelized synthesis. Based on the intuition from this special case, we propose a more general approach for parallelized synthesis that relies on identifying equicontrollable states. Second, we consider learning-based approaches to enable verification at scale for complex systems and for autonomous systems that interact with black-box environments. For the former, we propose a new abstraction refinement procedure based on machine learning to improve the performance of nonlinear constraint solving algorithms on large-scale problems. For the latter, we present a data-driven approach based on chance-constrained optimization that allows a system to be evaluated for specification conformance without an accurate model of the environment. We demonstrate this approach on several tasks, including a lane-change scenario with real-world driving data. Lastly, we consider the problem of interpreting and verifying learning-based components such as neural networks. We introduce a new method based on Craig's interpolants for computing compact symbolic abstractions of pre-images for neural networks. Our approach relies on iteratively computing approximations that provably overapproximate and underapproximate the pre-images at all layers. Further, building on existing work for training neural networks for verifiability in the classification setting, we propose extensions that generalize the approach to more general architectures and temporal specifications.
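
    As a rough illustration of the data-driven conformance evaluation idea (not the thesis's chance-constrained formulation), the sketch below estimates a lower confidence bound on the probability that a black-box system satisfies a specification, from sampled traces, using a Hoeffding bound. The trace format, the specification, and the synthetic data are assumptions.

```python
import numpy as np

def satisfies_spec(trace, limit=1.0):
    """Hypothetical specification: the logged signal never exceeds `limit`."""
    return bool(np.all(trace <= limit))

def conformance_lower_bound(traces, spec, delta=0.05):
    """Lower confidence bound on P(spec holds), via Hoeffding's inequality.

    With n i.i.d. trace samples and empirical satisfaction rate p_hat,
    P(spec) >= p_hat - sqrt(ln(1/delta) / (2 n)) with probability 1 - delta.
    """
    outcomes = np.array([spec(t) for t in traces], dtype=float)
    n = len(outcomes)
    p_hat = outcomes.mean()
    return p_hat - np.sqrt(np.log(1.0 / delta) / (2.0 * n))

# Example with synthetic traces standing in for logged closed-loop runs.
rng = np.random.default_rng(0)
traces = [rng.normal(0.5, 0.2, size=50) for _ in range(500)]
lb = conformance_lower_bound(traces, satisfies_spec)
print(f"spec holds with probability >= {lb:.3f} (95% confidence)")
```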

    Chainspace: A Sharded Smart Contracts Platform

    Chainspace is a decentralized infrastructure, known as a distributed ledger, that supports user-defined smart contracts and executes user-supplied transactions on their objects. The correct execution of smart contract transactions is verifiable by all. The system scales by sharding state and the execution of transactions, and uses S-BAC, a distributed commit protocol, to guarantee consistency. Chainspace is secure against subsets of nodes trying to compromise its integrity or availability properties, through Byzantine Fault Tolerance (BFT) as well as extremely high auditability, non-repudiation, and 'blockchain' techniques. Even when BFT fails, auditing mechanisms are in place to trace malicious participants. We present the design, rationale, and details of Chainspace; we evaluate an implementation of the system to support our claims about its scalability and other features; and we illustrate a number of privacy-friendly smart contracts for smart metering, polling, and banking, and measure their performance.
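
    A toy sketch of the cross-shard atomic-commit idea behind a protocol like S-BAC, under heavy simplification: every shard managing an input object votes to prepare, and the transaction commits only if all votes accept, otherwise it aborts and locks are released. The classes and object-naming convention are illustrative; the real S-BAC message flow, BFT replication within shards, and auditing are not modeled.

```python
from dataclasses import dataclass, field

@dataclass
class Shard:
    """A toy shard holding a set of active (unspent) object ids."""
    name: str
    active_objects: set = field(default_factory=set)
    locked: set = field(default_factory=set)

    def prepare(self, inputs):
        """Vote accept iff all locally managed inputs are active and unlocked."""
        mine = {o for o in inputs if o in self.active_objects}
        if any(o in self.locked for o in mine):
            return False
        self.locked |= mine
        return True

    def commit(self, inputs, outputs):
        """Consume local inputs and create this shard's share of the outputs."""
        self.active_objects -= set(inputs)
        self.active_objects |= {o for o in outputs if o.startswith(self.name)}
        self.locked -= set(inputs)

    def abort(self, inputs):
        self.locked -= set(inputs)

def cross_shard_commit(shards, inputs, outputs):
    """Commit only if every involved shard votes accept; otherwise abort."""
    votes = [s.prepare(inputs) for s in shards]
    if all(votes):
        for s in shards:
            s.commit(inputs, outputs)
        return "committed"
    for s in shards:
        s.abort(inputs)
    return "aborted"

# Example: a transaction consuming objects held by two different shards.
s1 = Shard("s1", {"s1:coin_a"})
s2 = Shard("s2", {"s2:coin_b"})
print(cross_shard_commit([s1, s2], ["s1:coin_a", "s2:coin_b"], ["s1:coin_c"]))
```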

    Levels of Decentralization and Trust in Cryptocurrencies: Consensus, Governance and Applications

    Since the advent of Bitcoin, decentralization has become an ideal praised almost religiously. Indeed, removing the need for a central authority prevents many forms of abuse that could be performed by a trusted third party, especially when no transparency and accountability mechanisms are in place. Decentralization is, however, a subtle concept that has limits. In this thesis, we look at the decentralization of blockchains at three different levels. First, we look at the consensus protocol, which is the heart of any decentralized system. The Nakamoto protocol, used by Bitcoin, has been shown to induce centralization through the shift to mining pools. Additionally, it is heavily criticized for the enormous amount of energy it requires. We propose a protocol, Fantômette, that incorporates incentives at its core and consumes much less energy than Bitcoin and other proof-of-work based cryptocurrencies. If the consensus protocol makes it possible to decentralize the enforcement of rules in a cryptocurrency, there remains the question of who decides on the rules. Indeed, if a central authority is able to determine what those rules are, then the fact that they are enforced in a decentralized way does not make the system decentralized. We study the governance structure of Bitcoin and Ethereum by measuring their GitHub repositories and providing quantitative ways to compare their level of centralization, using metrics based on centrality measures. Finally, many applications are now built on top of blockchains. These can also induce, or straightforwardly lead to, centralization, for example by requiring that users register their identities to comply with regulations. We show how identities can be registered on blockchains in a decentralized and privacy-preserving way.
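
    A small sketch of one way such a centralization measurement could look, assuming commit data mined from a repository: build a bipartite contributor-file graph and compare contributors' degree centrality, where a more skewed distribution suggests more centralized development. The toy commit list and the specific centrality choice are assumptions, not the thesis's exact metrics.

```python
import networkx as nx

# Hypothetical commit log: (contributor, file touched). In practice this
# would be mined from the Bitcoin or Ethereum GitHub repositories.
commits = [
    ("alice", "consensus.cpp"), ("alice", "validation.cpp"),
    ("alice", "net.cpp"),       ("bob",   "validation.cpp"),
    ("carol", "doc/README.md"),
]

def contributor_centrality(commits):
    """Degree centrality of contributors in the contributor-file bipartite graph."""
    g = nx.Graph()
    for author, path in commits:
        g.add_node(author, kind="contributor")
        g.add_node(path, kind="file")
        g.add_edge(author, path)
    centrality = nx.degree_centrality(g)
    return {n: c for n, c in centrality.items()
            if g.nodes[n]["kind"] == "contributor"}

# A more skewed distribution suggests a more centralized governance structure.
print(contributor_centrality(commits))
```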