26 research outputs found

    Using Inhabitation in Bounded Combinatory Logic with Intersection Types for Composition Synthesis

    Full text link
    We describe ongoing work on a framework for automatic composition synthesis from a repository of software components. This work is based on combinatory logic with intersection types. The idea is that components are modeled as typed combinators, and an algorithm for inhabitation (is there a combinatory term e with type tau relative to an environment Gamma?) can be used to synthesize compositions. Here, Gamma represents the repository in the form of typed combinators, tau specifies the synthesis goal, and e is the synthesized program. We illustrate our approach by examples, including an application to synthesis from GUI components. Comment: In Proceedings ITRS 2012, arXiv:1307.784
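
    As a rough illustration of the inhabitation question above (is there a term e of type tau over repository Gamma?), here is a minimal Python sketch with made-up component names and plain atomic types; the actual framework uses bounded combinatory logic with intersection types, not this simplified search.

```python
# Minimal sketch of composition synthesis via type inhabitation.
# Hypothetical repository: combinator name -> (argument types, result type).
# Real bounded combinatory logic uses intersection types and bounded
# polymorphism; here types are plain atoms to keep the search tiny.

GAMMA = {
    "readCSV":  ((), "Table"),
    "toChart":  (("Table",), "Chart"),
    "toWidget": (("Chart",), "GUIComponent"),
}

def inhabit(goal, gamma, depth=5):
    """Return a combinatory term (nested tuples) of type `goal`, or None."""
    if depth == 0:
        return None
    for name, (args, result) in gamma.items():
        if result != goal:
            continue
        subterms = []
        for arg_ty in args:
            sub = inhabit(arg_ty, gamma, depth - 1)
            if sub is None:
                break
            subterms.append(sub)
        else:  # all argument types were inhabited
            return (name, *subterms)
    return None

print(inhabit("GUIComponent", GAMMA))
# ('toWidget', ('toChart', ('readCSV',)))
```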

    Automatic synthesis of component & connector software architectures with bounded combinatory logic

    Get PDF
    Combinatory logic synthesis is a new type-based approach towards automatic synthesis of software from components in a repository. In this thesis we show how the type-based approach can naturally exploit taxonomic conceptual structures in software architectures and component repositories to enable automatic composition and configuration of components, as well as code generation, by associating taxonomic concepts with architectural building blocks such as, in particular, software connectors. Components of a repository are exposed for synthesis as typed combinators, where intersection types are used to represent concepts that specify the intended usage and functionality of a component. An algorithm for solving the type inhabitation problem in combinatory logic (does there exist a composition of combinators with a given type?) is then used to automate the retrieval, composition, and configuration of suitable building blocks with respect to a goal specification. Since type inhabitation has high computational complexity, heuristic optimizations for the inhabitation algorithm are essential for making the approach practical. We discuss particularly important (theoretical and pragmatic) optimization strategies and evaluate them experimentally. Furthermore, we apply this synthesis approach to define a method for software connector synthesis for realistic software architectures based on a type-theoretic model. We conduct experiments with a rapid prototyping tool that employs this method on complex concrete ERP and e-Commerce systems and discuss the results.
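
    To illustrate how taxonomic concepts can be attached to building blocks via intersection types, the following Python sketch models an intersection type as a set of atomic concepts closed under a small taxonomy; the connector names and taxonomy are invented for illustration and do not come from the thesis.

```python
# Sketch: taxonomic concepts attached to connectors as intersection types.
# An intersection type is modeled as a set of atomic concepts; the taxonomy
# maps each concept to its more general ancestors. The concept names below
# (MessageQueue, AsyncConnector, ...) are illustrative only.

TAXONOMY = {
    "MessageQueue": {"AsyncConnector", "Connector"},
    "AsyncConnector": {"Connector"},
}

def close(concepts):
    """Close a set of concepts under the taxonomy (upward closure)."""
    closed = set(concepts)
    changed = True
    while changed:
        changed = False
        for c in list(closed):
            for anc in TAXONOMY.get(c, ()):
                if anc not in closed:
                    closed.add(anc)
                    changed = True
    return closed

def satisfies(component_concepts, goal_concepts):
    """Does the component's intersection type subsume the goal specification?"""
    return set(goal_concepts) <= close(component_concepts)

print(satisfies({"MessageQueue"}, {"AsyncConnector", "Connector"}))  # True
print(satisfies({"MessageQueue"}, {"SyncConnector"}))                # False
```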

    Mixin Composition Synthesis based on Intersection Types

    Full text link
    We present a method for synthesizing compositions of mixins using type inhabitation in intersection types. First, recursively defined classes and mixins, which are functions over classes, are expressed as terms in a lambda calculus with records. Intersection types with records and record-merge are used to assign meaningful types to these terms without resorting to recursive types. Second, typed terms are translated to a repository of typed combinators. We show a relation between record types with record-merge and intersection types with constructors. This relation is used to prove soundness and partial completeness of the translation with respect to mixin composition synthesis. Furthermore, we demonstrate how a translated repository and goal type can be used as input to an existing framework for composition synthesis in bounded combinatory logic via type inhabitation. The computed result is a class typed by the goal type and generated by a mixin composition applied to an existing class.
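
    The view of mixins as functions over classes can be made concrete in a few lines of Python; this is only an illustrative analogue of the paper's setting (a lambda calculus with records and intersection types), with invented class and mixin names.

```python
# Sketch: mixins as functions over classes, composed and applied to a base.
# This mirrors the "mixins are functions over classes" view in plain Python.

def Serializable(cls):
    class Mixed(cls):
        def to_dict(self):
            return dict(vars(self))
    return Mixed

def Comparable(cls):
    class Mixed(cls):
        def __eq__(self, other):
            return vars(self) == vars(other)
    return Mixed

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

def compose(*mixins):
    """Compose mixins right to left, like function composition."""
    def apply(cls):
        for m in reversed(mixins):
            cls = m(cls)
        return cls
    return apply

SynthesizedPoint = compose(Serializable, Comparable)(Point)
p = SynthesizedPoint(1, 2)
print(p.to_dict())                  # {'x': 1, 'y': 2}
print(p == SynthesizedPoint(1, 2))  # True
```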

    A p/2 Adversary Power Resistant Blockchain Sharding Approach

    Full text link
    Blockchain sharding is a blockchain performance enhancement approach. By splitting a blockchain into several parallel-run committees (shards), it helps increase transaction throughput, reduce the computational resources required, and increase the reward expectation for participants. Recently, several flexible sharding methods that can tolerate up to n/2 Byzantine nodes (an n/2 security level) have been proposed. However, these methods suffer from three main drawbacks. First, in a non-sharding blockchain, nodes can carry different weight (power or stake) in creating a consensus, so an adversary needs to control half of the overall weight in order to manipulate the system (a p/2 security level). In blockchain sharding, all nodes carry the same weight. Thus, an n/2 security level blockchain sharding only reaches the p/2 security level under the assumption that honest participants create nodes in proportion to their weight. Second, when some nodes leave the system, other nodes need to be reassigned frequently from shard to shard in order to maintain the security level, which has an adverse effect on system performance. Third, while some n/2 approaches can maintain data integrity with up to n/2 Byzantine nodes, their systems can halt with a smaller number of Byzantine nodes. In this paper, we present a p/2 security level blockchain sharding approach that does not require honest participants to create multiple nodes, requires less node reassignment when some nodes leave the system, and can prevent the system from halting. Our experiments show that our new approach outperforms existing blockchain sharding approaches in terms of security, transaction throughput and flexibility.
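
    The gap between per-shard node-count security and weight-based security rests on how likely a randomly assembled shard is to be compromised. The sketch below computes the standard hypergeometric tail for that event under an assumed uniform random assignment; it is an illustrative calculation with arbitrary numbers, not the paper's analysis.

```python
# Sketch: probability that a uniformly sampled shard of size s is compromised
# (adversary holds at least half the seats), given N nodes of which A are
# adversarial. This illustrates why per-shard n/2 bounds only give a p/2
# (weight-based) guarantee if honest weight is reflected in node count.
# Standard hypergeometric tail; not the paper's exact analysis.

from math import comb

def shard_failure_prob(N, A, s):
    """P[at least ceil(s/2) of s sampled nodes are adversarial]."""
    threshold = (s + 1) // 2
    total = comb(N, s)
    return sum(comb(A, k) * comb(N - A, s - k)
               for k in range(threshold, s + 1)) / total

# Example: 2000 nodes, roughly a third adversarial, shards of 100 nodes.
print(f"{shard_failure_prob(2000, 666, 100):.2e}")
```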

    MWPoW+: a strong consensus protocol for intra-shard consensus in blockchain sharding

    Get PDF
    Blockchain sharding splits a blockchain into several shards where consensus is reached at the shard level rather than over the entire blockchain. It improves transaction throughput and reduces the computational resources required of individual nodes. But deriving trustworthy consensus within a shard becomes an issue, as the longest-chain mechanisms used in conventional blockchains can no longer be used. Instead, a vote-based consensus mechanism must be employed. However, existing vote-based Byzantine fault tolerance consensus protocols do not offer sufficient security guarantees for sharded blockchains. First, when used to support consensus where only one block is allowed at a time (binary consensus), these protocols are susceptible to progress-hindering attacks, i.e., they may be unable to reach a consensus. Second, when used to support a stronger type of consensus where multiple concurrent blocks are allowed (strong consensus), their tolerance of adversary nodes is low. This paper proposes a new consensus protocol to address all these issues. We call the new protocol MWPoW+, as its basic framework is based on the existing Multiple Winner Proof of Work (MWPoW) protocol but includes new mechanisms to address the issues mentioned above. MWPoW+ is a vote-based protocol for strong consensus, asynchronous in consensus derivation but synchronous in communication. We prove that it can tolerate up to f < n/2 adversary nodes in a shard using a binary consensus protocol, and that it does not suffer from progress-hindering attacks.
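
    The f < n/2 tolerance bound for vote-based binary consensus comes down to a counting argument over strict majorities. The sketch below shows only that counting step with hypothetical vote data; it is not the MWPoW+ protocol, which additionally weights votes by proof of work and adds mechanisms against progress-hindering attacks.

```python
# Sketch: the f < n/2 intuition behind vote-based binary consensus.
# If fewer than half the n voters are adversarial, two conflicting blocks
# cannot both reach a strict majority, so a majority decision is consistent.
# This is only the counting argument, not the MWPoW+ protocol itself.

from collections import Counter

def decide(votes, n):
    """Return the block accepted by a strict majority of n voters, if any."""
    tally = Counter(votes.values())
    for block, count in tally.items():
        if count > n // 2:
            return block
    return None  # no decision this round

votes = {"node1": "B", "node2": "B", "node3": "A", "node4": "B", "node5": "B"}
print(decide(votes, n=5))  # 'B' (4 of 5 votes)
```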

    Incentive Mechanism for Uncertain Tasks under Differential Privacy

    Full text link
    Mobile crowd sensing (MCS) has emerged as an increasingly popular sensing paradigm due to its cost-effectiveness. This approach relies on platforms to outsource tasks to participating workers when prompted by task publishers. Although incentive mechanisms have been devised to foster widespread participation in MCS, most of them focus only on static tasks (i.e., tasks whose timing and type are known in advance) and do not protect the privacy of worker bids. In a dynamic and resource-constrained environment, tasks are often uncertain (i.e., the platform lacks a priori knowledge about the tasks) and worker bids may be vulnerable to inference attacks. This paper presents HERALD*, an incentive mechanism that addresses these issues through the use of uncertainty and hidden bids. Theoretical analysis reveals that HERALD* satisfies a range of critical criteria, including truthfulness, individual rationality, differential privacy, low computational complexity, and low social cost. These properties are then corroborated through a series of evaluations.
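
    One standard way to keep bids hidden while still selecting workers is the exponential mechanism from differential privacy. The sketch below is a generic illustration of that idea with invented bid values and parameters; it is not HERALD*'s actual mechanism.

```python
# Sketch: hiding worker bids with the exponential mechanism, a standard
# technique for differentially private selection. Lower bids are preferred,
# but the winner is randomized so that any single bid has bounded influence.
# Not HERALD*'s mechanism; purely illustrative.

import math, random

def exp_mech_select(bids, epsilon, bid_range):
    """Pick one worker; lower bids are exponentially more likely to win."""
    workers = list(bids)
    # utility = -bid; the sensitivity of the utility is the bid range
    weights = [math.exp(epsilon * (-bids[w]) / (2 * bid_range)) for w in workers]
    return random.choices(workers, weights=weights, k=1)[0]

bids = {"w1": 3.0, "w2": 5.0, "w3": 9.0}
print(exp_mech_select(bids, epsilon=1.0, bid_range=10.0))
```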

    A Two-Layer Blockchain Sharding Protocol Leveraging Safety and Liveness for Enhanced Performance

    Get PDF
    Sharding is a critical technique that enhances the scalability of blockchain technology. However, existing protocols often treat adversarial nodes in general terms without distinguishing between different types of attacks, which limits transaction throughput at runtime, since attacks on liveness could be mitigated separately. There have been attempts to increase transaction throughput by handling the attacks separately; however, they have security vulnerabilities. This paper introduces Reticulum, a novel sharding protocol that overcomes these limitations and achieves enhanced scalability in a blockchain network without security vulnerabilities. Reticulum employs a two-phase design that dynamically adjusts transaction throughput based on runtime adversarial attacks on either or both liveness and safety. It consists of 'control' and 'process' shards in two layers corresponding to the two phases. Process shards are subsets of control shards, with each process shard expected to contain at least one honest node with high confidence. Conversely, control shards are expected to have a majority of honest nodes with high confidence. Reticulum leverages unanimous voting in the first phase to involve fewer nodes in accepting or rejecting a block, allowing more parallel process shards. The control shard finalizes the decision made in the first phase and serves as a lifeline to resolve disputes when they surface. Experiments demonstrate that the unique design of Reticulum empowers high transaction throughput and robustness in the face of different types of attacks in the network, making it superior to existing sharding protocols for blockchain networks.
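
    The two-layer vote rules described above can be summarized in a few lines: unanimous voting fast-accepts a block in a process shard, and the enclosing control shard settles disputes. The sketch below assumes the control shard decides by simple majority, which is a simplification; it uses hypothetical votes and omits Reticulum's networking, epochs, and dispute proofs.

```python
# Sketch of the two-layer vote rules: a process shard needs a unanimous vote
# to fast-accept a block; otherwise the enclosing control shard settles the
# dispute. A simplification of Reticulum's actual protocol.

def process_shard_decision(votes):
    """Unanimous approval fast-accepts; any objection escalates."""
    return "accept" if all(votes) else "escalate"

def control_shard_decision(votes):
    """Control shard (honest-majority assumption) decides by majority."""
    return "accept" if sum(votes) * 2 > len(votes) else "reject"

process_votes = [True, True, False]  # one objecting node in the process shard
if process_shard_decision(process_votes) == "escalate":
    print(control_shard_decision([True] * 7 + [False] * 3))  # accept
```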

    On Assessing Trustworthy AI in Healthcare. Machine Learning as a Supportive Tool to Recognize Cardiac Arrest in Emergency Calls

    Get PDF
    Artificial Intelligence (AI) has the potential to greatly improve the delivery of healthcare and other services that advance population health and wellbeing. However, the use of AI in healthcare also brings potential risks that may cause unintended harm. To guide future developments in AI, the High-Level Expert Group on AI set up by the European Commission (EC) recently published ethics guidelines for what it terms “trustworthy” AI. These guidelines are aimed at a variety of stakeholders, especially guiding practitioners toward more ethical and more robust applications of AI. In line with efforts of the EC, AI ethics scholarship focuses increasingly on converting abstract principles into actionable recommendations. However, the interpretation, relevance, and implementation of trustworthy AI depend on the domain and the context in which the AI system is used. The main contribution of this paper is to demonstrate how to use the general AI HLEG trustworthy AI guidelines in practice in the healthcare domain. To this end, we present a best practice of assessing the use of machine learning as a supportive tool to recognize cardiac arrest in emergency calls. The AI system under assessment is currently in use in the city of Copenhagen in Denmark. The assessment is accomplished by an independent team composed of philosophers, policy makers, social scientists, and technical, legal, and medical experts. By leveraging an interdisciplinary team, we aim to expose the complex trade-offs and the necessity for such thorough human review when tackling socio-technical applications of AI in healthcare. For the assessment, we use a process for assessing trustworthy AI, called Z-Inspection®, to identify specific challenges and potential ethical trade-offs when we consider AI in practice.

    Co-Design of a Trustworthy AI System in Healthcare: Deep Learning Based Skin Lesion Classifier

    Get PDF
    This paper documents how an ethically aligned co-design methodology ensures trustworthiness in the early design phase of an artificial intelligence (AI) system component for healthcare. The system explains decisions made by deep learning networks analyzing images of skin lesions. The co-design of trustworthy AI developed here used a holistic approach rather than a static ethical checklist and required a multidisciplinary team of experts working with the AI designers and their managers. Ethical, legal, and technical issues potentially arising from the future use of the AI system were investigated. This paper is a first report on co-designing in the early design phase. Our results can also serve as guidance for other similar early-phase AI tool developments.