
    Modeling Computational Security in Long-Lived Systems, Version 2

    For many cryptographic protocols, security relies on the assumption that adversarial entities have limited computational power. This type of security degrades progressively over the lifetime of a protocol. However, some cryptographic services, such as timestamping services or digital archives, are long-lived in nature; they are expected to be secure and operational for a very long time (i.e., super-polynomial). In such cases, security cannot be guaranteed in the traditional sense: a computationally secure protocol may become insecure if the attacker has a super-polynomial number of interactions with the protocol. This paper proposes a new paradigm for the analysis of long-lived security protocols. We allow entities to be active for a potentially unbounded amount of real time, provided they perform only a polynomial amount of work per unit of real time. Moreover, the space used by these entities is allocated dynamically and must be polynomially bounded. We propose a new notion of long-term implementation, which is an adaptation of computational indistinguishability to the long-lived setting. We show that long-term implementation is preserved under polynomial parallel composition and exponential sequential composition. We illustrate the use of this new paradigm by analyzing some security properties of the long-lived timestamping protocol of Haber and Kamat.
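    For orientation, the classical notion being adapted here can be written down directly; the long-lived reading below is a hedged paraphrase of the abstract, not the paper's formal definition of long-term implementation.

    % Standard computational indistinguishability, with security parameter k:
    % two probability ensembles X and Y are indistinguishable if every
    % probabilistic polynomial-time distinguisher D has negligible advantage.
    \[
      \bigl|\,\Pr[D(X_k)=1] - \Pr[D(Y_k)=1]\,\bigr| \;\le\; \mathrm{negl}(k)
    \]
    % Hedged reading of the long-lived variant sketched above: the adversary
    % may interact with the protocol for unboundedly many real-time units,
    % performing at most poly(k) work (and using poly(k) space) per unit,
    % and its advantage must remain negligible throughout the protocol's
    % lifetime rather than only over polynomially many interactions.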

    The Living Application: a Self-Organising System for Complex Grid Tasks

    We present the living application, a method to autonomously manage applications on the grid. During its execution on the grid, the living application makes choices about the resources to use in order to complete its tasks. These choices can be based on its internal state, or on autonomously acquired knowledge from external sensors. By granting limited user capabilities to a living application, it is able to port itself from one resource topology to another. The application performs these actions at run-time without depending on users or external workflow tools. We demonstrate this new concept in a special case of a living application: the living simulation. Today, many simulations require a wide range of numerical solvers and run most efficiently if specialized nodes are matched to the solvers. The idea of the living simulation is that it decides itself which grid machines to use based on the numerical solver currently in use. In this paper we apply the living simulation to modelling the collision between two galaxies in a test setup with two specialized computers. This simulation switches at run-time between a GPU-enabled computer in the Netherlands and a GRAPE-enabled machine that resides in the United States, using an oct-tree N-body code whenever it runs in the Netherlands and a direct N-body solver in the United States.
    Comment: 26 pages, 3 figures, accepted by IJHPC
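    As a minimal illustration of the run-time decision described above (the resource names, solver labels, and the migrate/advance helpers are hypothetical placeholders, not the authors' middleware), the core choice can be sketched in Python as:

    # Hypothetical sketch: a living simulation picks its host from the solver in use.
    RESOURCES = {
        "octree_gpu": "gpu-cluster.nl.example.org",   # GPU machine (octree code)
        "direct_nbody": "grape-host.us.example.org",  # GRAPE machine (direct solver)
    }

    def pick_resource(solver: str) -> str:
        """Map the numerical solver currently in use to a grid resource."""
        return RESOURCES.get(solver, RESOURCES["octree_gpu"])

    def migrate(state: dict, target: str) -> dict:
        # Placeholder: a real living application would checkpoint itself and
        # restage the job on the target machine using its limited user rights.
        return dict(state, host=target)

    def advance(state: dict) -> dict:
        # Placeholder for one integration step with the currently selected solver.
        return dict(state, time=state.get("time", 0.0) + 1.0)

    def step(state: dict) -> dict:
        """One self-managed step: choose where to run, migrate if needed, advance."""
        target = pick_resource(state["solver"])
        if state.get("host") != target:
            state = migrate(state, target)
        return advance(state)

    state = {"solver": "direct_nbody", "host": "gpu-cluster.nl.example.org"}
    print(step(state)["host"])   # -> grape-host.us.example.org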

    Context for goal-level product line derivation

    Product line engineering aims at developing a family of products and facilitating the derivation of product variants from it. Context can be a main factor in determining which products to derive. Yet, there is a gap in incorporating context into variability models. We advocate that, in the first place, variability originates from human intentions and choices even before software systems are constructed, and that context influences variability at this intentional level before the functional one. Thus, we propose to analyze variability at an early phase of analysis, adopting the intentional ontology of goal models and studying how context can influence such variability. We present a classification of variation points on goal models, analyze their relation with context, and show the process of constructing and maintaining the models. Our approach is illustrated with an example of a smart home for people with dementia.
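    As an illustration only (the class names and the specific variation points below are assumptions drawn from the abstract, not the authors' metamodel), a context-annotated goal model and its derivation step could look roughly like this in Python:

    from dataclasses import dataclass, field
    from typing import Callable, List

    # Hypothetical sketch: goals with alternative refinements (variation points)
    # whose applicability is guarded by a context condition.

    Context = dict  # e.g. {"patient_alone": True, "patient_asleep": False}

    @dataclass
    class Alternative:
        name: str
        applies_when: Callable[[Context], bool]   # context condition for this variant

    @dataclass
    class Goal:
        name: str
        alternatives: List[Alternative] = field(default_factory=list)

        def derive(self, ctx: Context) -> List[str]:
            """Keep only the alternatives whose context condition holds."""
            return [a.name for a in self.alternatives if a.applies_when(ctx)]

    # Example loosely inspired by the smart-home scenario in the abstract.
    notify = Goal("notify caregiver", [
        Alternative("play voice reminder", lambda c: not c.get("patient_asleep", False)),
        Alternative("send SMS to caregiver", lambda c: c.get("patient_alone", False)),
    ])

    print(notify.derive({"patient_alone": True, "patient_asleep": True}))
    # -> ['send SMS to caregiver']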

    Homo Datumicus: correcting the market for identity data

    Effective digital identity systems offer great economic and civic potential. However, unlocking this potential requires dealing with social, behavioural, and structural challenges to efficient market formation. We propose that a marketplace for identity data can be more efficiently formed with an infrastructure that provides a more adequate representation of individuals online. This paper therefore introduces the ontological concept of Homo Datumicus: individuals as data subjects transformed by HAT Microservers, with the axiomatic computational capabilities to transact with their own data at scale. Adoption of this paradigm would lower the social risks of identity orientation, enable privacy-preserving transactions by default, and mitigate the risks of power imbalances in digital identity systems and markets.

    PresenceSense: Zero-training Algorithm for Individual Presence Detection based on Power Monitoring

    Non-intrusive presence detection of individuals in commercial buildings is much easier to implement than intrusive methods such as passive infrared, acoustic sensors, and cameras. Individual power consumption, while providing useful feedback and motivation for energy saving, can also serve as a valuable source for presence detection. We conduct pilot experiments in an office setting to collect individual presence data with ultrasonic sensors, acceleration sensors, and WiFi access points, in addition to the individual power monitoring data. PresenceSense (PS), a semi-supervised learning algorithm based on power measurements that trains itself on unlabeled data only, is proposed, analyzed and evaluated in the study. Without any labeling effort, which is usually tedious and time consuming, PresenceSense outperforms popular models whose parameters are optimized over a large training set. The results are interpreted, and potential applications of PresenceSense to other data sources are discussed. This study is significant for space security, occupancy behavior modeling, and energy saving of plug loads.
    Comment: BuildSys 201
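    A minimal sketch of label-free presence inference from individual power readings follows; the two-cluster thresholding shown here is an illustrative assumption, not the PresenceSense algorithm itself.

    import numpy as np
    from sklearn.cluster import KMeans

    # Hedged sketch: cluster a desk's power readings (watts) into two modes and
    # call the higher-power mode "present". Illustration only, not PresenceSense.

    def presence_from_power(watts: np.ndarray) -> np.ndarray:
        """Return a boolean presence estimate for each power sample."""
        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(watts.reshape(-1, 1))
        present_cluster = int(np.argmax(km.cluster_centers_.ravel()))
        return km.labels_ == present_cluster

    if __name__ == "__main__":
        readings = np.array([3.1, 2.9, 45.0, 50.2, 48.7, 3.0, 47.5])  # synthetic watts
        print(presence_from_power(readings))
        # e.g. [False False  True  True  True False  True]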