
    Securing open multi-agent systems governed by electronic institutions

    One way to build large-scale autonomous systems is to develop an open multi-agent system using peer-to-peer architectures in which agents are not pre-engineered to work together and in which agents themselves determine the social norms that govern collective behaviour. The social norms and the agent interaction models can be described by Electronic Institutions such as those expressed in the Lightweight Coordination Calculus (LCC), a compact executable specification language based on logic programming and pi-calculus. Open multi-agent systems have experienced growing popularity in the multi-agent community and are expected to have many applications in the near future as large-scale distributed systems become more widespread, e.g. in emergency response, electronic commerce and cloud computing. A major practical limitation of such systems is security, because their very openness opens the door for adversaries to exploit existing vulnerabilities. This thesis addresses the security of open multi-agent systems governed by electronic institutions. First, the main forms of attack on open multi-agent systems are introduced and classified in the proposed attack taxonomy. Then, various security techniques from the literature are surveyed and analysed. These techniques are categorised as either prevention or detection approaches, and appropriate countermeasures to each class of attack are suggested. A fundamental limitation of conventional security mechanisms (e.g. access control and encryption) is their inability to prevent information from being propagated. Focusing on information leakage in choreography systems using LCC, we then suggest two frameworks to detect insecure information flows: conceptual modelling of interaction models and language-based information flow analysis. A novel security-typed LCC language is proposed to address the latter approach. Both static (design-time) and dynamic (run-time) security type checking are employed to guarantee that no information leakage can occur in annotated LCC interaction models. The proposed security type system is then formally evaluated by proving its properties. A limitation of both the conceptual modelling and language-based frameworks is the difficulty of formalising realistic policies using annotations. Finally, the proposed security-typed LCC is applied to a cloud computing configuration case study, in which virtual machine migration is managed. The secrecy of LCC interaction models for virtual machine management is analysed and information leaks are discussed.
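    To make the language-based idea concrete, the sketch below shows a two-level security lattice (Low flows anywhere, High only to High) and a static check that rejects flows from High-labelled data to Low-labelled sinks. The labels, message names and API are illustrative assumptions only, not the thesis's actual security-typed LCC.

        # Minimal sketch of lattice-based information-flow checking (assumed API).
        from dataclasses import dataclass

        LOW, HIGH = "Low", "High"
        ALLOWED = {(LOW, LOW), (LOW, HIGH), (HIGH, HIGH)}  # Low flows anywhere; High only to High

        @dataclass
        class Message:
            name: str
            label: str  # security label of the payload

        def check_interaction(steps):
            """Statically check (message, sink_label) pairs; collect violations."""
            return [f"leak: {m.name} ({m.label}) -> {sink} sink"
                    for m, sink in steps if (m.label, sink) not in ALLOWED]

        # A High-labelled VM credential sent to a Low (public) sink is rejected.
        steps = [(Message("vm_credentials", HIGH), LOW), (Message("migration_ack", LOW), LOW)]
        print(check_interaction(steps))  # ['leak: vm_credentials (High) -> Low sink']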

    Improving the Security of Critical Infrastructure: Metrics, Measurements, and Analysis

    In this work, we propose three contributions needed in the process of improving the security of critical infrastructure: metrics, measurement, and analysis. To improve security, metrics are key to ensuring the accuracy of assessment and evaluation. Measurements are the core of the process of identifying the causality and effectiveness of various behaviors, and accurate measurement under the right assumptions is a cornerstone of accurate analysis. Finally, contextualized analysis is essential for understanding measurements: different results can be derived from the same data depending on the analysis method, and such analysis can serve as a basis for understanding and improving systems security. In this dissertation, we examine whether these key concepts are well demonstrated in existing (networked) systems and research products. In the first thrust, we verified the validity of the volume-based contribution evaluation metrics used in threat information sharing systems, and proposed a qualitative evaluation as an alternative that supplements the shortcomings of the volume-based method. In the second thrust, we measured the effectiveness of low-rate DDoS attacks in a realistic environment to highlight the importance of establishing assumptions grounded in reality; we also analyzed low-rate DDoS attacks theoretically and conducted additional experiments to validate the analysis. In the last thrust, we conducted a large-scale measurement and analyzed the behaviors of open resolvers to estimate the potential threats they pose. We went beyond merely counting open resolvers and explored new implications that the behavioral analysis could provide, and we experimentally demonstrated the existence of forwarding resolvers and characterized their behavior by precisely analyzing DNS resolution packets.
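    As a back-of-envelope illustration of why low-rate DDoS attacks are hard to catch with volume-based metrics, the sketch below models the attack as an on/off pulse train; the parameter values are assumptions chosen for illustration, not the dissertation's measurements.

        # Average rate of an on/off attack pulse train: peak rate times duty cycle.
        def average_rate(burst_rate_mbps: float, burst_ms: float, period_ms: float) -> float:
            return burst_rate_mbps * (burst_ms / period_ms)

        # A 100 Mbps burst lasting 50 ms, repeated roughly at TCP's ~1 s minimum RTO,
        # averages only 5 Mbps, so it can slip past volume-based detectors while
        # still forcing repeated TCP timeouts.
        print(average_rate(100.0, 50.0, 1000.0))  # 5.0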

    Towards Scalable Network Traffic Measurement With Sketches

    Driven by the ever-increasing data volume through the Internet, the per-port speed of network devices has reached 400 Gbps, and high-end switches are capable of processing 25.6 Tbps of network traffic. To improve the efficiency and security of the network, network traffic measurement has become more important than ever. For fast and accurate traffic measurement, managing an accurate working set of active flows (WSAF) at line rate is a key challenge. The WSAF is usually located in high-speed but expensive memories, such as TCAM or SRAM, whose capacity is quite limited. To scale up per-flow measurement, we pursue three thrusts. In the first thrust, we propose to keep the WSAF in DRAM and place a compact data structure (i.e., a sketch) called FlowRegulator in front of it to compensate for DRAM's slow access time. Per our results, FlowRegulator can substantially reduce massive influxes to the WSAF without compromising measurement accuracy. In the second thrust, we integrate our sketch into a network system and propose an SDN-based WLAN monitoring and management framework called RFlow+, which overcomes the limitations of existing traffic measurement solutions (e.g., OpenFlow and sFlow), such as a limited view, incomplete flow statistics, and a poor trade-off between measurement accuracy and CPU/network overheads. In the third thrust, we introduce a novel sampling scheme to address the poor trade-off provided by standard simple random sampling (SRS). Even though SRS is widely used in practice because of its simplicity, it yields non-uniform sampling rates for different flows because it samples packets over an aggregated data flow. Starting from the observation that independent per-flow packet sampling provides the most accurate estimate of each flow, we introduce the concept of per-flow systematic sampling, which aims to provide the same sampling rate across all flows. We also provide a concrete sampling method called SketchFlow, which approximates per-flow systematic sampling using a sketch saturation event.
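    For readers unfamiliar with sketches, the snippet below is a minimal Count-Min sketch, a standard example of the compact, fixed-memory structures the abstract refers to. The width, depth and hashing scheme are illustrative choices; FlowRegulator and SketchFlow themselves are more involved.

        # Minimal Count-Min sketch: fixed memory, upper-biased frequency estimates.
        import hashlib

        class CountMinSketch:
            def __init__(self, width=1024, depth=4):
                self.width, self.depth = width, depth
                self.table = [[0] * width for _ in range(depth)]

            def _index(self, row, key):
                digest = hashlib.sha256(f"{row}:{key}".encode()).digest()
                return int.from_bytes(digest[:8], "big") % self.width

            def add(self, key, count=1):
                for row in range(self.depth):
                    self.table[row][self._index(row, key)] += count

            def estimate(self, key):
                # Taking the minimum over rows bounds hash-collision overcounting.
                return min(self.table[row][self._index(row, key)] for row in range(self.depth))

        cms = CountMinSketch()
        for flow in ["10.0.0.1->10.0.0.2"] * 120 + ["10.0.0.3->10.0.0.4"] * 7:
            cms.add(flow)
        print(cms.estimate("10.0.0.1->10.0.0.2"))  # >= 120, usually exactly 120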

    SSH Key Management Challenges and Requirements

    SSH (Secure Shell) uses public keys for authenticating servers and users. This invited paper summarizes progress in SSH key management so far, highlights outstanding problems, and presents requirements for a long-term solution. Proposals are solicited from the research community to address the issue. The problem is of high practical importance, as most of our critical Internet infrastructure, cloud services, and open source software development are protected using these keys.
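    One routine chore behind the key-management problem is simply knowing which public keys grant access to a host. The sketch below inventories an authorized_keys file and prints OpenSSH-style SHA256 fingerprints; the file path, the parsing (which skips option-prefixed lines) and the helper names are assumptions for illustration, not an API from the paper.

        # Hedged sketch: inventory authorized_keys entries for review.
        import base64, hashlib
        from pathlib import Path

        def fingerprint(b64_blob: str) -> str:
            """OpenSSH-style SHA256 fingerprint of a base64-encoded key blob."""
            digest = hashlib.sha256(base64.b64decode(b64_blob)).digest()
            return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

        def inventory(path: Path):
            """Yield (key_type, fingerprint, comment) for each plain key line."""
            for line in path.read_text().splitlines():
                parts = line.split()
                if len(parts) >= 2 and parts[0].startswith(("ssh-", "ecdsa-")):
                    comment = parts[2] if len(parts) > 2 else "<no comment>"
                    yield parts[0], fingerprint(parts[1]), comment

        for key_type, fp, comment in inventory(Path.home() / ".ssh" / "authorized_keys"):
            print(key_type, fp, comment)  # keys without comments deserve a closer look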

    Classifying network attack scenarios using an ontology

    This paper presents a methodology that uses a network attack ontology to classify computer-based attacks. Computer network attacks differ in motivation, execution and end result. Because attacks are so diverse, no standard classification exists; if an attack could be classified, it could be mitigated accordingly. A taxonomy of computer network attacks forms the basis of the ontology. Most published taxonomies present an attack from either the attacker's or the defender's point of view; this taxonomy presents both views. The main taxonomy classes are: Actor, Actor Location, Aggressor, Attack Goal, Attack Mechanism, Attack Scenario, Automation Level, Effects, Motivation, Phase, Scope and Target. The "Actor" class is the entity executing the attack. The "Actor Location" class is the Actor's country of origin. The "Aggressor" class is the group instigating an attack. The "Attack Goal" class specifies the attacker's goal. The "Attack Mechanism" class defines the attack methodology. The "Automation Level" class indicates the level of human interaction. The "Effects" class describes the consequences of an attack. The "Motivation" class specifies incentives for an attack. The "Scope" class describes the size and utility of the target. The "Target" class is the physical device or entity targeted by an attack. The "Vulnerability" class describes a target vulnerability used by the attacker. The "Phase" class represents an attack model that subdivides an attack into different phases. The ontology was developed around an "Attack Scenario" class, which draws from the other classes and can be used to characterize and classify computer network attacks. An "Attack Scenario" consists of phases, has a scope, and is attributed to an actor and an aggressor that have a goal. The "Attack Scenario" thus represents different classes of attacks. High-profile computer network attacks such as Stuxnet and the Estonia attacks can now be classified through the "Attack Scenario" class.
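    As an illustration of how the "Attack Scenario" class draws on the other classes, the sketch below encodes the taxonomy as plain data types and instantiates it for Stuxnet. The field values are loose illustrations, and the real ontology (e.g. expressed in OWL) is far richer than a flat record.

        # Illustrative encoding of the taxonomy classes as a dataclass (assumed fields).
        from dataclasses import dataclass, field

        @dataclass
        class AttackScenario:
            actor: str               # entity executing the attack
            actor_location: str      # actor's country of origin
            aggressor: str           # group instigating the attack
            attack_goal: str
            attack_mechanism: str
            automation_level: str    # level of human interaction
            effects: str
            motivation: str
            phases: list = field(default_factory=list)
            scope: str = ""
            target: str = ""
            vulnerability: str = ""

        stuxnet = AttackScenario(
            actor="malware", actor_location="unknown", aggressor="state-sponsored group",
            attack_goal="sabotage", attack_mechanism="worm with PLC payload",
            automation_level="high", effects="physical damage", motivation="political",
            phases=["reconnaissance", "propagation", "payload delivery"],
            scope="industrial control system", target="centrifuge controllers",
            vulnerability="zero-day Windows flaws")
        print(stuxnet.attack_goal)  # sabotage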

    The future of Cybersecurity in Italy: Strategic focus area

    This volume has been created as a continuation of the previous one, with the aim of outlining a set of focus areas and actions that the Italian national research community considers essential. The book touches on many aspects of cyber security, ranging from the infrastructure and controls needed to organize cyber defence, to the actions and technologies to be developed for better protection, to the identification of the main technologies to be defended, and to a proposed set of horizontal actions for training, awareness raising, and risk management.

    The Value of User-Visible Internet Cryptography

    Cryptographic mechanisms are used in a wide range of applications, including email clients, web browsers, and document and asset management systems, whose typical users are not cryptography experts. A number of empirical studies have demonstrated that explicit, user-visible cryptographic mechanisms are not widely used by non-expert users, and as a result it has been argued that cryptographic mechanisms should be better hidden or embedded in end-user processes and tools. Other mechanisms, such as HTTPS, have cryptography built in and only become visible to the user when a dialogue appears because of a (potential) problem. This paper surveys deployed and potential technologies in use, examines the social and legal context of broad classes of users, and, from there, assesses the value of and issues with user-visible cryptography for those users.