149 research outputs found

    HiTrust: building cross-organizational trust relationship based on a hybrid negotiation tree

    Small-world phenomena have been observed in existing peer-to-peer (P2P) networks, and they have proved useful in the design of P2P file-sharing systems. Most studies that construct small-world behaviours in P2P networks rely on clustering peer nodes into groups, communities, or clusters. However, managing an additional multilayer topology increases maintenance overhead, especially in highly dynamic environments. In this paper, we present Social-like P2P systems (Social-P2Ps) for object discovery, which self-manage the P2P topology using human tactics drawn from social networks. In Social-P2Ps, queries are routed intelligently even with limited cached knowledge and node connections. Unlike community-based P2P file-sharing systems, we do not deliberately create and maintain peer groups or communities. Instead, each node connects spontaneously to other peer nodes with the same interests as a result of its daily searches.
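
    As a rough illustration of the interest-driven routing described above, the following Python sketch (all names, the Jaccard similarity heuristic, and the greedy forwarding rule are our own assumptions, not the paper's algorithm) forwards a query to the cached neighbour whose interest profile best matches the query:

```python
# Hypothetical sketch of interest-based query routing in a Social-like P2P
# overlay: each peer keeps only a small cache of neighbours tagged with the
# interests observed in past searches, and forwards queries greedily.

class Peer:
    def __init__(self, name, interests):
        self.name = name
        self.interests = set(interests)   # learned from daily searches
        self.neighbours = []              # cached links, no explicit groups

    def similarity(self, query_terms):
        # Jaccard overlap between the query and this peer's interests.
        q = set(query_terms)
        return len(q & self.interests) / len(q | self.interests)

    def route(self, query_terms, ttl=3):
        # Answer locally if we match well enough, otherwise forward to the
        # best-matching cached neighbour (greedy, no global knowledge).
        if ttl == 0 or self.similarity(query_terms) > 0.5:
            return self
        best = max(self.neighbours, key=lambda n: n.similarity(query_terms),
                   default=None)
        return best.route(query_terms, ttl - 1) if best else self

a = Peer("a", ["jazz", "vinyl"])
b = Peer("b", ["rust", "compilers"])
c = Peer("c", ["jazz", "bebop", "vinyl"])
a.neighbours = [b, c]
print(a.route(["bebop", "jazz"]).name)   # routes to "c", the best match
```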

    CamFlow: Managed Data-sharing for Cloud Services

    A model of cloud services is emerging whereby a few trusted providers manage the underlying hardware and communications, while many companies build on this infrastructure to offer higher-level, cloud-hosted PaaS services and/or SaaS applications. From the start, strong isolation between cloud tenants was seen to be of paramount importance, provided first by virtual machines (VMs) and later by containers, which share the operating system (OS) kernel. Increasingly, applications also require facilities to effect isolation and protection of the data they manage, as well as flexible data sharing with other applications, often across traditional cloud-isolation boundaries; for example, when government provides many related services for its citizens on a common platform. Similar considerations apply to the end-users of applications. In particular, the incorporation of cloud services within 'Internet of Things' architectures is driving the requirements for both protection and cross-application data sharing. These concerns relate to the management of data. Traditional access control is application- and principal/role-specific, applied at policy enforcement points, after which there is no subsequent control over where data flows; a crucial issue once data has left its owner's control, whether through cloud-hosted applications or within cloud services. Information Flow Control (IFC), in addition, offers system-wide, end-to-end flow control based on the properties of the data. We discuss the potential of cloud-deployed IFC for enforcing owners' dataflow policy with regard to protection and sharing, as well as safeguarding against malicious or buggy software. In addition, the audit log associated with IFC provides transparency, giving configurable system-wide visibility over data flows.
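
    The core of the IFC model sketched in this abstract can be illustrated with a toy label check. The sketch below is a simplification we supply for illustration, not CamFlow's actual tag model or API: a flow is permitted only when secrecy tags may grow and integrity tags may shrink along the flow.

```python
# Minimal illustration of Information Flow Control label checks, loosely in
# the spirit of the model the abstract describes (not CamFlow's real API).

from dataclasses import dataclass

@dataclass(frozen=True)
class Label:
    secrecy: frozenset = frozenset()    # tags restricting where data may go
    integrity: frozenset = frozenset()  # tags vouching for the data's origin

def can_flow(src: Label, dst: Label) -> bool:
    # Data may flow only if the destination is at least as secret and no
    # more trusted than the source: secrecy may grow, integrity may shrink.
    return src.secrecy <= dst.secrecy and dst.integrity <= src.integrity

medical = Label(secrecy=frozenset({"patient"}), integrity=frozenset({"nhs"}))
untrusted = Label()  # no tags at all
archive = Label(secrecy=frozenset({"patient"}))  # keeps secrecy, drops integrity

print(can_flow(medical, untrusted))  # False: would declassify patient data
print(can_flow(medical, archive))    # True: secrecy preserved end-to-end
```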

    An investigation of issues of privacy, anonymity and multi-factor authentication in an open environment

    This thesis performs an investigation into issues concerning the broad area of Identity and Access Management, with a focus on open environments. Through literature research, the issues of privacy, anonymity and access control are identified. The issue of privacy is an inherent problem due to the nature of the digital network environment. Information can be duplicated and modified regardless of the wishes and intentions of the owner of that information unless proper measures are taken to secure the environment. Once information is published or divulged on the network, there is very little means of controlling its subsequent usage. To address this issue, a model for privacy is presented that follows the user-centric paradigm of meta-identity. The lack of anonymity, where security measures can be thwarted through observation of the environment, is a concern for users and systems. By observing the communication channel and monitoring the interactions between users and systems over a long enough period of time, an attacker can infer knowledge about the users and systems. This knowledge is used to build an identity profile of potential victims for use in subsequent attacks. To address this problem, mechanisms for providing an acceptable level of anonymity while maintaining adequate accountability (from a legal standpoint) are explored. In terms of access control, the inherent weakness of single-factor authentication mechanisms is discussed. The typical mechanism is the username and password pair, which provides a single point of failure. By increasing the factors used in authentication, the amount of work required to compromise the system increases non-linearly. Within an open network, several aspects hinder wide-scale adoption and use of multi-factor authentication schemes, such as token management and the impact on usability. The framework is developed from a Utopian point of view, with the aim of being applicable to many situations rather than a single specific domain. It incorporates multi-factor authentication over multiple paths using mobile phones and GSM networks, and explores the usefulness of such an approach. The models are in turn analysed, providing a discussion of the assumptions made and the problems faced by each model.
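
    A minimal sketch of the multi-factor, multi-path idea, assuming a password factor on the primary channel and a one-time code delivered out-of-band (the `send_sms` stand-in and all names are hypothetical, and a real system would use a salted KDF rather than bare SHA-256):

```python
# Hypothetical sketch of multi-factor authentication over multiple paths:
# a password (something you know) checked on the primary channel plus a
# one-time code (something you have) delivered out-of-band, e.g. via GSM.

import hashlib, hmac, secrets

USERS = {"alice": hashlib.sha256(b"correct horse").hexdigest()}
PENDING = {}  # user -> one-time code awaiting confirmation

def send_sms(user: str, code: str):
    # Stand-in for a GSM gateway: the second factor travels on an
    # independent path, so one compromised channel is not enough.
    print(f"[GSM] to {user}: your code is {code}")

def start_login(user: str, password: str) -> bool:
    digest = hashlib.sha256(password.encode()).hexdigest()
    if user not in USERS or not hmac.compare_digest(digest, USERS[user]):
        return False
    code = f"{secrets.randbelow(10**6):06d}"
    PENDING[user] = code
    send_sms(user, code)
    return True

def finish_login(user: str, code: str) -> bool:
    expected = PENDING.pop(user, None)
    return expected is not None and hmac.compare_digest(code, expected)

if start_login("alice", "correct horse"):
    print(finish_login("alice", PENDING["alice"]))  # True in this demo
```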

    Automatic Transformation-Based Model Checking of Multi-agent Systems

    Multi-Agent Systems (MASs) are highly useful constructs in the context of real-world software applications. Built upon communication and interaction between autonomous agents, these systems are suitable for modelling and implementing intelligent applications. Yet these desirable features are precisely what makes such systems very challenging to design, and their compliance with requirements extremely difficult to verify. This explains the need for techniques and tools to model, understand, and implement interacting MASs. Among the methods developed, design-time verification techniques for MASs based on model checking offer the advantage of being formal and fully automated. We can distinguish between two approaches to model checking MASs: the direct verification approach and the transformation-based approach. This thesis focuses on the latter, which relies on formal reduction techniques to transform the problem of model checking a source logic into the equivalent problem of model checking a target logic. We propose a new transformation framework that leverages model checking of computation tree logic (CTL) and its NuSMV model checker to design and implement transformation-based model checking for CTL-extension logics for MASs. The approach provides an integrated system with a rich set of features, designed to support the transformation process while simplifying its most challenging and error-prone tasks. The thesis presents and describes the tool built upon this framework and its different applications. A performance comparison with MCMAS, a model checker for MASs, is also discussed.
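
    To make the transformation idea concrete, here is a toy reduction (the input format is invented for illustration; the thesis's source logics and framework are far richer) that compiles a trivial agent specification into a NuSMV model plus a CTL property, so an off-the-shelf CTL model checker can do the verification:

```python
# Toy illustration of the transformation-based approach: reduce a trivial
# single-agent specification to an equivalent NuSMV model and CTL property.

def to_nusmv(agent: str, states: list, init: str,
             transitions: dict, ctl_property: str) -> str:
    lines = ["MODULE main",
             " VAR",
             f"  {agent} : {{{', '.join(states)}}};",
             " ASSIGN",
             f"  init({agent}) := {init};",
             f"  next({agent}) := case"]
    for src, dsts in transitions.items():
        lines.append(f"   {agent} = {src} : {{{', '.join(dsts)}}};")
    lines += [f"   TRUE : {agent};",   # stay put in absorbing states
              "  esac;",
              f" CTLSPEC {ctl_property}"]
    return "\n".join(lines)

# Emit a model checkable with `NuSMV` directly.
print(to_nusmv("robot", ["idle", "moving", "done"], "idle",
               {"idle": ["moving"], "moving": ["moving", "done"]},
               "AF robot = done"))
```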

    Designing Trustworthy Autonomous Systems

    The design of autonomous systems is challenging, and ensuring their trustworthiness can have different meanings, such as i) ensuring consistency and completeness of the requirements by a correct elicitation and formalization process; ii) ensuring that requirements are correctly mapped to system implementations so that system behaviours never violate the requirements; iii) maximizing the reuse of available components and subsystems in order to cope with design complexity; and iv) ensuring correct coordination of the system with its environment. Several techniques have been proposed over the years to cope with specific problems. However, a holistic design framework that, by leveraging existing tools and methodologies, practically helps the analysis and design of autonomous systems is still missing. This thesis explores the problem of building trustworthy autonomous systems from different angles. We have analyzed how current approaches to formal verification can provide assurances: 1) to the requirements corpus itself, by formalizing requirements with assume/guarantee contracts to detect incompleteness and conflicts; 2) to the reward function used to train the system, so that the requirements do not get misinterpreted; 3) to the execution of the system, by run-time monitoring and enforcement of certain invariants; 4) to the coordination of the system with other external entities in a system-of-systems scenario; and 5) to system behaviours, by automatically synthesizing a correct policy.
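
    Point 1) can be illustrated with a toy conflict check over assume/guarantee-style guarantees. The sketch below is our own propositional simplification (real contract frameworks work over temporal logic): it brute-forces satisfiability to expose a pair of requirements that cannot hold together.

```python
# Hypothetical sketch of detecting conflicts in a requirements corpus
# modelled as guarantees over boolean environment/system variables.

from itertools import product

def satisfiable(preds, variables):
    # Brute-force SAT over the (small) variable space.
    return any(all(p(env) for p in preds)
               for vals in product([False, True], repeat=len(variables))
               for env in [dict(zip(variables, vals))])

variables = ["door_open", "moving"]
# Requirement 1: the lift never moves with the door open...
g1 = lambda e: not (e["door_open"] and e["moving"])
# ...while (buggy) Requirement 2 demands motion whenever the door is open.
g2 = lambda e: not e["door_open"] or e["moving"]

print(satisfiable([g1, g2], variables))  # True: both hold while door stays shut
print(satisfiable([g1, g2, lambda e: e["door_open"]], variables))
# False: once the door actually opens, no behaviour satisfies both
# guarantees, exposing the conflict between the two requirements.
```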

    Generic Methods for Adaptive Management of Service Level Agreements in Cloud Computing

    The adoption of cloud computing to build and deliver application services has been nothing less than phenomenal. Service-oriented systems are being built from disparate sources composed of web services, replicable datastores, messaging, monitoring and analytics functions, and more. Clouds augment these systems with advanced features such as high availability, customer affinity and autoscaling on a fair pay-per-use cost model. The challenge lies in taking the utility paradigm of the cloud beyond its current exploitation. Major trends show that multi-domain synergies are creating added-value service propositions. This raises two questions on autonomic behaviours, which are specifically addressed by this thesis. The first question concerns mechanism design that brings the customer and provider(s) together in the procurement process; the purpose is that, given customer requirements for quality of service and other non-functional properties, service dependencies are efficiently resolved and legally stipulated. The second question concerns effective management of cloud infrastructures such that commitments to customers are fulfilled and the infrastructure is optimally operated in accordance with provider policies. This thesis finds motivation in Service Level Agreements (SLAs) to answer these questions. The role of SLAs is explored as instruments to build and maintain trust in an economy where services are increasingly interdependent. The thesis takes a holistic approach and develops generic methods to automate SLA lifecycle management, by identifying and solving the relevant research problems. The methods afford adaptiveness in a changing business landscape and can be localized through policy-based controls. A thematic vision that emerges from this work is that business models, services and the delivery technology are independent concepts that can be finely knitted together by SLAs. Experimental evaluations support the message of this thesis: exploiting SLAs as foundations for market innovation and infrastructure governance indeed holds win-win opportunities for both cloud customers and cloud providers.
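
    As a hedged illustration of the runtime half of SLA lifecycle management (names, thresholds and the policy are invented here, not the thesis's methods), the sketch below monitors an observed metric against a negotiated target and triggers a policy-driven adaptation before the commitment is breached:

```python
# Illustrative sketch of SLA-driven infrastructure governance: compare an
# observed service metric against the negotiated objective and apply a
# provider policy before the commitment is actually violated.

from dataclasses import dataclass

@dataclass
class SLO:
    metric: str
    target_ms: float         # agreed 95th-percentile latency bound
    warn_ratio: float = 0.8  # provider policy: act at 80% of the budget

def evaluate(slo: SLO, observed_ms: float) -> str:
    if observed_ms > slo.target_ms:
        return "violated: trigger penalty clause and scale out"
    if observed_ms > slo.target_ms * slo.warn_ratio:
        return "at risk: pre-emptively add capacity (autoscaling)"
    return "healthy"

latency = SLO(metric="p95_latency", target_ms=200.0)
for sample in (120.0, 170.0, 230.0):
    print(f"{sample} ms -> {evaluate(latency, sample)}")
```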

    A Low-Energy Security Solution for IoT-Based Smart Farms

    This work proposes a novel configuration of the Transport Layer Security (TLS) protocol suitable for low-energy Internet of Things (IoT) applications. The motivation behind the redesign of TLS is energy-consumption minimisation and sustainable farming, as exemplified by the application domain of aquaponic smart farms. The work therefore considers decentralisation of a formerly centralised security model, with a focus on reducing energy consumption for battery-powered devices. The research presents a four-part investigation into the security solution, composed of a risk assessment, energy analyses of authentication and data exchange functions, and finally the design and verification of a novel consensus authorisation mechanism. The first investigation considered traditional risk-driven threat assessment, extended to include energy reduction, working towards device longevity within a content-oriented framework. Since aquaponics environments involve limited but specific data exchanges, a content-oriented approach produced valuable insights into security and privacy requirements that were later tested by implementing a variety of mechanisms available on the ESP32. The second and third investigations featured the energy analysis of authentication and data exchange functions respectively, where the results of the risk assessment were used to compare re-configurations of TLS mechanisms and domain content. The results showed that selective confidentiality and persistent secure sessions between paired devices enabled considerable improvements in energy consumption, and were a good reflection of the possibilities suggested by the risk assessment. The fourth and final investigation proposed a granular authorisation design to increase the safety of access control that would otherwise be binary in TLS, the motivation being damage mitigation from inside attacks or network faults. The approach involved an automated, hierarchy-based, decentralised network topology to reduce data duplication while still providing robustness beyond the vulnerability of central governance. Formal verification using model checking with four automated back-ends indicated a safe design model. The research concludes that lower-energy IoT solutions for the smart-farm application domain are possible.
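
    A minimal sketch of the 'selective confidentiality' idea, assuming a long-lived paired-device session key and a field classification produced by the risk assessment (the `cryptography` package, the field names and the classification are our assumptions, not the thesis's protocol):

```python
# Sketch of selective confidentiality for constrained devices: encrypt only
# the fields a risk assessment marked sensitive, and reuse one long-lived
# paired-device session key instead of re-handshaking for every message.

import json
from cryptography.fernet import Fernet  # assumed: pip install cryptography

SESSION_KEY = Fernet.generate_key()      # established once at pairing time
session = Fernet(SESSION_KEY)
SENSITIVE = {"ph_setpoint", "owner_id"}  # hypothetical risk-assessment output

def publish(reading: dict) -> dict:
    # Cheap plaintext for benign telemetry, ciphertext only where required,
    # saving the radio and CPU energy of encrypting everything.
    return {k: session.encrypt(json.dumps(v).encode()).decode()
               if k in SENSITIVE else v
            for k, v in reading.items()}

msg = publish({"water_temp": 21.4, "ph_setpoint": 6.8, "owner_id": "farm-07"})
print(msg["water_temp"], "| ph_setpoint encrypted:", msg["ph_setpoint"][:16], "...")
```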

    Stronger secrecy for network-facing applications through privilege reduction

    Despite significant effort in improving software quality, vulnerabilities and bugs persist in applications. Attackers remotely exploit vulnerabilities in network-facing applications and then disclose and corrupt users' sensitive information that these applications process. Reducing the privilege of application components helps to limit the harm that an attacker may cause if she exploits an application. Privilege reduction, i.e., the Principle of Least Privilege, is a fundamental technique for containing possible exploits of error-prone software components: it entails granting a software component the minimal privilege that it needs to operate. Applying this principle ensures that sensitive data is given only to those software components that genuinely require it. This thesis explores how to reduce the privilege of network-facing applications to provide stronger confidentiality and integrity guarantees for sensitive data. First, we look into applying privilege reduction to cryptographic protocol implementations. We address the vital and largely unexamined problem of how to structure implementations of cryptographic protocols to protect sensitive data even when an attacker compromises untrusted components of a protocol implementation. As evidence that the problem is poorly understood, we identified two attacks which succeed in disclosing sensitive data in two state-of-the-art, exploit-resistant cryptographic protocol implementations: the privilege-separated OpenSSH server and the HiStar/DStar DIFC-based SSL web server. We propose practical, general, system-independent principles for structuring protocol implementations to defend against these two attacks, and apply our principles to protect sensitive data from disclosure in the implementations of both the server and client sides of OpenSSH and of the OpenSSL library. Next, we explore how to reduce the privilege of language runtimes, e.g., the JavaScript language runtime, so as to minimize the risk of their compromise, and thus of the disclosure and corruption of sensitive information. Modern language runtimes are complex software involving such advanced techniques as just-in-time compilation, native-code support routines, garbage collection, and dynamic runtime optimizations. This complexity makes it hard to guarantee their safety, as evidenced by the frequency with which vulnerabilities are discovered in them. We provide new mechanisms that allow sandboxing language runtimes using Software-based Fault Isolation (SFI). In particular, we enable sandboxing of runtime code modification, which modern language runtimes depend on heavily for achieving high performance. We have applied our sandboxing techniques to the V8 JavaScript engine on both the x86-32 and x86-64 architectures, and found that the techniques incur only moderate performance overhead. Finally, we apply privilege reduction within the web browser to secure sensitive data within web applications. Web browsers have become an attractive target for attackers because of their widespread use. There are two principal threats to a user's sensitive data in the browser environment: untrusted third-party extensions and untrusted web pages. Extensions execute with elevated privilege, which allows them to read content within all web applications; thus, a malicious extension author may write extension code that reads sensitive page content and sends it to a remote server he controls. Alternatively, a malicious page author may exploit an honest-but-buggy extension, leveraging its elevated privilege to disclose sensitive information from other origins. We propose enforcing privilege-reduction policies on extension JavaScript code to protect web applications' sensitive data from malicious extensions and malicious pages. We designed ScriptPolice, a policy system for the Chrome browser's V8 JavaScript language runtime, to enforce flexible security policies on JavaScript execution. We restrict the privileges of a variety of extensions and contain any malicious activity, whether introduced by design or injected by a malicious page. The overhead ScriptPolice incurs on extension execution is acceptable: the added page-load latency is so short as to be virtually imperceptible to users.
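
    The structuring principle argued for in the first part, keeping secrets in a small trusted component and giving the network-facing component only a narrow request interface, can be sketched as follows (a toy monitor/worker split of our own, not the thesis's OpenSSH or OpenSSL restructuring):

```python
# Illustrative privilege-reduction sketch: the long-term secret lives only
# in a small trusted monitor process; the network-facing worker can request
# operations on the secret but can never read the key itself.

import hmac, hashlib
from multiprocessing import Process, Pipe

SECRET = b"long-term-signing-key"   # present only in the monitor's memory

def monitor(conn):
    # Trusted component: answers sign requests, never discloses SECRET.
    while True:
        req = conn.recv()
        if req is None:
            break
        conn.send(hmac.new(SECRET, req, hashlib.sha256).hexdigest())

def worker(conn):
    # Untrusted, network-facing component: even if exploited, it can only
    # ask for signatures; the key is not in its address space to steal.
    conn.send(b"session-token-42")
    print("signature:", conn.recv())
    conn.send(None)  # shut the monitor down

if __name__ == "__main__":
    mon_end, work_end = Pipe()
    p = Process(target=monitor, args=(mon_end,))
    p.start()
    worker(work_end)
    p.join()
```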

    Runtime Quantitative Verification of Self-Adaptive Systems

    Software systems used in mission- and business-critical applications in domains including defence, healthcare, and finance must comply with strict dependability, performance, and other Quality-of-Service (QoS) requirements. Self-adaptive systems achieve this compliance under changing environmental conditions, evolving requirements and system failures by using closed-loop control to modify their behaviour and structure in response to these events. Runtime quantitative verification (RQV) is a mathematically based approach that implements the closed-loop control of self-adaptive systems. Using runtime observations of a system and its environment, RQV updates stochastic models whose formal analysis underpins the adaptation decisions made within the control loop. The approach can identify and, under certain conditions, predict violations of QoS requirements, and can drive self-adaptation in ways guaranteed to restore or maintain compliance with these requirements. Despite its merits, RQV has significant computation and memory overheads, which restrict its applicability to small systems and to adaptations affecting only the configuration parameters of the system. In this thesis, we introduce RQV variants that improve the efficiency and scalability of the approach and extend its applicability to larger and more complex self-adaptive software systems, and to adaptations that modify the structure of a system. First, we integrate RQV with established efficiency-improvement techniques from other software engineering areas: caching of recent analysis results, limited lookahead to precompute suitable adaptations for potential future changes, and nearly-optimal reconfiguration to eliminate the need for an exhaustive analysis of the entire reconfiguration space. Second, we introduce an RQV variant that incorporates evolutionary algorithms into the RQV process, facilitating the efficient search of large reconfiguration spaces and enabling adaptations that include structural changes. Third, we propose an RQV-driven approach that decentralises the control loops in distributed self-adaptive systems. Finally, we devise an RQV-based methodology for the engineering of trustworthy self-adaptive systems. We evaluate the proposed RQV variants using prototype self-adaptive systems from several application domains, including an embedded system for unmanned underwater vehicles and a foreign-exchange service-based system. Our results, subject to the adaptation scenarios used in the evaluation, demonstrate the effectiveness and generality of the new RQV variants.
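
    One of the efficiency techniques named above, caching of recent analysis results, can be sketched in a few lines (the `verify` body is a stand-in for a call to a probabilistic model checker of the kind RQV builds on; the model and thresholds are invented):

```python
# Sketch of one RQV efficiency idea, caching recent analysis results:
# re-verify the (stand-in) stochastic model only when the observed
# environment parameter actually changes.

from functools import lru_cache

@lru_cache(maxsize=128)
def verify(failure_rate_pct: int) -> bool:
    # Placeholder analysis: does P(mission success) stay above 0.98
    # for a 3-stage mission with independent stage failures?
    print(f"  (expensive verification run for {failure_rate_pct}%)")
    return (1 - failure_rate_pct / 100) ** 3 >= 0.98

def control_loop(observations):
    for rate in observations:
        if verify(rate):               # cache hit if this rate was seen
            print(f"rate={rate}%: compliant")
        else:
            print(f"rate={rate}%: QoS at risk -> adapt configuration")

control_loop([0, 1, 1, 0, 2])  # repeated values skip re-verification
```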