
    Discovering, quantifying, and displaying attacks

    In the design of software and cyber-physical systems, security is often perceived as a qualitative need, but can only be attained quantitatively. Especially when distributed components are involved, it is hard to predict and confront all possible attacks. A main challenge in the development of complex systems is therefore to discover attacks, quantify them to comprehend their likelihood, and communicate them to non-experts for facilitating the decision process. To address this three-sided challenge we propose a protection analysis over the Quality Calculus that (i) computes all the sets of data required by an attacker to reach a given location in a system, (ii) determines the cheapest set of such attacks for a given notion of cost, and (iii) derives an attack tree that displays the attacks graphically. The protection analysis is first developed in a qualitative setting, and then extended to quantitative settings following an approach applicable to a great many contexts. The quantitative formulation is implemented as an optimisation problem encoded into Satisfiability Modulo Theories, allowing us to deal with complex cost structures. The usefulness of the framework is demonstrated on a national-scale authentication system, studied through a Java implementation of the framework. Comment: LMCS SPECIAL ISSUE FORTE 201
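    The cheapest-attack computation described above can be made concrete with a toy sketch. The paper encodes the problem as an optimisation instance in Satisfiability Modulo Theories; the minimal Python below instead brute-forces the same question over an invented attack condition and invented item costs, purely to illustrate the idea of "cheapest set of data reaching a location".

    ```python
    from itertools import combinations

    # Invented example: data items an attacker could obtain, with made-up costs.
    COST = {"pwd": 3, "token": 2, "key": 10}

    def reaches_target(items):
        """Invented attack condition: the target location is reachable with
        (pwd AND token) OR key."""
        s = set(items)
        return {"pwd", "token"} <= s or "key" in s

    def cheapest_attack():
        """Enumerate all item subsets and return (cost, items) of the
        cheapest one that reaches the target."""
        best = None
        for r in range(len(COST) + 1):
            for combo in combinations(COST, r):
                if reaches_target(combo):
                    c = sum(COST[i] for i in combo)
                    if best is None or c < best[0]:
                        best = (c, set(combo))
        return best

    print(cheapest_attack())  # (5, {'pwd', 'token'}): cheaper than buying 'key'
    ```

    An SMT encoding replaces this exhaustive loop with Boolean selector variables and a minimised cost term, which is what makes complex cost structures tractable.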

    MetTeL: A Generic Tableau Prover.

    Get PDF

    Protocol Requirements for Self-organizing Artifacts: Towards an Ambient Intelligence

    We discuss which properties common-use artifacts should have to collaborate without human intervention. We conceive how devices, such as mobile phones, PDAs, and home appliances, could be seamlessly integrated to provide an "ambient intelligence" that responds to the user's desires without requiring explicit programming or commands. While the hardware and software technology to build such systems already exists, as yet there is no standard protocol that can learn new meanings. We propose the first steps in the development of such a protocol, which would need to be adaptive, extensible, and open to the community, while promoting self-organization. We argue that devices, interacting through "game-like" moves, can learn to agree about how to communicate, with whom to cooperate, and how to delegate and coordinate specialized tasks. Thus, they may evolve a distributed cognition or collective intelligence capable of tackling complex tasks. Comment: To be presented at 5th International Conference on Complex System
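    As a heavily simplified illustration of such "game-like" moves, the sketch below runs a minimal naming game: agents repeatedly pair up, a failed exchange makes both remember the word, and a successful one makes both collapse to it, so a shared convention emerges without central control. This is a generic model from the language-game literature, not the protocol proposed in the paper; agent count, round count, and word names are all invented.

    ```python
    import random

    random.seed(0)  # deterministic demo run

    def naming_game(n_agents=10, rounds=2000):
        """Each agent holds a set of candidate words; return final vocabularies."""
        vocab = [set() for _ in range(n_agents)]
        for _ in range(rounds):
            speaker, hearer = random.sample(range(n_agents), 2)
            # Speaker picks a known word, or invents one if it knows none.
            word = random.choice(sorted(vocab[speaker])) if vocab[speaker] else f"w{speaker}"
            if word in vocab[hearer]:
                vocab[speaker] = {word}   # success: both collapse to the word
                vocab[hearer] = {word}
            else:
                vocab[speaker].add(word)  # failure: both remember it
                vocab[hearer].add(word)
        return vocab

    final = naming_game()
    ```

    The interesting property is that agreement is reached purely through pairwise moves, mirroring the self-organization the paper argues for.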

    A Survey on Automation Challenges and Opportunities for IoT based Agriculture

    Agriculture automation is a major concern and a contentious issue in every country. This study provides a comprehensive assessment of the obstacles and potential associated with automating agricultural practices using IoT (Internet of Things) technology. It begins with an introduction that highlights prior work and presents the proposed approach, which is centred on IoT and machine learning applications and breakthroughs in irrigation systems. The report examines several IoT applications in agriculture, including crop and soil management, drone field surveillance, cattle and resource management, and pesticide/fertilizer tracking. It delves into the breakthroughs made possible by IoT and machine learning, particularly in smart irrigation systems, livestock monitoring, drone technology, precision agriculture, and integrated pest management. The paper thoroughly examines the challenges associated with automating irrigation practices, such as interoperability, data storage, connectivity, hardware and software maintenance, security concerns, data collection, environmental variability, cost, infrastructure, privacy, and adoption by small-scale farmers. The survey finishes by synthesising the key findings and emphasising the crucial need to overcome these challenges in order to adopt IoT-driven agriculture automation successfully.
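    The decision rule at the heart of the smart irrigation systems surveyed above can be sketched in a few lines. The thresholds and the rain-forecast cutoff below are invented placeholders, not values from the survey; a real controller would derive them from crop type and sensor calibration.

    ```python
    def should_irrigate(soil_moisture_pct, rain_forecast_mm, setpoint_pct=30.0):
        """Open the valve only when soil moisture is below a crop-specific
        setpoint AND no meaningful rain is forecast (invented 1 mm cutoff)."""
        return soil_moisture_pct < setpoint_pct and rain_forecast_mm < 1.0

    print(should_irrigate(22.5, 0.0))  # True: soil is dry, no rain expected
    print(should_irrigate(22.5, 5.0))  # False: forecast rain covers the deficit
    print(should_irrigate(45.0, 0.0))  # False: soil already wet enough
    ```

    Even this trivial loop exposes the survey's challenge list: the sensor reading arrives over a possibly unreliable link, the forecast comes from a third-party service, and the setpoint varies with crop and soil type.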

    2011 Strategic roadmap for Australian research infrastructure

    The 2011 Roadmap articulates the priority research infrastructure areas of a national scale (capability areas) to further develop Australia’s research capacity and improve innovation and research outcomes over the next five to ten years. The capability areas have been identified through considered analysis of input provided by stakeholders, in conjunction with specialist advice from Expert Working Groups. It is intended the Strategic Framework will provide a high-level policy framework, which will include principles to guide the development of policy advice and the design of programs related to the funding of research infrastructure by the Australian Government. Roadmapping has been identified in the Strategic Framework Discussion Paper as the most appropriate prioritisation mechanism for national, collaborative research infrastructure. The strategic identification of capability areas through a consultative roadmapping process was also validated in the report of the 2010 NCRIS Evaluation. The 2011 Roadmap is primarily concerned with medium to large-scale research infrastructure. However, any landmark infrastructure requirements (typically involving an investment in excess of $100 million over five years from the Australian Government) identified in this process will be noted. NRIC has also developed a ‘Process to identify and prioritise Australian Government landmark research infrastructure investments’, which is currently under consideration by the government as part of broader deliberations relating to research infrastructure. NRIC will have strategic oversight of the development of the 2011 Roadmap as part of its overall policy view of research infrastructure.

    Hamming codification for safety critical communications

    The Large Hadron Collider (LHC) at the European Organisation for Nuclear Research (CERN) is one of the largest particle accelerators in the world. Given the complexity of colliding particles at high energy and the high cost of failure (both financially and in lost efficiency), a Machine Protection System (MPS) is required to monitor CERN's high-energy accelerators and protect all parts of the accelerator whenever beam is present. The backbone of the MPS is the Beam Interlock System (BIS). The BIS transmits requests from any equipment system in the MPS either to the Beam Dumping System (BDS), which safely ejects the particles from the accelerator, or to inhibit injection of the beam into the accelerator. The communications between BIS elements are therefore safety-critical. The purpose of this thesis is to explore different communication protocols that could be used to send and receive data between the systems that compose the BIS. The current method is Manchester modulation, which encodes data with a zero overall DC bias. Despite its simplicity, because this method embeds clock matching between the transmitter and receiver devices within the data stream itself, the bit rate is essentially halved, limiting the protocol. This thesis compares and contrasts the current approach with Hamming codification. Results show that the data can be encoded with similar resource efficiency, without the need for clock matching at the receiver, while retaining a zero overall DC bias. This points to improved transmission length and reliability between these elements.
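    To make the comparison concrete, here is a minimal Python sketch of classic Hamming(7,4) coding, the textbook scheme behind "Hamming codification": three parity bits protect four data bits, and the parity-check syndrome both detects and locates any single-bit error. This is a generic illustration, not the thesis's actual implementation or bit layout.

    ```python
    def hamming74_encode(d):
        """Encode 4 data bits [d1, d2, d3, d4] into the standard Hamming(7,4)
        codeword [p1, p2, d1, p3, d2, d3, d4], parity bits at positions 1, 2, 4."""
        d1, d2, d3, d4 = d
        p1 = d1 ^ d2 ^ d4  # covers positions 1, 3, 5, 7
        p2 = d1 ^ d3 ^ d4  # covers positions 2, 3, 6, 7
        p3 = d2 ^ d3 ^ d4  # covers positions 4, 5, 6, 7
        return [p1, p2, d1, p3, d2, d3, d4]

    def hamming74_correct(c):
        """Recompute the three parity checks; the syndrome gives the 1-based
        position of a single-bit error (0 means the codeword is clean)."""
        c = c[:]
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
        syndrome = s1 + 2 * s2 + 4 * s3
        if syndrome:
            c[syndrome - 1] ^= 1  # flip the bit the syndrome points at
        return c

    codeword = hamming74_encode([1, 0, 1, 1])   # [0, 1, 1, 0, 0, 1, 1]
    corrupted = codeword[:]
    corrupted[4] ^= 1                            # inject a single-bit error
    assert hamming74_correct(corrupted) == codeword
    ```

    Unlike Manchester coding, nothing here depends on embedding a clock transition in every bit, which is the property the thesis exploits to avoid halving the bit rate.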

    Designing and prototyping WebRTC and IMS integration using open source tools

    WebRTC, or Web Real-time Communications, is a collection of web standards that detail the mechanisms, architectures and protocols that work together to deliver real-time multimedia services to the web browser. It represents a significant shift from the historical approach of using browser plugins, which over time, have proven cumbersome and problematic. Furthermore, it adopts various Internet standards in areas such as identity management, peer-to-peer connectivity, data exchange and media encoding, to provide a system that is truly open and interoperable. Given that WebRTC enables the delivery of multimedia content to any Internet Protocol (IP)-enabled device capable of hosting a web browser, this technology could potentially be used and deployed over millions of smartphones, tablets and personal computers worldwide. This service and device convergence remains an important goal of telecommunication network operators who seek to enable it through a converged network that is based on the IP Multimedia Subsystem (IMS). IMS is an IP-based subsystem that sits at the core of a modern telecommunication network and acts as the main routing substrate for media services and applications such as those that WebRTC realises. The combination of WebRTC and IMS represents an attractive coupling, and as such, a protracted investigation could help to answer important questions around the technical challenges that are involved in their integration, and the merits of various design alternatives that present themselves. This thesis is the result of such an investigation and culminates in the presentation of a detailed architectural model that is validated with a prototypical implementation in an open source testbed. The model is built on six requirements which emerge from an analysis of the literature, including previous interventions in IMS networks and a key technical report on design alternatives. 
Furthermore, this thesis argues that the client architecture requires support for web-oriented signalling, identity and call handling techniques, creating the potential for IMS networks to support these techniques natively as operator networks continue to grow and develop. The proposed model advocates the use of SIP over WebSockets for signalling and DTLS-SRTP for media to enable one-to-one communication, and can be extended through additional functions, resulting in a modular architecture. The model was implemented using open source tools assembled into an experimental network testbed, and tests demonstrated successful cross-domain communications under various conditions. The thesis has a strong focus on enabling ordinary software developers to assemble a prototypical network such as the one described, and aims to enable experimentation with application use cases for integrated environments.
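The "SIP over WebSockets" signalling choice mentioned above is standardised in RFC 7118, whose key detail is the `WSS` transport token carried in the SIP Via header. The sketch below only builds the text of such a REGISTER request to show its shape; the user, domain, client identifier, and branch/tag values are invented placeholders, and no network transport is attempted.

```python
def sip_register_over_ws(user, domain, ws_client_id):
    """Return the text of a SIP REGISTER request as it would be framed over a
    secure WebSocket signalling channel (RFC 7118). Values are placeholders."""
    return "\r\n".join([
        f"REGISTER sip:{domain} SIP/2.0",
        # "WSS" marks SIP-over-secure-WebSocket transport (RFC 7118).
        f"Via: SIP/2.0/WSS {ws_client_id};branch=z9hG4bK-invented",
        f"From: <sip:{user}@{domain}>;tag=invented-tag",
        f"To: <sip:{user}@{domain}>",
        "Call-ID: invented-call-id",
        "CSeq: 1 REGISTER",
        f"Contact: <sip:{user}@{ws_client_id};transport=ws>",
        "Content-Length: 0",
        "",
        "",  # blank line terminates the header section
    ])

msg = sip_register_over_ws("alice", "example.org", "client.invalid")
```

In an integrated WebRTC/IMS deployment, a browser client would send this frame over a WebSocket to a gateway or WebSocket-capable proxy, which relays it into the IMS core as ordinary SIP.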

    Experimental quantum key distribution with source flaws

    Decoy-state quantum key distribution (QKD) is a standard technique in current quantum cryptographic implementations. Unfortunately, existing experiments have two important drawbacks: the state preparation is assumed to be perfect without errors and the employed security proofs do not fully consider the finite-key effects for general attacks. These two drawbacks mean that existing experiments are not guaranteed to be secure in practice. Here, we perform an experiment that for the first time shows secure QKD with imperfect state preparations over long distances and achieves rigorous finite-key security bounds for decoy-state QKD against coherent attacks in the universally composable framework. We quantify the source flaws experimentally and demonstrate a QKD implementation that is tolerant to channel loss despite the source flaws. Our implementation considers more real-world problems than most previous experiments and our theory can be applied to general QKD systems. These features constitute a step towards secure QKD with imperfect devices. Comment: 12 pages, 4 figures, updated experiment and theor