
    An identity based routing path verification scheme for wireless sensor networks


    Intrusion-Resilient Integrity in Data-Centric Unattended WSNs

Unattended Wireless Sensor Networks (UWSNs) operate in autonomous or disconnected mode: sensed data is collected periodically by an itinerant sink. Between successive sink visits, sensor-collected data is subject to some unique vulnerabilities. In particular, while the network is unattended, a mobile adversary (capable of subverting up to a fraction of sensors at a time) can migrate between compromised sets of sensors and inject fraudulent data. In this paper, we provide two collaborative authentication techniques that allow a UWSN to maintain the integrity and authenticity of sensor data, even in the presence of a mobile adversary, until the next sink visit. The proposed schemes use simple, standard, and inexpensive symmetric cryptographic primitives, coupled with key evolution and a few message exchanges. We study their security and effectiveness, both analytically and via simulations. We also assess their robustness and show how to achieve the desired trade-off between performance and security.
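The key-evolution idea at the core of such schemes can be sketched with standard-library primitives alone. The following is a minimal illustration, not the paper's actual protocol (which adds collaboration among sensors); the names evolve_key and authenticate_reading and the toy readings are invented for the sketch:

import hashlib
import hmac

def evolve_key(key: bytes) -> bytes:
    """One-way key update: compromising the current key does not
    reveal keys from earlier rounds."""
    return hashlib.sha256(b"evolve" + key).digest()

def authenticate_reading(key: bytes, round_no: int, reading: bytes) -> bytes:
    """MAC over the round number and the sensed value using the current
    round key (a standard, inexpensive symmetric primitive)."""
    return hmac.new(key, round_no.to_bytes(4, "big") + reading,
                    hashlib.sha256).digest()

# Sensor side: authenticate each round's data, then destroy the old key.
key = b"initial-secret-shared-with-sink"  # provisioned before deployment
readings = [b"21.5C", b"21.7C", b"22.0C"]
tags = []
for rnd, reading in enumerate(readings):
    tags.append(authenticate_reading(key, rnd, reading))
    key = evolve_key(key)  # the old key is now unrecoverable by an intruder

# Sink side (at the next visit): re-derive the round keys from the
# initial secret and check every tag.
key = b"initial-secret-shared-with-sink"
for rnd, (reading, tag) in enumerate(zip(readings, tags)):
    assert hmac.compare_digest(tag, authenticate_reading(key, rnd, reading))
    key = evolve_key(key)

Because the update is one-way, an adversary who subverts a sensor in round r learns nothing usable for forging tags from rounds before r; the paper's collaborative techniques build on this kind of primitive with additional message exchanges among sensors.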

    Advanced Cryptographic Techniques for Protecting Log Data

This thesis examines cryptographic techniques providing security for computer log files. It focuses on ensuring authenticity and integrity, i.e. the properties of having been created by a specific entity and being unmodified. Confidentiality, the property of being unknown to unauthorized entities, is considered as well, but with less emphasis.

Computer log files are recordings of actions performed and events encountered in computer systems. As the complexity of computer systems steadily grows, it becomes increasingly difficult to predict how a given system will behave under certain conditions, or to retrospectively reconstruct and explain which events and conditions led to a specific behavior. Computer log files help to mitigate this problem by providing a (usually chronological) view of events and actions encountered in a system. Authenticity and integrity of computer log files are widely recognized security requirements, see e.g. [Latham, ed., "Department of Defense Trusted Computer System Evaluation Criteria", 1985, p. 10], [Kent and Souppaya, "Guide to Computer Security Log Management", NIST Special Publication 800-92, 2006, Section 2.3.2], [Guttman and Roback, "An Introduction to Computer Security: The NIST Handbook", superseded NIST Special Publication 800-12, 1995, Section 18.3.1], [Nieles et al., "An Introduction to Information Security", NIST Special Publication 800-12, 2017, Section 9.3], [Common Criteria Editorial Board, ed., "Common Criteria for Information Technology Security Evaluation", Part 2, Section 8.6].

Two commonly cited ways to ensure the integrity of log files are to store log data on so-called write-once-read-many-times (WORM) drives and to immediately print log records on a continuous-feed printer. This guarantees that log data cannot be retroactively modified by an attacker without physical access to the storage medium. However, such special-purpose hardware may not always be a viable option for the application at hand, for example because it may be too costly. In such cases, the integrity and authenticity of log records must be ensured by other means, e.g. with cryptographic techniques. Although these techniques cannot prevent the modification of log data, they can offer strong guarantees that modifications will be detectable, while being implementable in software. Furthermore, cryptography can be used to achieve public verifiability of log files, which may be needed in applications with strong transparency requirements. Cryptographic techniques can even be used in addition to hardware solutions, providing protection against attackers who do have physical access to the logging hardware, such as insiders.

Cryptographic schemes for protecting stored log data need to be resilient against attackers who obtain control over the computer storing the log data. If this computer operates in a standalone fashion, it is an absolute requirement for the cryptographic schemes to offer security even in the event of a key compromise. As this is impossible with standard cryptographic tools, cryptographic solutions for protecting log data typically make use of forward-secure schemes, guaranteeing that changes to log data recorded in the past can be detected. Such schemes use a sequence of authentication keys instead of a single one, where previous keys cannot be computed efficiently from later ones.
This thesis considers the following requirements for, and desirable features of, cryptographic logging schemes: 1) security, i.e. the ability to reliably detect violations of integrity and authenticity, including detection of log truncations, 2) efficiency regarding both computational and storage overhead, 3) robustness, i.e. the ability to verify unmodified log entries even if others have been illicitly changed, and 4) verifiability of excerpts, including checking an excerpt for omissions. The goals of this thesis are to devise new techniques for the construction of cryptographic schemes that provide security for computer log files, to give concrete constructions of such schemes, to develop new models that can accurately capture the security guarantees offered by the new schemes, and to examine the security of previously published schemes.

This thesis requires that cryptographic schemes for securely storing log data be able to detect whether log entries have been deleted from a log file. A special case of deletion is log truncation, where a contiguous subsequence of log records at the end of the log file is deleted. Obtaining truncation resistance, i.e. the ability to detect truncations, is one of the major difficulties when designing cryptographic logging schemes. This thesis alleviates this problem by introducing a novel technique to detect log truncations without the help of third parties or designated logging hardware. Moreover, this work presents new formal security notions capturing truncation resistance. The technique is applied to obtain cryptographic logging schemes which can be shown to satisfy these notions under mild assumptions, making them the first schemes with formally proven truncation security.

Furthermore, this thesis develops a cryptographic scheme for the protection of log files which supports the creation of excerpts. For this thesis, an excerpt is a (not necessarily contiguous) subsequence of records from a log file. Excerpts created with the scheme presented in this thesis can be publicly checked for integrity and authenticity (as explained above) as well as for completeness, i.e. the property that no relevant log entry has been omitted from the excerpt. Excerpts provide a natural way to preserve the confidentiality of information that is contained in a log file but is not of interest for a specific public analysis, enabling the owner of the log file to meet confidentiality and transparency requirements at the same time. The scheme demonstrates and exemplifies the technique for obtaining truncation security mentioned above.

Since cryptographic techniques to safeguard log files usually require authenticating log entries individually, some researchers [Ma and Tsudik, "A New Approach to Secure Logging", LNCS 5094, 2008; Ma and Tsudik, "A New Approach to Secure Logging", ACM TOS 2009; Yavuz and Peng, "BAF: An Efficient Publicly Verifiable Secure Audit Logging Scheme for Distributed Systems", ACSAC 2009] have proposed using aggregatable signatures [Boneh et al., "Aggregate and Verifiably Encrypted Signatures from Bilinear Maps", EUROCRYPT 2003] to reduce the storage overhead incurred by such a scheme. Aggregation of signatures refers to the "combination" of any number of signatures (for distinct or equal messages, by distinct or identical signers) into a single "aggregate" signature.
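To make the size savings concrete without a pairing library, here is a dependency-free stand-in: XOR-aggregation of HMAC tags (the aggregate-MAC idea of Katz and Lindell) instead of the aggregate signatures cited above. The interface shape, one fixed-size aggregate replacing one tag per record, is the point, not the specific primitive; all names are invented for the sketch:

import hashlib
import hmac

KEY = b"log-authentication-key"

def tag(entry: bytes) -> bytes:
    return hmac.new(KEY, entry, hashlib.sha256).digest()

def aggregate(tags: list[bytes]) -> bytes:
    # XOR all 32-byte tags into one 32-byte aggregate.
    agg = bytes(32)
    for t in tags:
        agg = bytes(a ^ b for a, b in zip(agg, t))
    return agg

log = [b"entry-0", b"entry-1", b"entry-2"]
stored = aggregate([tag(e) for e in log])  # one 32-byte value for the whole log

# Verification simply recomputes the aggregate from the log as stored.
assert stored == aggregate([tag(e) for e in log])

The final assert also hints at the fragility discussed next: flipping a single entry makes the recomputed aggregate differ, and nothing in the single value says which entry broke.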
The size of the aggregate signature should be less than the total size of the original signatures, ideally the size of a single original signature. Using signature aggregation in applications that require storing or transmitting a large number of signatures (such as the storage of log records) can therefore lead to significant reductions in storage space and bandwidth. However, aggregating the signatures for all log records into a single signature causes some fragility: the modification of a single log entry renders the aggregate signature invalid, preventing the cryptographic verification of any part of the log file. Yet being able to distinguish manipulated log entries from non-manipulated ones may be of importance for after-the-fact investigations. This thesis addresses this issue by presenting a new technique providing a trade-off between storage overhead and robustness, i.e. the ability to tolerate some modifications to the log file while preserving the cryptographic verifiability of unmodified log entries. This robustness is achieved by the use of a special kind of aggregate signatures (called fault-tolerant aggregate signatures), which contain some redundancy. The construction makes use of combinatorial methods guaranteeing that if the number of errors is below a certain threshold, there will be enough redundancy to identify and verify the non-modified log entries.

Finally, this thesis presents a total of four attacks on three different schemes for securely storing log files presented in the literature [Yavuz et al., "Efficient, Compromise Resilient and Append-Only Cryptographic Schemes for Secure Audit Logging", Financial Cryptography 2012; Ma, "Practical Forward Secure Sequential Aggregate Signatures", ASIACCS 2008]. The attacks allow for virtually arbitrary log file forgeries or even recovery of the secret key used for authenticating the log file, which could then be used for mostly arbitrary log file forgeries as well. All of these attacks exploit weaknesses of the specific schemes. Three of the attacks contradict the security properties claimed, and supposedly proven, by the respective authors; this thesis briefly discusses these proofs and points out their flaws. The fourth attack is outside of the security model considered by the scheme's authors, but nonetheless presents a realistic threat.

In summary, this thesis advances the scientific state of the art in providing security for computer log files in a number of ways: by introducing a new technique for obtaining security against log truncations, by providing the first scheme where excerpts from log files can be verified for completeness, by describing the first scheme that achieves a notion of robustness while aggregating log record signatures, and by analyzing the security of previously proposed schemes.
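The thesis's fault-tolerant aggregate signatures are a specific construction; the combinatorial intuition behind the robustness trade-off above can nonetheless be illustrated with a toy grid code over XOR-aggregated HMAC tags (again a stand-in primitive, all names invented for the sketch). Keep one aggregate per grid row and per grid column: a single corrupted entry then invalidates only the two aggregates that cover it, and every other entry remains verifiable through at least one passing aggregate.

import hashlib
import hmac

KEY = b"log-authentication-key"
N, COLS = 9, 3  # 9 log entries arranged in a 3x3 grid

def tag(i: int, entry: bytes) -> bytes:
    return hmac.new(KEY, i.to_bytes(4, "big") + entry, hashlib.sha256).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def tests(i: int):
    # The two aggregates entry i contributes to: its row and its column.
    return ("row", i // COLS), ("col", i % COLS)

log = [f"entry-{i}".encode() for i in range(N)]
aggs = {}
for i, e in enumerate(log):
    for t in tests(i):
        aggs[t] = xor(aggs.get(t, bytes(32)), tag(i, e))

log[4] = b"forged"  # attacker tampers with one entry after the fact

# Verification: recompute each aggregate; a test "passes" if it matches.
recomputed = {}
for i, e in enumerate(log):
    for t in tests(i):
        recomputed[t] = xor(recomputed.get(t, bytes(32)), tag(i, e))
passing = {t for t in aggs if recomputed[t] == aggs[t]}

# An entry counts as verified if at least one of its tests passes; only
# entry 4, sitting at the intersection of the failing row and column, is flagged.
verified = [i for i in range(N) if any(t in passing for t in tests(i))]
print(verified)  # [0, 1, 2, 3, 5, 6, 7, 8]

Storage grows from one aggregate to a row plus a column per grid dimension, in exchange for the ability to isolate a bounded number of corruptions; the thesis's combinatorial methods target this same redundancy-versus-robustness trade-off in a more general way.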

    Smarter Data Availability Checks in the Cloud: Proof of Storage via Blockchain

Cloud computing offers clients flexible and cost-effective resources. Nevertheless, past incidents indicate that the cloud may misbehave by exposing or tampering with clients' data. Therefore, it is vital for clients to protect the confidentiality and integrity of their outsourced data. To address these issues, researchers proposed cryptographic protocols called "proof of storage" that let a client efficiently verify the integrity or availability of its data stored in a remote cloud server. However, in these schemes, the client either has to be online to perform the verification itself or has to delegate the verification to a fully trusted auditor. In this chapter, a new scheme is proposed that lets the client distribute its data replicas among multiple cloud servers to achieve high availability without the need for the client to be online for the verification and without a trusted auditor's involvement. The new scheme is mainly based on blockchain smart contracts. It illustrates how a combination of cloud computing and blockchain technology can resolve real-world problems.
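A hedged sketch of the kind of availability check such a design can delegate: the client commits to its data with a Merkle root; a contract (or any verifier) challenges a randomly chosen block, and the storage server must answer with the block plus its authentication path. The chapter's actual protocol is more involved (replicas, incentives, challenge scheduling); the helper names below are invented for the sketch, which assumes a power-of-two block count:

import hashlib
from typing import List, Tuple

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(blocks: List[bytes]) -> List[List[bytes]]:
    # Bottom-up Merkle tree; tree[0] = leaf hashes, tree[-1] = [root].
    level = [h(b) for b in blocks]
    tree = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        tree.append(level)
    return tree

def prove(tree, index: int) -> List[Tuple[bytes, bool]]:
    # Authentication path for leaf `index`: (sibling hash, sibling-is-right).
    proof = []
    for level in tree[:-1]:
        sib = index ^ 1
        proof.append((level[sib], sib > index))
        index //= 2
    return proof

def verify(root: bytes, block: bytes, proof) -> bool:
    node = h(block)
    for sib, sib_is_right in proof:
        node = h(node + sib) if sib_is_right else h(sib + node)
    return node == root

blocks = [f"data-block-{i}".encode() for i in range(8)]
tree = build_tree(blocks)
root = tree[-1][0]  # this commitment is what would live on-chain

challenge = 5  # e.g. derived from a recent block hash, so neither side controls it
response = (blocks[challenge], prove(tree, challenge))
assert verify(root, *response)  # the contract accepts only if this check holds

On-chain, only the root, the challenged index, and the logarithmic-size proof are needed, which is what makes such a check cheap enough to run inside a smart contract.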

    Control designs and reinforcement learning-based management for software defined networks

In this thesis, we focus our investigations on the novel software defined networking (SDN) paradigm. The central goal of SDN is to smoothly introduce centralised control capabilities to otherwise distributed computer networks. This is achieved by abstracting and concentrating network control functionalities in a logically centralised control unit, referred to as the SDN controller. To further balance centralised control against scalability and reliability considerations, distributed SDN enables the coexistence of multiple physical SDN controllers. In distributed SDN, networking elements are grouped into domains, with each domain managed by an SDN controller. In such a setting, the SDN controllers of all domains synchronise with each other to maintain logically centralised network views, a process referred to as controller synchronisation.

Centred on the problem of SDN controller synchronisation, this thesis addresses two aspects of the subject. First, we model and analyse the performance enhancements brought by controller synchronisation in distributed SDN from a theoretical perspective. Second, we design intelligent controller synchronisation policies by leveraging existing, and creating new, Reinforcement Learning (RL) and Deep Learning (DL)-based approaches.

To understand the performance gains of SDN controller synchronisation from a fundamental and analytical perspective, we propose a two-layer network model based on graphs to capture various characteristics of distributed SDN networks. We then develop two families of analytical methods to investigate the performance of distributed SDN in relation to network structure and the level of SDN controller synchronisation. The significance of our analytical results is that they quantify the contribution of the controller synchronisation level to network performance under different network parameters. They therefore serve as fundamental guidelines for future SDN performance analyses and protocol designs.

For the design of SDN controller synchronisation policies, most existing work focuses on the engineering-centred system design aspect of the problem, ensuring anomaly-free synchronisation. Instead, we emphasise the performance improvements with respect to (w.r.t.) various networking tasks when designing controller synchronisation policies. Specifically, we investigate scenarios with diverse control objectives, ranging from routing-related performance metrics to more sophisticated optimisation goals involving communication and computation resources in networks. We also take into consideration factors such as the scalability and robustness of the policies developed. To this end, we employ machine learning techniques to assist our policy designs. In particular, we model SDN controller synchronisation as a serial decision-making process and resort to RL-based techniques for developing the synchronisation policy, leveraging a combination of RL and DL methods tailored to the specific characteristics and requirements of different scenarios. Evaluation results show that our designed policies consistently outperform some already-in-use controller synchronisation policies, in certain cases by considerable margins.
While exploring existing RL algorithms for solving our problems, we identified some critical issues embedded within these algorithms, such as the enormity of the state-action space, which can cause inefficiency in learning. We therefore propose a novel RL algorithm to address these issues, named state action separable reinforcement learning (sasRL). The sasRL approach constitutes another major contribution of this thesis in the field of RL research.
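As a rough illustration of framing synchronisation as a serial decision process, here is a generic tabular Q-learning sketch; it is emphatically not sasRL or the thesis's policy. The state, action space, staleness dynamics, and reward are all invented for the toy: the state is each remote domain's view staleness, the action picks one domain to synchronise, and the reward penalises total staleness after the decision.

import random
from collections import defaultdict

DOMAINS, MAX_STALE, EPISODES = 3, 4, 3000
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

Q = defaultdict(float)  # Q[(state, action)]

def step(state, action):
    # Syncing one domain resets its staleness; the others grow staler.
    nxt = tuple(0 if d == action else min(s + 1, MAX_STALE)
                for d, s in enumerate(state))
    return nxt, -sum(nxt)  # reward: negative total view staleness

for _ in range(EPISODES):
    state = tuple(random.randint(0, MAX_STALE) for _ in range(DOMAINS))
    for _ in range(20):  # one episode = 20 synchronisation decisions
        if random.random() < EPS:
            action = random.randrange(DOMAINS)       # explore
        else:
            action = max(range(DOMAINS), key=lambda a: Q[(state, a)])
        nxt, reward = step(state, action)
        best_next = max(Q[(nxt, a)] for a in range(DOMAINS))
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                       - Q[(state, action)])
        state = nxt

# The learned greedy policy: from state (4, 0, 2), synchronise domain 0.
print(max(range(DOMAINS), key=lambda a: Q[((4, 0, 2), a)]))  # typically 0

Even this toy shows the framing's appeal: the policy is learned from interaction rather than hand-tuned, which is what lets richer objectives (routing quality, resource budgets) be targeted with the same machinery. The state-action table here also hints at the blow-up that motivates an algorithm like sasRL.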

    Secure Audit Logs with Verifiable Excerpts

Log files are the primary source of information when the past operation of a computing system needs to be determined. Keeping correct and accurate log files is important for after-the-fact forensics, as well as for system administration, maintenance, and auditing. Therefore, a line of research has emerged on how to cryptographically protect the integrity of log files even against intruders who gain control of the logging machine. We contribute to this line of research by devising a scheme where one can verify the integrity not only of the log file as a whole, but also of excerpts. This is helpful in various scenarios, including cloud provider auditing.
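One simple way to make omissions in an excerpt detectable, shown purely to fix intuition and not necessarily the scheme of this work: authenticate a per-category sequence number with each record, so an excerpt for a given category can be checked for gaps. All names and the record encoding are invented for the sketch:

import hashlib
import hmac
from collections import defaultdict

KEY = b"log-key"  # illustrative; a real scheme would use forward-secure keys

def make_log(events):
    # Each record carries (category, per-category seqno, message, tag).
    counters, log = defaultdict(int), []
    for cat, msg in events:
        n = counters[cat]
        counters[cat] += 1
        body = f"{cat}|{n}|{msg}".encode()  # toy encoding
        log.append((cat, n, msg, hmac.new(KEY, body, hashlib.sha256).digest()))
    return log

def verify_excerpt(cat, excerpt, claimed_count):
    # Check tags and that seqnos 0..claimed_count-1 all appear: any
    # omitted record of this category leaves a detectable gap.
    seen = set()
    for c, n, msg, tag in excerpt:
        body = f"{c}|{n}|{msg}".encode()
        if c != cat or not hmac.compare_digest(
                tag, hmac.new(KEY, body, hashlib.sha256).digest()):
            return False
        seen.add(n)
    return seen == set(range(claimed_count))

log = make_log([("auth", "login alice"), ("disk", "io error"),
                ("auth", "logout alice")])
auth_excerpt = [r for r in log if r[0] == "auth"]
assert verify_excerpt("auth", auth_excerpt, claimed_count=2)
assert not verify_excerpt("auth", auth_excerpt[:1], claimed_count=2)  # omission caught

In a real scheme, claimed_count itself must be authenticated (e.g. bound to the log's end state) and the key must evolve forward-securely; the sketch elides both.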

    MOTIM – An Industrial Application Using NOCs

High-speed networks used to interconnect computers advance at an extraordinary pace, driven by the evolution of several contributing technologies. Due to the ever-increasing complexity of designing parts and equipment for these networks, design complexity management makes scalability and reusability more important issues than performance in most cases. This paper describes MOTIM, a scalable and reusable architecture enabling the implementation of Ethernet switches with low latency and high throughput. The architecture is built around a network-on-chip-based switch fabric, which guarantees scalability. The architecture has been validated by functional simulation and prototyped in FPGAs. The experimental results show that even under severe traffic conditions the architecture achieves packet transmission with low latencies.

    Building regulatory compliant storage systems

In the past decade, informational records have become entirely digital. These include financial statements, health care records, student records, private consumer information and other sensitive data. Because of the delicate nature of the data these records contain, Congress and the courts have begun to recognize the importance of properly storing and securing electronic records. Examples of legislation include the Health Insurance Portability and Accountability Act.