14 research outputs found

    Logchain: Blockchain-assisted Log Storage

    During the normal operation of a Cloud solution, no one usually pays attention to the logs except the technical department, which may periodically check them to ensure that the performance of the platform conforms to the Service Level Agreements. However, the moment the status of a component changes from acceptable to unacceptable, or a customer complains about the accessibility or performance of a platform, the importance of logs increases significantly. Depending on the scope of the issue, all departments, including management, customer support, and even the actual customer, may turn to the logs to find out what has happened, how it has happened, and who is responsible for the issue. The party at fault may be motivated to tamper with the logs to hide their fault. Given the number of logs generated by Cloud solutions, there are many tampering possibilities. While a tamper detection solution can be used to detect any changes in the logs, we argue that the critical nature of logs calls for immutability. In this work, we propose a blockchain-based log system, called Logchain, that collects the logs from different providers and prevents log tampering by sealing the logs cryptographically and adding them to a hierarchical ledger, hence providing an immutable platform for log storage. Comment: 4 pages, 1 figure
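
    The abstract gives no implementation details; the snippet below is only a minimal sketch, assuming SHA-256 hash chaining, of how log batches might be sealed so that any retroactive edit breaks the chain. Names such as `seal_block` and `verify_chain` are illustrative, not from the paper.

```python
import hashlib
import json
import time

def seal_block(log_entries, previous_hash):
    """Seal a batch of log entries by hashing them together with the previous block's hash."""
    payload = json.dumps({"logs": log_entries, "prev": previous_hash, "ts": time.time()},
                         sort_keys=True)
    block_hash = hashlib.sha256(payload.encode("utf-8")).hexdigest()
    return {"payload": payload, "hash": block_hash, "prev": previous_hash}

def verify_chain(blocks):
    """Recompute every hash; any tampered payload or broken link is detected."""
    for i, block in enumerate(blocks):
        if hashlib.sha256(block["payload"].encode("utf-8")).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev"] != blocks[i - 1]["hash"]:
            return False
    return True

# Usage: chain two batches of provider logs and check their integrity.
genesis = seal_block(["service started"], previous_hash="0" * 64)
chain = [genesis, seal_block(["latency SLA breached at 12:03"], genesis["hash"])]
print(verify_chain(chain))  # True; editing any stored payload afterwards makes this False
```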

    Modeling and analyzing information integrity in safety critical systems

    Preserving information integrity represents an urgent need for safety-critical systems, where depending on incorrect or inconsistent information may lead to disasters. Typically, information integrity is a problem handled at the technical level (e.g., checksumming). However, information integrity has to be analyzed in the socio-technical context of the system, since integrity-related problems might manifest themselves in the business processes and the interactions among actors. In this paper, we propose an extended version of the i*/Secure Tropos modeling languages to capture information integrity requirements. We illustrate the Datalog formalization of the proposed concepts and analysis techniques to support the analyst in the verification of integrity-related properties. An Air Traffic Management (ATM) case study is used throughout the paper. Document type: Part of book or chapter of book
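
    The paper's actual Datalog encoding is not reproduced in the abstract; the fragment below is only a rough Python approximation, with assumed predicate names such as `depends_on` and `integrity_assured`, of the kind of rule-based check an analyst might run to flag actors that depend on information lacking an integrity guarantee.

```python
# Hypothetical facts, loosely mirroring Datalog predicates in an ATM-style scenario.
depends_on = {("controller", "aircraft_position"), ("controller", "weather_report")}
integrity_assured = {"aircraft_position"}  # information items with an integrity guarantee

def unprotected_dependencies(depends_on, integrity_assured):
    """Rule: flag (actor, info) pairs where the actor relies on info without integrity assurance."""
    return {(actor, info) for actor, info in depends_on if info not in integrity_assured}

print(unprotected_dependencies(depends_on, integrity_assured))
# {('controller', 'weather_report')} -- a candidate integrity requirement violation
```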

    Information Security Attributes & Securing Organizations

    Information systems are evolving at a rapid pace, and it is easier and cheaper for organizations to acquire more systems and digitalize their business. Because of this, Information Security (InfoSec) is increasingly required in organizations. When there are more interconnected systems, databases, and applications, often accessible online, there are more attack vectors and possible security incidents. Incidents can be chained, with a smaller initial incident leading to more critical ones that could have been avoided if the first incident had not occurred, underlining the need to secure all assets. Regulators are also demanding security, under penalty of fines, as an incentive to secure organizations. Security researchers have continued to propose InfoSec attributes, which are elements of assets that need to be secured. Understanding these attributes helps organizations establish Information Security Management Systems, which are policies and guidelines for mitigating risks. These risks vary from malicious employees to natural disasters, and from espionage to cyber terrorism. Attacks targeting humans in organizations, such as phishing or impersonating another employee, are increasing. Without proper tools and processes, organizations are not even able to tell whether they have had security incidents or not. With an Information Security Management System it is possible to plan, implement, monitor, and adjust security policies and controls. This system helps organizations achieve comprehensive information security, including details of which security controls are applied to each asset, how to monitor and detect incidents, and how to recover from them.

    Analysis of information quality requirements in business processes, revisited


    An Investigation of IBM PC Computer Viruses Infection Rates and Types in a Western Australian Environment

    In recent years computer viruses have become increasingly significant as a form of computer abuse. By virtue of their reproductive capability, computer viruses can have cumulative and potentially catastrophic effects on the many people who use the affected computers. There is a growing concern in the computing community about these forms of electronic vandalism. This concern arises from the possible damage to stored information on which work depends and the ensuing disruption of the workplace. Although vandalism or purposeful abuse by introducing computer viruses to computer systems was originally mainly an American experience, research reports published by the Australian Computer Abuse Research Bureau (ACARB) support the claim that computer viruses have become increasingly significant as a form of computer abuse in Australia in recent years. Apart from ACARB's figures, there is minimal empirical research of a similar nature investigating computer viruses as a form of computer abuse in Australia. In this study, an attempt has been made to investigate the problem, albeit on a limited scope: the infection types and rates of IBM PC viruses in a limited number of government IT organizations in Western Australia were investigated. In addition, this study has attempted to validate Spafford's speculation that fewer than 10 viruses (out of a minimum of 374) account for 90% of infections, in the Western Australian environment. The study was descriptive in nature: a fact-finding survey based on questionnaires and standardized interviews was conducted in State Government IT organizations in Western Australia in order to obtain data on which the research findings could be based. The data-gathering instrument was a standardized questionnaire comprising limited-choice questions directed at obtaining such information as infection rates of various types of computer viruses. The questionnaire was field tested to eliminate ambiguous or biased items and to improve its format, both for ease of understanding and facility in analyzing results. The questionnaire was used by the interviewer as a basis for the interview so that the potential for subjectivity and bias could be reduced. Before the commencement of the study, a letter of transmittal was sent to the prospective participants to request their participation, and confirmation of participation was sought through telephone calls. A very high response rate (87.5%, n = 42) was achieved, which is taken as an assurance that reasonable representation of the state government sector was obtained. Since the study involved human subjects, approval was sought from the University Committee for the Conduct of Ethical Research before it commenced. During the interviews, subjects were informed of the purpose of the study, that there was no compulsion to participate, and that they were free to withdraw from further participation at any time. The results of the survey and their implications are provided in chapters 5 and 6. In conclusion, the research ratifies the proposition that currently very few of the IBM PC viruses contribute to the vast majority of infections in the Western Australian workplace.

    Construction, testing and use of checksum algorithms for computer virus detection

    Call number: LD2668 .T4 CMSC 1989 V37. Master of Science, Computing and Information Science.
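
    This record carries no abstract, but the title names a standard technique: recording checksums of executables and later recomputing them to detect modification. The snippet below is a generic sketch of that idea using SHA-256 from Python's standard library, not code or an algorithm taken from the thesis itself; the file paths and helper names are illustrative.

```python
import hashlib
import json
from pathlib import Path

def checksum(path):
    """Return a SHA-256 digest of a file's contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def build_baseline(paths, baseline_file="checksums.json"):
    """Record trusted checksums for a set of executables."""
    Path(baseline_file).write_text(json.dumps({p: checksum(p) for p in paths}))

def scan(baseline_file="checksums.json"):
    """Report files whose current checksum no longer matches the recorded one."""
    baseline = json.loads(Path(baseline_file).read_text())
    return [p for p, digest in baseline.items() if checksum(p) != digest]

# Usage (paths are illustrative): build_baseline(["a.exe", "b.exe"]); later, scan()
# returns the files that have been altered, e.g. by a file-infecting virus.
```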

    Privacy-preserving machine learning system at the edge

    Data privacy in machine learning has become an urgent problem to be solved, along with machine learning's rapid development and the large attack surface being explored. Pre-trained deep neural networks are increasingly deployed in smartphones and other edge devices for a variety of applications, leading to potential disclosures of private information. In collaborative learning, participants keep private data locally and communicate deep neural networks updated on their local data, but still, the private information encoded in the networks' gradients can be exploited by adversaries. This dissertation aims to perform dedicated investigations on privacy leakage from neural networks and to propose privacy-preserving machine learning systems for edge devices. Firstly, a systematization of knowledge is conducted to identify the key challenges and existing/adaptable solutions. Then a framework is proposed to measure the amount of sensitive information memorized in each layer's weights of a neural network, based on the generalization error. Results show that, when considered individually, the last layers encode a larger amount of information from the training data compared to the first layers. To protect such sensitive information in weights, DarkneTZ is proposed as a framework that uses an edge device's Trusted Execution Environment (TEE) in conjunction with model partitioning to limit the attack surface against neural networks. The performance of DarkneTZ is evaluated, including CPU execution time, memory usage, and accurate power consumption, using two small and six large image classification models. Due to the limited memory of the edge device's TEE, model layers are partitioned into more sensitive layers (to be executed inside the device TEE) and a set of layers to be executed in the untrusted part of the operating system. Results show that even if a single layer is hidden, one can provide reliable model privacy and defend against state-of-the-art membership inference attacks, with only a 3% performance overhead. This thesis further extends the investigation from neural network weights (in on-device machine learning deployment) to gradients (in collaborative learning). An information-theoretical framework is proposed, by adapting usable information theory and considering the attack outcome as a probability measure, to quantify private information leakage from network gradients. The private original information and latent information are localized in a layer-wise manner. After that, this work performs sensitivity analysis over the gradients with respect to private information to further explore the underlying cause of information leakage. Numerical evaluations are conducted on six benchmark datasets and four well-known networks, and the impact of training hyper-parameters and defense mechanisms is further measured. Last but not least, to limit the privacy leakage in gradients, I propose and implement a Privacy-preserving Federated Learning (PPFL) framework for mobile systems. TEEs are utilized on clients for local training, and on servers for secure aggregation, so that model/gradient updates are hidden from adversaries. This work leverages greedy layer-wise training to train each model's layer inside the trusted area until its convergence. The performance evaluation of the implementation shows that PPFL significantly improves privacy by defending against data reconstruction, property inference, and membership inference attacks while incurring small communication overhead and client-side system overheads.
    This thesis offers a better understanding of the sources of private information in machine learning and provides frameworks to fully guarantee privacy while achieving ML model utility and system overhead comparable to a regular machine learning framework. Open Access
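
    DarkneTZ's actual implementation targets a real TEE (e.g., TrustZone) with deep learning models; the snippet below is only a conceptual sketch in plain Python/NumPy, with made-up helpers (`run_in_tee` stands in for a real TEE invocation), of partitioning a network so that the last, most information-rich layers run inside the trusted environment while the early layers run in the untrusted OS.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

# Toy 4-layer fully connected network (weights only, for illustration).
rng = np.random.default_rng(0)
layers = [rng.standard_normal((16, 32)), rng.standard_normal((32, 32)),
          rng.standard_normal((32, 16)), rng.standard_normal((16, 10))]

SPLIT = 3  # layers[SPLIT:] are treated as sensitive and kept inside the TEE

def run_untrusted(x, layers):
    """Run the early, less sensitive layers in the normal (untrusted) OS."""
    for w in layers[:SPLIT]:
        x = relu(x @ w)
    return x

def run_in_tee(x, layers):
    """Stand-in for executing the final layers inside a Trusted Execution Environment."""
    for w in layers[SPLIT:]:
        x = x @ w
    return x

# Only the intermediate activation crosses the trust boundary; the final layers'
# weights and outputs never leave the (simulated) TEE.
activation = run_untrusted(rng.standard_normal(16), layers)
logits = run_in_tee(activation, layers)
print(logits.shape)  # (10,)
```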

    Design and evaluation of a self-configuring wireless mesh network architecture

    Wireless network connectivity plays an increasingly important role in supporting our everyday private and professional lives. For over three decades, self-organizing wireless multi-hop ad-hoc networks have been investigated as a decentralized replacement for the traditional forms of wireless networks that rely on a wired infrastructure. However, despite the tremendous efforts of the international wireless research community and the widespread availability of devices able to support these networks, wireless ad-hoc networks are hardly ever used. In this work, the reasons behind this discrepancy are investigated. It is found that several basic theoretical assumptions about ad-hoc networks prove to be wrong when solutions are deployed in reality, and that several basic functionalities are still missing. It is argued that a hierarchical wireless mesh network architecture, in which specialized, multi-interfaced mesh nodes form a reliable multi-hop wireless backbone for the less capable end-user clients, is an essential step in bringing the ad-hoc networking concept one step closer to reality. Therefore, in the second part of this work, algorithms that increase the reliability and support the deployment and management of these wireless mesh networks are developed, implemented, and evaluated, while keeping the observed limitations and practical considerations in mind. Furthermore, the feasibility of the algorithms is verified by experiment. The performance analysis of these protocols and the ability to deploy the developed algorithms on current-generation off-the-shelf hardware indicate the success of the research approach followed, which combines theoretical considerations with practical implementations and observations. However, it was also found that there are many pitfalls to using real-life implementation as a research technique. Therefore, in the last part of this work, a methodology for wireless network research using real-life implementation is developed, allowing researchers to generate more reliable protocols and performance analysis results with less effort.
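
    The fragment below is merely an illustrative sketch of the hierarchical idea the abstract describes, not the thesis's algorithms: clients attach to dedicated backbone mesh nodes, and multi-hop paths are computed only over the backbone graph (breadth-first search here stands in for a real mesh routing protocol; the topology and names are invented).

```python
from collections import deque

# Backbone mesh nodes and their wireless links (illustrative topology).
backbone = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
# End-user clients simply attach to a nearby backbone node.
attachment = {"laptop": "A", "sensor": "D"}

def backbone_path(src, dst, graph):
    """Breadth-first search over the backbone; a stand-in for a mesh routing protocol."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# A client-to-client route is the two attachment points joined by a backbone path.
route = ["laptop"] + backbone_path(attachment["laptop"], attachment["sensor"], backbone) + ["sensor"]
print(route)  # ['laptop', 'A', 'B', 'C', 'D', 'sensor']
```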

    Synchronization of data in heterogeneous decentralized systems

    Data synchronization is the problem of reconciling the differences between large data stores that differ in a small number of records. It is a common thread among disparate distributed systems, ranging from fleets of Internet of Things (IoT) devices to clusters of distributed databases in the cloud. Most recently, data synchronization has arisen in globally distributed public blockchains that form the basis for the envisioned decentralized Internet of the future. Moreover, the parallel development of edge computing has significantly increased the heterogeneity of networks and computing devices. The merger of highly heterogeneous system resources and the decentralized nature of future Internet applications calls for a new approach to data synchronization. In this dissertation, we look at the problem of data synchronization through the prism of set reconciliation and introduce novel tools and protocols that improve the performance of data synchronization in heterogeneous decentralized systems. First, we compare the analytical properties of the state-of-the-art set reconciliation protocols and investigate the impact of theoretical assumptions and implementation decisions on synchronization performance. Second, we introduce GenSync, the first unified set reconciliation middleware. Using GenSync's distinctive benchmarking layer, we find that the best protocol choice is highly sensitive to the system conditions, and that a bad protocol choice causes a severe hit in performance. We showcase the evaluative power of GenSync in one of the world's largest wireless network emulators, demonstrating the choice of the best GenSync protocol under high and low user mobility in an emulated cellular network. Finally, we introduce SREP (Set Reconciliation-Enhanced Propagation), a novel blockchain transaction pool synchronization protocol with quantifiable guarantees. Through simulations, we show that SREP incurs significantly smaller bandwidth overhead than a similar approach from the literature, especially in networks of realistic sizes (tens of thousands of participants).
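
    GenSync and SREP implement far more bandwidth-efficient protocols than this; the snippet below is only a naive baseline sketch of set reconciliation, in which two parties exchange compact digests of their records and then transfer just the records the other side is missing. The helper names and example transaction pools are invented for illustration.

```python
import hashlib

def digest(record):
    """Short fingerprint of a record; parties compare digests instead of full records."""
    return hashlib.sha256(record.encode("utf-8")).hexdigest()[:16]

def reconcile(local, remote):
    """Return the records each side must send so that both end up with the union."""
    local_by_digest = {digest(r): r for r in local}
    remote_by_digest = {digest(r): r for r in remote}
    send = [r for d, r in local_by_digest.items() if d not in remote_by_digest]
    receive = [r for d, r in remote_by_digest.items() if d not in local_by_digest]
    return send, receive

# Two transaction pools that differ in a handful of entries.
pool_a = {"tx1", "tx2", "tx3"}
pool_b = {"tx2", "tx3", "tx4"}
send, receive = reconcile(pool_a, pool_b)
print(sorted(send), sorted(receive))  # ['tx1'] ['tx4']
```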