18,152 research outputs found

    PRECEPT: A Framework for Ethical Digital Forensics Investigations

    Purpose: Cyber-enabled crimes are on the increase, and law enforcement has had to expand many of its detection activities into the digital domain. As such, the field of digital forensics has become far more sophisticated over the years and is now able to uncover even more evidence that can be used to support prosecution of cyber criminals in a court of law. Governments, too, have embraced the ability to track suspicious individuals in the online world. Forensics investigators are driven to gather data exhaustively, being under pressure to provide law enforcement with sufficient evidence to secure a conviction. Yet, there are concerns about the ethics and justice of untrammeled investigations on a number of levels. On an organizational level, unconstrained investigations could interfere with, and damage, the organization’s right to control the disclosure of its intellectual capital. On an individual level, those being investigated could easily have their legal privacy rights violated by forensics investigations. On a societal level, there might be a sense of injustice at the perceived inequality of current practice in this domain. This paper argues the need for a practical, ethically-grounded approach to digital forensics investigations, one that acknowledges and respects the privacy rights of individuals and the intellectual capital disclosure rights of organizations, as well as the needs of law enforcement. We derive a set of ethical guidelines, then map these onto a forensics investigation framework. We subjected the framework to expert review in two stages, refining it after each stage. We conclude by proposing the refined ethically-grounded digital forensics investigation framework. Our treatise is primarily UK based, but the concepts presented here have international relevance and applicability.
    Design methodology: The lens of justice theory is used to explore the tension that exists between the needs of digital forensics investigations into cybercrimes on the one hand and, on the other, individuals’ rights to privacy and organizations’ rights to control intellectual capital disclosure.
    Findings: The investigation revealed a potential inequality between the practices of digital forensics investigators and the rights of other stakeholders. That being so, the need for a more ethically-informed approach to digital forensics investigations, as a remedy, is highlighted, and a framework is proposed to provide this.
    Practical implications: Our proposed ethically-informed framework for guiding digital forensics investigations suggests a way of re-establishing the equality of the stakeholders in this arena and ensuring that the potential for a sense of injustice is reduced.
    Originality/value: Justice theory is used to highlight the difficulties in squaring the circle between the rights and expectations of all stakeholders in the digital forensics arena. The outcome is the forensics investigation guideline, PRECEpt: Privacy-Respecting EthiCal framEwork, which provides the basis for re-aligning the balance between the requirements and expectations of digital forensics investigators on the one hand, and individual and organizational expectations and rights on the other.

    Taxonomy of Technological IT Outsourcing Risks: Support for Risk Identification and Quantification

    The past decade has seen an increasing interest in IT outsourcing, as it promises companies many economic benefits. In recent years, IT paradigms such as Software-as-a-Service or Cloud Computing using third-party services have been increasingly adopted. Current studies show that IT security and data privacy are the dominant factors affecting the perceived risk of IT outsourcing. Therefore, we explicitly focus on determining the technological risks related to IT security and quality-of-service characteristics associated with IT outsourcing. We conducted an extensive literature review and thoroughly document the process in order to reach high validity and reliability. We evaluated 149 papers based on a review of their full content and, from the 68 papers that ultimately proved relevant, extracted 757 risk items. Using a successive refinement approach, which involved reduction of similar items and iterative re-grouping, we establish a taxonomy with nine risk categories for the final 70 technological risk items. Moreover, we describe how the taxonomy can be used to support the first two phases of the IT risk management process: risk identification and quantification. To that end, we give, for each item, the parameters relevant for using it in an existing mathematical risk quantification model.
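
    The paper's own quantification model is not reproduced in this record, so the following is only a minimal sketch, assuming a simple probability-times-impact expected-loss model, of how taxonomy items annotated with such parameters could support risk identification (grouping by category) and quantification (aggregating expected loss). The risk items, categories and figures below are hypothetical.

        # Minimal sketch: taxonomy items with quantification parameters feeding a
        # simple expected-loss calculation. The model and the items below are
        # illustrative assumptions, not the model or data used in the paper.
        from dataclasses import dataclass

        @dataclass
        class RiskItem:
            category: str        # one of the taxonomy's nine risk categories
            name: str
            probability: float   # estimated likelihood of occurrence per year
            impact: float        # estimated loss in EUR if the risk materialises

            def expected_loss(self) -> float:
                return self.probability * self.impact

        # Hypothetical items for illustration only.
        items = [
            RiskItem("IT security", "Data breach at provider", 0.05, 250_000),
            RiskItem("Quality of service", "Availability SLA violation", 0.20, 40_000),
            RiskItem("IT security", "Insecure data deletion", 0.10, 60_000),
        ]

        # Risk identification: group items by taxonomy category.
        by_category = {}
        for item in items:
            by_category.setdefault(item.category, []).append(item)

        # Risk quantification: aggregate expected annual loss per category.
        for category, group in sorted(by_category.items()):
            total = sum(i.expected_loss() for i in group)
            print(f"{category}: expected annual loss {total:,.0f} EUR")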

    The L3Pilot Data Management Toolchain for a Level 3 Vehicle Automation Pilot

    As industrial research in automated driving is rapidly advancing, it is of paramount importance to analyze field data from extensive road tests. This paper investigates the design and development of a toolchain to process and manage experimental data in order to answer a set of research questions about the evaluation of automated driving functions at various levels, from technical system functioning to overall impact assessment. We have faced this challenge in L3Pilot, the first comprehensive test of automated driving functions (ADFs) on public roads in Europe. L3Pilot is testing ADFs in vehicles made by 13 companies. The tested functions are mainly of Society of Automotive Engineers (SAE) automation level 3, some of them of level 4. In this context, the presented toolchain supports various confidentiality levels and allows seamless data management across vehicle owners, with efficient storage of the data and their iterative processing with a variety of analysis and evaluation tools. Most of the toolchain modules have been developed to a prototype version in a desktop/cloud environment, exploiting state-of-the-art technology. This has allowed us to efficiently set up what could become a comprehensive edge-to-cloud reference architecture for managing data in automated vehicle tests. The project has released, as open source, the data format into which all vehicular signals, recorded in proprietary formats, were converted, in order to support efficient processing through multiple tools, scalability and data quality checking. We expect this format to enhance research on automated driving testing, as it provides a shared framework for dealing with data from collection to analysis. We are confident that this format, and the information provided in this article, can serve as a reference for the design of future architectures to be implemented in vehicles.
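
    The shared data format itself is not reproduced in this record, so the following is only a minimal sketch of the kind of conversion step such a toolchain relies on: mapping a proprietary per-vehicle signal log onto a common column layout, with a basic data quality check, so that downstream analysis tools can treat all data uniformly. The file names, signal names and output layout are assumptions for illustration and do not describe the actual L3Pilot format.

        # Sketch: converting a proprietary signal log (CSV) into a common,
        # tool-agnostic representation (JSON). Column names and the mapping are
        # hypothetical; the real L3Pilot common data format is not reproduced.
        import csv
        import json

        # Mapping from hypothetical proprietary column names to common signal names.
        SIGNAL_MAP = {
            "ts_ms": "timestamp",
            "veh_speed_kph": "ego_speed",
            "ad_state": "automation_level",
        }

        def convert_log(proprietary_csv, common_json):
            records = []
            with open(proprietary_csv, newline="") as src:
                for row in csv.DictReader(src):
                    record = {common: row[prop] for prop, common in SIGNAL_MAP.items()}
                    # Basic data quality check: drop rows without a timestamp.
                    if record["timestamp"]:
                        records.append(record)
            with open(common_json, "w") as dst:
                json.dump(records, dst, indent=2)

        # Example use (file names are placeholders):
        # convert_log("vehicle_log_proprietary.csv", "vehicle_log_common.json")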

    Symmetric Encryption Algorithms: Review and Evaluation Study

    The increased exchange of data over the Internet in the past two decades has brought data security and confidentiality to the forefront. Information security can be achieved by implementing encryption and decryption algorithms to ensure data remains secure and confidential, especially when transmitted over an insecure communication channel. Encryption is the method of encoding information to prevent unauthorized access and to ensure data integrity and confidentiality, whereas the reverse process is known as decryption. All encryption algorithms aim to secure data; however, their performance varies according to several factors such as file size, type, complexity, and platform used. Furthermore, while some encryption algorithms outperform others, they have been proven to be vulnerable against certain attacks. In this paper, we present a general overview of common encryption algorithms and explain their inner workings. Additionally, we select ten different symmetric encryption algorithms and conduct a simulation in Java to test their performance. The algorithms we compare are AES, Blowfish, RC2, RC4, RC6, DES, DESede, SEED, XTEA, and IDEA. We present the results of our simulation in terms of encryption speed, throughput, and CPU utilization rate for various file sizes ranging from 1 MB to 1 GB. We further analyze our results for all measures tested, taking into account the level of security they provide.
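
    The benchmark described above was implemented in Java and is not included in this record, so the following is only a minimal sketch of the measurement idea in Python, using the third-party cryptography package. The choice of AES-256 in CBC mode, the payload size and the timing approach are assumptions for illustration, not the paper's setup.

        # Sketch: measuring symmetric-cipher encryption speed and throughput.
        # Requires the "cryptography" package (pip install cryptography).
        import os
        import time
        from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

        def aes_cbc_throughput(payload_mb=16):
            key = os.urandom(32)                          # AES-256 key
            iv = os.urandom(16)
            data = os.urandom(payload_mb * 1024 * 1024)   # block-aligned payload
            encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()

            start = time.perf_counter()
            encryptor.update(data)
            encryptor.finalize()
            elapsed = time.perf_counter() - start
            return payload_mb / elapsed                   # throughput in MB/s

        print(f"AES-256-CBC: {aes_cbc_throughput():.1f} MB/s")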

    Swarm Learning for decentralized and confidential clinical machine learning

    Fast and reliable detection of patients with severe and heterogeneous illnesses is a major goal of precision medicine [1,2]. Patients with leukaemia can be identified using machine learning on the basis of their blood transcriptomes [3]. However, there is an increasing divide between what is technically possible and what is allowed, because of privacy legislation [4,5]. Here, to facilitate the integration of any medical data from any data owner worldwide without violating privacy laws, we introduce Swarm Learning, a decentralized machine-learning approach that unites edge computing, blockchain-based peer-to-peer networking and coordination while maintaining confidentiality without the need for a central coordinator, thereby going beyond federated learning. To illustrate the feasibility of using Swarm Learning to develop disease classifiers using distributed data, we chose four use cases of heterogeneous diseases (COVID-19, tuberculosis, leukaemia and lung pathologies). With more than 16,400 blood transcriptomes derived from 127 clinical studies with non-uniform distributions of cases and controls and substantial study biases, as well as more than 95,000 chest X-ray images, we show that Swarm Learning classifiers outperform those developed at individual sites. In addition, Swarm Learning completely fulfils local confidentiality regulations by design. We believe that this approach will notably accelerate the introduction of precision medicine.
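
    The Swarm Learning framework itself coordinates peers through blockchain-based networking and keeps raw data at each site; none of that infrastructure is shown here. The following is only a toy sketch of the underlying idea of merging model parameters across peers without a central coordinator, using a hypothetical logistic-regression model and synthetic data.

        # Toy sketch: peers train locally on private data and periodically adopt
        # the average of all peers' weights ("swarm merge"), so no raw data and
        # no central coordinator are needed. Model, data and peer count are
        # assumptions for illustration only.
        import numpy as np

        rng = np.random.default_rng(0)

        def local_update(weights, features, labels, lr=0.1):
            """One gradient-descent step of logistic regression on local data."""
            preds = 1.0 / (1.0 + np.exp(-features @ weights))
            grad = features.T @ (preds - labels) / len(labels)
            return weights - lr * grad

        # Each peer holds its own private data; only weights are ever exchanged.
        peers = [
            (rng.normal(size=(100, 5)), rng.integers(0, 2, size=100).astype(float))
            for _ in range(3)
        ]
        weights = [np.zeros(5) for _ in peers]

        for sync_round in range(10):
            # Local training at every site.
            weights = [local_update(w, X, y) for w, (X, y) in zip(weights, peers)]
            # Swarm merge: every peer adopts the average of all peers' weights.
            merged = np.mean(weights, axis=0)
            weights = [merged.copy() for _ in peers]

        print("merged weights:", np.round(merged, 3))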