1,684 research outputs found

    Enhancing credibility of digital evidence through provenance-based incident response handling

    Digital forensics is becoming increasingly important for the investigation of computer-related crime, white-collar crime, and large-scale hacker attacks. After an incident has been detected, an appropriate incident response is usually initiated to mitigate the attack and ensure the recovery of the affected IT systems. Digital forensics pursues the goal of acquiring evidence that will stand up in court, an objective that sometimes conflicts with those of incident response. The concept presented here strengthens the credibility of digital evidence during incident response actions. It adapts a data provenance approach to accurately track the transformation of digital evidence. For this purpose, the affected system and the incident response systems are equipped with a whole-system data provenance capturing mechanism, and provenance is captured simultaneously on both during the incident response. Context information about the incident response is also documented. An adapted sub-graph detection algorithm is used to identify similarities between the two provenance graphs. By applying the proposed concept to a use case, its advantages are demonstrated and possibilities for further development are presented.
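    The paper's adapted sub-graph detection algorithm is not reproduced here; the minimal Python sketch below only illustrates the underlying idea of comparing whole-system provenance graphs from the affected system and the incident response system by looking for shared entities. The node labels, edges, and networkx-based representation are illustrative assumptions, not the authors' implementation.

        # Minimal sketch (assumed representation, not the paper's algorithm):
        # provenance modelled as a directed graph of (subject, relation, object) triples.
        import networkx as nx

        def provenance_graph(triples):
            """Build a directed provenance graph from (subject, relation, object) triples."""
            g = nx.DiGraph()
            for subject, relation, obj in triples:
                g.add_edge(subject, obj, relation=relation)
            return g

        # Hypothetical provenance captured on the affected system during the incident.
        affected = provenance_graph([
            ("proc:sshd", "forked", "proc:bash"),
            ("proc:bash", "read", "file:/etc/shadow"),
            ("proc:bash", "wrote", "file:/tmp/loot.tgz"),
        ])

        # Hypothetical provenance captured simultaneously on the incident response system.
        response = provenance_graph([
            ("proc:collector", "read", "file:/tmp/loot.tgz"),
            ("proc:collector", "wrote", "file:/evidence/loot.tgz"),
        ])

        def shared_entities(g1, g2):
            """Entities present in both graphs hint at how evidence moved between systems."""
            return set(g1.nodes()) & set(g2.nodes())

        print(shared_entities(affected, response))  # {'file:/tmp/loot.tgz'}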

    ForensiBlock: A Provenance-Driven Blockchain Framework for Data Forensics and Auditability

    Maintaining accurate provenance records is paramount in digital forensics, as they underpin evidence credibility and integrity and address essential aspects such as accountability and reproducibility. Blockchains have several properties that can address these requirements. Previous systems used public blockchains, i.e., they treated the blockchain as a black box and benefited from its immutability property. However, the blockchain was accessible to everyone, giving rise to security concerns; moreover, efficient extraction of provenance faces challenges due to the enormous scale and complexity of digital data. This necessitates a tailored blockchain design for digital forensics. Our solution, ForensiBlock, has a novel design that automates investigation steps, ensures secure data access, traces data origins, preserves records, and expedites provenance extraction. ForensiBlock incorporates Role-Based Access Control with Staged Authorization (RBAC-SA) and a distributed Merkle root for case tracking. These features support authorized resource access with efficient retrieval of provenance records. In particular, comparing two methods for extracting provenance records, off-chain storage retrieval with Merkle root verification and a brute-force search, the off-chain method is significantly better, especially as the blockchain size and number of cases increase. We also found that our distributed Merkle root creation slightly increases smart contract processing time but significantly improves history access. Overall, we show that ForensiBlock offers secure, efficient, and reliable handling of digital forensic data.
    Comment: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
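    ForensiBlock's exact data structures are not described in the abstract; the sketch below only illustrates, under assumed record contents and function names, why keeping a Merkle root on chain lets an off-chain provenance record be verified without brute-force scanning the whole chain.

        # Minimal sketch of Merkle-root verification for off-chain provenance records.
        # Record contents, proof layout, and function names are illustrative only.
        import hashlib

        def sha(data: bytes) -> bytes:
            return hashlib.sha256(data).digest()

        def merkle_root(leaves):
            level = [sha(x) for x in leaves]
            while len(level) > 1:
                if len(level) % 2:                     # duplicate the last node on odd levels
                    level.append(level[-1])
                level = [sha(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
            return level[0]

        def merkle_proof(leaves, index):
            """Collect sibling hashes from the target leaf up to the root."""
            level = [sha(x) for x in leaves]
            proof = []
            while len(level) > 1:
                if len(level) % 2:
                    level.append(level[-1])
                sibling = index ^ 1
                proof.append((level[sibling], index % 2 == 0))   # (hash, sibling is on the right)
                level = [sha(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
                index //= 2
            return proof

        def verify(record: bytes, proof, root: bytes) -> bool:
            node = sha(record)
            for sibling, sibling_is_right in proof:
                node = sha(node + sibling) if sibling_is_right else sha(sibling + node)
            return node == root

        records = [b"case42: disk image acquired", b"case42: image hashed",
                   b"case42: artefacts extracted", b"case42: report signed"]
        root = merkle_root(records)             # only this value would sit on chain
        proof = merkle_proof(records, 2)        # compact proof for one off-chain record
        print(verify(records[2], proof, root))  # True: record matches the on-chain root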

    NLP-Based Techniques for Cyber Threat Intelligence

    In the digital era, threat actors employ sophisticated techniques for which digital traces, often in the form of textual data, are available. Cyber Threat Intelligence (CTI) encompasses the solutions for data collection, processing, and analysis used to understand a threat actor's targets and attack behavior. CTI is playing an increasingly crucial role in identifying and mitigating threats and enabling proactive defense strategies. In this context, Natural Language Processing (NLP), a branch of artificial intelligence, has emerged as a powerful tool for enhancing threat intelligence capabilities. This survey paper provides a comprehensive overview of NLP-based techniques applied in the context of threat intelligence. It begins by describing the foundational definitions and principles of CTI as a major tool for safeguarding digital assets. It then undertakes a thorough examination of NLP-based techniques for CTI data crawling from Web sources, CTI data analysis, relation extraction from cybersecurity data, CTI sharing and collaboration, and security threats to CTI. Finally, the challenges and limitations of NLP in threat intelligence are exhaustively examined, including data quality issues and ethical considerations. This survey draws a complete framework and serves as a valuable resource for security professionals and researchers seeking to understand state-of-the-art NLP-based threat intelligence techniques and their potential impact on cybersecurity.
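    As a toy illustration of one CTI analysis step the survey covers, the rule-based sketch below pulls indicators of compromise and simple (actor, uses, tool) relations out of a threat-report sentence. The pipelines surveyed in the paper use trained NER and relation-extraction models; the patterns, report text, and function names here are assumptions for illustration only.

        # Rule-based sketch of IOC and relation extraction from threat-report text.
        import re

        REPORT = ("APT29 uses Cobalt Strike to reach 203.0.113.7 and stages payloads "
                  "at hxxp://evil.example.com, tracked as CVE-2023-12345.")

        IOC_PATTERNS = {
            "ipv4": r"\b(?:\d{1,3}\.){3}\d{1,3}\b",
            "cve":  r"\bCVE-\d{4}-\d{4,7}\b",
            "url":  r"\bhxxps?://[^\s,]+",
        }

        def extract_iocs(text):
            """Return every match of each indicator-of-compromise pattern."""
            return {kind: re.findall(pattern, text) for kind, pattern in IOC_PATTERNS.items()}

        def extract_uses_relations(text):
            """Surface-pattern relation extraction: '<Actor> uses <Tool>' triples."""
            return [(m.group(1), "uses", m.group(2))
                    for m in re.finditer(r"([A-Z][\w-]+) uses ([A-Z][\w ]+?)(?= to|[.,])", text)]

        print(extract_iocs(REPORT))            # IPs, CVE IDs, and defanged URLs found in the text
        print(extract_uses_relations(REPORT))  # [('APT29', 'uses', 'Cobalt Strike')]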

    Safeguarding health data with enhanced accountability and patient awareness

    Several factors are driving the transition from paper-based health records to electronic health record systems. In the United States, the adoption rate of electronic health record systems increased significantly after the "Meaningful Use" incentive program started in 2009. While increased use of electronic health record systems could improve the efficiency and quality of healthcare services, it can also lead to a number of security and privacy issues, such as identity theft and healthcare fraud. Such incidents could have a negative impact on the trustworthiness of electronic health record technology itself and could thereby limit its benefits. In this dissertation, we tackle three challenges that we believe are important for improving security and privacy in electronic health record systems. Our approach is based on an analysis of real-world incidents, namely theft and misuse of patient identity, unauthorized usage and update of electronic health records, and threats from insiders in healthcare organizations. Our contributions include the design and development of a user-centric monitoring agent system that works on behalf of a patient (i.e., an end user) and securely monitors usage of the patient's identity credentials as well as access to her electronic health records. Such a monitoring agent can enhance a patient's awareness and control and improve accountability for health records even in a distributed, multi-domain environment, which is typical in an e-healthcare setting. This will reduce the risk and loss caused by misuse of stolen data. In addition to the solution from a patient's perspective, we also propose a secure system architecture that can be used in healthcare organizations to enable robust auditing and management of client devices. This helps further enhance patients' confidence in the secure use of their health data.
    PhD dissertation. Committee Chair: Mustaque Ahamad; Committee Members: Douglas M. Blough, Ling Liu, Mark Braunstein, Wenke Lee.
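    The dissertation's agent design is not detailed in the abstract; the sketch below only illustrates the general idea of a patient-side monitoring agent that flags credential or record accesses falling outside what the patient has approved. The event fields, policy shape, and domain names are hypothetical.

        # Minimal sketch of a user-centric monitoring agent in a multi-domain e-health setting.
        from dataclasses import dataclass

        @dataclass
        class AccessEvent:
            domain: str     # which healthcare domain reported the event (hypothetical names)
            actor: str      # who used the credential or opened the record
            action: str     # "read", "update", ...
            resource: str   # "ehr:patient-123" or "credential:patient-123"

        class MonitoringAgent:
            def __init__(self, approved):
                # approved: set of (domain, action) pairs the patient has consented to
                self.approved = approved
                self.alerts = []

            def observe(self, event: AccessEvent):
                """Record an alert for any access outside the patient's approved policy."""
                if (event.domain, event.action) not in self.approved:
                    self.alerts.append(f"unapproved {event.action} of {event.resource} "
                                       f"by {event.actor} at {event.domain}")

        agent = MonitoringAgent(approved={("clinic-a.example", "read")})
        agent.observe(AccessEvent("clinic-a.example", "dr-lee", "read", "ehr:patient-123"))
        agent.observe(AccessEvent("billing.example", "vendor-x", "update", "ehr:patient-123"))
        print(agent.alerts)   # flags only the unapproved update from the billing domain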

    Factuality Challenges in the Era of Large Language Models

    The emergence of tools based on Large Language Models (LLMs), such as OpenAI's ChatGPT, Microsoft's Bing Chat, and Google's Bard, has garnered immense public attention. These incredibly useful, natural-sounding tools mark significant advances in natural language generation, yet they exhibit a propensity to generate false, erroneous, or misleading content, commonly referred to as "hallucinations." Moreover, LLMs can be exploited for malicious applications, such as generating false but credible-sounding content and profiles at scale. This poses a significant challenge to society in terms of the potential deception of users and the increasing dissemination of inaccurate information. In light of these risks, we explore the kinds of technological innovations, regulatory reforms, and AI literacy initiatives needed from fact-checkers, news organizations, and the broader research and policy communities. By identifying the risks, the imminent threats, and some viable solutions, we seek to shed light on navigating various aspects of veracity in the era of generative AI.
    Comment: Our article offers a comprehensive examination of the challenges and risks associated with Large Language Models (LLMs), focusing on their potential impact on the veracity of information in today's digital landscape.

    Digital Forensics Investigation Frameworks for Cloud Computing and Internet of Things

    Rapid growth in cloud computing and the Internet of Things (IoT) introduces new vulnerabilities that can be exploited to mount cyber-attacks. Digital forensics investigation is commonly used to find the culprit and help expose the vulnerabilities. Traditional digital forensics tools and methods are unsuitable for use in these environments; therefore, new digital forensics investigation frameworks and methodologies are required. This research develops frameworks and methods for digital forensics investigations on cloud and IoT platforms.

    CeFF: A Framework for Forensics Enabled Cloud Investigation

    Today, cloud computing has become a transformative model for organizations, businesses, and governments, offering huge potential and gaining popularity through pay-as-you-go pricing, on-demand services, scalability, and efficiency. However, cloud computing raises concerns for forensic data because the architecture of cloud systems is not accounted for appropriately. Due to the distributed nature of cloud systems, many aspects of a forensic investigation, such as data collection, data storage, identifying the crime target, and detecting data violations, are difficult to carry out. Investigating incidents in the cloud environment is a challenging task because the forensic investigator still needs to rely on third parties, such as the cloud service provider, to perform investigation tasks. This makes the overall forensic process difficult to complete within a set duration and present to the court. Recently, some cloud forensics studies have addressed challenges such as evidence collection, data acquisition, and incident identification. However, there is still a research gap in terms of consistently analysing forensic evidence from distributed environments and a methodology for analysing forensic data in the cloud. This thesis contributes towards addressing these research gaps. In particular, this work proposes CeFF, a framework for forensics-enabled cloud investigation, to investigate evidence in the cloud computing environment. The framework includes a set of concepts from organisational, technical, and legal perspectives, which gives a holistic view of analysing cybercrime from the organisational context where the crime occurred, through the technical context, to its legal impact. CeFF also includes a systematic process that uses these concepts to perform the investigation. The cloud-enabled forensics framework meets forensics-related requirements such as data collection, examination, and reporting, and identifies potential risks to consider while investigating evidence in the cloud computing environment. Finally, the proposed CeFF is applied to a real-life example to validate its applicability. The results show that CeFF supports analysing forensic data for a crime that occurred in a cloud-based system in a systematic way.
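    The abstract does not specify CeFF's concrete artefacts; as a rough illustration of how organisational, technical, and legal concepts might be held together for a single case, the sketch below defines a hypothetical case record with collection, examination, and risk fields. All names and fields are assumptions, not CeFF's actual specification.

        # Hypothetical CeFF-style case record linking organisational, technical, and legal context.
        from dataclasses import dataclass, field

        @dataclass
        class CloudEvidenceItem:
            source: str         # e.g. a provider-held VM snapshot or access log (illustrative)
            collected_by: str
            sha256: str         # integrity hash recorded at collection time

        @dataclass
        class CaseRecord:
            case_id: str
            organisational_context: str                      # where the crime occurred
            legal_context: str                               # jurisdiction, admissibility notes
            evidence: list[CloudEvidenceItem] = field(default_factory=list)
            risks: list[str] = field(default_factory=list)   # risks noted during the investigation

            def report(self) -> str:
                """Flatten the record into a simple presentation for the final report."""
                lines = [f"Case {self.case_id}: {self.organisational_context} ({self.legal_context})"]
                lines += [f"  evidence: {e.source}, hash {e.sha256[:12]}, collected by {e.collected_by}"
                          for e in self.evidence]
                lines += [f"  risk: {r}" for r in self.risks]
                return "\n".join(lines)

        record = CaseRecord("C-001", "payroll data exfiltration at tenant X", "EU/GDPR",
                            [CloudEvidenceItem("vm-snapshot-42", "investigator-1", "ab" * 32)],
                            ["dependence on provider-supplied logs"])
        print(record.report())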