
    Ontology‐driven perspective of CFRaaS

    A Cloud Forensic Readiness as a Service (CFRaaS) model allows an environment to preemptively accumulate relevant potential digital evidence (PDE) that may be needed during a post-event response process. The benefit of applying a CFRaaS model in a cloud environment is that it avoids modifying or tampering with the cloud architecture or infrastructure during the reactive process, which could otherwise have far-reaching implications. The authors of this article present the reactive process as a very costly exercise when the infrastructure must be reprogrammed every time the process is conducted, which may hamper successful investigation from the perspective of forensic experts and law enforcement agencies. The CFRaaS model, in its current state, has not been presented in a way that helps classify or visualize the different types of potential evidence across all cloud deployment models, and this may limit expectations of what PDE is required or how it may be collected. To address this problem, the article presents CFRaaS from a holistic ontology-driven perspective, which allows forensic experts to apply CFRaaS based on the simplicity of its concepts, the relationships and semantics between different forms of potential evidence, and how the security of the digital environment under investigation can be upheld. In this context, CFRaaS follows a fundamental ontology engineering approach based on the classical Resource Description Framework (RDF). The proposed ontology-driven approach to CFRaaS is therefore a knowledge base built on layer dependencies, which could be an essential toolkit for digital forensic examiners and other stakeholders in cloud security. Implementing this approach could further provide a platform for developing other knowledge-base components for cloud forensics and security.
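
    As a rough illustration of the ontology engineering approach described above, the following Python sketch uses rdflib to express a few CFRaaS-style concepts as RDF triples. The namespace, class names and layer labels are illustrative assumptions, not the vocabulary published by the authors.

```python
# Minimal sketch (hypothetical vocabulary): classifying potential digital
# evidence (PDE) per cloud layer as an RDF knowledge base with rdflib.
from rdflib import Graph, Namespace, RDF, RDFS

CFR = Namespace("http://example.org/cfraas#")  # hypothetical namespace

g = Graph()
g.bind("cfr", CFR)

# Evidence hierarchy: concrete PDE types are subclasses of a common class.
g.add((CFR.PotentialDigitalEvidence, RDF.type, RDFS.Class))
g.add((CFR.HypervisorLog, RDFS.subClassOf, CFR.PotentialDigitalEvidence))
g.add((CFR.ApplicationLog, RDFS.subClassOf, CFR.PotentialDigitalEvidence))

# Layer dependencies: which cloud deployment layer each PDE type is collected from.
g.add((CFR.HypervisorLog, CFR.collectedFrom, CFR.IaaSLayer))
g.add((CFR.ApplicationLog, CFR.collectedFrom, CFR.SaaSLayer))

# A SPARQL query an examiner might run to list PDE types per layer.
query = """
PREFIX cfr: <http://example.org/cfraas#>
SELECT ?evidence ?layer WHERE { ?evidence cfr:collectedFrom ?layer . }
"""
for evidence, layer in g.query(query):
    print(evidence.n3(g.namespace_manager), "->", layer.n3(g.namespace_manager))
```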

    Timeline2GUI: A Log2Timeline CSV Parser and Training Scenarios

    Crimes involving digital evidence are becoming more complex due to increasing storage capacities and device utilization. Event reconstruction (i.e., understanding the timeline) is an essential step for investigators to understand a case, and a prominent tool for this task is Log2Timeline, which creates super timelines that combine several log files and events from across a system. While these timelines provide valuable evidence and help investigators understand a case, they are complex and require both tooling and training scenarios. In this paper we present Timeline2GUI, an easy-to-use Python implementation for analyzing the CSV log files created by Log2Timeline. Additionally, we present three training scenarios (beginner, intermediate and advanced) to practice timeline analysis skills as well as familiarity with visualization tools. Lastly, we provide a comprehensive overview of tools.
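
    For readers unfamiliar with Log2Timeline output, the following minimal sketch shows how such a CSV export might be loaded and filtered in Python with pandas. It is not Timeline2GUI itself, and it assumes a typical l2tcsv-style layout with date, time, source and desc columns.

```python
# Minimal sketch (assumed l2tcsv-style columns): load a Log2Timeline CSV
# export and triage it by keyword and time window with pandas.
import pandas as pd

def load_timeline(path: str) -> pd.DataFrame:
    df = pd.read_csv(path, dtype=str, keep_default_na=False)
    # Combine the separate date and time columns into one sortable timestamp.
    df["timestamp"] = pd.to_datetime(df["date"] + " " + df["time"], errors="coerce")
    return df.sort_values("timestamp")

def filter_events(df: pd.DataFrame, keyword: str, start: str, end: str) -> pd.DataFrame:
    mask = (
        df["desc"].str.contains(keyword, case=False, na=False)
        & df["timestamp"].between(pd.Timestamp(start), pd.Timestamp(end))
    )
    return df.loc[mask, ["timestamp", "source", "desc"]]

# Example usage (file name is illustrative):
# timeline = load_timeline("supertimeline.csv")
# print(filter_events(timeline, "powershell", "2019-01-01", "2019-01-31"))
```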

    Digital Forensics Event Graph Reconstruction

    Ontological data representation and data normalization can provide a structured way to correlate digital artifacts. This can reduce the amount of data that a forensics examiner needs to process in order to understand the sequence of events that happened on the system. However, ontology processing suffers from large disk consumption and high computational cost. This paper presents Property Graph Event Reconstruction (PGER), a novel data normalization and event correlation system that leverages a native graph database to improve the speed of queries common in ontological data. PGER reduces the processing time of event correlation grammars while maintaining accuracy relative to a relational database storage format.
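
    To make the property-graph idea concrete, here is a minimal sketch of storing normalized events in Neo4j and correlating them with a Cypher traversal. The node labels, relationship types and connection details are hypothetical illustrations, not PGER's actual schema.

```python
# Minimal sketch (hypothetical schema): normalized forensic events stored as a
# property graph and correlated by the artifact they touched.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def add_event(tx, event_id: str, source: str, timestamp: str, artifact: str):
    # Each event points at the artifact (file, registry key, ...) it touched.
    tx.run(
        "MERGE (a:Artifact {path: $artifact}) "
        "CREATE (e:Event {id: $event_id, source: $source, ts: $timestamp}) "
        "CREATE (e)-[:TOUCHED]->(a)",
        event_id=event_id, source=source, timestamp=timestamp, artifact=artifact,
    )

def correlated_events(tx, artifact: str):
    # Native graph traversal: all events that touched the same artifact,
    # ordered by time, without an expensive relational join.
    result = tx.run(
        "MATCH (e:Event)-[:TOUCHED]->(a:Artifact {path: $artifact}) "
        "RETURN e.id AS id, e.source AS source, e.ts AS ts ORDER BY e.ts",
        artifact=artifact,
    )
    return [record.data() for record in result]

with driver.session() as session:
    session.execute_write(add_event, "evt-1", "filesystem", "2020-01-01T10:00:00", "C:/Users/a/run.exe")
    print(session.execute_read(correlated_events, "C:/Users/a/run.exe"))
driver.close()
```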

    A Unified Forensics Analysis Approach to Digital Investigation

    Digital forensics is now essential in addressing cybercrime and cyber-enabled crime, but it can potentially play a role in almost every other type of crime. Given technology's continuous development and prevalence, the widespread adoption of technologies among society and the digital footprints they leave behind, the analysis of these technologies can help support investigations. The abundance of interconnected technologies and telecommunication platforms has significantly changed the nature of digital evidence. Consequently, digital forensic cases involve an enormous volume of heterogeneous data scattered across multiple evidence sources, technologies, applications, and services. The spread of and connections between existing technologies have undeniably raised the need to integrate, harmonise, unify and correlate evidence across data sources in an automated fashion. Unfortunately, the current state of the art in digital forensics leads to siloed approaches focussed upon specific technologies or on supporting a particular part of a digital investigation. Due to this shortcoming, the digital investigator examines each data source independently, trawls through interconnected data across various sources, and often has to conduct data correlation manually, restricting their ability to answer high-level questions in a timely manner with a low cognitive load. This research paper therefore investigates the limitations of the current state of the art in the digital forensics discipline and categorises common investigation crimes with the necessary corresponding digital analyses in order to define the characteristics of a next-generation approach. Based on these observations, it discusses the future capabilities of the next-generation unified forensics analysis tool (U-FAT), with a workflow example that illustrates the data unification, correlation and visualisation processes within the proposed method.

    The Evolution of Expressing and Exchanging Cyber-Investigation Information in a Standardized Form

    The growing number of investigations involving digital traces from various data sources is driving the demand for a standard way to represent and exchange pertinent information. Enabling automated combination and correlation of cyber-investigation information from multiple systems or organizations allows more efficient and comprehensive analysis, reducing the risk of mistakes and missed opportunities. These needs are being met by the evolving open-source, community-developed specification language called CASE, the Cyber-investigation Analysis Standard Expression. CASE leverages the Unified Cyber Ontology (UCO), which abstracts and expresses concepts that are common across multiple domains. This paper introduces CASE and UCO, explaining how they improve upon prior related work. The value of fully-structured data, provenance representation, and action lifecycles is discussed. The guiding principles of CASE and UCO are presented, and illustrative examples of CASE are provided using the default JSON-LD serialization.
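
    As a flavour of what a CASE-style JSON-LD record looks like, the sketch below builds a small bundle in Python. The prefixes and property names are simplified illustrations rather than the authoritative CASE/UCO vocabulary, which should be taken from the published specification.

```python
# Minimal sketch (simplified, non-authoritative property names): emitting a
# CASE-style JSON-LD bundle describing one file observable.
import json
import uuid

def file_observable(path: str, sha256: str) -> dict:
    return {
        "@id": f"kb:observable-{uuid.uuid4()}",
        "@type": "uco-observable:ObservableObject",
        "uco-core:hasFacet": [
            {
                "@type": "uco-observable:FileFacet",
                "uco-observable:filePath": path,
                "uco-observable:hash": sha256,
            }
        ],
    }

bundle = {
    "@context": {
        "kb": "http://example.org/kb/",  # hypothetical knowledge-base namespace
        "uco-core": "https://ontology.unifiedcyberontology.org/uco/core/",
        "uco-observable": "https://ontology.unifiedcyberontology.org/uco/observable/",
    },
    "@type": "uco-core:Bundle",
    "uco-core:object": [file_observable("/home/user/invoice.pdf", "<sha256-hex>")],
}

print(json.dumps(bundle, indent=2))
```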

    Conceptualizing the Role of IS Security Compliance in Projects of Digital Transformation: Tensions and Shifts Between Prevention and Response Modes

    Research shows that information systems security operates between two distinct functioning modes: prevention before a security incident occurs, and response following an incident, which is usually external to the organisation. In this paper, we argue that this shift between prevention and response modes also happens due to inherent internal tensions between pressures for digital transformation and the established forces of security compliance. We show how a digital transformation project introduced a security incident and challenged the IS security compliance function, reflecting the two different approaches to IS security in organizations. We conduct a participatory observation study of the implementation of Robotic Process Automation (RPA) in a financial services organization. We examine the shift from prevention to response in this project and identify generative drivers of digital transformation and drivers of IS security compliance. Our analysis leads to a process model that explains how organizations move from prevention to response when faced with tensions between IS security compliance and digital transformation.

    Training of Crisis Mappers and Map Production from Multi-sensor Data: Vernazza Case Study (Cinque Terre National Park, Italy)

    The aim of this paper is to present the development of a multidisciplinary project carried out in cooperation between Politecnico di Torino and ITHACA (Information Technology for Humanitarian Assistance, Cooperation and Action). The goal of the project was training in geospatial data acquisition and processing for students attending Architecture and Engineering courses, in order to start up a team of "volunteer mappers". The project aims to document the environmental and built heritage subject to disaster; the purpose is to improve the capabilities of the actors involved in geospatial data collection, integration and sharing. The proposed area for testing the training activities is the Cinque Terre National Park, registered in the World Heritage List since 1997, which was affected by a flood on 25 October 2011. In line with other international experiences, the group is expected to be active after emergencies in order to update maps, using data acquired by typical geomatic methods and techniques such as terrestrial and aerial Lidar, close-range and aerial photogrammetry, and topographic and GNSS instruments, or by non-conventional systems and instruments such as UAVs and mobile mapping. The ultimate goal is to implement a WebGIS platform to share all the collected data with local authorities and the Civil Protection.

    A Comprehensive Analysis of the Role of Artificial Intelligence and Machine Learning in Modern Digital Forensics and Incident Response

    In the dynamic landscape of digital forensics, the integration of Artificial Intelligence (AI) and Machine Learning (ML) stands as a transformative technology, poised to amplify the efficiency and precision of digital forensics investigations. However, the use of ML and AI in digital forensics is still in its nascent stages. This paper therefore gives a thorough and in-depth analysis that goes beyond a simple survey and review, looking closely at how AI and ML techniques are used in digital forensics and incident response. This research explores cutting-edge research initiatives across domains such as data collection and recovery, the intricate reconstruction of cybercrime timelines, robust big data analysis, pattern recognition, safeguarding the chain of custody, and orchestrating responsive strategies to hacking incidents. It digs beneath the surface to unearth the intricate ways AI-driven methodologies are shaping these crucial facets of digital forensics practice. While the promise of AI in digital forensics is evident, the challenges arising from increasing database sizes and evolving criminal tactics necessitate ongoing collaborative research and refinement within the digital forensics profession. This study examines the contributions, limitations, and gaps in the existing research, shedding light on the potential and limitations of AI and ML techniques. By exploring these different research areas, we highlight the critical need for strategic planning and continual research and development to unlock AI's full potential in digital forensics and incident response. Ultimately, this paper underscores the significance of AI and ML integration in digital forensics, offering insights into their benefits, drawbacks, and broader implications for tackling modern cyber threats.

    Cyber indicators of compromise: a domain ontology for security information and event management

    It has been said that cyber attackers attack at wire speed (very fast), while cyber defenders defend at human speed (very slow). Researchers have been working to reduce this asymmetry by automating a greater portion of what has traditionally been very labor-intensive work, both the monitoring of live system events (to detect attacks) and the review of historical system events (to investigate attacks). One technology that is helping to automate this work is Security Information and Event Management (SIEM). In short, SIEM technology works by aggregating log information and then sifting through it looking for event correlations that are highly indicative of attack activity, for example: Administrator successful local logon and (concurrently) Administrator successful remote logon. Such correlations are sometimes referred to as indicators of compromise (IOCs). Though IOCs for network-based data (i.e., packet headers and payload) are fairly mature (e.g., Snort's large rule base), the field of end-device IOCs is still evolving and lacks a well-defined, widely accepted standard. This report addresses ontological issues pertaining to the development of end-device IOCs, including what they are, how they are defined, and what dominant early standards already exist.
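
    The logon correlation mentioned above can be expressed as a very small rule. The Python sketch below, using an assumed event schema rather than any real SIEM rule language, flags an account that logs on locally and remotely within the same short window.

```python
# Minimal sketch (assumed event schema): flag concurrent local and remote
# logons by the same account as an indicator of compromise.
from datetime import datetime, timedelta

events = [  # illustrative, pre-parsed log records
    {"time": datetime(2024, 5, 1, 9, 0), "user": "Administrator", "logon": "local"},
    {"time": datetime(2024, 5, 1, 9, 2), "user": "Administrator", "logon": "remote"},
]

def concurrent_logon_ioc(events, window=timedelta(minutes=5)):
    """Yield pairs of local/remote logons by the same user within `window`."""
    for a in events:
        for b in events:
            if (
                a["user"] == b["user"]
                and a["logon"] == "local"
                and b["logon"] == "remote"
                and abs(a["time"] - b["time"]) <= window
            ):
                yield (a, b)

for local, remote in concurrent_logon_ioc(events):
    print(f"IOC: {local['user']} logged on locally and remotely within 5 minutes")
```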