23 research outputs found

    Detecting Live Forensic Memory Acquisition using Distinct Native Attribute Fingerprinting

    This research investigates the anti-forensic aspects of live memory acquisition. In 2009, the anti-forensic tool Detect and Eliminate Computer Assisted Forensics (DECAF) was developed to defeat Computer Online Forensic Evidence Extractor (COFEE), a forensic tool well known in the law enforcement community. DECAF uses a signature-based method to detect forensic tools and then performs anti-forensic routines such as modifying evidence, disabling forensic software tools, and shutting down the machine to avoid evidence collection. The findings in the literature show that the signature-based method and various anti-forensic methods have shortcomings. This prompted a review of the application of artificial intelligence (AI) techniques, specifically machine learning algorithms, to the domain of anti-forensics, which revealed that AI has not been applied to detecting the live forensic acquisition process. This is the knowledge gap that this study identified and addressed, demonstrating that AI techniques can be applied to detect forensic memory acquisition on a Windows 10, 64-bit machine. The knowledge gap was addressed by formulating the hypothesis that the memory (M), input/output (I/O), and central processing unit (C) parameters exhibit a distinct variation in the MIOC pattern, known as a distinct native attribute (DNA) pattern or fingerprint, whilst memory is being acquired from the machine by a forensic tool. If these unique DNA patterns can be identified, then memory acquisition can be detected. To support this hypothesis, an experiment was conducted to gather MIOC parameters, and machine learning (ML) algorithms were used to identify the DNA patterns. The results show that the adaptive boost (AdaBoost) classifier performs worst, with a high detection error rate (Δr) of 73.4 percent and 89 percent on the memory and CPU datasets respectively. 
Whereas the linear discriminant analysis (LDA) classifier has the highest Δr, 78 percent, on the I/O dataset, the random forest (RF) classifier has the lowest Δr, under 5 percent, on all three datasets. To improve the performance of the ML algorithms, the individual memory, I/O, and CPU datasets were integrated into a single MIOC dataset. This decreased the Δr of the support vector machine (SVM), LDA, and AdaBoost classifiers by 32 percent, 17.3 percent, and 13 percent respectively. To further support the hypothesis, the MIOC dataset was transformed into images using the concept of the Gramian angular field (GAF), and the DNA patterns were then detected using a three-layered convolutional neural network (3L-CNN) model. The results show that the model detects DNA patterns from the I/O dataset with accuracy, precision, and recall above 99 percent, whereas it underperforms on the CPU dataset, with an accuracy of 64.94 percent and precision and recall of 69.50 and 44.20 percent respectively. The significance of the DNA fingerprinting detection method is that it not only shows that AI techniques can be applied to the anti-forensic domain but also highlights that forensic memory acquisition can be vulnerable to DNA fingerprinting. If memory acquisition can be detected in this way, the process of live memory acquisition is defeated, and the investigator loses crucial evidence that could be found only in memory. Another novel contribution of this study is a proposed mathematical formalism by which a digital forensic model (DFM) can be validated by counteracting the influence of anti-forensic effects on the various phases of the digital forensic process. Future work should focus on addressing live forensic acquisition vulnerabilities.
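
The GAF transformation mentioned above can be sketched concretely. The following is a minimal illustration, not the thesis's actual pipeline: it encodes a short 1-D series of resource samples as a Gramian angular summation field, the 2-D image form a CNN can consume; the function name and toy data are our own.

```python
import numpy as np

def gramian_angular_field(series):
    """Encode a non-constant 1-D series as a Gramian angular summation field."""
    x = np.asarray(series, dtype=float)
    # Rescale the series into [-1, 1] so that arccos is defined.
    x_scaled = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0
    # Polar encoding: each sample becomes an angle (clip for numerical safety).
    phi = np.arccos(np.clip(x_scaled, -1.0, 1.0))
    # Pairwise cosine of summed angles yields an n x n image.
    return np.cos(phi[:, None] + phi[None, :])

# Example: five (synthetic) I/O samples become a 5x5 image.
gaf = gramian_angular_field([0.1, 0.5, 0.9, 0.4, 0.2])
print(gaf.shape)  # (5, 5)
```

Encoding each sliding window of samples as such an image turns the detection task into ordinary image classification, which is what makes a small CNN applicable.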

    BYOD: Risk considerations in a South African organisation

    In recent times, while numerous organisations have difficulty keeping abreast of the frequent year-on-year technology changes, their employees, on the other hand, continue to bring their personal devices to work to access organisational data more readily. This concept is known as Bring Your Own Device (BYOD). Studies have demonstrated that the introduction of BYOD commonly has a positive effect on both organisation and employees: increased optimism, job satisfaction and productivity are some of the perceived positive effects. Furthermore, BYOD can improve employees’ opportunities for mobile working and assist with the work flexibility they seek. This phenomenon, however, is still not well understood. In the South African context, this refers particularly to an inadequate understanding of the risks associated with the introduction of BYOD into organisations. Some of the risks associated with this phenomenon relate, for instance, to information security, legislation and privacy. Hence, the intention of this research was to investigate, determine and assess BYOD risk considerations in a South African organisation. Using the available literature on this subject and an interpretative exploratory case study approach, this research explored various facets of BYOD-related risks (e.g. implementational, technological, legislative, regulatory and privacy risks, human aspects and organisational concerns) as well as the impact these risks may have on both employees and an organisation. The organisation under investigation – from this point onward referred to as “Organisation A” – is a South African-based information technology (IT) security consulting and service management organisation, which has seen increased expansion in its business and thus an increase in the number of its employees utilising their personal devices at the workplace. Even so, Organisation A was uncertain about possible risks that might hinder the benefits of BYOD. 
Hence, this researcher defined the main research question as “What are the risks of introducing BYOD in a South African organisation, and what is an effective approach to addressing the identified risks?”. The main objective was to identify and describe BYOD-related risks and to propose an appropriate model for addressing these risks. To answer the main research question, this researcher reviewed the applicable literature on BYOD, including the limited South African literature on the subject. The review elicited the most common BYOD-related risks as well as some models, frameworks and standards that may be applied to address these risks. Based on these findings, an applicable BYOD risk management model was created and proposed. The literature review findings were subsequently tested in the empirical setting (in Organisation A) by conducting comprehensive interviews with research participants. This research adopted a qualitative approach in general and a case study methodology in particular. The collected data were analysed using interpretative phenomenological analysis (IPA), which aided in providing a comprehensive understanding of the interviewees’ responses regarding the BYOD risks. The interviewees were selected based on purposeful (pre-defined) sampling. The results of this interpretative research suggest that the interviewees’ responses are closely aligned with the information on BYOD risks collected from the pertinent literature. The results show that successful introduction and usage of BYOD in the studied organisation requires the implementation of mixed risk management measures: technological (e.g. mobile device management and its additional components), non-technological (e.g. IT or BYOD security policies), the usage of general risk management frameworks (e.g. ISO 27001), the development of an organisational security culture and skilling of the human factor (e.g. employee awareness, training and education). 
Additionally, it was found that participation of employees in the development of BYOD policies is an essential and effective tactic for transforming a fragile BYOD risk link (i.e. employees) into a strong risk prevention mechanism. Furthermore, this research also revealed that in the South African context, it is important that an organisation’s BYOD security policies are sound, preferably meeting the requirements of the POPI Act and thereby avoiding legislative risks. The contribution of this research is twofold: first academic, and second, practical. The academic contribution is realised by adding to the body of knowledge on BYOD risks – most particularly in terms of understanding potential risks when introducing BYOD in the South African context. The practical contribution manifests through the provision of detailed risk considerations and mitigation guidelines for organisations wishing to introduce BYOD practices or considering ways to improve their current BYOD risk management strategy. It is acknowledged that this research has some limitations, particularly the restricted generalisability of the findings due to the sample being drawn from a single organisation. Although the results are not necessarily applicable to other South African organisations, these limitations do not undermine the relevance and validity of this research.

    Distributed services across the network from edge to core

    The current internet architecture is evolving from a simple carrier of bits to a platform able to provide multiple complex services running across the entire Network Service Provider (NSP) infrastructure. This calls for increased flexibility in resource management and allocation to provide dedicated, on-demand network services, leveraging a distributed infrastructure consisting of heterogeneous devices. More specifically, NSPs rely on a plethora of low-cost Customer Premise Equipment (CPE), as well as more powerful appliances at the edge of the network and in dedicated data centers. Currently, a great research effort is being devoted to providing this flexibility through Fog computing, Network Functions Virtualization (NFV), and data plane programmability. Fog computing, or Edge computing, extends compute and storage capabilities to the edge of the network, closer to the rapidly growing number of connected devices and applications that consume cloud services and generate massive amounts of data. A complementary technology is NFV, a network architecture concept targeting the execution of software Network Functions (NFs) in isolated Virtual Machines (VMs), potentially sharing a pool of general-purpose hosts, rather than running on dedicated hardware (i.e., appliances). Such a solution enables virtual network appliances (i.e., VMs executing network functions) to be provisioned, allocated a different amount of resources, and possibly moved across data centers in little time, which is key in ensuring that the network can keep up with the flexibility in the provisioning and deployment of virtual hosts in today’s virtualized data centers. Moreover, recent advances in networking hardware have introduced new programmable network devices that can efficiently execute complex operations at line rate. As a result, NFs can be (partially or entirely) folded into the network, speeding up the execution of distributed services. The work described in this Ph.D. 
thesis aims to show how various network services can be deployed throughout the NSP infrastructure, accommodating the different hardware capabilities of the various appliances, by applying and extending the above-mentioned solutions. First, we consider a data center environment and the deployment of (virtualized) NFs. In this scenario, we introduce a novel methodology for modelling different NFs, aimed at estimating their performance on different execution platforms. Moreover, we propose to extend traditional NFV deployment outside of the data center to leverage the entire NSP infrastructure. This can be achieved by integrating native NFs, commonly available in low-cost CPEs, with an existing NFV framework. This facilitates the provision of services that require NFs close to the end user (e.g., an IPsec terminator). On the other hand, resource-hungry virtualized NFs run in the NSP data center, where they can take advantage of superior computing and storage capabilities. As an application, we also present a novel technique to deploy a distributed service, specifically a web filter, that leverages both the low latency of a CPE and the computational power of a data center. We then show that the core network, today dedicated solely to packet routing, can also be exploited to provide useful services. In particular, we propose a novel method to provide distributed network services in core network devices by means of task distribution and seamless coordination among the peers involved. The aim is to transform existing network nodes (e.g., routers, switches, access points) into a highly distributed data acquisition and processing platform, which will significantly reduce the storage requirements at the Network Operations Center and the packet duplication overhead. Finally, we propose to use new programmable network devices in data center networks to provide much-needed services to distributed applications. 
By offloading part of the computation directly to the networking hardware, we show that it is possible to reduce both the network traffic and the overall job completion time.
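
The traffic saving from folding computation into the network can be illustrated with a toy model of in-network aggregation. This is our own sketch, not the thesis prototype: a programmable switch that pre-combines workers' partial results forwards a single message where a plain switch would relay them all; the function names and numbers are illustrative.

```python
def traffic_without_offload(worker_results):
    # Plain forwarding: every worker's message crosses the switch-to-server link.
    return len(worker_results)

def traffic_with_offload(worker_results):
    # In-network aggregation: the switch pre-combines (here: sums) the partial
    # results and forwards one message carrying the aggregate.
    aggregate = sum(worker_results)
    return 1, aggregate

# Five workers report partial results; offloading cuts 5 messages to 1.
print(traffic_without_offload([3, 1, 4, 1, 5]))  # 5
print(traffic_with_offload([3, 1, 4, 1, 5]))     # (1, 14)
```

Fewer messages on the last hop also means the server parses less input, which is one source of the shorter job completion times claimed above.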

    CLASSIFYING AND RESPONDING TO NETWORK INTRUSIONS

    Intrusion detection systems (IDS) have been widely adopted within the IT community as passive monitoring tools that report security-related problems to system administrators. However, the increasing number and evolving complexity of attacks, along with the growth and complexity of networking infrastructures, has led to overwhelming numbers of IDS alerts, which leave a significantly smaller timeframe for a human to respond. The need for automated response is therefore very much evident. However, the adoption of such approaches has been constrained by practical limitations and administrators' consequent mistrust of such systems' ability to issue appropriate responses. The thesis presents a thorough analysis of the problem of intrusions and identifies false alarms as the main obstacle to the adoption of automated response. A critical examination of existing automated response systems is provided, along with a discussion of why a new solution is needed. The thesis determines that, while detection capabilities remain imperfect, the problem of false alarms cannot be eliminated. Automated response technology must take this into account and instead focus upon avoiding the disruption of legitimate users and services in such scenarios. The overall aim of the research has therefore been to enhance the automated response process by considering the context of an attack, and to investigate and evaluate a means of making intelligent response decisions. The realisation of this objective has included the formulation of a response-oriented taxonomy of intrusions, which is used as a basis to systematically study intrusions and understand the threats detected by an IDS. From this foundation, a novel Flexible Automated and Intelligent Responder (FAIR) architecture has been designed as the basis from which flexible and escalating levels of response are offered, according to the context of an attack. 
The thesis describes the design and operation of the architecture, focusing upon the contextual factors influencing the response process and the way they are measured and assessed to formulate response decisions. The architecture is underpinned by the use of response policies, which provide a means to reflect the changing needs and characteristics of organisations. The main concepts of the new architecture were validated via a proof-of-concept prototype system. A series of test scenarios were used to demonstrate how the context of an attack can influence the response decisions, and how the response policies can be customised and used to enable intelligent decisions. This helped to prove that the concept of flexible automated response is indeed viable, and that the research has provided a suitable contribution to knowledge in this important domain.
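
The idea of escalating, policy-driven responses can be illustrated with a deliberately small sketch. This is our own toy, not the FAIR architecture: it picks the most disruptive action whose policy threshold is met by a simple attack-context score, falling back to passive logging; all names and thresholds are hypothetical.

```python
def choose_response(confidence, target_criticality, policy):
    """Pick the strongest response whose policy threshold the context meets.

    confidence: detector's confidence that the alert is a true attack (0-1).
    target_criticality: importance of the attacked asset (0-1).
    policy: list of (min_score, action) pairs, least disruptive first.
    """
    score = confidence * target_criticality
    chosen = "log_only"  # default: passive response when no threshold is met
    for min_score, action in policy:
        if score >= min_score:
            chosen = action  # escalate as higher thresholds are satisfied
    return chosen

# An example policy escalating from notification to blocking.
policy = [
    (0.2, "notify_admin"),
    (0.5, "rate_limit_source"),
    (0.8, "block_source_ip"),
]

print(choose_response(0.9, 0.95, policy))  # block_source_ip
print(choose_response(0.9, 0.3, policy))   # notify_admin
```

Keeping the thresholds in an editable policy table, rather than in code, is what lets an organisation tune how aggressively the responder acts on uncertain alerts.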

    Real-Time Client-Side Phishing Prevention

    In recent decades, researchers and companies have been working to deploy effective solutions that steer users away from phishing websites. These solutions are typically based on servers or blacklisting systems. Such approaches have several drawbacks: they compromise user privacy, rely on off-line analysis, are not robust against adaptive attacks, and do not provide much guidance to users in their warnings. To address these limitations, we developed a fast, real-time, client-side phishing prevention software tool that implements a phishing detection technique recently developed by Marchal et al. It extracts information from the visited webpage and detects whether it is a phishing page in order to warn the user. It can also detect the website that the phish is trying to mimic and propose a redirection to the legitimate domain. Furthermore, to attest to the validity of our solution, we performed two user studies to evaluate the usability of the interface and the program's impact on user experience.
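
One intuition behind client-side mimicry detection can be sketched in a few lines. This is a hypothetical illustration, not the feature set of Marchal et al.: it flags brand-like terms that a page emphasises but that do not appear in the domain actually serving it; the function name and sample URL are our own.

```python
import re
from urllib.parse import urlparse

def domain_term_mismatch(url, page_title):
    """Return title terms that do not match the serving domain's label."""
    host = urlparse(url).hostname or ""
    # Naive registered-domain label extraction (ignores multi-part TLDs
    # such as .co.uk; a real tool would use a public-suffix list).
    domain_label = host.split(".")[-2] if "." in host else host
    terms = re.findall(r"[a-z]+", page_title.lower())
    # Brand-like terms absent from the serving domain are a common signal
    # that the page is imitating another site.
    return [t for t in terms if len(t) > 3 and t not in domain_label]

print(domain_term_mismatch("http://secure-login.example.net/",
                           "PayPal Account Verification"))
# ['paypal', 'account', 'verification']
```

Because everything is computed from the page the browser already has, no URL or content needs to leave the client, which is the privacy advantage over server-side blacklists.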

    Cyber-Physical Threat Intelligence for Critical Infrastructures Security

    Modern critical infrastructures can be considered large-scale Cyber-Physical Systems (CPS). Therefore, when designing, implementing, and operating systems for Critical Infrastructure Protection (CIP), the boundaries between physical security and cybersecurity are blurred. Emerging systems for critical infrastructure security and protection must therefore consider integrated approaches that emphasize the interplay between cybersecurity and physical security techniques. Hence, there is a need for a new type of integrated security intelligence, i.e., Cyber-Physical Threat Intelligence (CPTI). This book presents novel solutions for integrated Cyber-Physical Threat Intelligence for infrastructures in various sectors, such as industrial sites and plants, air transport, gas, healthcare, and finance. The solutions rely on novel methods and technologies, such as integrated modelling for cyber-physical systems, novel reliance indicators, and data-driven approaches including Big Data analytics and Artificial Intelligence (AI). Some of the presented approaches are sector-agnostic, i.e., applicable to different sectors with a fair customization effort. Nevertheless, the book also presents challenges peculiar to specific sectors and how they can be addressed. The presented solutions consider the European policy context for security, cybersecurity, and critical infrastructure protection, as laid out by the European Commission (EC) to support its Member States in protecting and ensuring the resilience of their critical infrastructures. Most of the co-authors and contributors are from European research and technology organizations, as well as from European critical infrastructure operators. Hence, the presented solutions respect the European approach to CIP, as reflected in the pillars of the European policy framework. 
The latter includes, for example, the Directive on security of network and information systems (NIS Directive), the Directive on protecting European Critical Infrastructures, the General Data Protection Regulation (GDPR), and the Cybersecurity Act Regulation. The sector-specific solutions described in the book have been developed and validated in the scope of several European Commission (EC) co-funded projects on Critical Infrastructure Protection (CIP), which focus on the listed sectors. Overall, the book illustrates a rich set of systems, technologies, and applications that critical infrastructure operators could consult to shape their future strategies. It also provides a catalogue of CPTI case studies in different sectors, which could be useful for security consultants and practitioners as well.


    CHORUS Deliverable 2.1: State of the Art on Multimedia Search Engines

    Based on the information provided by European projects and national initiatives related to multimedia search, as well as by domain experts who participated in the CHORUS think-tanks and workshops, this document reports on the state of the art in multimedia content search from a technical and socio-economic perspective. The technical perspective includes an up-to-date view of content-based indexing and retrieval technologies, multimedia search in the context of mobile devices and peer-to-peer networks, and an overview of current evaluation and benchmark initiatives that measure the performance of multimedia search engines. From a socio-economic perspective, we take stock of the impact and legal consequences of these technical advances and point out future directions of research.