
    Computer Forensics Community : A Case Study

    Computer forensics is a multidisciplinary program which requires a specific curriculum design. Many researchers have focused their attention on the creation and development of such curricula in order to improve teaching standards in computer forensics (Bashir M. & Campbell R., 2015; Kessler C. & Schirling E., 2006). Educators in computer forensics face many challenges, both educational and technical, such as balancing training and education (Cooper et al., 2010; Gottschalk et al., 2005), the lack of an adequate textbook on digital forensics (Liu, 2006), finding qualified faculty (Gottschalk et al., 2005; Liu, 2006), lab setup (Gottschalk et al., 2005; Liu, 2006), and selecting appropriate prerequisites (Chi et al., 2010; Liu, 2006). Although, in theory, an advanced computer forensics curriculum should improve students' engagement with the program, and there have been substantial advances in novel research and in publicly available information and tools, these efforts can be complemented and enhanced by introducing extracurricular activities (i.e. working directly alongside students to improve their engagement). This study focuses on student engagement as well as on continually improving the computer forensics curriculum, since in practice neither can be achieved without taking the other into account. Over the last three years, computer forensics and security students at Sheffield Hallam University have been involved in many initiatives and activities designed to enhance their learning outcomes. Most of these activities were based on the findings of the research carried out by Bagher Zaheh P. & Zargari S. (2015), in which a few factors were found to be quite significant for student engagement. Briefly, these factors indicate that students become more engaged with the program when the teaching materials are interesting (and hands-on) and related to industry, and when the tutors are enthusiastic and caring.
In order to embed these factors into the program, a SHU computer forensics community (comprising current students and graduates from previous years, creating a sense of belonging) was formed, in which students act as teaching assistants. The students work alongside the tutors to develop different lab activities, as well as taking responsibility for supporting themselves (taking ownership of their learning) and students in lower years. The Computer Expert Witness (CEW) module, taught in the final year, is a typical example: students are put into pairs and develop lab activities for students at lower levels. They are required to design and deliver sophisticated practical forensics activities in such a way that the attendees, who are (or are assumed to be) non-technical individuals, can understand and carry out the lab activities. The attendees act as a jury and ask questions. The students taking the CEW module therefore gain some experience of being expert witnesses, while the students at lower levels become involved in extra activities that enhance their learning experience. Moreover, the involvement of forensics experts from industry with the course has increased students' motivation and, as a result, placement and employment rates have increased substantially over the last two years. From an employability perspective, lack of work experience is one of the challenges that computer forensics graduates face; therefore, in order to provide an environment in which to gain work experience, an IT company called SHU IT & Forensics Services was set up by the students, providing different IT services within and outside the university. This is an ongoing project and one of the most favoured by the forensics students at Hallam.

    Using open source forensic carving tools on split dd and EWF files.

    This study tests a number of open source forensic carving tools to determine their viability when run across split raw forensic images (dd) and Expert Witness Compression Format (EWF) images. This is done by carving files from a raw dd file to establish a baseline before running each tool over the different image types and analysing the results. A framework is then written in Python to allow Scalpel to be run across any split dd image, while simultaneously concatenating the carved files and sorting them by file type. This study tests the framework in a number of scenarios and concludes that this is an effective method of carving files with Scalpel over split dd images.
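The plumbing of such a framework can be sketched in Python. The snippet below is an illustrative sketch under our own assumptions, not the study's actual code: the function names are hypothetical, and Scalpel itself would be invoked as an external tool over the concatenated image produced here.

```python
import glob
import os
import shutil
import tempfile

def concatenate_split_dd(segment_glob, output_path):
    """Join split dd segments (e.g. image.001, image.002, ...) in
    lexical order into a single raw image that a carving tool such
    as Scalpel can process."""
    segments = sorted(glob.glob(segment_glob))
    if not segments:
        raise FileNotFoundError(f"no segments match {segment_glob}")
    with open(output_path, "wb") as out:
        for seg in segments:
            with open(seg, "rb") as f:
                shutil.copyfileobj(f, out)
    return output_path

def sort_carved_by_type(carve_dir, sorted_dir):
    """Group carved output files into per-extension subdirectories."""
    os.makedirs(sorted_dir, exist_ok=True)
    for name in os.listdir(carve_dir):
        ext = os.path.splitext(name)[1].lstrip(".") or "unknown"
        dest = os.path.join(sorted_dir, ext)
        os.makedirs(dest, exist_ok=True)
        shutil.copy(os.path.join(carve_dir, name), os.path.join(dest, name))
```

Concatenating in lexical order relies on the usual `.001`, `.002`, ... segment naming; a real framework would also verify segment sizes before joining.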

    Feature Selection in UNSW-NB15 and KDDCUP’99 datasets

    Machine learning and data mining techniques have been widely used in recent years to improve network intrusion detection. These techniques make it possible to automate anomaly detection in network traffic. One of the major problems researchers face is the lack of published data available for research purposes. The KDD'99 dataset was used by researchers for over a decade, even though it suffered from some reported shortcomings and was criticised by several researchers. In 2009, Tavallaee M. et al. proposed a new dataset (NSL-KDD), extracted from the KDD'99 dataset, in order to improve it for research in anomaly detection. The UNSW-NB15 dataset is the latest published dataset, created in 2015 for research in intrusion detection. This research analyses the features included in the UNSW-NB15 dataset by employing machine learning techniques and exploring significant features (addressing the curse of high dimensionality) through which intrusion detection in network systems can be improved. Irrelevant and redundant features are therefore omitted from the dataset, resulting not only in faster training and testing but also in lower resource consumption, while maintaining high detection rates. A subset of features is proposed in this study and the findings are compared with previous work on feature selection in the KDD'99 dataset.
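A rough illustration of filter-style feature selection of this kind (not the exact method used in the study; the function name, threshold and scoring are our own assumptions) ranks features by relevance to the label and then drops redundant ones:

```python
import numpy as np

def select_features(X, y, k=5, redundancy_thresh=0.95):
    """Illustrative relevance/redundancy filter (hypothetical, not the
    paper's method): rank features by absolute correlation with the
    label, then skip any feature that is nearly collinear with one
    already selected."""
    relevance = np.array(
        [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    )
    selected = []
    for j in np.argsort(-relevance):          # most relevant first
        redundant = any(
            abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) > redundancy_thresh
            for s in selected
        )
        if redundant:
            continue                          # drop near-duplicate feature
        selected.append(int(j))
        if len(selected) == k:
            break
    return selected
```

Removing the redundant features first is what yields the faster training and lower resource consumption the abstract describes, since model cost typically grows with dimensionality.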

    An iterative multiple sampling method for intrusion detection

    Threats to network security increase with growing volumes and velocity of data across networks, and they present challenges not only to law enforcement agencies, but to businesses, families and individuals. The volume, velocity and veracity of shared data across networks call for accurate and reliable automated tools for separating useful data from malicious, noisy or irrelevant data. While data mining and machine learning techniques have been widely adopted within the network security community, challenges and gaps in knowledge extraction from data have remained due to insufficient data sources on attacks on which to test the algorithms' accuracy and reliability. We propose a data-flow adaptive approach to intrusion detection based on high-dimensional cyber-attack data. The algorithm repeatedly takes random samples from an inherently bi-modal, high-dimensional dataset of 82,332 observations on 25 numeric and two categorical variables. Its main idea is to capture subtle information resulting from the reduced data dimension of a large number of malicious flows, by iteratively estimating the roles played by individual variables in the construction of key components. Data visualisation and numerical results provide a clear separation of a set of variables associated with attack types and show that component-dominating parameters are crucial in monitoring future attacks.
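The idea of iteratively estimating each variable's role in the key components can be sketched as follows. This is a simplified illustration using PCA loadings on repeated random subsamples; the function name and parameters are our own assumptions, not the paper's algorithm.

```python
import numpy as np

def iterative_component_importance(X, n_iter=50, sample_size=500,
                                   n_components=2, seed=0):
    """Repeatedly draw random subsamples, eigen-decompose each sample's
    correlation matrix, and accumulate the absolute loadings of the
    leading components as a measure of each variable's role in the
    dominant structure (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    importance = np.zeros(X.shape[1])
    for _ in range(n_iter):
        idx = rng.choice(X.shape[0], size=min(sample_size, X.shape[0]),
                         replace=False)
        corr = np.corrcoef(X[idx], rowvar=False)
        vals, vecs = np.linalg.eigh(corr)
        # columns of the eigenvectors with the largest eigenvalues
        leading = vecs[:, np.argsort(vals)[::-1][:n_components]]
        importance += np.abs(leading).sum(axis=1)
    return importance / n_iter
```

Averaging over many subsamples is what makes the estimate stable against the random nature of any single sample, which is the core of the iterative multiple sampling idea.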

    A Repeated Sampling and Clustering Method for Intrusion Detection

    Various tools, methods and techniques have been developed in recent years to deal with intrusion detection and ensure network security. However, despite all these efforts, gaps remain, apparently due to insufficient data sources on attacks on which to train and test intrusion detection algorithms. We propose a data-flow adaptive method for intrusion detection based on searching a high-dimensional dataset for naturally arising structures. The algorithm is trained on a subset of 82,332 observations on 25 numeric variables and one cyber-attack label, and tested on another large subset of similar structure. Its novelty derives from the iterative estimation of cluster centroids, variability and proportions based on repeated sampling. Data visualisation and numerical results provide a clear separation of a set of variables associated with two types of attacks. We highlight the algorithm's potential extensions, including its suitability for predictive modelling and its adaptability to other dimension-reduction techniques.
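Estimating cluster centroids from repeated samples can be illustrated with a small k-means sketch; the function names, the farthest-point initialisation, and the first-coordinate cluster alignment below are all our own simplifying assumptions, not the paper's algorithm.

```python
import numpy as np

def _init_centroids(X, k, rng):
    # greedy farthest-point initialisation, for stable starting centroids
    centroids = [X[rng.integers(len(X))]]
    while len(centroids) < k:
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centroids], axis=0)
        centroids.append(X[np.argmax(d)])
    return np.array(centroids, dtype=float)

def kmeans(X, k, rng, n_iter=20):
    """Plain Lloyd's algorithm on one sample."""
    centroids = _init_centroids(X, k, rng)
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None, :] - centroids[None]) ** 2).sum(axis=2),
                           axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids

def repeated_sampling_centroids(X, k=2, n_rounds=10, sample_size=200, seed=0):
    """Average k-means centroids over repeated random subsamples,
    aligning clusters between rounds by their first coordinate
    (illustrative sketch of the repeated-sampling idea)."""
    rng = np.random.default_rng(seed)
    acc = np.zeros((k, X.shape[1]))
    for _ in range(n_rounds):
        idx = rng.choice(len(X), size=min(sample_size, len(X)), replace=False)
        c = kmeans(X[idx], k, rng)
        acc += c[np.argsort(c[:, 0])]   # match clusters across rounds
    return acc / n_rounds
```

Averaging centroids across rounds also gives a direct read on their variability, which is what lets the method report cluster stability as well as location.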

    Memory Forensics

    Memory forensics is rapidly becoming a critical part of all digital forensic investigations. The value of information stored within a computer’s memory is immense; failing to capture it could result in a substantial loss of evidence. However, it is becoming increasingly more common to find situations where standard memory acquisition tools do not work. The paper addresses how an investigator can capture the memory of a locked computer when authentication is not present. The proposed solution is to use a bootable memory acquisition tool, in this case, Passware Bootable Memory Imager. To enhance the findings, three different reboot methods will be tested to help identify what would happen if the recommended warm reboot is not possible. Using a warm reboot and a secure reboot, Passware Bootable Memory Imager was able to successfully acquire the memory of the locked machine, with the resulting captures being highly representative of the populated data. However, the memory samples collected after a cold reboot did not retain any populated data. These findings highlight that to capture the memory of a locked machine, the reboot method is highly successful, providing the correct method is followed.

    An overview and computer forensic challenges in image steganography

    With the development of powerful imaging tools, editing images to change their data content has become an easy task to undertake. Tampering with image content by adding, removing, or copying/moving elements without leaving a trace, or in a way that cannot be discovered by investigation, is a serious issue in the computer forensics world. The protection of information shared on the Internet, such as images and any other confidential information, is therefore very significant. Nowadays, the objective of forensic image investigation tools and techniques is to reveal tampering strategies and restore confidence in the reliability of digital media. This paper investigates the challenges of detecting steganography in computer forensics. Open source tools were used to analyse these challenges. The experimental investigation focuses on steganography applications that use the same algorithms to hide information exclusively within an image. The research findings indicate that if a certain steganography tool A is used to hide some information within a picture, then a tool B that uses the same procedure would not be able to recover the embedded image.
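As a minimal illustration of the class of algorithm under discussion, the toy sketch below shows least-significant-bit (LSB) embedding and extraction on a pixel array. This is our own illustrative code, not one of the tools tested; real tools differ in bit ordering, headers and keys, which is exactly why one implementation may fail to recover another's payload even when the underlying algorithm is "the same".

```python
import numpy as np

def lsb_embed(pixels, message: bytes):
    """Hide message bits in the least-significant bit of successive
    pixel values (toy illustration; pixels must be uint8)."""
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    flat = pixels.flatten().copy()
    if len(bits) > len(flat):
        raise ValueError("message too long for cover image")
    # clear each target LSB, then set it to the message bit
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits
    return flat.reshape(pixels.shape)

def lsb_extract(pixels, n_bytes):
    """Recover n_bytes by reading LSBs in the same order they
    were written."""
    bits = pixels.flatten()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()
```

Because each pixel changes by at most one intensity level, the stego image is visually indistinguishable from the cover, which is what makes detection, rather than recovery, the forensic challenge.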

    Sentiment aware fake news detection on online social networks

    Messages posted to online social networks (OSNs) have caused a recent stir due to the intentional spread of fake news and rumours. In this work, we aim to understand and analyse the characteristics of fake news, especially in relation to sentiment, to support the automatic detection of fake news and rumours. Based on empirical observation, we propose the hypothesis that there exists a relation between a fake message/rumour and the sentiment of the texts posted online. We verify our hypothesis by comparing against state-of-the-art baseline text-only fake news detection methods that do not consider sentiment. We performed experiments on a standard Twitter fake news dataset and show good improvements in detecting fake news/rumours.
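A minimal sketch of a sentiment-aware representation might augment text-only features with a polarity score, as below. The lexicons, function names and scoring here are entirely hypothetical illustrations of the idea, not the paper's method.

```python
# Hypothetical toy lexicons, for illustration only.
POSITIVE = {"good", "great", "true", "confirmed"}
NEGATIVE = {"shocking", "fake", "hoax", "outrage", "unbelievable"}

def sentiment_score(text):
    """Lexicon-based polarity in [-1, 1]: (pos - neg) / matched words,
    or 0.0 when no lexicon word appears."""
    tokens = text.lower().split()
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    matched = pos + neg
    return 0.0 if matched == 0 else (pos - neg) / matched

def features(text, vocabulary):
    """Text-only bag-of-words counts, augmented with the sentiment
    score as one extra dimension for a downstream classifier."""
    tokens = text.lower().split()
    return [tokens.count(w) for w in vocabulary] + [sentiment_score(text)]
```

The extra dimension is what distinguishes this from the text-only baselines the paper compares against: the classifier can exploit any systematic polarity skew in fake posts.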

    An Ensemble Method for Intrusion Detection with Conformity to Data Variability

    The high volume of traffic across modern networks entails the use of accurate and reliable automated tools for intrusion detection. The capacity of data mining and machine learning algorithms to learn rules from data is typically constrained by the random nature of training and test data, the diversity and disparity of models and related parameters, and limitations in data sharing. We propose an ensemble method for intrusion detection which conforms to variability in data. It is trained on a high-dimensional cyber-attack dataset of 82,332 observations on 27 attributes, with classification by Decision Trees (DT). Its novelty derives from iteratively training and testing several DT models on multiple high-dimensional samples aimed at separating the types of attacks. Unlike Random Forests, the number of variables, p, is not altered, which enables identification of the importance of predictor variables. It also minimises the influence of multicollinearity and of the strength of individual trees. Results show that the ensemble model conforms to data variability and yields more insightful predictions on multinomial targets.