    Hepatitis C Screening in the Baby Boomer Population

    Field development strategies for Bakken shale formation

    The Bakken shale has received growing attention over the last decade. Recently released reports on the high potential of the Bakken formation, coupled with advances in horizontal drilling, have increased oil companies' interest in investing in this field. The Bakken formation comprises three members; in this study the upper and middle members are the focus of attention. The middle member, believed to hold the main reserve, is mostly limestone, while the upper member is a black shale that acts as source and seal and has itself been produced in some areas.

    In this study, we apply the Top-Down Intelligent Reservoir Modeling technique to part of the Bakken shale formation in the Williston Basin of North Dakota, following two different Top-Down approaches for building reservoir models: Static Reservoir Modeling and Spontaneous History Matching-Predictive Modeling. This technique combines conventional reservoir engineering methods, data mining and artificial intelligence to analyze the available data and build a full-field model that can be used for field development. Unlike conventional reservoir simulation techniques, which require a wide range of reservoir characteristics and geological data, Top-Down modeling uses publicly available data (minimum required data: production data and well logs) to generate the reservoir model. Model accuracy can be enhanced as more detailed data become available, and the model can be used to propose development strategies.

    Static and predictive reservoir models for the Bakken shale formation are developed. The static reservoir model is used to identify remaining reserves and sweet spots that can help operators select infill locations, and an economic analysis of some proposed new wells is performed. The intelligent predictive model was trained, calibrated and verified using production, log and completion data. The history-matched predictive model can be further used for predicting the production
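
    The abstract does not spell out the learning machinery behind the Top-Down models, so the sketch below is only a loose, hypothetical illustration of the data-driven workflow it describes: fit a predictive model on per-well log and completion attributes against a production target, then rank candidate locations by predicted recovery to flag sweet spots. The feature names, the synthetic stand-in data and the choice of a gradient-boosting regressor are assumptions for illustration, not the authors' method.

        # Hypothetical illustration of a data-driven ("Top-Down") workflow:
        # fit a regressor on per-well log/completion attributes vs. production,
        # then rank candidate locations by predicted recovery.
        import numpy as np
        import pandas as pd
        from sklearn.ensemble import GradientBoostingRegressor
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import r2_score

        rng = np.random.default_rng(0)
        n = 300                                            # synthetic stand-in for real well data
        wells = pd.DataFrame({
            "gamma_ray":      rng.normal(90, 15, n),       # assumed log attribute
            "porosity":       rng.uniform(0.03, 0.10, n),  # assumed log attribute
            "lateral_length": rng.uniform(1500, 3000, n),  # assumed completion attribute
            "stage_count":    rng.integers(10, 40, n),     # assumed completion attribute
        })
        # synthetic production target standing in for reported cumulative oil
        wells["cum_oil"] = (4000 * wells["porosity"] + 0.2 * wells["lateral_length"]
                            + 15 * wells["stage_count"] + rng.normal(0, 50, n))

        features = ["gamma_ray", "porosity", "lateral_length", "stage_count"]
        X_train, X_test, y_train, y_test = train_test_split(
            wells[features], wells["cum_oil"], test_size=0.3, random_state=0)

        model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
        print("hold-out R^2:", round(r2_score(y_test, model.predict(X_test)), 3))

        # Score candidate locations (same feature columns) to highlight
        # potential "sweet spots" with the highest predicted recovery.
        candidates = wells[features].sample(5, random_state=1)
        print(candidates.assign(pred=model.predict(candidates))
                        .sort_values("pred", ascending=False))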

    Using open source forensic carving tools on split dd and EWF files.

    This study tests a number of open source forensic carving tools to determine their viability when run across split raw forensic images (dd) and Expert Witness Compression Format (EWF) images. This is done by carving files from a single raw dd file to establish a baseline before running each tool over the different image types and analysing the results. A framework is then written in Python to allow Scalpel to be run across any split dd image, whilst simultaneously concatenating the carved files and sorting them by file type. The framework is tested on a number of scenarios, and the study concludes that this is an effective method of carving files with Scalpel over split dd images
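
    As a rough illustration of the kind of framework the abstract describes (not the authors' actual code), the sketch below reassembles a split raw image, runs Scalpel over the merged file via its command-line interface, and groups the carved output by extension. The segment naming scheme, the Scalpel configuration path and the output layout are assumptions.

        # Illustrative sketch: merge split dd segments, carve with Scalpel,
        # then sort the carved files by type.  Paths are hypothetical.
        import glob
        import shutil
        import subprocess
        from pathlib import Path

        def carve_split_dd(segment_glob, workdir):
            workdir = Path(workdir)
            workdir.mkdir(parents=True, exist_ok=True)

            # 1. Concatenate the split segments, in order, into one raw image.
            merged = workdir / "merged.dd"
            with open(merged, "wb") as out:
                for segment in sorted(glob.glob(segment_glob)):
                    with open(segment, "rb") as part:
                        shutil.copyfileobj(part, out)

            # 2. Run Scalpel on the merged image (-c config file, -o output dir).
            outdir = workdir / "carved"
            subprocess.run(["scalpel", "-c", "/etc/scalpel/scalpel.conf",
                            "-o", str(outdir), str(merged)], check=True)

            # 3. Sort the carved files into per-extension folders.
            for carved in outdir.rglob("*.*"):
                if carved.is_file() and carved.name != "audit.txt":
                    dest = workdir / "sorted" / carved.suffix.lstrip(".").lower()
                    dest.mkdir(parents=True, exist_ok=True)
                    shutil.copy2(carved, dest / carved.name)

        if __name__ == "__main__":
            carve_split_dd("evidence/image.dd.*", "case01")   # hypothetical evidence path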

    Work, aging, mental fatigue, and eye movement dynamics

    Computer Forensics Community: A Case Study

    Computer forensics is a multidisciplinary program that requires a specific curriculum design. Many researchers have focused on the creation and development of such curricula in order to improve teaching standards in computer forensics (Bashir M. & Campbell R., 2015; Kessler C. & Schirling E., 2006). Educators in computer forensics face many challenges, both educational and technical, such as balancing training and education (Cooper et al., 2010; Gottschalk et al., 2005), the lack of an adequate textbook on digital forensics (Liu, 2006), finding qualified faculty (Gottschalk et al., 2005; Liu, 2006), lab setup (Gottschalk et al., 2005; Liu, 2006), and selecting appropriate prerequisites (Chi et al., 2010; Liu, 2006). Although, in theory, an advanced computer forensics curriculum should improve students' engagement with the program, and although there have been substantial advancements in novel research and publicly available information and tools, these efforts can be complemented and enhanced by introducing extracurricular activities (i.e. working directly alongside the students to improve their engagement). This study focuses on students' engagement as well as on continually improving the computer forensics curriculum, since in practice neither can be achieved without taking the other into account. Over the last three years, computer forensics and security students at Sheffield Hallam University have been involved in many initiatives and activities intended to enhance their learning outcomes. Most of these activities were based on the findings of Bagher Zaheh P. & Zargari S. (2015), in which a few factors were found to be quite significant for students' engagement. Briefly, these factors indicate that students become more engaged with the program if the teaching materials are interesting (and hands-on) and related to industry, and if the tutors are enthusiastic and caring. In order to embed these factors into the program, a SHU computer forensics community (comprising current students and graduates from previous years, fostering a sense of belonging) was formed, in which students act as teaching assistants. The students work alongside the tutors to develop different lab activities and take responsibility for supporting themselves (taking ownership of their learning) and students in lower years. The Computer Expert Witness (CEW) module, taught in the final year, is a typical example: students are put into pairs and develop lab activities for students at lower levels. They are required to design and deliver sophisticated practical forensics activities in such a way that the attendees, who are (or are assumed to be) non-technical individuals, can understand and carry out the lab activities. The attendees act as a jury and ask questions. The students taking the CEW module therefore gain some experience of being expert witnesses, while students at lower levels become involved in extra activities that enhance their learning experience. Moreover, the involvement of industry forensics experts with the course has increased the students' motivation and, as a result, placement and employment rates have increased substantially over the last two years.
    From an employability perspective, lack of work experience is one of the challenges that computer forensics graduates face. Therefore, in order to provide an environment in which to gain work experience, an IT company called SHU IT & Forensics Services was set up by the students, providing different IT services within the university and outside it. This is an ongoing project and one of the most favoured by the forensics students at Hallam

    A Repeated Sampling and Clustering Method for Intrusion Detection

    Various tools, methods and techniques have been developed in recent years to deal with intrusion detection and ensure network security. Despite all these efforts, gaps remain, largely due to insufficient data sources on attacks with which to train and test intrusion detection algorithms. We propose a data-flow adaptive method for intrusion detection based on searching a high-dimensional dataset for naturally arising structures. The algorithm is trained on a subset of 82,332 observations on 25 numeric variables and one cyber-attack label, and tested on another large subset of similar structure. Its novelty derives from iterative estimation of cluster centroids, variability and proportions based on repeated sampling. Data visualisation and numerical results provide a clear separation of a set of variables associated with two types of attacks. We highlight the algorithm's potential extensions: its amenability to predictive modelling and its adaptation to other dimension-reduction techniques
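
    A minimal sketch of the repeated-sampling idea follows, under the assumption that plain k-means is fitted on each subsample and that two clusters are sought; the paper's own estimators of centroids, variability and proportions may differ. Centroids from each round are matched to a reference fit so that estimates can be averaged across rounds.

        # Sketch, not the paper's implementation: repeated sampling + k-means,
        # aggregating centroid, variability and proportion estimates across rounds.
        import numpy as np
        from scipy.optimize import linear_sum_assignment
        from sklearn.cluster import KMeans
        from sklearn.preprocessing import StandardScaler

        def repeated_sampling_clusters(X, k=2, n_rounds=50, sample_size=5000, seed=0):
            rng = np.random.default_rng(seed)
            X = StandardScaler().fit_transform(X)

            # Reference centroids from one full fit, used only to align cluster
            # labels between rounds (k-means labels are arbitrary run to run).
            reference = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X).cluster_centers_

            centroids = np.zeros((n_rounds, k, X.shape[1]))
            proportions = np.zeros((n_rounds, k))
            for r in range(n_rounds):
                idx = rng.choice(len(X), size=min(sample_size, len(X)), replace=False)
                km = KMeans(n_clusters=k, n_init=10, random_state=r).fit(X[idx])
                # One-to-one matching of fitted centroids to reference centroids.
                cost = np.linalg.norm(km.cluster_centers_[:, None, :] - reference[None, :, :], axis=2)
                fitted, ref = linear_sum_assignment(cost)
                for j, ref_j in zip(fitted, ref):
                    centroids[r, ref_j] = km.cluster_centers_[j]
                    proportions[r, ref_j] = np.mean(km.labels_ == j)

            # Repeated-sampling estimates: mean centroid, its spread across rounds,
            # and the average cluster proportions.
            return centroids.mean(axis=0), centroids.std(axis=0), proportions.mean(axis=0)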

    An iterative multiple sampling method for intrusion detection

    Threats to network security increase with the growing volume and velocity of data across networks, and they present challenges not only to law enforcement agencies but also to businesses, families and individuals. The volume, velocity and veracity of data shared across networks call for accurate and reliable automated tools for filtering useful data from malicious, noisy or irrelevant data. While data mining and machine learning techniques have been widely adopted within the network security community, challenges and gaps in knowledge extraction from data remain, due to insufficient data sources on attacks on which to test the algorithms' accuracy and reliability. We propose a data-flow adaptive approach to intrusion detection based on high-dimensional cyber-attack data. The algorithm repeatedly takes random samples from an inherently bi-modal, high-dimensional dataset of 82,332 observations on 25 numeric and two categorical variables. Its main idea is to capture subtle information from the reduced dimensionality of a large number of malicious flows by iteratively estimating the roles played by individual variables in the construction of key components. Data visualization and numerical results provide a clear separation of a set of variables associated with attack types and show that component-dominating parameters are crucial in monitoring future attacks
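
    The sketch below illustrates one plausible reading of this approach, assuming principal component analysis as the dimension-reduction step and averaged absolute loadings as the measure of each variable's role in the key components; the paper's exact procedure is not given in the abstract.

        # Sketch under stated assumptions: repeatedly sample the data, fit PCA on
        # each subsample, and average the absolute loadings to gauge how strongly
        # each variable contributes to the leading components.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        def variable_roles(X, feature_names, n_components=2, n_rounds=100,
                           sample_size=5000, seed=0):
            rng = np.random.default_rng(seed)
            X = StandardScaler().fit_transform(X)
            loadings = np.zeros((n_rounds, n_components, X.shape[1]))

            for r in range(n_rounds):
                idx = rng.choice(len(X), size=min(sample_size, len(X)), replace=False)
                pca = PCA(n_components=n_components).fit(X[idx])
                loadings[r] = np.abs(pca.components_)   # |loading| of each variable

            # Variables with consistently high average loadings dominate the key
            # components and are the ones to monitor for future attacks.
            mean_loadings = loadings.mean(axis=0)
            ranking = sorted(zip(feature_names, mean_loadings[0]),
                             key=lambda p: p[1], reverse=True)
            return mean_loadings, ranking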

    Feature Selection in UNSW-NB15 and KDDCUP’99 datasets

    Machine learning and data mining techniques have been widely used in recent years to improve network intrusion detection. These techniques make it possible to automate anomaly detection in network traffic. One of the major problems researchers face is the lack of published data available for research purposes. The KDD'99 dataset was used by researchers for over a decade, even though it suffered from reported shortcomings and was criticized by a few researchers. In 2009, Tavallaee M. et al. proposed a new dataset (NSL-KDD), extracted from the KDD'99 dataset, to improve it so that it could be used for research in anomaly detection. The UNSW-NB15 dataset is the latest published dataset, created in 2015 for research purposes in intrusion detection. This research analyses the features included in the UNSW-NB15 dataset by employing machine learning techniques and exploring the significant features (addressing the curse of high dimensionality) by which intrusion detection in network systems can be improved. The existing irrelevant and redundant features are therefore omitted from the dataset, resulting not only in a faster training and testing process but also in lower resource consumption, while maintaining high detection rates. A subset of features is proposed in this study and the findings are compared with previous work on feature selection in the KDD'99 dataset
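
    A hypothetical sketch of the feature-selection step: train a classifier on all numeric features, rank the features by importance, and retrain on a reduced subset to compare detection accuracy. The dataset file name and column names, the random-forest importance criterion and the top-10 cut-off are illustrative assumptions, not the study's reported procedure.

        # Illustrative feature selection on an assumed local copy of UNSW-NB15;
        # the file path and column names ("label", "attack_cat") are assumptions.
        import pandas as pd
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import accuracy_score

        df = pd.read_csv("UNSW_NB15_training-set.csv")
        X = df.drop(columns=["label", "attack_cat"], errors="ignore").select_dtypes("number")
        y = df["label"]

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                                  random_state=0, stratify=y)

        # Baseline: all numeric features.
        rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
        full_acc = accuracy_score(y_te, rf.predict(X_te))

        # Keep the most informative features and retrain on the reduced subset.
        importances = pd.Series(rf.feature_importances_, index=X.columns).sort_values(ascending=False)
        subset = importances.head(10).index
        rf_small = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr[subset], y_tr)
        small_acc = accuracy_score(y_te, rf_small.predict(X_te))

        print("all features:", round(full_acc, 4), "| top-10 subset:", round(small_acc, 4))
        print(list(subset))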