
    Convolutional Neural Networks for Deflate Data Encoding Classification of High Entropy File Fragments

    Data reconstruction is significantly improved, in both speed and accuracy, by reliable classification of data-encoding fragments. To date, work on this problem has succeeded with low-entropy file structures containing sparse data, such as large tables or logs. Classifying compressed, encrypted, and random data that exhibit high entropy is an inherently difficult problem that requires more advanced classification approaches. We explore the ability of convolutional neural networks and word embeddings to classify the Deflate data encoding of high-entropy file fragments, after establishing ground truth using controlled datasets. Our model is designed either to successfully classify file fragments that contain hidden patterns and high-dimensional features, or to fail gracefully if there are no patterns to recognize. Our experiments show high accuracies of 99.82%, 99.73%, and 99.60% when classifying BZ2, PNG, and GZ against JPEG file fragments, respectively.
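    The abstract above turns on the notion of a high-entropy fragment. As a point of reference, here is a minimal sketch of the standard Shannon estimate of bits per byte (the function name and sample fragments are illustrative, not taken from the paper):

```python
import math
from collections import Counter

def shannon_entropy(fragment: bytes) -> float:
    """Shannon entropy of a byte fragment, in bits per byte (0.0 to 8.0)."""
    if not fragment:
        return 0.0
    n = len(fragment)
    counts = Counter(fragment)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

low = b"AAAA" * 1024            # repetitive, sparse content (e.g. a log)
high = bytes(range(256)) * 16   # every byte value equally likely

print(f"repetitive fragment: {shannon_entropy(low):.2f} bits/byte")   # 0.00
print(f"uniform fragment:    {shannon_entropy(high):.2f} bits/byte")  # 8.00
```

    Deflate-compressed, encrypted, and random fragments all sit close to 8 bits/byte, which is why byte statistics alone cannot separate them and learned features are needed.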

    Approaches to the classification of high entropy file fragments.

    In this paper we propose novel approaches to the problem of classifying high-entropy file fragments, achieving 97% correct classification for encrypted fragments and 78% for compressed ones. Although classification of file fragments is central to the science of digital forensics, high-entropy types have been regarded as a problem: Roussev and Garfinkel [1] argue that existing methods will not work on high-entropy fragments because they have no discernible patterns to exploit. We propose two methods that do not rely on such patterns. First, the NIST statistical test suite is used to detect randomness in 4 KB fragments; the test results are analysed using Support Vector Machines, k-Nearest-Neighbour analysis, and Artificial Neural Networks (ANNs), and we compare the performance of each of these analysis methods. Optimum results were obtained using an ANN, giving 94% and 74% correct classification for encrypted and compressed fragments respectively. Second, we use the compressibility of a fragment as a measure of its randomness; correct classification was 76% and 70% for encrypted and compressed fragments respectively. Although this method gave poorer results for encrypted fragments, we believe it has more potential for future work. We have used subsets of the publicly available GovDocs1 million-file corpus so that future research may make valid comparisons with the results obtained here.
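    The compressibility measure described above can be sketched with a general-purpose compressor. This is an illustration of the idea only, not the authors' implementation; the stand-in fragments are synthetic:

```python
import os
import zlib

def compression_ratio(fragment: bytes) -> float:
    """Compressed size over original size; a ratio near (or above) 1.0
    suggests random-looking data that a compressor cannot shrink."""
    return len(zlib.compress(fragment, 9)) / len(fragment)

random_like = os.urandom(4096)                  # stand-in for an encrypted 4 KB fragment
structured = b"timestamp,event,user\n" * 200    # stand-in for a log-like fragment

print(f"random-like fragment: {compression_ratio(random_like):.2f}")
print(f"structured fragment:  {compression_ratio(structured):.2f}")
```

    Encrypted and already-compressed fragments both resist further compression, which is why this measure separates high-entropy data from plaintext far more easily than it separates encrypted data from compressed data.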

    Drawing Elena Ferrante's Profile. Workshop Proceedings, Padova, 7 September 2017

    Elena Ferrante is an internationally acclaimed Italian novelist whose real identity has been kept secret by E/O publishing house for more than 25 years. Owing to her popularity, major Italian and foreign newspapers have long tried to discover her real identity. However, only a few attempts have been made to foster a scientific debate on her work. In 2016, Arjuna Tuzzi and Michele Cortelazzo led an Italian research team that conducted a preliminary study and collected a well-founded, large corpus of Italian novels comprising 150 works published in the last 30 years by 40 different authors. Moreover, they shared their data with a select group of international experts on authorship attribution, profiling, and analysis of textual data: Maciej Eder and Jan Rybicki (Poland), Patrick Juola (United States), Vittorio Loreto and his research team, Margherita Lalli and Francesca Tria (Italy), George Mikros (Greece), Pierre Ratinaud (France), and Jacques Savoy (Switzerland). The chapters of this volume report the results of this endeavour, first presented during the international workshop Drawing Elena Ferrante's Profile in Padua on 7 September 2017 as part of the 3rd IQLA-GIAT Summer School in Quantitative Analysis of Textual Data. The fascinating research findings suggest that Elena Ferrante's work definitely deserves "many hands" as well as an extensive effort to understand her distinct writing style and the reasons for her worldwide success.

    Metatranscriptomes from diverse microbial communities: assessment of data reduction techniques for rigorous annotation

    Background: Metatranscriptome sequence data can contain highly redundant sequences from diverse populations of microbes, so data reduction techniques are often applied before taxonomic and functional annotation. For metagenomic data, it has been observed that variable coverage and the presence of closely related organisms can lead to fragmented assemblies containing chimeric contigs that may reduce the accuracy of downstream analyses, and some advocate the use of alternative data reduction techniques. However, it is unclear how such techniques affect the annotation of metatranscriptome data and thus the interpretation of the results. Results: To investigate the effect of such techniques on the annotation of metatranscriptome data, we assess two commonly employed methods: clustering and de novo assembly. To do this, we also developed an approach to simulate 454 and Illumina metatranscriptome data sets with varying degrees of taxonomic diversity. For the Illumina simulations, we found that a two-step approach of assembly followed by clustering of contigs and unassembled sequences produced the most accurate reflection of the real protein domain content of the sample. For the 454 simulations, the combined annotation of contigs and unassembled reads produced the most accurate protein domain annotations. Conclusions: Based on these data, we recommend that assembly be attempted and that unassembled reads be included in the final annotation of metatranscriptome data, even from highly diverse environments, as the resulting annotations should more accurately reflect the transcriptional behaviour of the microbial population under investigation.

    PDF-Malware Detection: A Survey and Taxonomy of Current Techniques

    Portable Document Format, more commonly known as PDF, has become, in the last 20 years, a standard for document exchange and dissemination due to its portable nature and widespread adoption. The flexibility and power of this format are leveraged not only by benign users but also by hackers, who have worked to exploit various types of vulnerabilities and overcome security restrictions, turning PDF into one of the leading vectors for spreading malicious code. Analyzing the content of malicious PDF files to extract the main features that characterize the malware's identity and behavior is a fundamental task for modern threat intelligence platforms that need to learn how to automatically identify new attacks. This paper surveys the state of the art in systems for the detection of malicious PDF files and organizes them into a taxonomy that separately considers the approaches used and the data analyzed to detect the presence of malicious code. © Springer International Publishing AG, part of Springer Nature 2018
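    Many of the surveyed detectors rely on static features extracted from the raw PDF bytes, such as counts of suspicious name objects. A minimal sketch of that extraction step follows; the keyword list and the tiny synthetic sample are illustrative, not taken from any surveyed system:

```python
import re

# Name objects often counted as static features in PDF-malware detectors.
# This particular feature set is an assumption for illustration.
SUSPICIOUS = [rb"/JavaScript\b", rb"/JS\b", rb"/OpenAction\b",
              rb"/Launch\b", rb"/EmbeddedFile\b"]

def keyword_features(pdf_bytes: bytes) -> dict:
    """Count occurrences of each suspicious PDF name in the raw bytes.
    Word boundaries keep /JS from also matching inside /JavaScript."""
    return {pat.decode()[:-2]: len(re.findall(pat, pdf_bytes))
            for pat in SUSPICIOUS}

# A tiny synthetic PDF-like byte string, for demonstration only.
sample = (b"%PDF-1.7\n"
          b"1 0 obj << /OpenAction << /S /JavaScript /JS (app.alert(1)) >> >>\n"
          b"endobj")
print(keyword_features(sample))
```

    Real systems pair such byte-level features with structural analysis of the object tree, since attackers can obfuscate names with PDF escape sequences to evade naive string matching.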

    File Fragment Classification Using Neural Networks with Lossless Representations

    This study explores the use of neural networks as universal models for classifying file fragments. This approach differs from previous work in its lossless feature representation, with fragments’ bits as direct input, and in its use of feedforward, recurrent, and convolutional networks as classifiers, whereas previous work has tested only feedforward networks. Due to the study’s exploratory nature, the models were not directly evaluated in a practical setting; rather, easily reproducible experiments were performed to answer the initial question of whether this approach is worth pursuing further, especially given its high computational cost. The experiments tested classification of fragments of homogeneous file types as an idealized case, rather than using a realistic set of types, because the types of interest are highly application-dependent. The recurrent networks achieved 98 percent accuracy in distinguishing four file types, suggesting that this approach may yield models with sufficient performance for practical applications. The potential applications depend mainly on the performance gains achievable by future work, but include binary mapping, deep packet inspection, and file carving.
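    The lossless representation described above amounts to feeding the fragment's bits directly to the network, rather than lossy summaries such as byte histograms or n-gram counts. A sketch of that encoding step (the helper name is ours, not the paper's):

```python
def bits(fragment: bytes) -> list[int]:
    """Expand each byte into its 8 bits, most significant first.
    Unlike histogram or n-gram features, no information is discarded:
    the original fragment can be reconstructed exactly from the output."""
    return [(byte >> shift) & 1
            for byte in fragment
            for shift in range(7, -1, -1)]

fragment = bytes([0b10100101, 0xFF])
print(bits(fragment))  # [1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1]
```

    The cost of losslessness is input size: a 4 KB fragment becomes a 32,768-element input vector, which is where the high computational cost noted in the abstract comes from.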

    SIFT -- File Fragment Classification Without Metadata

    A vital issue in file carving for digital forensics is type classification of file fragments when the filesystem metadata is missing. Over the past decades, there have been several efforts to develop methods for classifying file fragments. In this research, a novel sifting approach, named SIFT (Sifting File Types), is proposed. SIFT outperforms the other state-of-the-art techniques by at least 8%. (1) One significant difference between SIFT and the others is that SIFT uses a single byte as a separate feature, i.e., a total of 256 (0x00 - 0xFF) features. We also call this lossless feature (information) extraction, i.e., there is no loss of information. (2) The other significant difference is the technique used to estimate the inter-class and intra-class information gain of a feature. Unlike others, SIFT adapts TF-IDF for this purpose, computing and assigning a weight to each byte (feature) in a fragment (sample). With these differences and approaches, SIFT produces promising results compared to other works.
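    SIFT's adaptation of TF-IDF to byte-level features can be sketched by treating each fragment as a "document" and each of the 256 byte values as a "term". This is a sketch in the spirit of the abstract; the paper's exact weighting scheme may differ:

```python
import math
from collections import Counter

def tfidf_weights(fragments: list) -> list:
    """Per-fragment TF-IDF weights over the 256 byte-value features.
    TF is the byte's frequency within a fragment; IDF down-weights byte
    values that occur in many fragments and so discriminate poorly."""
    n = len(fragments)
    df = Counter()                       # document frequency of each byte value
    for frag in fragments:
        df.update(set(frag))
    weights = []
    for frag in fragments:
        tf = Counter(frag)
        weights.append({b: (tf[b] / len(frag)) * math.log(n / df[b])
                        for b in tf})
    return weights

# Three toy 3-byte "fragments"; real fragments would be e.g. 4 KB.
frags = [b"\x00\x00\x01", b"\x01\x01\x02", b"\x02\x02\x00"]
for i, w in enumerate(tfidf_weights(frags)):
    print(f"fragment {i}: {w}")
```

    The resulting weight vectors, one per fragment, would then feed whatever classifier sits downstream.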

    Structural alphabets derived from attractors in conformational space

    Background: The hierarchical and partially redundant nature of protein structures justifies the definition of frequently occurring conformations of short fragments as 'states'. Collections of selected representatives for these states define Structural Alphabets, describing the most typical local conformations within protein structures. These alphabets form a bridge between the string-oriented methods of sequence analysis and the coordinate-oriented methods of protein structure analysis. Results: A Structural Alphabet has been derived by clustering all four-residue fragments of a high-resolution subset of the Protein Data Bank and extracting the high-density states as representative conformational states. Each fragment is uniquely defined by a set of three independent angles corresponding to its degrees of freedom, capturing in simple and intuitive terms the properties of the conformational space. The fragments of the Structural Alphabet are equivalent to the conformational attractors and therefore yield a highly informative encoding of proteins. Proteins can be reconstructed within the experimental uncertainty of structure determination, and ensembles of structures can be encoded with accuracy and robustness. Conclusions: The density-based Structural Alphabet provides a novel tool to describe local conformations and is specifically suitable for application in studies of protein dynamics. © 2010 Pandini et al; licensee BioMed Central Ltd