
    An image representation based convolutional network for DNA classification

    The folding structure of the DNA molecule together with its helper molecules, jointly referred to as chromatin, is highly relevant to the functional properties of DNA. The chromatin structure is largely determined by the underlying primary DNA sequence, though the interaction is not yet fully understood. In this paper we develop a convolutional neural network that takes an image representation of the primary DNA sequence as its input and predicts key determinants of chromatin structure. The method is designed to detect interactions between distal elements in the DNA sequence, which are known to be highly relevant. Our experiments show that the method outperforms several existing methods in terms of both prediction accuracy and training time.
    Comment: Published at ICLR 2018, https://openreview.net/pdf?id=HJvvRoe0
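
    As a rough illustration of the input side of such a model, the sketch below one-hot encodes a DNA sequence and folds it row-wise into a small 2D "image" with one channel per base. The row-wise folding and the width of 32 are assumptions made for brevity; the paper's own mapping (designed to keep distal sequence elements nearby in 2D) differs.

        import numpy as np

        BASES = "ACGT"

        def dna_to_image(seq, width=32):
            """One-hot encode a DNA sequence and fold it row-wise into a
            (height, width, 4) array usable as CNN input. Row-wise folding
            is an illustrative simplification of the paper's mapping."""
            onehot = np.zeros((len(seq), 4), dtype=np.float32)
            for i, base in enumerate(seq.upper()):
                if base in BASES:                 # ambiguous bases stay all-zero
                    onehot[i, BASES.index(base)] = 1.0
            height = -(-len(seq) // width)        # ceiling division
            image = np.zeros((height * width, 4), dtype=np.float32)
            image[:len(seq)] = onehot
            return image.reshape(height, width, 4)

        print(dna_to_image("ACGTACGTTTGCA" * 40).shape)  # (17, 32, 4)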

    N-gram analysis of 970 microbial organisms reveals presence of biological language models

    Background: It has been suggested previously that genome and proteome sequences show characteristics typical of natural-language texts, such as "signature-style" word usage indicative of authors or topics, and that algorithms originally developed for natural language processing may therefore be applied to genome sequences to draw biologically relevant conclusions. Following this approach of 'biological language modeling', statistical n-gram analysis has been applied to the comparative analysis of whole proteome sequences of 44 organisms. It has been shown that a few particular amino acid n-grams are found in abundance in one organism but occur very rarely in others, thereby serving as genome signatures. At that time, proteomes of only 44 organisms were available, limiting the generalization of this hypothesis. Today nearly 1,000 genome sequences and corresponding translated sequences are available, making it feasible to test the existence of biological language models across the evolutionary tree.

    Results: We studied whole proteome sequences of 970 microbial organisms using n-gram frequencies and cross-perplexity, employing the Biological Language Modeling Toolkit and the Patternix Revelio toolkit. Genus-specific signatures were observed even in a simple unigram distribution. By taking the statistical n-gram model of one organism as a reference and computing the cross-perplexity of all other microbial proteomes against it, cross-perplexity was found to be predictive of branch distance in the phylogenetic tree. For example, a 4-gram model from the proteome of Shigella flexneri 2a, which belongs to the Gammaproteobacteria class, showed a self-perplexity of 15.34, while the cross-perplexity of other organisms was in the range of 15.59 to 29.5 and was proportional to their branching distance from S. flexneri in the evolutionary tree. The organisms of this genus, which happen to be pathotypes of E. coli, also have the perplexity values closest to E. coli.

    Conclusion: Whole proteome sequences of microbial organisms have been shown to contain particular n-gram sequences in abundance in one organism but very rarely in others, thereby serving as proteome signatures. Further, it has been shown that perplexity, a statistical measure of similarity of n-gram composition, can be used to predict evolutionary distance within a genus in the phylogenetic tree.
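
    A minimal sketch of the cross-perplexity computation the abstract describes, under assumptions of my own (add-one smoothing over a 20-letter amino acid alphabet, perplexity as exponentiated cross-entropy); the toolkits named above may normalize differently, and the proteome strings below are stand-ins, not real data.

        import math
        from collections import Counter

        def ngrams(seq, n):
            return [seq[i:i + n] for i in range(len(seq) - n + 1)]

        def ngram_model(proteome, n, alpha=1.0):
            """Add-alpha-smoothed n-gram probability function."""
            counts = Counter(ngrams(proteome, n))
            total = sum(counts.values())
            vocab = 20 ** n                        # 20 standard amino acids
            return lambda g: (counts[g] + alpha) / (total + alpha * vocab)

        def cross_perplexity(model, proteome, n):
            """Perplexity of `proteome` under `model` (exp of cross-entropy)."""
            grams = ngrams(proteome, n)
            log_prob = sum(math.log(model(g)) for g in grams)
            return math.exp(-log_prob / len(grams))

        reference = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ" * 50   # stand-in proteome
        query = "MSSKLLVAGAAGLALASSAQAAEVYNKDGNKLD" * 50       # stand-in proteome
        model = ngram_model(reference, n=4)
        print(cross_perplexity(model, reference, 4))   # self-perplexity
        print(cross_perplexity(model, query, 4))       # cross-perplexity (higher)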

    Genome classification by gene distribution: An overlapping subspace clustering approach

    Background: Genomes of lower organisms exhibit a large number of horizontal gene transfers, which cause difficulties in their evolutionary study. Bacteriophage genomes are a typical example. One recent approach that addresses this problem is the unsupervised clustering of genomes based on gene order and genome position, which helps reveal species relationships that may not be apparent from traditional phylogenetic methods.

    Results: We propose the use of an overlapping subspace clustering algorithm for such genome classification problems. The advantage of subspace clustering over traditional clustering is that it can associate clusters with gene arrangement patterns, preserving genomic information in the clusters produced. Additionally, overlapping capability is desirable for the discovery of multiple conserved patterns within a single genome, such as those acquired from different species via horizontal gene transfer. The proposed method involves a novel strategy to vectorize genomes based on their gene distribution. A number of existing subspace clustering and biclustering algorithms were evaluated to identify the best framework upon which to develop our algorithm; we extended a generic subspace clustering algorithm called HARP to incorporate overlapping capability. The proposed algorithm was assessed and applied to bacteriophage genomes. The phage grouping results are consistent overall with the Phage Proteomic Tree and reveal common genomic characteristics among the TP901-like, Sfi21-like and sk1-like phage groups. Among 441 phage genomes, we identified four significantly conserved distribution patterns structured by the terminase, portal, integrase, holin and lysin genes. We also observed a subgroup of Sfi21-like phages with a distinctively divergent genome organization, and identified nine new phage members of the Sfi21-like genus: Staphylococcus 71, phiPVL108, Listeria A118, 2389, Lactobacillus phi AT3, A2, Clostridium phi3626, Geobacillus GBSV1, and Listeria monocytogenes PSA.

    Conclusion: The method described in this paper can assist evolutionary study by objectively classifying genomes based on their resemblance in gene order, gene content and gene positions. The method is suitable for genomes with high genetic exchange and varied conserved gene arrangements, as demonstrated by our application to phages.
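
    One plausible reading of the vectorization step, sketched below with assumptions of my own: each genome becomes a vector of relative gene positions indexed by gene family (the families named in the abstract), with a missing-value marker where a family is absent. Subspace clusters over such vectors then correspond to shared gene-arrangement patterns; the paper's actual encoding and its HARP extension are more involved.

        import math

        GENE_FAMILIES = ["terminase", "portal", "integrase", "holin", "lysin"]

        def gene_position_vector(genome_length, gene_coords):
            """Map a genome to relative gene positions, one slot per family;
            NaN marks families absent from the genome."""
            return [gene_coords[f] / genome_length if f in gene_coords else math.nan
                    for f in GENE_FAMILIES]

        # Hypothetical phage annotations: family -> start coordinate (bp).
        phage_a = gene_position_vector(40000, {"terminase": 1200, "portal": 4800, "lysin": 30500})
        phage_b = gene_position_vector(38000, {"terminase": 1100, "portal": 4500, "holin": 28900})
        print(phage_a)
        print(phage_b)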

    Evolving ribonucleocapsid assembly/packaging signals in the genomes of the human and animal coronaviruses: targeting, transmission and evolution

    The worldwide COVID-19 pandemic has strongly intensified the study of molecular mechanisms related to coronaviruses. The origin of coronaviruses and the risks of human-to-human, animal-to-human, and human-to-animal transmission of coronaviral infections can be understood only at a broader evolutionary level, through detailed comparative studies. In this paper, we studied ribonucleocapsid assembly-packaging signals (RNAPS) in the genomes of all seven known pathogenic human coronaviruses, SARS-CoV, SARS-CoV-2, MERS-CoV, HCoV-OC43, HCoV-HKU1, HCoV-229E, and HCoV-NL63, and compared them with RNAPS in the genomes of related animal coronaviruses, including SARS-Bat-CoV, MERS-Camel-CoV, MHV, Bat-CoV MOP1, TGEV, and one of the camel alphacoronaviruses. RNAPS in coronavirus genomes evolved through weakly specific interactions between genomic RNA and N proteins in helical nucleocapsids. Combining transitional genome mapping with Jaccard correlation coefficients allowed us to perform the analysis directly in terms of the underlying motifs distributed over the genome. In all coronaviruses, RNAPS were distributed quasi-periodically over the genome, with a period of about 54 nt, biased toward 57 nt and 51 nt for genomes longer and shorter than that of SARS-CoV, respectively. Comparison with the experimentally verified packaging signals for MERS-CoV, MHV, and TGEV showed that the distribution of particular motifs is strongly correlated with the packaging signals. We also found that many motifs were highly conserved in both character and position on the genomes throughout the lineages, which makes them promising therapeutic targets. The mechanisms of encapsidation can also affect recombination and co-infection.
    Comment: 40 pages, 12 figures
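
    As a rough sketch of the Jaccard-based motif comparison mentioned above (the motif length, window contents, and sequences are illustrative assumptions, not the paper's data): represent each genome region by its set of k-mer motifs and measure overlap with the Jaccard coefficient.

        def kmer_set(seq, k=8):
            """All k-mers occurring in a sequence region."""
            return {seq[i:i + k] for i in range(len(seq) - k + 1)}

        def jaccard(a, b):
            """Jaccard coefficient |A & B| / |A | B| of two motif sets."""
            union = a | b
            return len(a & b) / len(union) if union else 0.0

        # Hypothetical RNA windows; a real RNAPS analysis would slide such
        # windows over full genomes and track the coefficient's periodicity.
        window1 = "AUGGCUAGCUAGGUUUAACGGCUAAUGGCUAGCUAG"
        window2 = "AUGGCUAGCUAGGAUUAACGGCUAAUGGCAAGCUAG"
        print(jaccard(kmer_set(window1), kmer_set(window2)))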

    Logram: Efficient Log Parsing Using n-Gram Dictionaries

    Software systems usually record important runtime information in their logs. Logs help practitioners understand system runtime behaviors and diagnose field failures. As logs are usually very large, automated log analysis is needed to assist practitioners in their software operation and maintenance efforts. Typically, the first step of automated log analysis is log parsing, i.e., converting unstructured raw logs into structured data. Log parsing is challenging, however, because logs are produced by static templates in the source code (i.e., logging statements), yet the templates are usually inaccessible when the logs are parsed. Prior work proposed automated log parsing approaches that achieve high accuracy, but as the volume of logs grows rapidly in the era of cloud computing, efficiency becomes a major concern in log parsing. In this work, we propose an automated log parsing approach, Logram, which leverages n-gram dictionaries to achieve efficient log parsing. We evaluated Logram on 16 public log datasets and compared it with five state-of-the-art log parsing approaches. We found that Logram achieves parsing accuracy similar to the best existing approaches while outperforming them in efficiency (i.e., 1.8 to 5.1 times faster than the second-fastest approach). Furthermore, we deployed Logram on Spark and found that it scales out efficiently with the number of Spark nodes (e.g., with near-linear scalability) without sacrificing parsing accuracy. In addition, we demonstrated that Logram can support effective online parsing of logs, achieving parsing results and efficiency similar to the offline mode.
    Comment: 13 pages, IEEE journal format
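
    A compressed sketch of the n-gram dictionary idea, with thresholds and tokenization that are assumptions of mine: tokens that never occur in a frequent n-gram are treated as dynamic parameters, while frequent tokens form the static template. Logram itself looks up 3-grams and falls back to 2-grams; the uni-/bi-gram variant below only illustrates the principle.

        from collections import Counter

        def parse_logs(lines, threshold=2):
            """Replace tokens that occur only in rare n-grams with '<*>',
            keeping frequent tokens as the static template."""
            tokenized = [line.split() for line in lines]
            unigrams, bigrams = Counter(), Counter()
            for tokens in tokenized:
                unigrams.update(tokens)
                bigrams.update(zip(tokens, tokens[1:]))
            templates = []
            for tokens in tokenized:
                out = []
                for i, tok in enumerate(tokens):
                    # Static if a neighboring bigram or the token itself is frequent.
                    left = bigrams[(tokens[i - 1], tok)] if i > 0 else 0
                    right = bigrams[(tok, tokens[i + 1])] if i + 1 < len(tokens) else 0
                    static = max(left, right, unigrams[tok]) >= threshold
                    out.append(tok if static else "<*>")
                templates.append(" ".join(out))
            return templates

        logs = ["Connection from 10.0.0.1 closed",
                "Connection from 10.0.0.2 closed",
                "Connection from 10.0.0.3 closed"]
        print(parse_logs(logs))   # ['Connection from <*> closed', ...]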

    Online Defect Prediction for Imbalanced Data

    Many defect prediction techniques have been proposed to improve software reliability. Change classification predicts defects at the change level, where a change is the collection of modifications to one file in a commit. In this thesis, we conduct the first study of applying change classification in practice and share the lessons we learned. We identify two issues in the prediction process, both of which contribute to low prediction performance. First, the data are imbalanced: there are far fewer buggy changes than clean changes. Second, the commonly used cross-validation approach is inappropriate for evaluating the performance of change classification. To address these challenges, we apply and adapt online change classification to evaluate the predictions, and we use resampling and updatable classification techniques, as well as the removal of testing-related changes, to improve classification performance. We apply the improved change classification techniques to one proprietary and six open source projects. Our results show that resampling and updatable classification techniques improve the precision of change classification by 12.2–89.5%, or 6.4–34.8 percentage points (pp.), on the seven projects. Additionally, removing testing-related changes improves F1 by 62.2–3411.1%, or 19.4–61.4 pp., on the six open source projects while achieving comparable precision. Furthermore, we integrate change classification into the development process of the proprietary project. We have learned the following lessons: 1) new solutions are needed to convince developers to use and trust prediction results, and prediction results need to be actionable; 2) new and improved classification algorithms are needed to explain the prediction results, and nonsensical or unactionable explanations need to be filtered or refined; and 3) new techniques are needed to improve the relatively low precision.
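
    A hedged sketch of the two remedies, on synthetic data of my own (not the thesis's projects): a time-ordered train/test split standing in for online change classification, plus random oversampling of the minority (buggy) class before fitting a stand-in classifier.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)

        # Synthetic change-level data ordered by commit time; roughly 10%
        # of changes are buggy, mimicking the imbalance described above.
        X = rng.normal(size=(500, 2))
        y = (X[:, 0] + rng.normal(scale=0.5, size=500) > 1.3).astype(int)

        # Online-style evaluation: train only on earlier changes, test on
        # later ones (cross-validation would leak future data into training).
        split = 400
        X_train, y_train, X_test, y_test = X[:split], y[:split], X[split:], y[split:]

        # Random oversampling of the minority (buggy) class.
        buggy = np.flatnonzero(y_train == 1)
        extra = rng.choice(buggy, size=(y_train == 0).sum() - buggy.size, replace=True)
        X_bal = np.vstack([X_train, X_train[extra]])
        y_bal = np.concatenate([y_train, y_train[extra]])

        pred = LogisticRegression().fit(X_bal, y_bal).predict(X_test)
        tp = int(((pred == 1) & (y_test == 1)).sum())
        print("precision:", tp / max(int(pred.sum()), 1))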

    Machine learning techniques applied to improving attack detection in web applications

    Web application and service portals are often one of the entry points for launching attacks and other kinds of malicious activity against companies and many other types of organizations. From banks to e-commerce sites, and including the infrastructures of healthcare systems, the judicial system, and so on, the potential economic, reputational, data-leakage and other damage caused by an attack, not only to the organizations but also to the legitimate users of web applications and services, is incalculable. In an effort to provide an additional layer of protection against this type of attack, web protection techniques have been researched extensively, from a more classical approach based on protection rules that must be constantly updated to techniques based on anomaly detection, and the number of studies keeps growing. This thesis aims to help consolidate knowledge of anomaly detection techniques through three articles that contribute to the scientific community: the first systematic literature review of anomaly detection techniques applied to the protection of web applications; a new methodology for the objective comparison of web protection tools, whose applicability is demonstrated by comparing several WAF and RASP tools; and a new multi-label dataset used to train new classification model designs capable of identifying web attacks by CAPEC attack patterns.
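
    As a hedged sketch of how a multi-label CAPEC classifier could be trained on such a dataset (the requests, labels, features, and model below are illustrative assumptions, not the thesis's actual dataset or model designs): one-vs-rest logistic regression over character n-gram features of raw request strings, with one binary output per CAPEC pattern.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.multiclass import OneVsRestClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import MultiLabelBinarizer

        # Hypothetical labelled requests; the real dataset is far larger.
        requests = [
            "GET /item?id=1' OR '1'='1",
            "GET /search?q=<script>alert(1)</script>",
            "GET /download?file=../../etc/passwd",
            "GET /item?id=42",
        ]
        labels = [{"CAPEC-66"}, {"CAPEC-63"}, {"CAPEC-126"}, set()]

        mlb = MultiLabelBinarizer()
        Y = mlb.fit_transform(labels)

        model = make_pipeline(
            TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),   # char n-grams
            OneVsRestClassifier(LogisticRegression()),
        )
        model.fit(requests, Y)
        pred = model.predict(["GET /item?id=5' OR '1'='1"])
        print(mlb.inverse_transform(pred))   # predicted CAPEC labels, if any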

    Classification of biological data: features and classifiers

    Recognizing the importance that the study of proteins has for understanding countless biological systems, this work aims to analyze and explore the effectiveness of data mining techniques for protein classification, applied to the case study of peptidase detection. The methodology presented and evaluated is based on text mining techniques applied to the primary structure of proteins, combined with supervised classification algorithms. Results are presented for algorithms based on support vector machines, namely C-SVC, One-Class and LASVM (incremental). For the peptidase detection case study, the algorithm with the best results was C-SVC. The One-Class algorithm showed a reduced ability to detect peptidases relative to C-SVC; even so, One-Class can be a compromise solution when only positive examples are known. Using the incremental LASVM algorithm, results very close to those of C-SVC were obtained. Although it was not possible to surpass them, the results show significant gains in training time and in the complexity of the generated models, making LASVM a very suitable algorithm for problems with a large number of training examples. In addition to the analysis and evaluation of the algorithms, a web platform, "Bioink Search", was also developed, which applies the described methodologies to peptidase detection.
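
    A minimal sketch of the pipeline the dissertation evaluates, on toy sequences of my own: character n-gram (text mining) features over primary structures, with scikit-learn's SVC standing in for C-SVC and OneClassSVM for the positives-only setting; LASVM's incremental training is not reproduced here.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.svm import SVC, OneClassSVM

        # Toy sequences; real experiments would use annotated peptidase and
        # non-peptidase proteins.
        peptidases = ["MKTAYIAKQRGSGHSLGDSG", "MSSKGSGHSLVAGAAGLALA"]
        others = ["MALWMRLLPLLALLALWGPD", "MVLSPADKTNVKAAWGKVGA"]

        vec = CountVectorizer(analyzer="char", ngram_range=(2, 3))
        X = vec.fit_transform(peptidases + others)
        y = [1, 1, 0, 0]

        # C-SVC: supervised, needs both positive and negative examples.
        csvc = SVC(kernel="linear").fit(X, y)

        # One-Class SVM: trained on positives only -- the compromise the
        # text describes when only positive examples are known.
        oc = OneClassSVM(nu=0.5).fit(vec.transform(peptidases))

        query = vec.transform(["MKTAYIAKQRGSGHSLGDSA"])
        print(csvc.predict(query), oc.predict(query))   # 1 / +1 = peptidase-like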