1,107 research outputs found

    Conclave: Writing programs to understand programs

    Software maintainers are often asked to change source code in unfamiliar programs, whether to improve the system or to eliminate defects. Undertaking these tasks requires a sufficient understanding of the system, or at least of a small part of it. One of the most time-consuming steps in this process is locating which parts of the code are responsible for a given functionality or feature. This paper introduces Conclave, an environment for software analysis that enhances program comprehension activities. Programmers use natural languages to describe and discuss the problem domain, programming languages to write source code, and markup languages to let programs talk to other programs; the system therefore has to cope with this heterogeneity of dialects and provide tools in all these areas to contribute effectively to the understanding process. The source code, the problem domain, and the side effects of running the program are represented in the system using ontologies, and a combination of tools, each specialized in a different kind of language, creates mappings between the different domains. Conclave provides facilities for feature location, code search, and views of the software that ease understanding the code and devising changes. The underlying feature location technique exploits the natural-language terms used in programs (e.g. function and variable names); using textual analysis and a collection of Natural Language Processing techniques, it computes sets of synonymous terms. These sets are used to score the relatedness between program elements and search queries or problem-domain concepts, producing ranked lists of the program elements that best address the search criteria or concepts. © Nuno Ramos Carvalho, José João Almeida, Maria João Varanda Pereira, and Pedro Rangel Henriques.
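    To make the ranking idea concrete, the following Python sketch illustrates the general flavour of identifier-based feature location: identifiers are split into terms, the terms are expanded with synonym sets (here a small hand-written map standing in for the synonym sets Conclave derives with NLP), and program elements are ranked by their overlap with a query. The element names, the synonym map, and the scoring function are illustrative assumptions, not Conclave's actual implementation.

    # Minimal sketch (not Conclave's code): rank program elements against a
    # natural-language query by overlapping term sets expanded with synonyms.
    import re
    from typing import Dict, List, Set

    # Hypothetical synonym sets standing in for the NLP-derived ones.
    SYNONYMS: Dict[str, Set[str]] = {
        "remove": {"remove", "delete", "erase"},
        "user": {"user", "account", "member"},
        "file": {"file", "document"},
    }

    def split_identifier(name: str) -> List[str]:
        """Split camelCase / snake_case identifiers into lowercase terms."""
        spaced = re.sub(r"([a-z0-9])([A-Z])", r"\1 \2", name).replace("_", " ")
        return [p.lower() for p in spaced.split()]

    def expand(terms: List[str]) -> Set[str]:
        """Expand a list of terms with their synonym sets."""
        expanded: Set[str] = set()
        for t in terms:
            expanded |= SYNONYMS.get(t, {t})
        return expanded

    def score(query: str, identifier: str) -> float:
        """Relatedness = fraction of the expanded query terms found in the identifier."""
        q = expand(query.lower().split())
        i = expand(split_identifier(identifier))
        return len(q & i) / max(len(q), 1)

    # Rank hypothetical program elements against a feature-location query.
    elements = ["deleteUserAccount", "open_file", "renderPage"]
    query = "remove user"
    for name in sorted(elements, key=lambda e: score(query, e), reverse=True):
        print(f"{score(query, name):.2f}  {name}")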

    HSPVdb—the Human Short Peptide Variation Database for improved mass spectrometry-based detection of polymorphic HLA-ligands

    T cell epitopes derived from polymorphic proteins, or from proteins encoded by alternative reading frames (ARFs), play an important role in (tumor) immunology. Identification of these peptides is successfully performed with mass spectrometry: the recorded tandem mass spectra are matched against hypothetical spectra generated from known protein sequence databases. Commonly used protein databases contain a minimal level of redundancy and are therefore not suitable data sources for searching polymorphic T cell epitopes, whether in the normal reading frame or in ARFs. At the same time, these databases contain much non-polymorphic sequence information, which complicates the matching of recorded and theoretical spectra and increases the potential for finding false positives. Therefore, we created a database of peptides from ARFs and of peptide variation arising from single nucleotide polymorphisms (SNPs). It is based on the human mRNA sequences from the well-annotated reference sequence (RefSeq) database and the associated variation information derived from the Single Nucleotide Polymorphism Database (dbSNP). In this process, we removed all non-polymorphic information. Investigation of the frequency of SNPs in dbSNP revealed that many SNPs are in fact non-polymorphic “SNPs”; these too were removed from the dedicated database, resulting in a comprehensive, high-quality database that we coined the Human Short Peptide Variation Database (HSPVdb). The value of HSPVdb is shown by the identification of the majority of published polymorphic SNP- and/or ARF-derived epitopes in a mass spectrometry-based proteomics workflow, and by the large variety of polymorphic peptides identified as potential T cell epitopes in the HLA ligandome presented by Epstein–Barr virus cells.
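    As a rough illustration of the “polymorphic information only” idea, the Python sketch below generates, for a single amino-acid change caused by a SNP, all short peptide windows that actually contain the variant residue; every window without the variant is discarded. The sequence, position, and peptide length are hypothetical examples, and this is not the HSPVdb build pipeline (which works from RefSeq mRNA and dbSNP records, including ARF translations).

    # Minimal sketch (assumed inputs, not the HSPVdb pipeline): keep only the
    # short peptide windows that contain the polymorphic residue.
    from typing import List

    def variant_peptides(protein: str, pos: int, alt_aa: str, length: int = 9) -> List[str]:
        """Return all `length`-mers of the variant protein that cover position `pos` (0-based)."""
        variant = protein[:pos] + alt_aa + protein[pos + 1:]
        peptides = []
        for start in range(max(0, pos - length + 1), min(pos, len(variant) - length) + 1):
            peptides.append(variant[start:start + length])
        return peptides

    # Hypothetical example: a SNP substituting V at residue position 10 (0-based).
    seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
    for pep in variant_peptides(seq, pos=10, alt_aa="V"):
        print(pep)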

    Reviewing Privacy In an Information Society


    Systems level investigation of the genetic basis of bovine muscle growth and development

    Skeletal muscle growth is an economically and biologically important trait in livestock raised for meat production, and there is therefore great interest in understanding the underlying genomic architecture influencing muscle growth and development. In spite of this, relatively little is known about the genes or biological processes regulating bovine muscle growth. In this thesis, several approaches were undertaken to elucidate some of the mechanisms that may control bovine muscle growth and development. The first objective of this thesis was the development of a novel software tool (SNPdat) for the rapid and comprehensive annotation of SNP data for any organism with a draft sequence and annotation. SNPdat was subsequently utilised in chapters 3 and 6 to facilitate the identification of candidate genes and regions involved in bovine muscle growth. In chapter 4, a number of metrics were explored for their usefulness in assessing the convergence of a Markov chain in a Bayesian approach used for genetic prediction. The need to adequately assess convergence using multiple metrics is addressed and recommendations are put forward; these recommendations were then implemented in chapter 3. In addition, three separate investigations of bovine muscle growth and development were performed. In chapter 3, a genome-wide association study was performed to identify regions of the bovine genome associated with four economically important carcass traits. This was followed by an examination of the transcriptional responses in muscle tissue of animals undergoing dietary restriction and compensatory growth (chapter 5). Finally, using high-throughput DNA sequencing, a candidate list of 200 genes was interrogated to identify genes that may be evolving at different rates, and under evolutionary selection pressure, in beef compared to dairy animals (chapter 6). A number of genes and biological pathways were found to be involved in traits related to bovine muscle growth, several of which were identified in more than one study.
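    As an example of the kind of convergence metric discussed in chapter 4, the Python sketch below computes the Gelman-Rubin potential scale reduction factor (R-hat) across multiple chains; values close to 1 suggest convergence. This is a standard diagnostic offered purely for illustration, and it is not claimed to be one of the specific metrics or the implementation evaluated in the thesis.

    # Illustrative sketch of a common MCMC convergence diagnostic (R-hat).
    import numpy as np

    def gelman_rubin(chains: np.ndarray) -> float:
        """R-hat for `chains` of shape (m_chains, n_samples)."""
        m, n = chains.shape
        chain_means = chains.mean(axis=1)
        # Between-chain variance B and mean within-chain variance W.
        B = n * chain_means.var(ddof=1)
        W = chains.var(axis=1, ddof=1).mean()
        # Pooled posterior variance estimate and the resulting R-hat.
        var_hat = (n - 1) / n * W + B / n
        return float(np.sqrt(var_hat / W))

    # Hypothetical example: two chains drawn from the same distribution.
    rng = np.random.default_rng(0)
    chains = rng.normal(size=(2, 1000))
    print(f"R-hat: {gelman_rubin(chains):.3f}")  # expected to be close to 1.0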

    Knowledge Management approaches to model pathophysiological mechanisms and discover drug targets in Multiple Sclerosis

    Multiple Sclerosis (MS) is one of the most prevalent neurodegenerative diseases for which a cure is not yet available. MS is a complex disease for numerous reasons: its etiology is unknown, the diagnosis is not exclusive, the disease course is unpredictable, and therapeutic response varies from patient to patient. There are four established subtypes of MS, which are distinguished by different characteristics. Many environmental and genetic factors are considered to play a role in MS etiology, including viral infection, vitamin D deficiency, epigenetic changes, and some genes. Despite the large body of diverse scientific knowledge, from laboratory findings to clinical trials, no integrated model portraying the underlying mechanisms of the MS disease state is available. Contemporary therapies only reduce the severity of the disease, and there is an unmet need for effective drugs. The present thesis provides a knowledge-based rationale for modelling MS disease mechanisms and identifying potential drug candidates using systems biology approaches. Systems biology is an emerging field that uses computational methods to integrate datasets of various granularities and simulate disease outcomes; it provides a framework for modelling molecular dynamics with precise interaction and contextual details. The proposed approaches were used to extract knowledge from the literature with state-of-the-art text mining technologies, integrate it with proprietary data using semantic platforms, and build different models: a molecular interaction map, agent-based models to simulate disease outcome, and a model of MS disease progression over time. For better information representation, a disease ontology was also developed and a methodology for its automatic enrichment was derived. The models provide insight into the disease, and several pathways were explored by combining the therapeutics with the disease-specific prescriptions. The approaches and models developed in this work resulted in the identification of novel drug candidates that are backed up by existing experimental and clinical knowledge.

    Reconstructing Textual File Fragments Using Unsupervised Machine Learning Techniques

    This work is an investigation into reconstructing fragmented ASCII files based on content analysis, motivated by a desire to demonstrate machine learning's applicability to Digital Forensics. Using a categorized corpus of Usenet, Bulletin Board Systems, and other assorted documents, a series of experiments is conducted using machine learning techniques to train classifiers that can identify fragments belonging to the same original file. The primary machine learning method used is the Support Vector Machine, trained on a variety of extracted feature sets. Additional work is done in training committees of SVMs to boost classification power over individual SVMs, as well as in developing a method to tune SVM kernel parameters using a genetic algorithm. Attention is given to the applicability of Information Retrieval techniques to file fragments, as well as to an analysis of textual artifacts that are not present in standard dictionaries.
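    A minimal sketch of the pairwise classification idea follows: each pair of fragments is described by the difference of their character-frequency histograms, and an SVM is trained to predict whether the two fragments come from the same original file. The synthetic "files", the feature choice, and the kernel settings are assumptions for illustration, not the corpus or feature extractions used in this work.

    # Minimal illustrative sketch: SVM over character-frequency differences to
    # decide whether two text fragments came from the same original file.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    def char_histogram(fragment: str) -> np.ndarray:
        """Frequency of each printable ASCII character, normalised by fragment length."""
        hist = np.zeros(95)
        for ch in fragment:
            code = ord(ch) - 32
            if 0 <= code < 95:
                hist[code] += 1
        return hist / max(len(fragment), 1)

    # Hypothetical corpus: two synthetic "files" with different character statistics.
    rng = np.random.default_rng(1)
    file_a = "".join(rng.choice(list("the quick brown fox 0123"), size=4000).tolist())
    file_b = "".join(rng.choice(list("lorem ipsum dolor sit amet!?"), size=4000).tolist())
    fragments = [(file_a[i:i + 200], 0) for i in range(0, 4000, 200)] + \
                [(file_b[i:i + 200], 1) for i in range(0, 4000, 200)]

    # Build pairwise examples: label 1 if both fragments come from the same file.
    X, y = [], []
    for i, (frag_i, src_i) in enumerate(fragments):
        for frag_j, src_j in fragments[i + 1:]:
            X.append(np.abs(char_histogram(frag_i) - char_histogram(frag_j)))
            y.append(int(src_i == src_j))

    X_train, X_test, y_train, y_test = train_test_split(np.array(X), np.array(y), random_state=0)
    clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)
    print(f"pairwise same-file accuracy: {clf.score(X_test, y_test):.2f}")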

    Report on shape analysis and matching and on semantic matching

    In GRAVITATE, two disparate specialities will come together in one working platform for the archaeologist: the fields of shape analysis and of metadata search. These fields are relatively disjoint at the moment, and the research and development challenge of GRAVITATE is precisely to merge them for our chosen tasks. As shown in chapter 7, the small amount of literature that already attempts to join 3D geometry and semantics is not related to the cultural heritage domain. Therefore, after the project is done, there should be a clear ‘before-GRAVITATE’ and ‘after-GRAVITATE’ split in how these two aspects of a cultural heritage artefact are treated.

    This state of the art report (SOTA) is ‘before-GRAVITATE’. Shape analysis and metadata description are described separately, as they currently are in the literature, and we end the report with common recommendations in chapter 8 on possible or plausible cross-connections that suggest themselves. These considerations will be refined for the Roadmap for Research deliverable.

    Within the project, a jargon is developing in which ‘geometry’ stands for the physical properties of an artefact (not only its shape, but also its colour and material) and ‘metadata’ is used as a general shorthand for the semantic description of the provenance, location, ownership, classification, use, etc. of the artefact. As we proceed in the project, we will find a need to refine those broad divisions and find intermediate classes (such as a semantic description of certain colour patterns), but for now the terminology is convenient, not least because it highlights the interesting area where both aspects meet.

    On the ‘geometry’ side, the GRAVITATE partners are UVA, Technion, and CNR/IMATI; on the metadata side, IT Innovation, the British Museum and the Cyprus Institute, the latter two of course also playing the role of internal users and representatives of the Cultural Heritage (CH) data and target user group. CNR/IMATI's experience in shape analysis and similarity will be an important bridge between the two worlds of geometry and metadata. The authorship and styles of this SOTA reflect these specialisms: the first part (chapters 3 and 4) is purely by the geometry partners (mostly IMATI and UVA), the second part (chapters 5 and 6) by the metadata partners, especially IT Innovation, while the joint overview on 3D geometry and semantics is mainly by IT Innovation and IMATI. The common section on Perspectives was written with the contribution of all.