    Hardcore measures, dense models and low complexity approximations

    Continuing the study of connections among the Dense Model Theorem, the Low Complexity Approximation Theorem and the Hardcore Lemma initiated by Trevisan et al. [TTV09], this thesis builds on the work of Barak et al., Impagliazzo, Reingold et al. and Zhang [BHK09, Imp09, RTTV08a, Zha11] to show the essential equivalence of these three results. The first main result obtained here is a reduction from any of the standard black-box Dense Model Theorems to the Low Complexity Approximation Theorem. The next is an extension of Impagliazzo's reduction from the Strong Hardcore Lemma to the Dense Model Theorem. Then, using Zhang's Dense Model Theorem algorithm, we reduce the Weak Hardcore Lemma to the Strong Hardcore Lemma. Last, we distill the methods of Barak et al. and Zhang to extract a single algorithm which yields uniform constructions for all three. Putting all this together demonstrates that the three results are essentially equivalent.
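
    For orientation, rough informal statements of two of the results being related are sketched below; these are paraphrased from the standard literature rather than taken from the thesis itself, and the parameters are simplified.

% Informal paraphrases of the standard statements (parameters simplified; not quoted from the thesis).
Hardcore Lemma (informal): if $f:\{0,1\}^n \to \{0,1\}$ is $\delta$-hard for size-$s$ circuits,
i.e. no such circuit computes $f$ on more than a $1-\delta$ fraction of inputs, then there is a
``hardcore'' distribution $H$ of density $\Omega(\delta)$ on which $f$ is nearly unpredictable:
every circuit of size $s'$ agrees with $f$ on at most a $\tfrac{1}{2}+\varepsilon$ fraction of $H$.

Dense Model Theorem (informal): if a distribution $R$ is $\delta$-dense in a distribution $X$ that
is indistinguishable from uniform by small circuits, then $R$ has a ``dense model'' $M$, i.e. a
distribution that is $\delta$-dense in the uniform distribution and indistinguishable from $R$ by
small circuits.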

    A collaborative filtering-based approach to biomedical knowledge discovery

    Motivation: The increase in publication rates makes it challenging for an individual researcher to stay abreast of all relevant research in order to find novel research hypotheses. Literature-based discovery methods make use of knowledge graphs built using text mining and can infer future associations between biomedical concepts that will likely occur in new publications. These predictions are a valuable resource for researchers to explore a research topic. Current methods for prediction are based on the local structure of the knowledge graph. A method that uses global knowledge from across the knowledge graph needs to be developed in order to make knowledge discovery a frequently used tool by researchers. Results: We propose an approach based on the singular value decomposition (SVD) that is able to combine data from across the knowledge graph through a reduced representation. Using co-occurrence data extracted from published literature, we show that SVD performs better than the leading methods for scoring discoveries. We also show the diminishing predictive power of knowledge discovery as we compare our predictions with real associations that appear further into the future. Finally, we examine the strengths and weaknesses of the SVD approach against another well-performing system using several predicted associations. Availability and implementation: All code and results files for this analysis can be accessed at https://github.com/jakelever/knowledgediscovery. Supplementary information: Supplementary data are available at Bioinformatics online.
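
    The core idea of scoring candidate associations via a truncated SVD of a co-occurrence matrix can be illustrated with the minimal sketch below. The concept names and counts are made up for illustration, and this is not the authors' pipeline; the full implementation and evaluation are at https://github.com/jakelever/knowledgediscovery.

# Minimal sketch: score unseen concept pairs using a truncated SVD of a
# co-occurrence matrix. Concepts and counts are hypothetical toy data.
import numpy as np

concepts = ["geneA", "diseaseB", "drugC", "pathwayD", "proteinE"]

# Toy symmetric co-occurrence counts between the five concepts.
C = np.array([
    [0, 12,  0,  7,  3],
    [12, 0,  5,  0,  9],
    [0,  5,  0,  4,  0],
    [7,  0,  4,  0,  6],
    [3,  9,  0,  6,  0],
], dtype=float)

# Truncated SVD: keep the top-k singular components as a reduced representation
# that pools signal from across the whole matrix ("global" structure).
k = 2
U, s, Vt = np.linalg.svd(C, full_matrices=False)
C_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Candidate discoveries: pairs with zero observed co-occurrence. A high
# reconstructed value suggests a likely future association.
candidates = [(i, j) for i in range(len(concepts))
              for j in range(i + 1, len(concepts)) if C[i, j] == 0]
candidates.sort(key=lambda ij: C_hat[ij[0], ij[1]], reverse=True)
for i, j in candidates:
    print(f"{concepts[i]} -- {concepts[j]}: score {C_hat[i, j]:.2f}")

    Ranking by the reconstructed value rather than the raw count is what lets the method use evidence from elsewhere in the matrix when two concepts have never co-occurred.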

    The International Human Epigenome Consortium: A Blueprint for Scientific Collaboration and Discovery.

    The International Human Epigenome Consortium (IHEC) coordinates the generation of a catalog of high-resolution reference epigenomes of major primary human cell types. The studies now presented (see the Cell Press IHEC web portal at http://www.cell.com/consortium/IHEC) highlight the coordinated achievements of IHEC teams to gather and interpret comprehensive epigenomic datasets to gain insights into the epigenetic control of cell states relevant for human health and disease.