
    Mapping bilateral information interests using the activity of Wikipedia editors

    We live in a global village where electronic communication has eliminated the geographical barriers of information exchange. The road is now open to worldwide convergence of information interests, shared values, and understanding. Nevertheless, interests still vary between countries around the world. This raises important questions about what today's world map of information interests actually looks like and what factors cause the barriers of information exchange between countries. To quantitatively construct a world map of information interests, we devise a scalable statistical model that identifies countries with similar information interests and measures the countries' bilateral similarities. From the similarities we connect countries in a global network and find that countries can be mapped into 18 clusters with similar information interests. Through regression we find that language and religion best explain the strength of the bilateral ties and formation of clusters. Our findings provide a quantitative basis for further studies to better understand the complex interplay between shared interests and conflict on a global scale. The methodology can also be extended to track changes over time and capture important trends in global information exchange. Comment: 11 pages, 3 figures, in Palgrave Communications 1 (2015).
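    A minimal sketch of the kind of computation the abstract describes: deriving bilateral similarities between countries from editor activity and grouping countries into clusters. The toy co-editing matrix, the country codes, and the choice of cosine similarity with greedy modularity clustering are illustrative assumptions, not the paper's actual statistical model.

```python
# Hedged sketch: bilateral similarity from editor activity, then clustering.
# Input data and method choices below are assumptions for illustration only.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

countries = ["SE", "DK", "DE", "JP"]                 # hypothetical country codes
# Rows: countries, columns: Wikipedia articles; entries: edit counts (toy data).
edits = np.array([
    [120, 30,  5,  0],
    [110, 25,  8,  1],
    [ 40, 90, 60,  2],
    [  2,  1,  3, 95],
], dtype=float)

# Cosine similarity between countries' editing profiles.
normed = edits / np.linalg.norm(edits, axis=1, keepdims=True)
sim = normed @ normed.T

# Connect countries whose similarity exceeds a threshold, then extract clusters.
G = nx.Graph()
for i in range(len(countries)):
    for j in range(i + 1, len(countries)):
        if sim[i, j] > 0.5:                          # arbitrary threshold
            G.add_edge(countries[i], countries[j], weight=sim[i, j])

clusters = greedy_modularity_communities(G, weight="weight")
print([sorted(c) for c in clusters])
```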

    COMPUTATIONAL DRUG REPURPOSING FOR BREAST CANCER SUBTYPES

    Breast cancer accounts for 25 percent of all new cancer diagnoses globally, according to the American Cancer Society (ACS). Developing a highly effective drug is a time-consuming and expensive undertaking. Drug repurposing avoids some of the disadvantages of traditional drug development, making it both time- and cost-effective. In this thesis, we are interested in finding good drug candidates for each of the ten subtypes of breast cancer. Repurposing involves identifying new indications for pre-approved drugs by observing the anti-correlation between drug perturbation data and disease data: if a drug's signature is anti-correlated with the disease signature, whether through up-regulation or down-regulation, the drug counteracts the disease profile and becomes a suitable repurposing candidate. Gene expression data and discrete copy number variation data are used to compute z-scores and normalize the data for the ten disease subtypes. Gene expression data for the ten subtypes was extracted from the METABRIC dataset, and perturbation values corresponding to the MCF7 cell line were extracted from the National Institutes of Health's (NIH) Library of Integrated Network-Based Cellular Signatures (LINCS) pharmacogenomics dataset. We use our proposed clustering methods to select the best-suited drug candidates per subtype and obtain a ranked list of drug repurposing and repositioning candidates for each of the ten breast cancer subtypes.
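    A minimal sketch of the anti-correlation idea described above: z-score a subtype's disease signature and rank drugs by how strongly their perturbation profiles run opposite to it. The gene names, toy values, and use of Spearman correlation are illustrative assumptions; the thesis applies its own pipeline to METABRIC and LINCS data.

```python
# Hedged sketch: rank candidate drugs for one subtype by anti-correlation
# between the disease signature and each drug's perturbation profile.
# All values and names below are toy assumptions for illustration.
import numpy as np
from scipy.stats import zscore, spearmanr

genes = ["ESR1", "ERBB2", "MKI67", "TP53", "BRCA1"]

# Toy disease signature for one subtype (expression relative to normal tissue).
disease_sig = zscore(np.array([2.1, -0.4, 1.7, -1.2, 0.3]))

# Toy drug perturbation profiles over the same genes (e.g. MCF7 responses).
drug_profiles = {
    "drug_A": np.array([-1.9, 0.5, -1.4, 1.0, -0.2]),
    "drug_B": np.array([ 0.8, 0.9,  0.3, 0.1,  0.6]),
}

# More negative correlation = stronger reversal of the disease signature.
ranked = sorted(
    ((name, spearmanr(disease_sig, zscore(profile)).correlation)
     for name, profile in drug_profiles.items()),
    key=lambda item: item[1],
)
for name, rho in ranked:
    print(f"{name}: rho = {rho:+.2f}")
```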

    Enriching ontological user profiles with tagging history for multi-domain recommendations

    Many advanced recommendation frameworks employ ontologies of various complexities to model individuals and items, providing a mechanism for the expression of user interests and the representation of item attributes. As a result, complex matching techniques can be applied to support individuals in the discovery of items according to explicit and implicit user preferences. Recently, the rapid adoption of Web 2.0 and the proliferation of social networking sites have resulted in more and more users providing an increasing amount of information about themselves that could be exploited for recommendation purposes. However, unifying personal information with ontologies using the contemporary knowledge representation methods often associated with Web 2.0 applications, such as community tagging, is a non-trivial task. In this paper, we propose a method for the unification of tags with ontologies by grounding tags to a shared representation in the form of WordNet and Wikipedia. We incorporate individuals' tagging history into their ontological profiles by matching tags with ontology concepts. This approach is preliminarily evaluated by extending an existing news recommendation system with user tagging histories harvested from popular social networking sites.
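    A minimal sketch of one step implied by the abstract: grounding free-form user tags to WordNet senses so they can later be matched against ontology concepts. The tag list and the first-synset heuristic are illustrative assumptions, not the paper's actual grounding procedure.

```python
# Hedged sketch: map user tags to WordNet synsets as a grounding step.
# Requires NLTK with the WordNet corpus: nltk.download("wordnet")
from nltk.corpus import wordnet as wn

user_tags = ["politics", "footballer", "climate"]    # hypothetical tagging history

def ground_tag(tag):
    """Return (synset name, definition) for the most common noun sense, if any."""
    synsets = wn.synsets(tag, pos=wn.NOUN)
    if not synsets:
        return None
    best = synsets[0]                                 # first sense as a crude heuristic
    return best.name(), best.definition()

for tag in user_tags:
    grounded = ground_tag(tag)
    if grounded:
        print(f"{tag} -> {grounded[0]}: {grounded[1]}")
    else:
        print(f"{tag} -> no WordNet grounding found")
```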

    False News On Social Media: A Data-Driven Survey

    In the past few years, the research community has dedicated growing interest to the issue of false news circulating on social networks. The widespread attention to detecting and characterizing false news has been motivated by the considerable real-world harm this threat has caused. Social media platforms exhibit peculiar characteristics, with respect to traditional news outlets, that have been particularly favorable to the proliferation of deceptive information; they also present unique challenges for all kinds of potential interventions on the subject. As this issue becomes of global concern, it is also gaining more attention in academia. The aim of this survey is to offer a comprehensive study of the recent advances in the detection, characterization, and mitigation of false news that propagates on social media, as well as the challenges and open questions that await future research in the field. We use a data-driven approach, focusing on a classification of the features each study uses to characterize false information and on the datasets used for training classification methods. At the end of the survey, we highlight emerging approaches that look most promising for addressing false news.

    Implications of storage subsystem interactions on processing efficiency in data intensive computing

    Processing frameworks such as MapReduce allow development of programs that operate on voluminous on-disk data. These frameworks typically include support for multiple file/storage subsystems. This decoupling of processing frameworks from the underlying storage subsystem provides a great deal of flexibility in application development. However, as we demonstrate, this flexibility often exacts a price: performance. Given the data volumes, storage subsystems (such as HDFS, MongoDB, and HBase) disperse datasets over a collection of machines. Storage subsystems manage complexity relating to preservation of consistency, redundancy, failure recovery, throughput, and load balancing. Preserving these properties involves message exchanges between distributed subsystem components, updates to in-memory data structures, data movements, and coordination as datasets are staged and system conditions change. Storage subsystems prioritize these properties differently, leading to vastly different network, disk, memory, and CPU footprints for staging and accessing the same dataset. This thesis proposes a methodology for comparing storage subsystems and identifying the one best suited to the processing being performed on a dataset. We profile the network I/O, disk I/O, memory, and CPU costs introduced by a storage subsystem during data staging, data processing, and generation of results. We perform this analysis with different storage subsystems and applications with different disk-I/O to CPU processing ratios.
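    A minimal sketch of the profiling idea described above: wrap a storage-subsystem operation and diff system-wide counters to estimate its CPU, memory, disk, and network footprint. The stage_dataset placeholder and the counter-diff approach are illustrative assumptions; the thesis profiles real subsystems such as HDFS, MongoDB, and HBase.

```python
# Hedged sketch: measure the resource footprint of a staging operation by
# sampling system counters before and after it runs (requires psutil).
import time
import psutil

def profile(operation):
    """Run operation() and report the resources consumed while it ran."""
    disk_before = psutil.disk_io_counters()
    net_before = psutil.net_io_counters()
    psutil.cpu_percent(interval=None)                # prime the CPU counter
    start = time.time()

    operation()

    elapsed = time.time() - start
    disk_after = psutil.disk_io_counters()
    net_after = psutil.net_io_counters()
    return {
        "seconds": elapsed,
        "cpu_percent": psutil.cpu_percent(interval=None),
        "memory_used_mb": psutil.virtual_memory().used / 2**20,
        "disk_read_mb": (disk_after.read_bytes - disk_before.read_bytes) / 2**20,
        "disk_write_mb": (disk_after.write_bytes - disk_before.write_bytes) / 2**20,
        "net_sent_mb": (net_after.bytes_sent - net_before.bytes_sent) / 2**20,
        "net_recv_mb": (net_after.bytes_recv - net_before.bytes_recv) / 2**20,
    }

def stage_dataset():
    # Placeholder for a real staging step (e.g. writing a dataset into HDFS).
    with open("staged.tmp", "wb") as f:
        f.write(b"\0" * (32 * 2**20))

print(profile(stage_dataset))
```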

    Changing Higher Education Learning with Web 2.0 and Open Education Citation, Annotation, and Thematic Coding Appendices

    Appendices of citations, annotations, and themes for research conducted on four websites: Delicious, Wikipedia, YouTube, and Facebook.