25,923 research outputs found

    Notes on a Geography of Knowledge

    Law and knowledge jointly occupy a metaphorical landscape. Understanding that landscape is essential to understanding the full complexity of knowledge law. This Article locates some landmarks in that landscape, which it identifies as forms of legal practice: several recent cases involving intellectual property licenses, including the recent patent law decision in Quanta v. LG Electronics and the open source licensing decision in Jacobsen v. Katzer. The Article offers a preliminary framework for exploring the territories of knowledge practice in which those legal landmarks appear.

    Health Misinformation in Search and Social Media

    People increasingly rely on the Internet to search for and share health-related information. Indeed, searching for and sharing information about medical treatments are among the most frequent uses of online data. While this is a convenient and fast method to collect information, online sources may contain incorrect information that has the potential to cause harm, especially if people believe what they read without further research or professional medical advice. The goal of this thesis is to address the misinformation problem in two of the most commonly used online services: search engines and social media platforms. We examined how people use these platforms to search for and share health information. To achieve this, we designed controlled laboratory user studies and employed large-scale social media data analysis tools. The solutions proposed in this thesis can be used to build systems that better support people's health-related decisions. The techniques described in this thesis address online searching and social media sharing in the following manner. First, with respect to search engines, we aimed to determine the extent to which people can be influenced by search engine results when trying to learn about the efficacy of various medical treatments. We conducted a controlled laboratory study wherein we biased the search results towards either correct or incorrect information. We then asked participants to determine the efficacy of different medical treatments. Results showed that people were significantly influenced, both positively and negatively, by search results bias. More importantly, when the subjects were exposed to incorrect information, they made more incorrect decisions than when they had no interaction with the search results. Following from this work, we extended the study to gain insights into the strategies people use during this decision-making process, via the think-aloud method.
We found that, even with verbalization, people were strongly influenced by the search results bias. We also noted that people paid attention to what the majority states, authoritativeness, and content quality when evaluating online content. Understanding the effects of cognitive biases that can arise during online search is a complex undertaking because of the presence of unconscious biases (such as the search results ranking) that the think-aloud method fails to reveal. Moving to social media, we first proposed a solution to detect and track misinformation in social media. Using Zika as a case study, we developed a tool for tracking misinformation on Twitter. We collected 13 million tweets regarding the Zika outbreak and tracked rumors outlined by the World Health Organization and the Snopes fact-checking website. We incorporated health professionals, crowdsourcing, and machine learning to capture health-related rumors as well as clarification communications. In this way, we illustrated the insights that the proposed tools provide into potentially harmful information on social media, allowing public health researchers and practitioners to respond with targeted and timely action. Building on the identification of rumor-bearing tweets, we examined individuals on social media who post questionable health-related information, in particular those promoting cancer treatments that have been shown to be ineffective. Specifically, we studied 4,212 Twitter users who had posted about one of 139 ineffective “treatments” and compared them to a baseline of users generally interested in cancer. Considering features that capture user attributes, writing style, and sentiment, we built a classifier that is able to identify users prone to propagating such misinformation. This classifier achieved an accuracy of over 90%, providing a potential tool for public health officials to identify such individuals for preventive intervention.
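The user-level classification step above can be illustrated with a small sketch. The word lists and every feature name below are hypothetical stand-ins for the kinds of signals the abstract mentions (user attributes, writing style, sentiment), not the thesis's actual lexicons or feature set.

```python
# Hypothetical per-user feature extraction in the spirit described above.
# POSITIVE/NEGATIVE word lists and all feature names are illustrative
# assumptions, not the thesis's real implementation.
POSITIVE = {"cure", "hope", "amazing"}
NEGATIVE = {"scam", "fake", "dangerous"}

def user_features(tweets, followers, account_age_days):
    words = [w.lower().strip(".,!?") for t in tweets for w in t.split()]
    n = max(len(words), 1)
    return {
        "followers": followers,                    # user attribute
        "account_age_days": account_age_days,      # user attribute
        "avg_word_len": sum(map(len, words)) / n,  # writing style
        "exclaim_rate": sum(t.count("!") for t in tweets) / max(len(tweets), 1),
        "pos_rate": sum(w in POSITIVE for w in words) / n,  # sentiment proxy
        "neg_rate": sum(w in NEGATIVE for w in words) / n,  # sentiment proxy
    }

features = user_features(["This cure is amazing!", "Doctors hate it"], 120, 400)
```

A vector like this could then be fed to any standard classifier; the thesis reports over 90% accuracy with its full feature set.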

    Identifying experts and authoritative documents in social bookmarking systems

    Social bookmarking systems allow people to create pointers to Web resources in a shared, Web-based environment. These services allow users to add free-text labels, or “tags”, to their bookmarks as a way to organize resources for later recall. Ease-of-use, low cognitive barriers, and a lack of controlled vocabulary have allowed social bookmarking systems to grow exponentially over time. However, these same characteristics also raise concerns. Tags lack the formality of traditional classificatory metadata and suffer from the same vocabulary problems as full-text search engines. It is unclear how many valuable resources are untagged or tagged with noisy, irrelevant tags. With few restrictions to entry, annotation spamming adds noise to public social bookmarking systems. Furthermore, many algorithms for discovering semantic relations among tags do not scale to the Web. Recognizing these problems, we develop a novel graph-based Expert and Authoritative Resource Location (EARL) algorithm to find the most authoritative documents and expert users on a given topic in a social bookmarking system. In EARL’s first phase, we reduce noise in a Delicious dataset by isolating a smaller sub-network of “candidate experts”, users whose tagging behavior shows potential domain and classification expertise. In the second phase, a HITS-based graph analysis is performed on the candidate experts’ data to rank the top experts and authoritative documents by topic. To identify topics of interest in Delicious, we develop a distributed method to find subsets of frequently co-occurring tags shared by many candidate experts. We evaluated EARL’s ability to locate authoritative resources and domain experts in Delicious by conducting two independent experiments. The first experiment relies on human judges’ n-point scale ratings of resources suggested by three graph-based algorithms and Google.
The second experiment evaluates the proposed approach’s ability to identify classification expertise through human judges’ n-point scale ratings of classification terms versus expert-generated data.
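EARL's second phase is described as a HITS-based analysis of the candidate experts' bookmark graph, with users in the hub (expert) role and documents in the authority role. A minimal sketch of that idea follows; the graph, user names, and iteration count are hypothetical, not EARL's actual data or implementation.

```python
# Toy HITS iteration on a bipartite user -> document bookmark graph.
edges = {  # hypothetical candidate experts and their bookmarks
    "alice": ["doc1", "doc2"],
    "bob":   ["doc2", "doc3"],
    "carol": ["doc2"],
}
docs = sorted({d for ds in edges.values() for d in ds})

hub = {u: 1.0 for u in edges}   # user expertise scores
auth = {d: 1.0 for d in docs}   # document authority scores

for _ in range(50):
    # a document's authority is the sum of the hub scores bookmarking it
    auth = {d: sum(hub[u] for u, ds in edges.items() if d in ds) for d in docs}
    # a user's hub score is the sum of the authorities they bookmark
    hub = {u: sum(auth[d] for d in ds) for u, ds in edges.items()}
    # L2-normalize so the scores stay bounded across iterations
    na = sum(v * v for v in auth.values()) ** 0.5
    auth = {d: v / na for d, v in auth.items()}
    nh = sum(v * v for v in hub.values()) ** 0.5
    hub = {u: v / nh for u, v in hub.items()}

top_doc = max(auth, key=auth.get)  # "doc2": bookmarked by all three users
```

EARL additionally restricts this computation to per-topic subgraphs found via frequently co-occurring tags, which this toy version omits.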

    Expertise and public policy: a conceptual guide

    This paper seeks to provide a guide to better understanding: what expertise is, how to determine who the relevant experts are when it comes to the technical aspects of public policy debates, and how to go about choosing between competing expert claims. Executive summary: In developing policy and assessing program effectiveness, policy makers are required to make decisions on complex issues in areas that involve significant public risks. In this context, policy makers are becoming more reliant on the advice of experts and the institution of expertise. Expert knowledge and advice in fields as diverse as science, engineering, the law and economics is required to assist policy makers in their deliberations on complex matters of public policy and to provide them with an authoritative basis for legitimate decision making. However, at the same time that reliance on expertise and the demands made of it are increasing, expert claims have never been subject to greater levels of questioning and criticism. This problem is compounded by the growing public demand that non-experts should be able to participate in debates over issues that impact on their lives. However, their capacity to understand and contribute to the technical aspects of these debates may be either limited or non-existent. This paper provides a guide to assessing who is and who is not an expert in the technical aspects of public policy debates, by providing a framework of levels of expertise. It also notes the importance of identifying the specific fields of expertise relevant to the issue in question. The main focus is on scientific and technical areas, but the issues raised also apply in other domains. It then examines the problem of how non-experts can evaluate expert claims in complex, technical domains. The paper argues that, in the absence of the necessary technical expertise, the only way that non-experts are able to appraise expertise and expert claims is through the use of social expertise.
This is expertise based on everyday social judgements, which enables non-experts to determine whom to believe when they are not in a position to judge what to believe. In this context, the paper suggests policy makers ask a series of questions:
– can I make sense of the arguments?
– which experts seem the more credible?
– who has the numbers on their side?
– are there any relevant interests or biases?
– what are the experts’ track records?
By identifying the strengths and limitations of each of these strategies, the paper provides guidance on how each might best be used. It also argues that using them in combination improves their strength and reliability. The role of those who can act as intermediaries between technical experts and non-experts is also examined. The paper makes clear that none of these strategies is without problems, but it postulates that a more systematic approach to how non-experts use social expertise might enhance their ability to become active rather than passive consumers of technical expertise.

    Mapping Big Data into Knowledge Space with Cognitive Cyber-Infrastructure

    Big data research has attracted great attention in science, technology, industry and society. It is developing with the evolving scientific paradigm, the fourth industrial revolution, and the transformational innovation of technologies. However, its nature and fundamental challenge have not been recognized, and its own methodology has not been formed. This paper explores and answers the following questions: What is big data? What are the basic methods for representing, managing and analyzing big data? What is the relationship between big data and knowledge? Can we find a mapping from big data into knowledge space? What kind of infrastructure is required to support not only big data management and analysis but also knowledge discovery, sharing and management? What is the relationship between big data and science paradigm? What is the nature and fundamental challenge of big data computing? A multi-dimensional perspective is presented toward a methodology of big data computing.

    A Survey on Data-Driven Evaluation of Competencies and Capabilities Across Multimedia Environments

    The rapid evolution of technology directly impacts the skills and jobs needed in the next decade. Users can, intentionally or unintentionally, develop different skills by creating, interacting with, and consuming the content from online environments and portals where informal learning can emerge. These environments generate large amounts of data; therefore, big data can have a significant impact on education. Moreover, the educational landscape has been shifting from a focus on contents to a focus on competencies and capabilities that will prepare our society for an unknown future during the 21st century. Therefore, the main goal of this literature survey is to examine diverse technology-mediated environments that can generate rich data sets through the users’ interaction and where data can be used to explicitly or implicitly perform a data-driven evaluation of different competencies and capabilities. We comprehensively surveyed the state of the art to identify and analyse digital environments, the data they are producing and the capabilities they can measure and/or develop. Our survey revealed four key multimedia environments that fulfilled our goal: sites for content sharing & consumption, video games, online learning and social networks. Moreover, different methods were used to measure a large array of diverse capabilities such as expertise, language proficiency and soft skills. Our results demonstrate the potential of the data from diverse digital environments to support the development of lifelong and lifewide 21st-century capabilities for the future society.