810 research outputs found

    Concept graphs: Applications to biomedical text categorization and concept extraction

    As science advances, the underlying literature grows rapidly, providing valuable knowledge mines for researchers and practitioners. The text content that makes up these knowledge collections is often unstructured and, thus, extracting relevant or novel information can be nontrivial and costly. In addition, human knowledge and expertise are being transformed into structured digital information in the form of vocabulary databases and ontologies. These knowledge bases hold substantial hierarchical and semantic relationships among common domain concepts. Consequently, automated learning tasks can be reinforced with those knowledge bases by constructing human-like representations of knowledge. This makes it possible to develop algorithms that simulate the human reasoning tasks of content perception, concept identification, and classification. This study explores the representation of text documents using concept graphs that are constructed with the help of a domain ontology. In particular, the target data sets are collections of biomedical text documents, and the domain ontology is a collection of predefined biomedical concepts and the relationships among them. The proposed representation preserves those relationships and allows the structural features of graphs to be used in text mining and learning algorithms. Those features emphasize the significance of the underlying relationship information that exists in the text content behind the interrelated topics and concepts of a text document. The experiments presented in this study include text categorization and concept extraction applied to biomedical data sets. The experimental results demonstrate how the relationships extracted from text and captured in graph structures can be used to improve the performance of these applications. The discussed techniques can be used to create and maintain digital libraries by enhancing the indexing, retrieval, and management of documents, as well as in a broad range of domain-specific applications such as drug discovery, hypothesis generation, and the analysis of molecular structures in chemoinformatics.
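    To make the idea concrete, the following is a minimal sketch, not the paper's actual method: terms are mapped to concepts through a tiny invented ontology lookup, and concepts that co-occur in the same sentence are linked, producing a graph whose edges carry the relationship information the abstract refers to. The names (ONTOLOGY, build_concept_graph) and the sample concepts are hypothetical.
```python
# Sketch: build a per-document concept graph from sentence-level co-occurrence.
from collections import defaultdict
from itertools import combinations

# Hypothetical term -> concept mapping standing in for a biomedical ontology lookup.
ONTOLOGY = {
    "aspirin": "C:NSAID",
    "ibuprofen": "C:NSAID",
    "headache": "C:Symptom",
    "inflammation": "C:Finding",
}

def build_concept_graph(sentences):
    """Return an adjacency map {concept: {neighbor: co-occurrence count}}."""
    graph = defaultdict(lambda: defaultdict(int))
    for sentence in sentences:
        tokens = sentence.lower().split()
        concepts = {ONTOLOGY[t] for t in tokens if t in ONTOLOGY}
        # Link every pair of concepts found in the same sentence.
        for a, b in combinations(sorted(concepts), 2):
            graph[a][b] += 1
            graph[b][a] += 1
    return graph

doc = ["Aspirin relieved the headache", "Ibuprofen reduced inflammation"]
print({c: dict(nbrs) for c, nbrs in build_concept_graph(doc).items()})
```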

    Accelerating Innovation for Development: Evaluation Report

    The Accelerating Innovation for Development Initiative of the Rockefeller Foundation was a US$16.5 million effort approved in 2007 aimed at: (1) identifying and demonstrating that open and user-driven innovation models are effective and efficient innovation processes for the needs of the poor; and (2) significantly increasing the application of these models to meet the needs of the poor. The evaluation covers the grantmaking and non-grant work of the Initiative from 2007-2009 on open, user-centered, and user-led innovation. The evaluation was conducted from July 2011 to February 2012 by an independent evaluation team. The purposes of the evaluation of the Innovation Initiative relate to informing other Rockefeller Foundation initiatives and the work of Foundation grantees and partners; demonstrating accountability for funds spent under the Initiative; and contributing knowledge to the field as a public good.

    Creative and cultural spillovers : an e-Compendium of project publications (2015-2018)

    This e-Compendium is a compilation of the research publications of the European Research Partnership on Cultural and Creative Spillovers project, all of which were made publicly available as PDF downloads from the project website for the duration of the project. The e-Compendium ensures continued access to and ease of distribution of these documents: each individual document maintains its stated authorship and copyright designations.

    Linked Data Supported Information Retrieval

    Search engines have become indispensable for locating content on the World Wide Web. Semantic Web and Linked Data technologies enable content to be structured in a more detailed and unambiguous way and allow entirely new approaches to solving information retrieval problems. This thesis investigates how information retrieval applications can benefit from the inclusion of Linked Data. New methods for computer-assisted semantic text analysis, semantic search, information prioritization, and information visualization are presented and extensively evaluated. Linked Data resources and their relationships are integrated into these methods in order to increase their effectiveness or their usability. First, an introduction to the foundations of information retrieval and Linked Data is given. Subsequently, new manual and automated methods for semantically annotating documents by linking them to Linked Data resources (entity linking) are presented. A comprehensive evaluation of these methods is carried out, and the underlying evaluation system is substantially improved. Building on the annotation methods, two new retrieval models for semantic search are presented and evaluated. These models are based on the generalized vector space model and incorporate semantic similarity, derived from taxonomy-based relationships between the Linked Data resources in documents and queries, into the computation of the search result ranking. With the aim of further refining the computation of semantic similarity, a method for prioritizing Linked Data resources is presented and evaluated. Building on this, visualization techniques are presented with the aim of improving the explorability and navigability of a semantically annotated document corpus. Two applications are presented for this purpose: a Linked Data based exploratory extension that complements a traditional keyword-based search engine, and a Linked Data based recommender system.
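    As a rough illustration of the kind of retrieval model described above, here is a minimal sketch, not the thesis's implementation: documents are represented by their annotated Linked Data entities, and a simple path-based taxonomy similarity lets a query entity match semantically related document entities instead of requiring exact overlap. The toy taxonomy, the class names, and the scoring function are assumptions made for illustration.
```python
# Sketch: taxonomy-aware matching between query entities and document entities.
TAXONOMY = {  # child -> parent (hypothetical class hierarchy)
    "dbo:Scientist": "dbo:Person",
    "dbo:Politician": "dbo:Person",
    "dbo:Person": "owl:Thing",
    "dbo:City": "owl:Thing",
}

def ancestors(cls):
    """Chain from a class up to the taxonomy root, starting with the class itself."""
    chain = [cls]
    while cls in TAXONOMY:
        cls = TAXONOMY[cls]
        chain.append(cls)
    return chain

def taxonomy_sim(a, b):
    """Path-based similarity: 1 / (1 + steps to the closest shared ancestor)."""
    anc_a, anc_b = ancestors(a), ancestors(b)
    shared = [c for c in anc_a if c in anc_b]
    if not shared:
        return 0.0
    lca = shared[0]
    return 1.0 / (1 + anc_a.index(lca) + anc_b.index(lca))

def score(query_entities, doc_entities):
    """Sum the best taxonomy match in the document for each query entity."""
    return sum(max((taxonomy_sim(q, d) for d in doc_entities), default=0.0)
               for q in query_entities)

docs = {"doc1": ["dbo:Scientist"], "doc2": ["dbo:City"]}
query = ["dbo:Politician"]
ranked = sorted(docs, key=lambda d: score(query, docs[d]), reverse=True)
print(ranked)  # doc1 ranks above doc2: Politician and Scientist share dbo:Person
```
    A path-based measure is only one of many possible choices here; the point is that any similarity defined over the taxonomy can be plugged into the scoring step so that semantically related, non-identical entities still contribute to the ranking.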

    Economic Contribution of Cultural Industries: Evidence from some Selected Countries

    Cultural industries have become a significant component of modern economies. There is increasing attention to measuring the economic contribution of these industries at the national level, particularly their impact on economic variables. The objective of this study is to illustrate concepts, approaches, and methodologies related to cultural economics, and to shed light on methods for measuring the economic contribution of cultural industries. Using descriptive analysis, we examined the use of these approaches in selected countries: the UK, Finland, France, Germany, Italy, and Spain from Europe; Canada and the USA from North America; Australia, China, and India from the Asia-Pacific region; the countries of MERCOSUR, the South American economic organization, for the South American region; and South Africa and Egypt from Africa. The main results reveal a greater realization of the value of measuring cultural economic contributions in developed countries than in developing countries. Yet data limitations remain the main obstacle to measuring the economic contribution of cultural industries. Furthermore, for international comparison purposes, there is a real need to develop new common concepts and measurements of the economic contribution of cultural industries.

    Identification of Informativeness in Text using Natural Language Stylometry

    In this age of information overload, one experiences a rapidly growing over-abundance of written text. To assist with handling this bounty, this plethora of texts is now widely used to develop and optimize statistical natural language processing (NLP) systems. Surprisingly, using more fragments of text to train these statistical NLP systems does not necessarily lead to improved performance. We hypothesize that the fragments that help the most with training are those that contain the desired information. Therefore, determining informativeness in text has become a central issue in our view of NLP. Recent developments in this field have spawned a number of solutions to identify informativeness in text. Nevertheless, a shortfall of most of these solutions is their dependency on the genre and domain of the text. In addition, most of them are not efficient across different natural language processing problem areas. Therefore, we attempt to provide a more general solution to this NLP problem. This thesis takes a different approach to the problem by considering the underlying theme of a linguistic theory known as the Code Quantity Principle. This theory suggests that humans codify information in text so that readers can retrieve this information more efficiently. During the codification process, humans usually vary elements of their writing ranging from characters to sentences. Examples of such elements are the use of simple words, complex words, function words, content words, syllables, and so on. The theory suggests that these elements have reasonable discriminating strength and can play a key role in distinguishing informativeness in natural language text. In another vein, stylometry is a modern method of analyzing literary style and deals largely with the aforementioned elements of writing. With this as background, we model text using a set of stylometric attributes to characterize the variations in writing style present in it. We explore their effectiveness in determining informativeness in text. To the best of our knowledge, this is the first use of stylometric attributes to determine informativeness in statistical NLP. In doing so, we use texts of different genres, viz., scientific papers, technical reports, emails and newspaper articles, selected from assorted domains such as agriculture, physics, and biomedical science. The variety of NLP systems that have benefited from incorporating these stylometric attributes somewhere in their computational realm, across this set of multifarious texts, suggests that the attributes can be regarded as an effective solution for identifying informativeness in text. In addition to the variety of text genres and domains, the potential of stylometric attributes is also explored in several NLP application areas, including biomedical relation mining, automatic keyphrase indexing, spam classification, and text summarization, where performance improvement is both important and challenging. The success of the attributes in all these areas further highlights their usefulness.
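    As an illustration of the kind of stylometric attributes discussed above, the following is a minimal sketch under the assumption that simple surface statistics (word length, function-word ratio, a crude syllable estimate) stand in for the thesis's actual feature set; the function names and word lists are hypothetical, and the resulting feature vector could feed any downstream classifier that flags informative passages.
```python
# Sketch: a handful of stylometric features computed from raw text.
import re

FUNCTION_WORDS = {"the", "a", "an", "of", "in", "on", "and", "or", "to", "is", "are"}

def count_syllables(word):
    """Very rough syllable estimate: count runs of vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def stylometric_features(text):
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return {}
    n = len(words)
    return {
        "avg_word_length": sum(len(w) for w in words) / n,
        "function_word_ratio": sum(w.lower() in FUNCTION_WORDS for w in words) / n,
        "avg_syllables_per_word": sum(count_syllables(w) for w in words) / n,
        "long_word_ratio": sum(len(w) > 6 for w in words) / n,
    }

print(stylometric_features("The enzyme catalyses the phosphorylation of the substrate."))
```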

    Saving a Seat for a Sister: A Grounded Theory Approach Exploring the Journey of Women Reaching Top Policing Executive Positions

    The world of women in law enforcement is a thought-provoking one that has received increasing attention both in academia and in practice over the past few decades. Even more intriguing, and despite advances in the profession, is the low number of women in executive leadership positions in law enforcement. There is a vast underrepresentation of women in top executive leadership positions across the 18,000 law enforcement agencies in the United States. The purpose of this study was to gain an understanding of the complex journey of women to top executive policing leadership positions. Embracing a positive psychology approach, the study used grounded theory in combination with situational analysis to answer one overarching question: What have been the experiences of women leaders in policing as they have progressed in the profession to executive rank? This allowed for a comprehensive exploration of the micro, or individual-level, factors alongside the meso and macro factors, encompassing larger group interactions, social structures, and institutions, that from the women’s perception had been critical in their leadership experiences. The study offers a theoretical model, A Web of Intersections, as a framework for understanding the complex journey of women and the social processes and multiple intersections they have learned to navigate that can, in combination, help them advance to top executive policing leadership positions. The women in this study are agentic and not simply following the lead. They are active, deliberate, and intentional participants in their own journeys, making critical and strategic decisions that can gain them entry to policy decision-making and result in sustainable change. This dissertation is available in open access at AURA: Antioch University Repository and Archive, http://aura.antioch.edu/ and OhioLINK ETD Center, https://etd.ohiolink.edu

    Quality of Experience-Enabled Social Networks

    Social Networks (SNs), such as Facebook, Twitter and LinkedIn, have become ubiquitous in our daily life. However, as the number of SN users and the amount of SN usage grow, so does the demand for better user Quality of Experience (QoE). For instance, some users would prefer to filter certain posts, e.g. unwanted friendship requests or particular categories of posts such as sports-related posts. Users may also prefer to subscribe to a higher Quality of Service (QoS) level with their SN provider in order to have, for instance, higher priority when posting or retrieving. 3GPP 4G Evolved Packet Core (EPC)-based systems are all-IP network architectures that enable users to connect to mobile networks through their mobile devices and to move seamlessly from one access technology to another. EPC systems enable service provisioning with guaranteed and differentiated end-to-end QoS. This thesis proposes a novel architecture that enables differentiated QoS and information filtering in SNs to improve users’ QoE. The SN is deployed on top of 3GPP 4G EPC-based systems and uses EPC services to provide guaranteed and differentiated QoS. The components of the proposed architecture interact through RESTful web services. This architecture allows users to filter posts using their own criteria and to have priority over other users in posting and/or retrieving, thereby improving users’ QoE. A proof-of-concept prototype tool has been implemented to illustrate the viability of the proposed architecture, and its performance has been partially evaluated.
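    To illustrate the two user-facing ideas, per-user filtering criteria and priority handling for higher QoS tiers, the following is a minimal sketch, not the proposed EPC-based architecture or its RESTful interfaces; all class names, post fields, and priority levels are assumptions made for illustration.
```python
# Sketch: per-user post filtering plus a priority queue for higher-tier subscribers.
import heapq
from dataclasses import dataclass, field
from itertools import count

@dataclass
class FilterCriteria:
    blocked_categories: set = field(default_factory=set)
    block_friend_requests: bool = False

    def allows(self, post):
        """Return True if the post passes this user's filtering criteria."""
        if post["type"] == "friend_request" and self.block_friend_requests:
            return False
        return post.get("category") not in self.blocked_categories

class PostQueue:
    """Serve pending posts by subscription priority (lower number = higher QoS tier)."""
    def __init__(self):
        self._heap, self._seq = [], count()

    def submit(self, priority, post):
        heapq.heappush(self._heap, (priority, next(self._seq), post))

    def next_post(self):
        return heapq.heappop(self._heap)[2]

criteria = FilterCriteria(blocked_categories={"sports"}, block_friend_requests=True)
queue = PostQueue()
for post in [{"type": "status", "category": "sports", "text": "Match tonight"},
             {"type": "status", "category": "news", "text": "Release notes"}]:
    if criteria.allows(post):
        queue.submit(priority=1, post=post)
print(queue.next_post()["text"])  # only the non-sports post was queued
```
    The monotonically increasing counter in the heap entries is just a tie-breaker so that posts with equal priority are served in submission order without comparing the post dictionaries themselves.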

    Working Papers: Astronomy and Astrophysics Panel Reports

    The papers of the panels appointed by the Astronomy and Astrophysics Survey Committee are compiled. These papers were advisory to the survey committee and represent the opinions of the members of each panel in the context of their individual charges. The following subject areas are covered: radio astronomy, infrared astronomy, optical/IR from the ground, UV-optical from space, interferometry, high energy from space, particle astrophysics, theory and laboratory astrophysics, solar astronomy, planetary astronomy, computing and data processing, policy opportunities, benefits to the nation from astronomy and astrophysics, status of the profession, and science opportunities.