
    Semantic Modeling of Analytic-based Relationships with Direct Qualification

    Successfully modeling state- and analytics-based semantic relationships of documents enhances representation, importance, relevancy, provenance, and priority of the document. These attributes are the core elements that form the machine-based knowledge representation for documents. However, modeling document relationships that can change over time can be inelegant, limited, complex, or overly burdensome for semantic technologies. In this paper, we present Direct Qualification (DQ), an approach for modeling any semantically referenced document, concept, or named graph with results from associated applied analytics. The proposed approach supplements the traditional subject-object relationships by providing a third leg to the relationship: the qualification of how and why the relationship exists. To illustrate, we show a prototype of an event-based system with a realistic use case for applying DQ to relevancy analytics of PageRank and Hyperlink-Induced Topic Search (HITS). Comment: Proceedings of the 2015 IEEE 9th International Conference on Semantic Computing (IEEE ICSC 2015).
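
    The abstract describes qualifying a subject-object relationship with the results of applied analytics such as PageRank and HITS. As a rough sketch of that idea only (not the paper's actual DQ vocabulary or implementation), one could compute the analytics with networkx and attach them to citation triples as explicit qualification nodes in rdflib; the ex: namespace and property names below are invented for illustration.

```python
# Illustrative sketch only: attaches PageRank/HITS scores to document-citation
# triples as explicit qualification nodes. The EX namespace and property names
# are hypothetical, not the paper's actual DQ vocabulary.
import networkx as nx
from rdflib import Graph, Namespace, Literal, BNode
from rdflib.namespace import RDF

EX = Namespace("http://example.org/dq/")

# Toy citation network among three documents.
cites = [("docA", "docB"), ("docA", "docC"), ("docB", "docC")]
net = nx.DiGraph(cites)

pagerank = nx.pagerank(net)          # relevancy analytic 1
hubs, authorities = nx.hits(net)     # relevancy analytic 2

rdf = Graph()
rdf.bind("ex", EX)
for src, dst in cites:
    # Traditional subject-object relationship.
    rdf.add((EX[src], EX.cites, EX[dst]))

    # "Third leg": a qualification node recording how/why the relationship
    # matters, carrying the applied-analytics results for the cited document.
    q = BNode()
    rdf.add((q, RDF.type, EX.Qualification))
    rdf.add((q, EX.qualifiesSubject, EX[src]))
    rdf.add((q, EX.qualifiesObject, EX[dst]))
    rdf.add((q, EX.pageRankOfObject, Literal(round(pagerank[dst], 4))))
    rdf.add((q, EX.hitsAuthorityOfObject, Literal(round(authorities[dst], 4))))

print(rdf.serialize(format="turtle"))
```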

    Safe to Be Open: Study on the Protection of Research Data and Recommendations for Access and Usage

    Openness has become a common concept in a growing number of scientific and academic fields. Expressions such as Open Access (OA) or Open Content (OC) are often employed for the publication of papers and research results, or are contained as conditions in tenders issued by a number of funding agencies. More recently, the concept of Open Data (OD) has attracted growing interest in some fields, particularly those that produce large amounts of data, which are not usually protected by standard legal tools such as copyright. However, a thorough understanding of the meaning of Openness, especially its legal implications, is usually lacking. Open Access, Public Access, Open Content, Open Data, Public Domain: all these terms are often employed to indicate that a given paper, repository or database does not fall under the traditional “closed” scheme of default copyright rules. However, the differences between these terms are often largely ignored or misrepresented, especially when the scientist in question is not familiar with the law generally and copyright in particular, a very common situation in all scientific fields.

    On 17 July 2012 the European Commission published its Communication to the European Parliament and the Council entitled “Towards better access to scientific information: Boosting the benefits of public investments in research”. As the Commission observes, “discussions of the scientific dissemination system have traditionally focused on access to scientific publications – journals and monographs. However, it is becoming increasingly important to improve access to research data (experimental results, observations and computer-generated information), which forms the basis for the quantitative analysis underpinning many scientific publications”. The Commission believes that, through more complete and wider access to scientific publications and data, the pace of innovation will accelerate and researchers will collaborate, so that duplication of effort is avoided. Moreover, open research data will allow other researchers to build on previous research results, and will allow the involvement of citizens and society in the scientific process. In the Communication the Commission makes explicit reference to open access models of publication and dissemination of research results, and the reference is not only to access and use but, most significantly, to the reuse of publications as well as research data.

    The Communication marks an official new step on the road to open access to publicly funded research results in science and the humanities in Europe. Scientific publications are no longer the only element of its open access policy: research data upon which publications are based should now also be made available to the public. As noble as the open access goal is, however, the expansion of the open access policy to publicly funded research data raises a number of legal and policy issues that are often distinct from those concerning the publication of scientific articles and monographs. Since open access to research data, rather than publications, is a relatively new policy objective, less attention has been paid to the specific features of research data. An analysis of the legal status of such data, and of how to make it available under the correct licence terms, is therefore the subject of the following sections.

    Re-framing student academic freedom: a capability perspective

    The scholarly debate about academic freedom focuses almost exclusively on the rights of academic faculty. Student academic freedom is rarely discussed and is normally confined to debates connected with the politicisation of the curriculum. Concerns about (student) freedom of speech reflect the dominant role of negative rights in the analysis of academic freedom, representing ‘threats’ to academic freedom in terms of rights that may be taken away from a person rather than conferred on them. This paper draws on the distinction between negative and positive rights, and on the work of Sen (1999), to re-frame student academic freedom as a capability. It is argued that capability deprivation has a negative impact on the extent to which students can exercise academic freedom in practice, and that student capability can be enhanced through a liberal education that empowers rather than domesticates students.

    Designometry – Formalization of Artifacts and Methods

    Two interconnected surveys are presented, one of artifacts and one of designometry. Artifacts are objects that have an originator and do not exist in nature. Designometry is a new field of study that aims to identify the originators of artifacts. The space of artifacts is described, along with the domains that already pursue designometry, currently without collaboration or common methodologies. On this basis, synergies as well as a generic axiom and heuristics for the quest to identify the creators of artifacts are introduced. While designometry has various areas of application, research on methods to detect the originators of artificial minds, which constitute a subgroup of artifacts, can be seen as particularly relevant and, in the case of malevolent artificial minds, as a contribution to AI safety.

    Tongue-Tied by Authorities: Library of Congress Vocabularies and the Shakespeare Authorship Question

    Despite the existence of a vast literature reflecting hundreds of years of scholarship questioning the authorship of the works of Shakespeare, the conventional Library of Congress Name Authority File and Library of Congress Subject Headings (LCSH) are unable to accurately describe this literature owing to their assumption that the author was William Shakspere of Stratford-upon-Avon. Adopting a pragmatic, philosophically realist perspective based in social epistemology, this article highlights past and current deficiencies in the authority records concerning Shakespeare and proposes changes that would better reflect the nature and purpose of this literature, as well as the historic signifiers of the named persons in question. doi: 10.1080/01639374.2022.212447

    How to do research on the societal impact of research? Studies from a semantic perspective

    We review some recent work from our research lab that has applied novel text mining techniques to the issue of research impact assessment. The techniques are Semantic Hypergraphs and Lexicon-based Named Entity Recognition. Using these techniques, we address two distinct and open issues in research impact assessment: the epistemological and logical status of impact assessment, and the construction of quantitative indicators.
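
    The review names Lexicon-based Named Entity Recognition among its techniques. As a generic illustration of the underlying idea, and not the authors' pipeline, a minimal dictionary-matching tagger can be sketched as follows; the lexicon entries and labels are invented for the example.

```python
# Minimal illustration of lexicon-based NER: scan text for known phrases from a
# hand-built lexicon and tag them. Generic sketch, not the paper's pipeline;
# the lexicon entries and labels below are invented.
import re

LEXICON = {
    "european commission": "FUNDER",
    "horizon 2020": "PROGRAMME",
    "university of bologna": "ORGANISATION",
}

def lexicon_ner(text):
    """Return (start, end, surface, label) for each lexicon match in text."""
    matches = []
    lowered = text.lower()
    for phrase, label in LEXICON.items():
        for m in re.finditer(r"\b" + re.escape(phrase) + r"\b", lowered):
            matches.append((m.start(), m.end(), text[m.start():m.end()], label))
    return sorted(matches)

sample = "The project, funded by the European Commission under Horizon 2020, studied impact."
for start, end, surface, label in lexicon_ner(sample):
    print(f"{surface!r} -> {label} [{start}:{end}]")
```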

    A training curriculum for retrieving, structuring, and aggregating information derived from the biomedical literature and large-scale data repositories

    Background: Biomedical research over the past two decades has become data and information rich. This trend has been driven in large part by the development of systems-scale molecular profiling capabilities and by the increasingly large volume of publications contributed by the biomedical research community. It has therefore become important for early career researchers to learn to leverage this wealth of information in their own research. Methods: Here we describe in detail a training curriculum focusing on the development of the foundational skills necessary to retrieve, structure, and aggregate information from vast publicly available stores. It is provided along with supporting material and an illustrative use case. The stepwise workflow encompasses: 1) selecting a candidate gene; 2) retrieving background information about the gene; 3) profiling its literature; 4) identifying in the literature instances where its transcript abundance changes in the blood of patients; 5) retrieving transcriptional profiling data from public blood transcriptome and reference datasets; and 6) drafting a manuscript, submitting it for peer review, and publication. Results: This resource may be leveraged by instructors who wish to organize hands-on workshops. It can also be used by independent trainees as a self-study toolkit. The workflow presented as proof-of-concept was designed to establish a resource for assessing a candidate gene's potential utility as a blood transcriptional biomarker. Trainees will learn to retrieve literature and public transcriptional profiling data associated with a specific gene of interest. They will also learn to extract, structure, and aggregate this information to support downstream interpretation efforts as well as the preparation of a manuscript. Conclusions: This resource should support early career researchers in their efforts to acquire skills that will permit them to leverage the vast amounts of publicly available large-scale profiling data.
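
    Step 3 of the workflow (profiling a candidate gene's literature) can be approached programmatically. The sketch below shows one hedged possibility using NCBI's public E-utilities esearch endpoint; it is not the curriculum's own tooling, and the gene symbol and query terms are placeholders.

```python
# Rough sketch of literature profiling for a candidate gene (workflow step 3),
# using NCBI's public E-utilities esearch endpoint. Not the curriculum's own
# tooling; the gene symbol and query terms are placeholders.
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_count(query):
    """Return the number of PubMed records matching a query."""
    params = {"db": "pubmed", "term": query, "retmode": "json", "retmax": 0}
    resp = requests.get(ESEARCH, params=params, timeout=30)
    resp.raise_for_status()
    return int(resp.json()["esearchresult"]["count"])

gene = "ACE2"  # placeholder candidate gene
total = pubmed_count(f"{gene}[Title/Abstract]")
in_blood = pubmed_count(f"{gene}[Title/Abstract] AND blood[Title/Abstract]")
print(f"{gene}: {total} PubMed records, {in_blood} also mentioning blood")
```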

    Veracity and velocity of social media content during breaking news: analysis of November 2015 Paris shootings

    Social media sources are becoming increasingly important in journalism. Under breaking-news deadlines, semi-automated support for identifying and verifying content is critical. We describe a large-scale content-level analysis of over 6 million Twitter, YouTube, and Instagram records covering the first 6 hours of the November 2015 Paris shootings. We ground our analysis by tracing how 5 ground-truth images used in actual news reports went viral. We look at the velocity of newsworthy content and its veracity with regard to trusted-source attribution. We also examine temporal segmentation combined with statistical frequency counters to identify likely eyewitness content for input to real-time breaking-content feeds. Our results suggest that attribution to trusted sources might be a good indicator of content veracity, and that temporal segmentation coupled with frequency statistics could be used to highlight eyewitness content in real time if applied with some additional text filters.
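
    As a rough sketch of the kind of temporal segmentation plus frequency counting the abstract describes (not the authors' actual pipeline), posts can be bucketed into fixed time windows and terms flagged when their counts rise relative to the previous window; the example posts and the 5-minute window size are invented.

```python
# Toy sketch of temporal segmentation + frequency counting to surface bursty,
# possibly eyewitness-related terms. Not the authors' pipeline; the posts and
# the 5-minute window size are invented for illustration.
from collections import Counter, defaultdict
from datetime import datetime

WINDOW_SECONDS = 300  # 5-minute segments

posts = [  # (timestamp, text) placeholders
    (datetime(2015, 11, 13, 21, 20), "explosion heard near the stadium"),
    (datetime(2015, 11, 13, 21, 22), "loud explosion stade de france"),
    (datetime(2015, 11, 13, 21, 47), "shooting reported bataclan"),
    (datetime(2015, 11, 13, 21, 49), "police outside bataclan shooting"),
]

# 1) Temporal segmentation: bucket posts into fixed-size windows.
segments = defaultdict(Counter)
for ts, text in posts:
    window = int(ts.timestamp()) // WINDOW_SECONDS
    segments[window].update(text.lower().split())

# 2) Frequency counters: flag terms whose count rises versus the previous window.
for window in sorted(segments):
    prev = segments.get(window - 1, Counter())
    rising = [t for t, c in segments[window].items() if c > prev.get(t, 0)]
    print(datetime.fromtimestamp(window * WINDOW_SECONDS), "->", rising)
```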