2,933 research outputs found

    On the Mono- and Cross-Language Detection of Text Re-Use and Plagiarism

    Full text link
    Barrón Cedeño, LA. (2012). On the Mono- and Cross-Language Detection of Text Re-Use and Plagiarism [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/16012

    Semantic recovery of traceability links between system artifacts

    Get PDF
    This paper introduces a mechanism to recover traceability links between the requirements and logical models in the context of critical systems development. Currently, lifecycle processes are covered by a good number of tools that are used to generate different types of artifacts. One of the cornerstone capabilities in the development of critical systems lies in the possibility of automatically recovering traceability links between system artifacts generated in different lifecycle stages. To do so, it is necessary to establish to what extent two or more of these work products are similar, are dependent on each other, or should be explicitly linked together. However, the different types of artifacts and their internal representations pose a major challenge to unifying how system artifacts are represented and then linked together. That is why, in this work, a concept-based representation is introduced to provide a semantic and unified description of any system artifact. Furthermore, a traceability function is defined and implemented to exploit this new semantic representation and to support the recovery of traceability links between different types of system artifacts. In order to evaluate the traceability function, a case study in the railway domain is conducted to compare the precision and recall of recovering traceability links between text-based requirements and logical model elements. As the main outcome of this work, the use of a concept-based paradigm to represent system artifacts is demonstrated to be a building block for automatically recovering traceability links within the development lifecycle of critical systems.
    The research leading to these results has received funding from the H2020 ECSEL Joint Undertaking (JU) under Grant Agreement No. 826452 "Arrowhead Tools for Engineering of Digitalisation Solutions" and from specific national programs and/or funding authorities.

    Explainable Artificial Intelligence (XAI) 2.0: A Manifesto of Open Challenges and Interdisciplinary Research Directions

    Full text link
    As systems based on opaque Artificial Intelligence (AI) continue to flourish in diverse real-world applications, understanding these black box models has become paramount. In response, Explainable AI (XAI) has emerged as a field of research with practical and ethical benefits across various domains. This paper not only highlights the advancements in XAI and its application in real-world scenarios but also addresses the ongoing challenges within XAI, emphasizing the need for broader perspectives and collaborative efforts. We bring together experts from diverse fields to identify open problems, striving to synchronize research agendas and accelerate XAI in practical applications. By fostering collaborative discussion and interdisciplinary cooperation, we aim to propel XAI forward, contributing to its continued success. Our goal is to put forward a comprehensive proposal for advancing XAI. To achieve this goal, we present a manifesto of 27 open problems categorized into nine categories. These challenges encapsulate the complexities and nuances of XAI and offer a road map for future research. For each problem, we provide promising research directions in the hope of harnessing the collective intelligence of interested stakeholders.

    Academic integrity: a call to research and action

    Get PDF
    Originally published in French: L'urgence de l'intégrité académique, Éditions EMS, Management & société, Caen, 2021 (ISBN 978-2-37687-472-0). The urgency of doing complements the urgency of knowing. Urgency here is not the inconsequential injunction of irrational immediacy. It arises in various contexts for good reasons, when there is a threat to human existence and harm to others. Today, our knowledge-based civilization is put at risk both by new models of knowledge production and by the shamelessness of knowledge delinquents, exposing a great many people to serious risks. In swift response to this diagnosis, the editors have assembled a reference tool for academic integrity. Through dialogues across the twenty-five chapters and five major themes, the ethical response shapes pragmatic horizons for action across a range of disciplinary competencies, from science to international diplomacy. An interdisciplinary work indispensable for teachers, students, university researchers, and administrators.

    Patterns in reef fish assemblages as determined by baited remote underwater video (BRUV) along the western side of False Bay: effects of site, depth and protection status

    Get PDF
    By protecting ecosystems from exploitation, no-take zones are considered the principal means by which marine species and their populations can be conserved for future generations. To be successful, no-take zones require continuous monitoring of the fish community to evaluate the response of marine ecosystems to anthropogenic impacts and environmental change. An understanding of the patterns of species composition, abundance, and distribution allows monitoring efforts to be focused, efficient, and properly interpreted. Baited remote underwater video (BRUV) was used to examine the effects of site, depth, and level of protection on the diversity and relative abundance of temperate reef fish within the Table Mountain National Park (TMNP) Marine Protected Area (MPA). Four no-take zones and adjacent exploited areas, subject to conventional management restrictions, were sampled monthly over a four-month period. A total of 36 species from three marine classes and 18 families was recorded. Species diversity (Shannon-Wiener index) was found to increase at sites closest to the mouth of the bay, whilst species abundance was found to increase with depth. Results indicated no consistent response to protection status among the sites at either the community or individual species level. However, the oldest no-take zone proclaimed for the purposes of reef conservation was found to harbour higher species diversity and a higher relative abundance of fish compared to its respective exploited area. Furthermore, the similar frequencies with which hottentot (Pachymetopon blochii) and roman (Chrysoblephus laticeps) were observed across the four study sites suggest that these two commercially important species are successfully recruiting both inside and outside the no-take zones.
    These results indicate that physical factors, rather than protection status, within False Bay influence patterns of fish assemblage composition, abundance, and distribution. In future, and to improve comparability, assessments within the TMNP MPA should be designed to target similar locations and depth ranges within the bay. The success of no-take zones must be evaluated according to their individual design and management goals.

    The New Legal Landscape for Text Mining and Machine Learning

    Get PDF
    Now that the dust has settled on the Authors Guild cases, this Article takes stock of the legal context for TDM research in the United States. This reappraisal begins in Part I with an assessment of exactly what the Authors Guild cases did and did not establish with respect to the fair use status of text mining. Those cases held unambiguously that reproducing copyrighted works as one step in the process of knowledge discovery through text data mining was transformative, and thus ultimately a fair use of those works. Part I explains why those rulings followed inexorably from copyright's most fundamental principles. It also explains why the precedent set in the Authors Guild cases is likely to remain settled law in the United States. Parts II and III address legal considerations for would-be text miners and their supporting institutions beyond the core holding of the Authors Guild cases. The Google Books and HathiTrust cases held, in effect, that copying expressive works for non-expressive purposes was justified as fair use. This addresses the most significant issue for the legality of text data mining research in the United States; however, the legality of non-expressive use is far from the only legal issue that researchers and their supporting institutions must confront if they are to realize the full potential of these technologies. Neither case addressed issues arising under contract law, laws prohibiting computer hacking, laws prohibiting the circumvention of technological protection measures (i.e., encryption and other digital locks), or cross-border copyright issues. Furthermore, although Google Books addressed the display of snippets of text as part of the communication of search results, and both Authors Guild cases addressed security issues that might bear upon the fair use claim, those holdings were a product of the particular factual circumstances of those cases and can only be extended cautiously to other contexts.
    Specifically, Part II surveys the legal status of TDM research in other important jurisdictions and explains some of the key differences between the law in the United States and the law in the European Union. It also explains how researchers can predict which law will apply in different situations. Part III sets out a four-stage model of the lifecycle of text data mining research and uses this model to identify and explain the relevant legal issues beyond the core holdings of the Authors Guild cases in relation to TDM as a non-expressive use.

    Design of a Controlled Language for Critical Infrastructures Protection

    Get PDF
    We describe a project for the construction of a controlled language for critical infrastructures protection (CIP). This project originates from the need to coordinate and categorize the communications on CIP at the European level. These communications can be physically represented by official documents, reports on incidents, informal communications, and plain e-mail. We explore the application of traditional library science tools for the construction of controlled languages in order to achieve our goal. Our starting point is an analogous work done during the sixties in the field of nuclear science, known as the Euratom Thesaurus.
    JRC.G.6 - Security technology assessment

    The Future of Information Sciences: INFuture2011: Information Sciences and e-Society

    Get PDF