
    Automatic domain-specific learning: towards a methodology for ontology enrichment

    At the current rate of technological development, in a world where enormous amounts of data are constantly created and in which the Internet is the primary means of information exchange, there is a need for tools that help process, analyze and use that information. However, while the growth of information presents many opportunities for social and scientific advances, it has also highlighted the difficulty of extracting meaningful patterns from massive data. Ontologies have been claimed to play a major role in the processing of large-scale data, as they serve as universal models of knowledge representation, and are being studied as possible solutions to this problem. This paper presents a method for the automatic expansion of ontologies based on corpus and terminological data exploitation. The proposed "ontology enrichment method" (OEM) consists of a sequence of tasks aimed at automatically classifying an input keyword under its corresponding node within a target ontology. Results show that the method can be successfully applied to the automatic classification of specialized units into a reference ontology. Financial support for this research was provided by the DGI, Spanish Ministry of Education and Science, grant FFI2011-29798-C0201. Ureña Gómez-Moreno, P.; Mestre-Mestre, EM. (2017). Automatic domain-specific learning: towards a methodology for ontology enrichment. LFE. Revista de Lenguas para Fines Específicos. 23(2):63-85. http://hdl.handle.net/10251/148357
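
    The abstract gives no implementation; the following is a minimal sketch of the classification step under one plausible reading, assuming the keyword and each ontology node are represented by bag-of-words context vectors built from a domain corpus (all names and data below are hypothetical, not the paper's OEM):

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values()))
    norm *= math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def classify_keyword(keyword_ctx: Counter, node_ctx: dict) -> str:
    """Attach the keyword to the ontology node whose corpus context is
    most similar; node_ctx maps node labels to context vectors."""
    return max(node_ctx, key=lambda n: cosine(keyword_ctx, node_ctx[n]))

# Toy contexts; in practice these would be derived from a domain corpus.
nodes = {
    "Disease":   Counter({"symptom": 4, "patient": 3, "chronic": 2}),
    "Treatment": Counter({"dose": 4, "therapy": 3, "patient": 1}),
}
print(classify_keyword(Counter({"therapy": 2, "dose": 1}), nodes))  # -> Treatment
```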

    Language technologies for an eLearning scenario

    One of the problems eLearning platforms face when collating documents from different sources is the retrieval and accessibility of those documents. Enriching documents with additional metadata using Language Technologies enables users to access information more effectively. In this paper we present an overview of the objectives and results of the LT4eL Project, which aims to provide Language Technologies to eLearning platforms and to integrate semantic knowledge to facilitate the management, distribution and retrieval of learning material. Peer-reviewed.
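
    The LT4eL tools themselves are not shown in this abstract; purely as an illustration of the general idea of enriching learning material with automatically extracted metadata, here is a simple TF-IDF keyword extractor (all names and the corpus are invented):

```python
# Minimal illustration of metadata enrichment by keyword extraction,
# not the LT4eL tooling itself; uses scikit-learn's TF-IDF vectorizer.
from sklearn.feature_extraction.text import TfidfVectorizer

def extract_keywords(docs, top_k=5):
    """Return the top_k highest-TF-IDF terms per document as metadata."""
    vec = TfidfVectorizer(stop_words="english")
    tfidf = vec.fit_transform(docs)
    terms = vec.get_feature_names_out()
    keywords = []
    for row in tfidf:
        dense = row.toarray().ravel()
        top = dense.argsort()[::-1][:top_k]
        keywords.append([terms[i] for i in top if dense[i] > 0])
    return keywords

corpus = [
    "Ontologies support semantic search in eLearning platforms.",
    "Metadata improves retrieval of learning material.",
]
for kw in extract_keywords(corpus):
    print(kw)
```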

    Design of a Controlled Language for Critical Infrastructures Protection

    We describe a project for the construction of a controlled language for critical infrastructure protection (CIP). This project originates from the need to coordinate and categorize communications on CIP at the European level. These communications can be physically represented by official documents, reports on incidents, informal communications and plain e-mail. We explore the application of traditional library science tools for the construction of controlled languages in order to achieve this goal. Our starting point is an analogous effort from the sixties in the field of nuclear science, the Euratom Thesaurus. JRC.G.6 - Security technology assessment.
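
    The project's actual vocabulary is not given here; as a toy sketch of the basic device of controlled vocabularies such as the Euratom Thesaurus, the mapping of free-text variants to preferred descriptors, consider the following (all entries invented):

```python
# Toy controlled-vocabulary lookup: free-text variants are replaced by
# preferred descriptors, the core "USE" relation of a thesaurus. The
# entries below are invented for illustration only.
THESAURUS = {
    "power grid": "electric power transmission",
    "blackout": "power outage",
    "scada": "industrial control systems",
}

def normalize_terms(text: str) -> set:
    """Return the preferred descriptors for all variants found in text."""
    lowered = text.lower()
    return {descriptor for variant, descriptor in THESAURUS.items()
            if variant in lowered}

report = "SCADA failure caused a blackout across the power grid."
print(normalize_terms(report))
```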

    Utilizing artificial intelligence in perioperative patient flow: systematic literature review

    The purpose of this thesis was to map the existing landscape of artificial intelligence (AI) applications used in secondary healthcare, with a focus on perioperative care. The goal was to find out what systems have been developed and how capable they are of controlling perioperative patient flow. The review was guided by the following research question: How is AI currently utilized in patient flow management in the context of perioperative care? This systematic literature review examined the current evidence on the use of AI in perioperative patient flow. A comprehensive search was conducted in four databases, resulting in 33 articles meeting the inclusion criteria. The findings demonstrated that AI technologies, such as machine learning (ML) algorithms and predictive analytics tools, have shown somewhat promising outcomes in optimizing perioperative patient flow. Specifically, AI systems have proven effective in predicting surgical case durations, assessing risks, planning treatments, supporting diagnosis, improving bed utilization, reducing cancellations and delays, and enhancing communication and collaboration among healthcare providers. However, several challenges were identified, including the need for accurate and reliable data sources, ethical considerations, and the potential for biased algorithms. Further research is needed to validate and optimize the application of AI in perioperative patient flow. The contribution of this thesis is a summary of the current state of AI applications in perioperative patient flow, covering the features of perioperative patient flow and the clinical tasks of the AI applications identified.
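
    The review prescribes no implementation; purely as an illustrative sketch of one task it highlights, predicting surgical case duration, here is a minimal regression example on synthetic data (all features and values are invented placeholders):

```python
# Illustrative only: a minimal ML model for surgical case duration,
# trained and evaluated on synthetic data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical features: procedure complexity (1-5), patient ASA class
# (1-4), and the surgeon's historical median duration in minutes.
X = np.column_stack([
    rng.integers(1, 6, n),
    rng.integers(1, 5, n),
    rng.normal(90, 20, n),
])
# Synthetic target: duration in minutes with additive noise.
y = 30 + 15 * X[:, 0] + 5 * X[:, 1] + 0.6 * X[:, 2] + rng.normal(0, 10, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
print(f"MAE: {np.abs(model.predict(X_te) - y_te).mean():.1f} minutes")
```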

    Neurocognitive Informatics Manifesto.

    Informatics studies all aspects of the structure of natural and artificial information systems. Theoretical and abstract approaches to information have made great advances, but human information processing is still unmatched in many areas, including information management, representation and understanding. Neurocognitive informatics is a new, emerging field that should help to improve the matching of artificial and natural systems, and inspire better computational algorithms to solve problems that are still beyond the reach of machines. This position paper gives examples of neurocognitive inspirations and promising directions in this area.

    Large Scale Distributed Knowledge Infrastructures


    Quality aspects in institutional translation

    The purpose of this volume is to explore key issues, approaches and challenges to quality in institutional translation by confronting academics’ and practitioners’ perspectives. What the reader will find in this book is an interplay of two approaches: academic contributions providing the conceptual and theoretical background for discussing quality on the one hand, and chapters exploring selected aspects of quality and case studies from both academics and practitioners on the other. Our aim is to present these two approaches as a breeding ground for testing one vis-à-vis the other. This book studies institutional translation mostly through the lens of the European Union (EU) reality, and, more specifically, of EU institutions and bodies, due to the unprecedented scale of their multilingual operations and the legal and political importance of translation. Thus, it is concerned with the supranational (international) level, deliberately leaving national and other contexts aside. Quality in supranational institutions is explored both in terms of translation processes and their products – the translated texts.

    Text representation using canonical data model

    The development of digital technology and the World Wide Web has led to an increase in digital documents used for various purposes such as publishing, which in turn has raised awareness of the need for effective techniques to support the search and retrieval of text. Text representation plays a crucial role in representing text in a meaningful way, and the clarity of a representation depends closely on the choice of text representation method. Traditional document representation methods such as term frequency-inverse document frequency (TF-IDF) ignore the relationships and meanings of words in documents; as a result, the sparsity and semantic problems that are predominant in textual documents are not resolved. In this research, the sparsity and semantic problems are reduced by proposing a Canonical Data Model (CDM) for text representation. The CDM is constructed through an accumulation of syntactic and semantic analysis. The 20 Newsgroups dataset was used in this research to test the validity of the CDM for text representation. The text documents go through a number of preprocessing steps and syntactic parsing in order to identify sentence structure, and then the TF-IDF method is used to represent the text through the CDM. The findings showed that the CDM was efficient for representing text, based on model validation through language experts’ review and the percentage scores of the similarity measurement methods.
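
    The paper's CDM itself is not reproduced here; as a minimal sketch of its TF-IDF baseline stage on the 20 Newsgroups corpus the study used (scikit-learn names, with the CDM's syntactic and semantic layers omitted):

```python
# Not the CDM itself: a sketch of the TF-IDF representation stage on
# the 20 Newsgroups corpus, plus a document-similarity check.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = fetch_20newsgroups(subset="train",
                          remove=("headers", "footers", "quotes"))
tfidf = TfidfVectorizer(stop_words="english", max_features=20000)
X = tfidf.fit_transform(docs.data[:1000])

# Similarity between two documents under the plain TF-IDF representation;
# the CDM additionally layers syntactic and semantic analysis on top.
print(cosine_similarity(X[0], X[1])[0, 0])
```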