
    Terminology localization guidelines for the national scenario

    This paper is a preprint of a paper accepted to LREC 2014, the 9th edition of the Language Resources and Evaluation Conference, held 28-30 May 2014 in Reykjavik, Iceland. The paper presents a set of principles and practical guidelines for terminology work in the national scenario to ensure a harmonized approach to term localization. These linguistic principles and guidelines were elaborated by the Terminology Commission in Latvia in the domain of Information and Communication Technology (ICT). We also present a novel approach to the corpus-based selection and evaluation of the most frequently used terms. Analysis of the terms shows that, in general, localized terms produced in the normative terminology work in Latvia are coined according to these guidelines. We further evaluate how terms included in the database of official terminology are adopted in general use, such as in newspaper articles, blogs, forums, and websites. Our evaluation shows that in a non-normative context the official terminology faces strong competition from other variants of localized terms. Conclusions and recommendations from a lexical analysis of localized terms are provided. We hope that the presented guidelines and evaluation approach will be useful to terminology institutions, regulatory authorities, and researchers in different countries involved in national terminology work. The research leading to these results has received funding from the research project “Optimization methods of large scale statistical models for innovative machine translation technologies” of the European Regional Development Fund, contract no. 2013/0038/2DP/2.1.1.1.0/13/APIA/VIAA/029.
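
    The corpus-based selection and evaluation of frequently used terms described above can be approximated by counting how often competing localized term variants occur in a corpus of general-use texts. The sketch below is a minimal illustration of that idea, not the paper's actual method; the term variants and mini-corpus are invented for the example.

    from collections import Counter
    import re

    def term_frequencies(corpus_texts, candidate_terms):
        """Count how often each candidate localized term occurs in a corpus."""
        counts = Counter()
        for text in corpus_texts:
            lowered = text.lower()
            for term in candidate_terms:
                # word boundaries keep short variants from matching inside longer words
                hits = re.findall(r"\b" + re.escape(term.lower()) + r"\b", lowered)
                counts[term] += len(hits)
        return counts

    # Invented mini-corpus with two competing localizations of "browser"
    corpus = ["datora ekrānā atvērās pārlūkprogramma", "šis pārlūks ir ātrs, pārlūks strādā labi"]
    print(term_frequencies(corpus, ["pārlūkprogramma", "pārlūks"]))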

    ANNIS: a linguistic database for exploring information structure

    In this paper, we discuss the design and implementation of our first version of the database "ANNIS" (ANNotation of Information Structure). For research based on empirical data, ANNIS provides a uniform environment for storing this data together with its linguistic annotations. A central database promotes standardized annotation, which facilitates interpretation and comparison of the data. ANNIS is used through a standard web browser and offers tier-based visualization of data and annotations, as well as search facilities that allow for cross-level and cross-sentential queries. The paper motivates the design of the system, characterizes its user interface, and provides an initial technical evaluation of ANNIS with respect to data size and query processing
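
    The idea of cross-level queries over tier-based annotations can be illustrated with a small, self-contained sketch; it is not ANNIS's actual storage schema or query language, just an invented example of finding overlapping annotations on two independent tiers.

    from dataclasses import dataclass

    @dataclass
    class Annotation:
        layer: str   # annotation tier, e.g. "pos" or "infostruct"
        value: str   # annotation value, e.g. "NNP" or "topic"
        start: int   # first token of the span (inclusive)
        end: int     # one past the last token of the span

    # Toy document: tokens plus annotations on two independent tiers
    tokens = ["Peter", "reads", "the", "book"]
    annotations = [
        Annotation("pos", "NNP", 0, 1),
        Annotation("pos", "DT", 2, 3),
        Annotation("infostruct", "topic", 0, 1),
        Annotation("infostruct", "focus", 2, 4),
    ]

    def cross_level(annotations, layer_a, value_a, layer_b, value_b):
        """Find spans where an annotation on one tier overlaps an annotation on another tier."""
        a_spans = [a for a in annotations if a.layer == layer_a and a.value == value_a]
        b_spans = [b for b in annotations if b.layer == layer_b and b.value == value_b]
        return [(a, b) for a in a_spans for b in b_spans
                if a.start < b.end and b.start < a.end]  # spans overlap

    for a, b in cross_level(annotations, "pos", "NNP", "infostruct", "topic"):
        print(tokens[a.start:a.end], a.value, b.value)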

    Ontology Localization

    Our main goal in this thesis is to propose a solution for building a multilingual ontology through the automatic localization of an ontology. The notion of localization comes from the area of software development, where it refers to the adaptation of a software product to a non-native environment. In ontology engineering, ontology localization can be considered a subtype of software localization in which the product is a shared model of a particular domain, for example an ontology, to be used by a certain application. Specifically, our work introduces a new proposal for the multilingualism problem, describing the methods, techniques and tools for the localization of ontological resources and how multilingualism can be represented in ontologies. The goal of this work is not to advocate a single approach to ontology localization, but rather to show the variety of methods and techniques that can be adapted from other areas of knowledge to reduce the cost and effort of enriching an ontology with multilingual information. We are convinced that there is no single method for ontology localization. We do, however, concentrate on automatic solutions for the localization of these resources. The proposal presented in this thesis provides ontology practitioners with global coverage of the localization activity. In particular, this work offers a formal account of our general localization process, defining its inputs, outputs, and the main steps identified. In addition, the proposal considers several dimensions for localizing an ontology. These dimensions allow us to establish a classification of translation techniques based on methods taken from the machine translation discipline. To facilitate the analysis of these translation techniques, we introduce an evaluation framework that covers their main aspects. Finally, we offer an intuitive view of the whole ontology localization life cycle and outline our approach to defining a system architecture that supports this activity. The proposed model comprises the system components, the externally visible properties of those components, and the relationships between them, and provides a basis from which ontology localization systems can be developed. The main contributions of this work are summarized as follows:
    - A characterization and definition of the ontology localization problems, based on problems found in related areas. The proposed characterization takes into account three different localization problems: translation, information management, and representation of multilingual information.
    - A prescriptive methodology to support the ontology localization activity, based on the localization methodologies used in software engineering and knowledge engineering, kept as general as possible so that it can cover a wide range of scenarios.
    - A classification of ontology localization techniques, which can serve to compare (analytically) different ontology localization systems, as well as to design new ones, taking advantage of state-of-the-art solutions.
    - An integrated method for building ontology localization systems in a distributed and collaborative environment, which takes into account the most appropriate methods and techniques depending on: i) the domain of the ontology to be localized, and ii) the amount of linguistic information required for the final ontology.
    - A modular component to support the storage of the multilingual information associated with each term of the ontology. Our proposal follows the current trend in the integration of multilingual information into ontologies, which suggests that the ontology knowledge and the (multilingual) linguistic information be kept separate and independent.
    - A model based on collaborative workflows to represent the process normally followed in different organizations to coordinate the localization activity across different natural languages.
    - An integrated infrastructure implemented within the NeOn Toolkit, by means of a set of plug-ins and extensions, that supports the collaborative ontology localization process.
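
    The design choice in the contribution on multilingual storage, keeping the (multilingual) linguistic information in a module separate from, and linked to, the ontology's conceptual knowledge, can be sketched with a minimal data model; the classes, URIs and labels below are invented for illustration and are not the NeOn Toolkit plug-ins themselves.

    from dataclasses import dataclass

    @dataclass
    class OntologyClass:
        uri: str                     # language-neutral concept identifier

    @dataclass
    class LexicalEntry:
        concept_uri: str             # link back to the concept, kept outside the ontology
        language: str                # e.g. "en", "es"
        label: str
        provenance: str = "manual"   # e.g. "manual" or "machine translation"

    # Ontology knowledge: only language-independent concepts (invented URI)
    ontology = [OntologyClass("http://example.org/onto#River")]

    # Multilingual information stored in a separate, independent module
    lexicon = [
        LexicalEntry("http://example.org/onto#River", "en", "river"),
        LexicalEntry("http://example.org/onto#River", "es", "río", provenance="machine translation"),
    ]

    def labels_for(uri, language, lexicon):
        """Return all labels recorded for a concept in a given language."""
        return [e.label for e in lexicon if e.concept_uri == uri and e.language == language]

    print(labels_for("http://example.org/onto#River", "es", lexicon))   # ['río']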

    Guidelines for multilingual linked data

    In this article, we argue that there is a growing number of linked datasets in different natural languages, and that there is a need for guidelines and mechanisms to ensure the quality and organic growth of this emerging multilingual data network. However, we have little knowledge regarding the actual state of this data network, its current practices, and the open challenges that it poses. Questions regarding the distribution of natural languages, the links that are established across data in different languages, or how linguistic features are represented, remain mostly unanswered. Addressing these and other language-related issues can help to identify existing problems, propose new mechanisms and guidelines or adapt the ones in use for publishing linked data including language-related features, and, ultimately, provide metrics to evaluate quality aspects. In this article we review, discuss, and extend current guidelines for publishing linked data by focusing on those methods, techniques and tools that can help RDF publishers to cope with language barriers. Whenever possible, we will illustrate and discuss each of these guidelines, methods, and tools on the basis of practical examples that we have encountered in the publication of the datos.bne.es dataset
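
    One of the language-related publishing practices discussed, attaching language tags to literals so that consumers can select labels by language, can be sketched with rdflib; the resource and labels below are invented and are not taken from datos.bne.es.

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDFS

    g = Graph()
    ex = Namespace("http://example.org/resource/")   # invented namespace

    # The same resource gets labels in several natural languages,
    # each carrying a language tag so consumers can filter by language.
    g.add((ex.Quixote, RDFS.label, Literal("Don Quijote de la Mancha", lang="es")))
    g.add((ex.Quixote, RDFS.label, Literal("Don Quixote", lang="en")))

    print(g.serialize(format="turtle"))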

    Usability as a focus of multiprofessional collaboration: a teaching case study on user-centered translation

    As professional communication needs are increasingly multilingual, the merging of translator and technical communicator roles has been predicted. However, it may be more advantageous for these two professional groups to increase cooperation. This means learning to identify and appreciate their distinct but mutually complementary core competencies. Since both professions share the ideology of being the user’s advocate, usability is a common denominator that can function as a focal point of collaboration. While many translation theories focus on the reader and the target context, usability methods have not traditionally been a part of translator training. An innovation called User-Centered Translation (UCT), which is a model based on usability and user-centered design, is intended to help translators speak the same language as technical communicators, and it offers concrete usability tools which have been missing from translation theories. In this teaching case study, we discuss the teaching of four UCT methods: personas, the implied reader, heuristic evaluation, and usability testing. We describe our teaching experiences, analyze student feedback on all four, and report on the implementation of a student assignment on heuristics. This case study suggests ways in which UCT can form an important nexus of professional skills and multiprofessional collaboration
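
    As a rough illustration of what a heuristic evaluation assignment involves, checking a translation against a list of heuristics and recording severity ratings, consider the sketch below; the heuristic names, findings and 0-4 severity scale are invented placeholders rather than the UCT heuristics themselves.

    # Invented usability heuristics for a translation (not the UCT list itself)
    heuristics = ["match with users' terminology", "consistency", "legibility"]

    # Each finding: (heuristic, problem description, severity on an invented 0-4 scale)
    findings = [
        ("consistency", "menu item translated in two different ways", 3),
        ("legibility", "sentence runs over 40 words", 2),
    ]

    def worst_severity_per_heuristic(findings):
        """Keep the most severe finding per heuristic so problem areas stand out."""
        summary = {}
        for heuristic, _, severity in findings:
            summary[heuristic] = max(summary.get(heuristic, 0), severity)
        return summary

    print(worst_severity_per_heuristic(findings))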

    A Survey on Handover Management in Mobility Architectures

    This work presents a comprehensive and structured taxonomy of available techniques for managing the handover process in mobility architectures. Representative works from the existing literature have been divided into appropriate categories, based on their ability to support horizontal handovers, vertical handovers and multihoming. We describe approaches designed to work on the current Internet (i.e. IPv4-based networks), as well as those that have been devised for the "future" Internet (e.g. IPv6-based networks and extensions). Quantitative measures and qualitative indicators are also presented and used to evaluate and compare the examined approaches. This critical review provides valuable guidelines and suggestions for designing and developing mobility architectures, including some practical expedients (e.g. those required in the current Internet environment) aimed at coping with the presence of NATs and firewalls and at supporting legacy systems and the several communication protocols working at the application layer.
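
    A typical quantitative measure used in such comparisons is the interruption a flow experiences during a handover; the sketch below computes handover latency and packet loss from invented event timestamps and packet counts, and is not tied to any particular scheme in the survey.

    def handover_latency(detach_time, reattach_time):
        """Time between losing the old point of attachment and resuming on the new one."""
        return reattach_time - detach_time

    def packet_loss_ratio(sent_during_handover, received_during_handover):
        """Fraction of packets lost while the handover was in progress."""
        if sent_during_handover == 0:
            return 0.0
        return 1.0 - received_during_handover / sent_during_handover

    # Invented measurements for one handover event (seconds, packet counts)
    print(handover_latency(10.20, 10.35))        # ~0.15 s interruption
    print(packet_loss_ratio(200, 188))           # 0.06, i.e. 6% loss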

    Archetype Modeling Methodology

    Clinical Information Models (CIMs) expressed as archetypes play an essential role in the design and development of current Electronic Health Record (EHR) information structures. Although there are many reported experiences of using archetypes in the literature, a comprehensive and formal methodology for archetype modeling does not exist. Having a modeling methodology is essential to develop quality archetypes, in order to guide the development of EHR systems and to allow the semantic interoperability of health data. In this work, an archetype modeling methodology is proposed. This paper describes its phases, the inputs and outputs of each phase, and the involved participants and tools. It also includes a description of possible strategies to organize the modeling process. The proposed methodology is inspired by existing best practices of CIM, software and ontology development. The methodology has been applied and evaluated in regional and national EHR projects. The application of the methodology provided useful feedback and improvements, and confirmed its advantages. The conclusion of this work is that having a formal methodology for archetype development facilitates the definition and adoption of interoperable archetypes, improves their quality, and facilitates their reuse among different information systems and EHR projects. Moreover, the proposed methodology can also serve as a reference for CIM development using any other formalism. This work was partially funded by grant DI-14-06564 (Doctorados Industriales) of the Ministerio de Economia y Competitividad of Spain. The authors would also like to thank the participants of all R&D projects that have served to evaluate and improve the presented methodology. Moner Cano, D.; Maldonado Segura, J. A.; Robles Viejo, M. (2018). Archetype Modeling Methodology. Journal of Biomedical Informatics, 79:71-81. https://doi.org/10.1016/j.jbi.2018.02.003
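
    The abstract states that each phase of the methodology is described by its inputs, outputs, participants and tools; the record type below mirrors exactly that structure, with placeholder content, since the actual phases are defined in the paper itself.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class ArchetypePhase:
        name: str
        inputs: List[str]        # artifacts the phase consumes
        outputs: List[str]       # artifacts the phase produces
        participants: List[str]  # roles involved in the phase
        tools: List[str]         # supporting tools

    # Placeholder example; the real phase names, artifacts, roles and tools are
    # those defined by the Archetype Modeling Methodology, not invented here.
    example = ArchetypePhase(
        name="<phase name>",
        inputs=["<clinical requirements>"],
        outputs=["<draft archetype>"],
        participants=["clinical expert", "archetype modeler"],
        tools=["archetype editor"],
    )
    print(example)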

    Centre-Periphery and Specialization in the E. U. : An Analysis From a New Economic Geography Perspective

    This paper attempts an assessment of a number of basic statistical indicators of EU regions and countries from a New Economic Geography (NEG) perspective. After a brief overview of the underlying theoretical framework, two important hypotheses of NEG’s theoretical models are examined for the case of EU regions: (a) the existence of a centre-periphery pattern, with the use of indicators measuring the “home market effect”; (b) the existence of Marshall-type “economies of localization”, as well as of “dynamic external economies”, on the basis of “knowledge-intensive” and “human capital” indicators. This analysis takes place on a regional scale. An assessment of the evolution of specialization in EU countries is also undertaken with the use of an index of “regional specialization”. The analysis provides clear indications that the deepening of European integration led to both phenomena described by NEG models: (a) the strengthening of two types of concentration, “the enlargement of the home market” and “local external economies”, in the traditional industrial centres of the EU; (b) an increase in the degree of specialization of its member states. Policy implications point to the strengthening of factors that could lead to the development of new dynamic centres in peripheral EU regions.
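
    The abstract does not spell out which index of regional specialization is used; a common choice in this literature is a Krugman-style dissimilarity index, sketched below as an assumption about what such an index computes, with invented sectoral shares.

    def specialization_index(region_shares, reference_shares):
        """Krugman-style specialization index: the sum of absolute deviations between a
        region's sectoral shares and a reference profile (e.g. the EU average).
        0 means an identical structure; 2 is the maximum (complete dissimilarity)."""
        return sum(abs(r - e) for r, e in zip(region_shares, reference_shares))

    # Invented sectoral shares (agriculture, industry, services); each list sums to 1
    region = [0.10, 0.40, 0.50]
    eu_average = [0.05, 0.25, 0.70]
    print(specialization_index(region, eu_average))   # ~0.40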

    Transalpine transport policies: towards a shared approach

    In recent years, crossing the Alps has become a central issue in European transport policy. The increase in global transport flows has brought two themes to the centre of attention: making transalpine transportation easier and reducing its negative impact on the Alpine environment. The resulting debate has shown that there are conflicting transport policy proposals. The main reasons behind such differences are not so much the different evaluations of the trends in transalpine transport, and not only the diverging local and national interests, but rather the implicit reference to three alternative policy paradigms: ‘competition’, ‘sustainability’ and ‘de-growth’. The aim of this paper is twofold: 1) to identify the links between policy paradigms and the transalpine transport policy framework; 2) to propose a multilevel and multi-criteria approach to transalpine transport policy. The explicit consideration of policy paradigms and the structured participation of citizens and stakeholders are at the heart of such a new and more widely shared approach. Keywords: Alps; transport policy; participatory multi-criteria; policy paradigms
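
    The multi-criteria part of the proposed approach can be illustrated with a simple weighted scoring of policy options, where each policy paradigm weights the criteria differently; the options, criteria, scores and weights below are invented for the example.

    # Criterion scores for each policy option on an invented 0-10 scale
    options = {
        "new rail capacity": {"cost": 3, "modal shift": 9, "alpine environment": 8},
        "road pricing":      {"cost": 8, "modal shift": 6, "alpine environment": 7},
    }

    # Each paradigm weights the criteria differently (invented weights summing to 1)
    paradigms = {
        "competition":    {"cost": 0.6, "modal shift": 0.2, "alpine environment": 0.2},
        "sustainability": {"cost": 0.2, "modal shift": 0.4, "alpine environment": 0.4},
    }

    def weighted_score(criterion_scores, weights):
        """Aggregate one option's criterion scores under one paradigm's weighting."""
        return sum(weights[c] * s for c, s in criterion_scores.items())

    for paradigm, weights in paradigms.items():
        ranking = sorted(options, key=lambda o: weighted_score(options[o], weights), reverse=True)
        print(paradigm, ranking)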

    Self-Calibration Methods for Uncontrolled Environments in Sensor Networks: A Reference Survey

    Growing progress in sensor technology has constantly expanded the number and range of low-cost, small, and portable sensors on the market, increasing the number and type of physical phenomena that can be measured with wirelessly connected sensors. Large-scale deployments of wireless sensor networks (WSN) involving hundreds or thousands of devices and limited budgets often constrain the choice of sensing hardware, which generally has reduced accuracy, precision, and reliability. Therefore, it is challenging to achieve good data quality and maintain error-free measurements during the whole system lifetime. Self-calibration or recalibration in ad hoc sensor networks to preserve data quality is essential, yet challenging, for several reasons, such as the existence of random noise and the absence of suitable general models. Calibration performed in the field, without accurate and controlled instrumentation, is said to take place in an uncontrolled environment. This paper surveys current and fundamental self-calibration approaches and models for wireless sensor networks in uncontrolled environments.
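
    A common building block of the calibration approaches surveyed is a linear correction (gain and offset) fitted against a co-located, trusted reference; the sketch below fits such a correction with ordinary least squares on invented readings and is a minimal illustration rather than any specific method from the survey.

    import numpy as np

    # Invented co-located measurements: raw low-cost sensor vs. trusted reference
    raw = np.array([10.2, 15.1, 20.3, 25.4, 30.0])
    reference = np.array([9.0, 14.0, 19.5, 24.5, 29.0])

    # Fit reference ~= gain * raw + offset by ordinary least squares
    A = np.vstack([raw, np.ones_like(raw)]).T
    (gain, offset), *_ = np.linalg.lstsq(A, reference, rcond=None)

    def calibrate(reading):
        """Apply the fitted linear correction to a new raw reading."""
        return gain * reading + offset

    print(gain, offset)
    print(calibrate(22.0))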