
    Ontologies for terminology: why, when, how

    This article discusses how a terminologist can benefit from integrating an ontology into their work, the criteria to consider when deciding whether to do so, the guidelines to follow, and the tools available. It offers an updated overview of the scope of application of ontologies from the perspective of the Semantic Web.

    Lingmotif: a User-focused Sentiment Analysis Tool

    In this paper, we describe Lingmotif, a lexicon-based, linguistically motivated, user-friendly, GUI-enabled, multi-platform Sentiment Analysis desktop application. Lingmotif can perform SA on any type of input text, regardless of length and topic. The analysis is based on the identification of sentiment-laden words and phrases contained in the application's rich core lexicons, and employs context rules to account for sentiment shifters. It offers easy-to-interpret visual representations of quantitative data, as well as a detailed, qualitative analysis of the text in terms of its sentiment. Lingmotif can also take user-provided plugin lexicons in order to account for domain-specific sentiment expression. As of version 1.0, Lingmotif analyzes English and Spanish texts. Lingmotif thus aims to become a general-purpose Sentiment Analysis tool for discourse analysis, rhetoric, psychology, marketing, the language industries, and other fields.
    This research was supported by Spain's MINECO through the funding of project Lingmotif2 (FFI2016-78141-P).
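    The lexicon-plus-context-rules approach described above can be sketched as follows; the toy lexicon, negator list, intensifier weights, and two-token lookback window are illustrative assumptions, not Lingmotif's actual resources or rules:

```python
# Minimal sketch of lexicon-based sentiment scoring with context rules
# for sentiment shifters. LEXICON, NEGATORS, and INTENSIFIERS are toy
# stand-ins for the application's rich core lexicons.

LEXICON = {"excellent": 2, "good": 1, "terrible": -2, "noisy": -1}
NEGATORS = {"not", "never", "no"}               # flip polarity
INTENSIFIERS = {"very": 1.5, "slightly": 0.5}   # scale polarity

def score(text):
    tokens = text.lower().split()
    total = 0.0
    for i, tok in enumerate(tokens):
        if tok not in LEXICON:
            continue
        value = float(LEXICON[tok])
        # Look back a short window for sentiment shifters.
        for w in tokens[max(0, i - 2):i]:
            if w in NEGATORS:
                value = -value
            elif w in INTENSIFIERS:
                value *= INTENSIFIERS[w]
        total += value
    return total
```

    For example, `score("the room was not good")` yields a negative value because the negator flips the polarity of "good".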

    Identifying Polarity in Financial Texts for Sentiment Analysis: A Corpus-based Approach

    In this paper we describe our methodology to integrate domain-specific sentiment analysis into a lexicon-based system initially designed for general-language texts. Our approach to dealing with specialized domains is based on the idea of "plug-in" lexical resources which can be applied on demand. A simple three-step model based on the weirdness ratio measure is proposed to extract candidate terms from specialized corpora; these are then matched against our existing general-language polarity database to obtain sentiment-bearing words whose polarity is domain-specific.
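    The weirdness-ratio step of the model above might look like the following sketch; the smoothing constant, the threshold value, and the frequency counts used in the example are assumptions for illustration, not values from the paper:

```python
# Sketch of candidate-term extraction via the weirdness ratio: the
# relative frequency of a word in the specialized corpus divided by
# its relative frequency in a general reference corpus.

def weirdness(word, spec_freq, spec_size, gen_freq, gen_size, smoothing=1):
    """Smoothing avoids division by zero for words absent
    from the general reference corpus."""
    spec_rel = spec_freq.get(word, 0) / spec_size
    gen_rel = (gen_freq.get(word, 0) + smoothing) / (gen_size + smoothing)
    return spec_rel / gen_rel

def extract_candidates(spec_freq, spec_size, gen_freq, gen_size, threshold=10.0):
    # Step 1: score every word in the specialized corpus.
    scores = {w: weirdness(w, spec_freq, spec_size, gen_freq, gen_size)
              for w in spec_freq}
    # Step 2: keep words whose ratio exceeds the threshold.
    candidates = {w: s for w, s in scores.items() if s >= threshold}
    # Step 3 (per the paper): match candidates against the polarity
    # database -- omitted in this sketch.
    return sorted(candidates, key=candidates.get, reverse=True)
```

    A domain term like "bearish" scores far above common function words, so only it survives the threshold.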

    The expression of sentiment in user reviews of hotels

    The linguistic expression of sentiment, understood as the polarity of an opinion, is known to be domain-specific to a certain extent (Aue & Gamon, 2005; Choi et al., 2009). Even though many words and expressions convey the same evaluation across domains (e.g., "excellent", "terrible"), many others acquire a more precise semantic orientation within a specific domain. For example, features such as size or location (and the lexical expressions used to express them) may or may not convey semantic orientation depending on the topic. In Sentiment Analysis (SA), it is critical that domain-specific expressions of sentiment be accounted for (Tan et al., 2007) if the system is to be useful to those who wish to explore the polarity of texts belonging to that domain. The software tool Lingmotif (Moreno-Ortiz, 2016) is used to explore a corpus of hotel reviews in English. Lingmotif is a lexicon-based, linguistically motivated, user-friendly, GUI-enabled, multi-platform Sentiment Analysis desktop application. Lingmotif can perform SA on any type of input text, regardless of size and topic. The analysis is based on the identification of sentiment-laden words and phrases contained in the application's rich core lexicons, and employs context rules to account for sentiment shifters. It offers easy-to-interpret visual representations of quantitative data (text polarity, sentiment intensity, sentiment profile), as well as a detailed, qualitative analysis of the text in terms of its sentiment. Lingmotif can also take user-provided plugin lexicons in order to account for domain-specific sentiment expression. In this paper, we describe our procedure to identify domain-specific lexical cues for the domain of user reviews of Spanish hotels. We made use of a recently compiled corpus of reviews from the online travel agency Booking.com.
    This corpus was first analyzed with Lingmotif using only its core, general-language lexicon; we then manually examined the results to find errors and omissions produced by the lack of specialized language cues. We then encoded the identified lexical cues as a Lingmotif plugin lexicon and reran the analysis with it. This methodology allowed us, first, to obtain a very concrete description of the expression of sentiment in this domain and, from a practical perspective, to measure precisely to what extent this expression is domain-dependent.
    Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech.
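    The plugin-lexicon idea can be sketched as a simple overlay of domain entries on the core lexicon; the entries below are hypothetical examples (e.g. "small" being neutral in general language but negative for hotel rooms), not taken from the actual resources:

```python
# Sketch of a "plug-in" lexicon: domain-specific entries override or
# extend the core general-language lexicon at analysis time.

CORE_LEXICON = {"excellent": 2, "terrible": -2, "small": 0}
HOTEL_PLUGIN = {"small": -1, "central": 1, "dated": -1}

def effective_lexicon(core, plugin=None):
    """Merge the core lexicon with an optional domain plugin;
    plugin entries take precedence over core entries."""
    merged = dict(core)
    if plugin:
        merged.update(plugin)
    return merged
```

    Rerunning the same analysis with and without the plugin, as described above, then isolates exactly how much of the sentiment expression is domain-dependent.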

    Treatment alternatives for the rehabilitation of the posterior edentulous maxilla

    Rehabilitation of the edentulous maxilla with implant-supported fixed dental prostheses can represent a significant clinical challenge due to limited bone availability and surgical access, among other factors. This review addresses several treatment options to replace missing teeth in posterior maxillary segments, namely the placement of standard implants in conjunction with maxillary sinus floor augmentation, short implants, tilted implants, and distal cantilever extensions. Pertinent technical information and a concise summary of relevant evidence on the reported outcomes of these different therapeutic approaches are presented, along with a set of clinical guidelines to facilitate decision-making processes and optimize the outcomes of therapy.

    Strategies for the analysis of large social media corpora: sampling and keyword extraction methods

    In the context of the COVID-19 pandemic, social media platforms such as Twitter have been of great importance for users to exchange news, ideas, and perceptions. Researchers from fields such as discourse analysis and the social sciences have resorted to this content to explore public opinion and stance on this topic, and they have tried to gather information through the compilation of large-scale corpora. However, the size of such corpora is both an advantage and a drawback, as simple text retrieval techniques and tools may prove to be impractical or altogether incapable of handling such masses of data. This study provides methodological and practical cues on how to manage the contents of a large-scale social media corpus such as Chen et al.'s (JMIR Public Health Surveill 6(2):e19273, 2020) COVID-19 corpus. We compare and evaluate, in terms of efficiency and efficacy, available methods to handle such a large corpus. First, we compare different sample sizes to assess whether it is possible to achieve similar results despite the size difference, and evaluate sampling methods following a specific data management approach to storing the original corpus. Second, we examine two keyword extraction methodologies commonly used to obtain a compact representation of the main subject and topics of a text: the traditional method used in corpus linguistics, which compares word frequencies using a reference corpus, and graph-based techniques as developed in Natural Language Processing tasks. The methods and strategies discussed in this study enable valuable quantitative and qualitative analyses of an otherwise intractable mass of social media data.
    Funding for open access publishing: Universidad de Málaga/CBUA. This work was funded by the Spanish Ministry of Science and Innovation [Grant No. PID2020-115310RB-I00], the Regional Government of Andalusia [Grant No. UMA18-FEDERJA-158] and the Spanish Ministry of Education and Vocational Training [Grant No. FPU 19/04880].
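    The "traditional" corpus-linguistics method mentioned above, ranking words against a reference corpus, is commonly implemented with a log-likelihood keyness statistic; the following sketch uses the Dunning/Rayson-style formula, with toy counts that are not from the COVID-19 corpus:

```python
import math

# Sketch of frequency-based keyword extraction: rank words by
# log-likelihood keyness of the study corpus against a reference.

def log_likelihood(a, b, c, d):
    """a, b: word frequency in the study / reference corpus;
    c, d: total tokens in the study / reference corpus."""
    e1 = c * (a + b) / (c + d)   # expected frequency, study corpus
    e2 = d * (a + b) / (c + d)   # expected frequency, reference corpus
    ll = 0.0
    if a > 0:
        ll += a * math.log(a / e1)
    if b > 0:
        ll += b * math.log(b / e2)
    return 2 * ll

def keywords(study_freq, study_size, ref_freq, ref_size, top=10):
    scores = {
        w: log_likelihood(f, ref_freq.get(w, 0), study_size, ref_size)
        for w, f in study_freq.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:top]
```

    A word that is relatively much more frequent in the study corpus (e.g. "lockdown") outranks a common function word even when the latter has a higher raw count.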

    Mapping of political events related to the COVID-19 pandemic on Twitter using topic modelling and keywords over time

    This research aims to study the relationship between actual, real-world events related to the COVID-19 pandemic and the impact these events produced on social media. To achieve this objective, we employ topic modelling and keyword extraction techniques. Topic modelling is a Natural Language Processing technique that attempts to identify topics automatically from a collection of documents (Vayansky and Kumar, 2020). It is similar to keyword extraction but, unlike the latter, topic modelling algorithms return clusters of words that make up each topic. Thus, a second objective is to compare the results of these two methods when it comes to identifying the salient topics in a corpus. We used the publicly available, multilingual COVID-19 Twitter dataset collected from January 21, 2020 (and still ongoing), available via the COVID-19-TweetIDs GitHub repository (Chen, Lerman & Ferrara, 2020). We limited our study to tweets written in English from 2020 and 2021, a set of 1 billion tweets (31 billion tokens), and extracted a random, time-stratified sample of 0.1%, which resulted in approximately 1 million tweets (31 million tokens). In terms of methods, we employed unsupervised machine learning methods for both tasks. For topic modelling we used BERT embeddings and the BERTopic library (Grootendorst, 2022). Our script generates a full list of topics and assigned terms, a coherence score, and several data visualisations, such as topics-over-time graphs, heatmaps, and topic hierarchies. For keyword extraction, we used TextRank (Mihalcea & Tarau, 2004), a language-independent, graph-based ranking model. We then compare the results returned by both methods in terms of usefulness and, finally, provide an interpretation of the results by relating the extracted topics to the situation of the global pandemic at different stages of the crisis.
    Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech.
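    The TextRank method cited above can be sketched in a few lines: build a co-occurrence graph of words within a sliding window and rank nodes with a PageRank-style power iteration. The stopword list, window size, and damping factor below are simplifying assumptions, not the paper's exact settings:

```python
import re

# Minimal sketch of TextRank keyword extraction (Mihalcea & Tarau, 2004).

def textrank_keywords(text, window=2, damping=0.85, iters=50, top=5):
    words = re.findall(r"[a-z]+", text.lower())
    stop = {"the", "a", "of", "and", "in", "to", "is", "for", "on"}
    words = [w for w in words if w not in stop]
    # Undirected co-occurrence edges within the sliding window.
    neighbours = {w: set() for w in words}
    for i, w in enumerate(words):
        for j in range(i + 1, min(i + window + 1, len(words))):
            if words[j] != w:
                neighbours[w].add(words[j])
                neighbours[words[j]].add(w)
    # Power iteration of the PageRank-style score.
    score = {w: 1.0 for w in neighbours}
    for _ in range(iters):
        score = {
            w: (1 - damping) + damping * sum(
                score[v] / len(neighbours[v])
                for v in neighbours[w] if neighbours[v])
            for w in neighbours
        }
    return sorted(score, key=score.get, reverse=True)[:top]
```

    Well-connected words end up with the highest scores, so the most central terms of a text surface at the top of the ranking.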

    Proposed environmental management plan for computer equipment in the Ciudad Bolívar Localidad Digital project

    This document sets out the guidelines for formulating a proposed Environmental Management Plan (PMA) for the computer equipment acquired in the Ciudad Bolívar Localidad Digital project, based on the Comprehensive Environmental Management Plan (PIGA) of the Empresa de Telecomunicaciones de Bogotá (ETB), which manages the project. The project is structured around the stages of diagnosis, evaluation, formulation, follow-up and control, and the contingency plan, developed according to the schemes described below. A baseline is presented covering the surrounding physical and socioeconomic aspects, as well as the area of influence, which defines the framework needed to propose a structure for identifying and evaluating the environmental impacts generated. Among the main components of the plan are four programmes designed to prevent or mitigate impacts as appropriate, specifying the characteristics of the event to be managed, the timing of execution, the activities involved, and the development of its control and monitoring. A monitoring plan is then developed for all stages of the project, based on analysing and controlling the quality of the information obtained through indicators defined for each proposed programme. Finally, a contingency plan was designed to control possible environmental emergencies in the project's area of influence, based on the exogenous and endogenous risks that may arise at the interactive portals.

    Design and implementation of a batch management control and monitoring system for the storage, transport, and dispatch of raw materials on the premises of a brewery

    The grain-handling system constitutes the first stage in the brewing process and plays a very important role in the dosing of raw materials. This article presents the implementation of a control system capable of managing the reception, transport, storage, and dispatch of two types of products: rice and malt (malted barley), the latter used in several varieties. Batch control and raw-material dosing are handled with recipe-management software, for which a new process model is designed, including its units, phases, and operating variables. For process monitoring and control, a SCADA (supervisory control and data acquisition) system is deployed on several computers. The logic controlling the operating routines is programmed in a main PLC that integrates several remote signal-acquisition modules over a Profibus DP industrial network.
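    The recipe-oriented process model described above (units broken down into ordered phases with operating variables) can be sketched as a simple data structure; the unit names, phase names, and parameter values below are hypothetical, not taken from the actual plant configuration:

```python
from dataclasses import dataclass, field

# Illustrative sketch of a batch recipe model: a recipe decomposed
# into units, each unit into ordered phases with operating parameters.

@dataclass
class Phase:
    name: str
    parameters: dict          # e.g. setpoints a PLC routine would read
    done: bool = False

@dataclass
class Unit:
    name: str
    phases: list

@dataclass
class Recipe:
    product: str              # e.g. "malt" or "rice"
    units: list = field(default_factory=list)

    def run(self):
        """Execute phases in order; marking them done here stands in
        for the PLC logic that sequences the actual routines."""
        log = []
        for unit in self.units:
            for phase in unit.phases:
                phase.done = True
                log.append(f"{unit.name}:{phase.name}")
        return log

recipe = Recipe("malt", units=[
    Unit("intake", [Phase("weigh", {"target_kg": 5000})]),
    Unit("silo", [Phase("store", {"silo_id": 3}),
                  Phase("dispatch", {"rate_kg_h": 2000})]),
])
```

    Separating the recipe model from the execution logic is what lets the same control system handle both products by swapping recipes rather than reprogramming routines.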

    Papilla preservation periodontal surgery in periodontal reconstruction for deep combined intra-suprabony defects. Retrospective analysis of a registry-based cohort

    Suprabony defects are the most prevalent defects, and there is very little evidence on their treatment. This study aims to assess the effectiveness of papilla preservation periodontal surgery in the periodontal reconstruction of combined deep intra-suprabony defects.