295 research outputs found

    Extração de conhecimento a partir de fontes semi-estruturadas [Knowledge extraction from semi-structured sources]

    The increasing number of small, cheap devices, full of sensing capabilities, has led to an untapped source of data that can be explored to improve and optimize multiple systems, from small-scale home automation to large-scale applications such as agriculture monitoring, traffic flow and industrial maintenance prediction. Yet, hand in hand with this growth goes the increasing difficulty of collecting, storing and organizing all these new data. The lack of standard context representation schemes is one of the main struggles in this area. Furthermore, conventional methods for extracting knowledge from data rely on standard representations or a priori relations. These a priori relations add latent information to the underlying model, in the form of context representation schemes, table relations, or even ontologies. Nonetheless, these relations are created and maintained by human users. While feasible for small-scale scenarios or specific areas, this becomes increasingly difficult to maintain when considering the potential dimension of IoT and M2M scenarios. This thesis addresses the problem of storing and organizing context information from IoT/M2M scenarios in a meaningful way, without imposing a representation scheme or requiring a priori relations. This work proposes a d-dimension organization model optimized for IoT/M2M data. The model relies on machine learning features to identify similar context sources. These features are then used to learn relations between data sources automatically, providing the foundations for automatic knowledge extraction, upon which machine learning, or even conventional methods, can rely to extract knowledge from a potentially relevant dataset. During this work, two different machine learning techniques were tackled: semantic similarity and stream similarity. Semantic similarity estimates the similarity between concepts (in textual form). This thesis proposes an unsupervised learning method for semantic features based on distributional profiles, without requiring any specific corpus. This allows the organizational model to organize data based on concept similarity instead of string matching. Another advantage is that the learning method does not require input from users, making it ideal for massive IoT/M2M scenarios. Stream similarity metrics estimate the similarity between two streams of data. Although these methods have been extensively researched for DNA sequencing, they commonly rely on variants of the longest common subsequence. This PhD proposes a generative model for stream characterization, especially optimized for IoT/M2M data. The model can be used to generate statistically significant data streams and to estimate the similarity between streams. This is then used by the context organization model to identify context sources with similar stream patterns. The work proposed in this thesis was extensively discussed, developed and published in several international publications.
    The multiple contributions in projects and collaborations with fellow colleagues, where parts of the work developed were used successfully, support the claim that although the context organization model (and the subsequent similarity features) were optimized for IoT/M2M data, they can potentially be extended to deal with any kind of context information in a wide array of applications.
    The present study was developed in the scope of the Smart Green Homes Project [POCI-01-0247-FEDER-007678], a co-promotion between Bosch Termotecnologia S.A. and the University of Aveiro. It is financed by Portugal 2020 under the Competitiveness and Internationalization Operational Program, and by the European Regional Development Fund. Programa Doutoral em Informática.
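    The distributional-profile idea summarised in this abstract can be illustrated with a small sketch. This is not the thesis's actual method, only a toy approximation assuming whitespace tokenisation, a fixed co-occurrence window, and a hard-coded list of example sentences (the real approach, per the abstract, requires no specific corpus):

    from collections import Counter
    from math import sqrt

    def distributional_profile(term, sentences, window=2):
        # Count the words that co-occur with `term` within +/- `window` tokens.
        profile = Counter()
        for sentence in sentences:
            tokens = sentence.lower().split()
            for i, tok in enumerate(tokens):
                if tok == term:
                    nearby = tokens[max(0, i - window):i + window + 1]
                    profile.update(t for t in nearby if t != term)
        return profile

    def cosine_similarity(p, q):
        # Cosine between two sparse co-occurrence profiles (0 = disjoint, 1 = identical).
        dot = sum(p[w] * q[w] for w in p.keys() & q.keys())
        norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
        return dot / norm if norm else 0.0

    # Toy usage: two sensor-related concepts end up with overlapping context profiles.
    sentences = [
        "the temperature sensor reports a new reading every minute",
        "the humidity sensor reports a new reading every minute",
        "the boiler heats water when the temperature drops",
    ]
    print(cosine_similarity(distributional_profile("temperature", sentences),
                            distributional_profile("humidity", sentences)))

    Concepts whose profiles score close to 1 would be treated as similar, which is what allows an organization model of this kind to group data by meaning rather than by string matching.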

    Formal Linguistic Models and Knowledge Processing. A Structuralist Approach to Rule-Based Ontology Learning and Population

    2013 - 2014. The main aim of this research is to propose a structuralist approach to knowledge processing by means of ontology learning and population, achieved starting from unstructured and structured texts. The method suggested includes distributional semantic approaches and NL formalization theories, in order to develop a framework that relies upon deep linguistic analysis... [edited by author]

    Proceedings of the Conference on Natural Language Processing 2010

    This book contains state-of-the-art contributions to the 10th conference on Natural Language Processing, KONVENS 2010 (Konferenz zur Verarbeitung natürlicher Sprache), with a focus on semantic processing. The KONVENS in general aims at offering a broad perspective on current research and developments within the interdisciplinary field of natural language processing. The central theme draws specific attention towards addressing linguistic aspects of meaning, covering deep as well as shallow approaches to semantic processing. The contributions address both knowledge-based and data-driven methods for modelling and acquiring semantic information, and discuss the role of semantic information in applications of language technology. The articles demonstrate the importance of semantic processing, and present novel and creative approaches to natural language processing in general. Some contributions focus on developing and improving NLP systems for tasks like Named Entity Recognition or Word Sense Disambiguation, on semantic knowledge acquisition and exploitation with respect to collaboratively built resources, or on harvesting semantic information in virtual games. Others are set within the context of real-world applications, such as Authoring Aids, Text Summarisation and Information Retrieval. The collection highlights the importance of semantic processing for different areas and applications in Natural Language Processing, and provides the reader with an overview of current research in this field.

    Computational approaches to semantic change (Volume 6)

    Semantic change (how the meanings of words change over time) has preoccupied scholars since well before modern linguistics emerged in the late 19th and early 20th century, ushering in a new methodological turn in the study of language change. Compared to changes in sound and grammar, semantic change is the least understood. Ever since, the study of semantic change has progressed steadily, accumulating a vast store of knowledge for over a century, encompassing many languages and language families. Historical linguists also realized early on the potential of computers as research tools, with papers at the very first international conferences in computational linguistics in the 1960s. Such computational studies still tended to be small-scale, method-oriented, and qualitative. However, recent years have witnessed a sea change in this regard: big-data empirical quantitative investigations are now coming to the forefront, enabled by enormous advances in storage capability and processing power. Diachronic corpora have grown beyond imagination, defying exploration by traditional manual qualitative methods, and language technology has become increasingly data-driven and semantics-oriented. These developments present a golden opportunity for the empirical study of semantic change over both long and short time spans.
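    One common family of data-driven approaches in this area (offered here only as an illustrative sketch, not as a method proposed in this volume) measures how far a word's nearest neighbours drift between embeddings trained on an older and a newer corpus slice. The gensim Word2Vec settings and the toy time slices below are assumptions for the example:

    from gensim.models import Word2Vec

    def neighbourhood(model, word, k=5):
        # Top-k nearest neighbours of `word` in one time slice's embedding space.
        return {w for w, _ in model.wv.most_similar(word, topn=k)}

    def neighbourhood_shift(word, corpus_old, corpus_new, k=5):
        # 1 - Jaccard overlap of the neighbourhoods: 0 = stable meaning, 1 = fully changed.
        m_old = Word2Vec(corpus_old, vector_size=50, min_count=1, seed=1, workers=1)
        m_new = Word2Vec(corpus_new, vector_size=50, min_count=1, seed=1, workers=1)
        n_old = neighbourhood(m_old, word, k)
        n_new = neighbourhood(m_new, word, k)
        return 1 - len(n_old & n_new) / len(n_old | n_new)

    # Toy time slices: tokenised sentences from an "older" and a "newer" period.
    old_slice = [["the", "mouse", "ate", "the", "cheese"],
                 ["a", "mouse", "ran", "across", "the", "field"]]
    new_slice = [["click", "the", "mouse", "to", "open", "the", "file"],
                 ["the", "mouse", "and", "keyboard", "are", "wireless"]]
    print(neighbourhood_shift("mouse", old_slice, new_slice))

    On realistic diachronic corpora the same comparison would be run over millions of tokens per slice; the tiny slices here only demonstrate the mechanics.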

    Frames Featuring in Epidemiological Crisis Communication. A Frame-Semantic Analysis of Pandemic Crisis Communication in Multilingual Belgium

    To date, frame-semantic theory has been applied to various domain-specific discourses, such as legal, economic, and even oenological discourses. Yet, epidemiological crisis communications form a domain-specific discourse tradition which has been left untouched by frame-semanticists. As such, we will conduct a descriptive pilot study which will consider some of the frames present in these texts. To this end, we collected a pilot corpus of Dutch COVID-19-related crisis communications from the Belgian government, which according to previous research (Liégeois, Mathysen 2022) can, in fact, be regarded as epidemiological crisis communications. More concretely, we considered the frames in which five terms – virus, coronavirus, COVID-19, epidemie and pandemie – inherent to this domain could occur and investigated the following three research questions: In which frames do our five target terms resurface within this domain-specific discourse tradition (RQ1)? Which functions do these frames fulfil within this domain-specific discourse tradition, and can other domain-specific features (e.g., regarding the FEs of these frames) be found (RQ2)? Can these frames and their functions be linked back to the communicative strategies singled out by previous research on these Belgian epidemiological crisis communications (RQ3)?

    LL(O)D and NLP perspectives on semantic change for humanities research

    This paper presents an overview of the LL(O)D and NLP methods, tools and data for detecting and representing semantic change, with its main application in humanities research. The paper's aim is to provide the starting point for the construction of a workflow and a set of multilingual diachronic ontologies within the humanities use case of the COST Action Nexus Linguarum, European network for Web-centred linguistic data science, CA18209. The survey focuses on the essential aspects needed to understand the current trends and to build applications in this area of study.