    Behavioural model fusion

    Matching and Merging of Variant Feature Specifications

    Construction of a disaster-support dynamic knowledge chatbot

    This dissertation is aimed at devising a disaster-support chatbot system with the capacity to enhance citizens' and first responders' resilience in disaster scenarios, by gathering and processing information from crowd-sensing sources and informing its users with relevant knowledge about detected disasters and how to deal with them. The system is composed of two artifacts that interact via a mediating graph-structured knowledge base. The first artifact is a crowd-sourced disaster-related knowledge extraction system, which uses social media as a means to exploit humans behaving as sensors. It consists of a pipeline of natural language processing (NLP) tools and a mixture of convolutional neural networks (CNNs) and lexicon-based models for classifying and extracting disasters. It then outputs the extracted information to the knowledge graph (KG), which presents connected insights. The second artifact, the disaster-support chatbot, uses a state-of-the-art Dual Intent and Entity Transformer (DIET) architecture to classify user intents, and makes use of several dialogue policies for managing user conversations, as well as storing relevant information to be used in later dialogue turns. To generate responses, the chatbot uses local and official disaster-related knowledge and queries the knowledge graph for the dynamic knowledge extracted by the first artifact. According to the achieved results, the devised system is on par with the state of the art in disaster extraction systems. Both artifacts have also been validated by field specialists, who consider them valuable assets in disaster management.
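
    As a rough illustration of the classification step described above, the following Python sketch blends a CNN score with a lexicon score. The lexicon entries, the 0.5/0.5 weights, and the threshold are illustrative assumptions, not values taken from the dissertation; `cnn_prob` stands in for the output of the trained CNN models.

    ```python
    # Minimal sketch of a hybrid (CNN + lexicon) disaster classifier.
    # Lexicon entries, weights, and threshold are illustrative assumptions.

    DISASTER_LEXICON = {"flood": 0.9, "earthquake": 0.95, "wildfire": 0.9,
                        "evacuation": 0.6, "aftershock": 0.8}

    def lexicon_score(tokens):
        """Average lexicon weight of the disaster terms found in the post."""
        hits = [DISASTER_LEXICON[t] for t in tokens if t in DISASTER_LEXICON]
        return sum(hits) / len(hits) if hits else 0.0

    def classify(post, cnn_prob, w_cnn=0.5, w_lex=0.5, threshold=0.5):
        """Blend the CNN's probability with the lexicon score.

        Posts scoring above the threshold would be passed on to the
        knowledge graph as detected disasters.
        """
        tokens = post.lower().split()   # real pipeline: full NLP preprocessing
        score = w_cnn * cnn_prob + w_lex * lexicon_score(tokens)
        return score >= threshold

    print(classify("Major flood near the river, evacuation under way",
                   cnn_prob=0.8))   # True
    ```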

    Policy-Driven Framework for Static Identification and Verification of Component Dependencies

    Software maintenance is considered to be among the most difficult, lengthy, and costly parts of a software application's life-cycle. Regardless of the nature of the application and of software engineering efforts to reduce component coupling to a minimum, dependencies between software components will always exist and trigger maintenance operations, as they tend to threaten the "health" of the software system while particular components evolve. The situation is more serious with modern technologies and development paradigms, such as Service Oriented Architecture (SOA) systems and cloud computing, which introduce larger software systems consisting of a substantial number of components with numerous types of dependencies between them. This work proposes a reference architecture and a corresponding software framework that can be used to model the dependencies between components in software systems and to verify a set of policies that are derived from system dependencies and are relevant to the software maintenance operations being applied. Dependency modelling is performed using configuration information from the system, as well as information harvested from component interface descriptions. The proposed approach has been applied to a medium-scale SOA system, namely the SCA Travel Sample from the Apache Software Foundation, and has been evaluated for performance on a configuration specification of a simulated SOA system consisting of up to a thousand web services offered by a few hundred components.
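
    To make the dependency-modelling idea concrete, here is a minimal Python sketch using a toy in-memory graph. The component names and the removal policy are illustrative assumptions, not part of the framework's actual API.

    ```python
    # Sketch: model component dependencies as a directed graph and verify a
    # maintenance policy against it before applying an operation.

    from collections import defaultdict

    class DependencyModel:
        def __init__(self):
            self.requires = defaultdict(set)   # component -> its dependencies

        def add_dependency(self, component, dependency):
            self.requires[component].add(dependency)

        def dependents_of(self, component):
            """Components that would break if `component` were removed."""
            return {c for c, deps in self.requires.items() if component in deps}

    def verify_removal(model, component):
        """Sample policy: a component may be removed only if nothing depends on it."""
        blockers = model.dependents_of(component)
        return (len(blockers) == 0,
                blockers or f"{component} is safe to remove")

    model = DependencyModel()
    model.add_dependency("BookingProcess", "PaymentService")   # from configuration
    model.add_dependency("PaymentService", "CurrencyConverter")
    print(verify_removal(model, "PaymentService"))  # (False, {'BookingProcess'})
    ```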

    Composite Modeling based on Distributed Graph Transformation and the Eclipse Modeling Framework

    Model-driven development (MDD) has become a promising trend in software engineering for a number of reasons. Models, as the key artifacts, help developers abstract from irrelevant details, focus on important aspects of the underlying domain, and thus master complexity. As software systems grow, their models may grow as well, eventually becoming too large to be developed and maintained in a comprehensible way. In traditional software development, the complexity of software systems is tackled by dividing the system into smaller cohesive parts, so-called components, and letting distributed teams work on them concurrently. The question arises how this strategy can be applied to model-driven development. The overall aim of this thesis is to develop a formalized modularization concept that enables the structured and largely independent development of interrelated models in larger teams. To this end, this thesis proposes component models with explicit export and import interfaces, where exports declare what is provided while imports declare what is needed. Component models can then be connected via their compatible export and import interfaces, yielding so-called composite models. For composite models, a transformation approach is developed which allows changes to be described over the whole composition structure. From a practical point of view, this concept especially targets models based on the Eclipse Modeling Framework (EMF). In the modeling community, EMF has evolved into a very popular framework which provides modeling and code-generation facilities for Java applications based on structured data models. Since graphs are a natural way to represent the underlying structure of visual models, the formalization is based on graph transformation. The distribution concepts rely heavily on the distributed graph transformation introduced by Taentzer. Typed graphs with inheritance and containment structures are well suited to describing the essentials of EMF models. However, they also induce a number of constraints, such as acyclic inheritance and containment, which have to be taken into account. The category-theoretical foundation in this thesis allows for the precise definition of consistent composite graph transformations satisfying all inheritance and containment conditions. The composite modeling approach is shown to be coherent with the development of tool support for composite EMF models and composite EMF model transformations.
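
    A minimal sketch of the export/import interface idea, under the simplifying assumption that interfaces are plain sets of element names matched by equality; the thesis itself formalizes this with typed graphs and graph transformation.

    ```python
    # Sketch: component models with explicit export and import interfaces,
    # connected by binding compatible imports to exports.

    from dataclasses import dataclass, field

    @dataclass
    class Component:
        name: str
        exports: set = field(default_factory=set)  # elements the body provides
        imports: set = field(default_factory=set)  # elements required from others

    def connect(provider, consumer):
        """Bind the consumer's imports to the provider's exports where compatible."""
        unmatched = consumer.imports - provider.exports
        if unmatched:
            raise ValueError(f"unresolved imports of {consumer.name}: {unmatched}")
        return {(imported, provider.name) for imported in consumer.imports}

    core = Component("core", exports={"Person", "Address"})
    billing = Component("billing", imports={"Person"})
    print(connect(core, billing))   # {('Person', 'core')}
    ```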

    Where's the Beef? Masculinity, Gender and Violence in Food Advertising

    A thesis presented to the faculty of the Caudill College of Arts, Humanities, and Social Sciences at Morehead State University in partial fulfillment of the requirements for the Degree of Master of Arts by Anne McNutt Patrick on April 24, 2018.

    Global Consistency Checking of Distributed Models with TReMer

    We present TReMer+, a tool for consistency checking of distributed models (i.e., models developed by distributed teams). TReMer+ works by first constructing a merged model before checking consistency. This enables a flexible way of verifying global consistency properties that is not possible with other existing tools.
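
    The merge-then-check strategy can be illustrated with a small sketch; the triple-based model encoding and the sample consistency rule are assumptions for illustration, not TReMer+'s actual representation.

    ```python
    # Sketch: merge the distributed models over their shared elements, then
    # evaluate a global consistency rule on the merged model.

    def merge(model_a, model_b):
        """Union of two models given as sets of (subject, relation, object) triples."""
        return model_a | model_b

    def check_no_conflicting_types(model):
        """Global rule: no element may be declared with two different types."""
        seen = {}
        for subject, relation, obj in model:
            if relation == "has_type":
                if seen.setdefault(subject, obj) != obj:
                    return False, f"{subject} typed as both {seen[subject]} and {obj}"
        return True, "consistent"

    team1 = {("Order.total", "has_type", "Money")}
    team2 = {("Order.total", "has_type", "Float")}
    print(check_no_conflicting_types(merge(team1, team2)))  # reports the conflict
    ```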

    Confirmation and Evidence

    The question of how experience acts on our beliefs, and how beliefs are changed in the light of experience, is one of the oldest and most controversial questions in philosophy in general and epistemology in particular. Philosophy of science has replaced this question with the more specific enquiry of how the results of experiments act on scientific hypotheses and theories. Why do we maintain some theories while discarding others? Two general questions emerge: first, what is our reason to accept the justifying power of experience and, more specifically, of scientific experiments? Second, how can the relationship between theory and evidence be described, and under which circumstances is a scientific theory confirmed by a piece of evidence? The book focuses on the second question: explicating the relationship between theory and evidence and capturing the structure of a valid inductive argument. Special attention is paid to statistical applications that are prevalent in modern empirical science. After an introductory chapter about the link between confirmation and induction, the project starts by discussing qualitative accounts of confirmation in first-order predicate logic. Two major approaches, the Hempelian satisfaction criterion and the hypothetico-deductivist tradition, are contrasted with each other. This is subsequently extended to an account of the confirmation of entire theories, as opposed to the confirmation of single hypotheses. Then the quantitative Bayesian account of confirmation is explained and discussed on the basis of a theory of rational degrees of belief. After that, I present the various schools of statistical inference and explain the foundations of these competing schemes. Finally, I argue for a specific concept of statistical evidence, summarize the results, and sketch some open questions.
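
    For reference, the qualitative Bayesian account mentioned above is standardly stated as follows; this is a textbook formulation, not a quotation from the book.

    ```latex
    % E confirms H (relative to background knowledge K) iff learning E
    % raises the probability of H:
    E \text{ confirms } H \iff P(H \mid E \wedge K) > P(H \mid K)

    % A common quantitative degree of confirmation is the difference measure:
    d(H, E) = P(H \mid E) - P(H)
    ```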