9 research outputs found

    Automatic Emergence Detection in Complex Systems


    On a Framework for the Prediction and Explanation of Changing Opinions

    One of the greatest challenges in accurately modeling a human system is integrating dynamic, fine-grained information in a meaningful way. A model must allow for reasoning in the face of uncertain and incomplete information and be able to provide an easy-to-understand explanation of why the system is behaving as it is. To date, work in multi-agent systems has not come close to capturing these critical elements. Much of the problem is due to the fact that most theories about the behavior of such systems are not computational in nature; they come from the social sciences. It is very difficult to get from these qualitative social theories to meaningful computational models of the same phenomena. We focus on the analysis of human populations, where discerning the opinions of members of the populace is integral to understanding behavior at both the individual and group level. Our approach allows the easy aggregation and de-aggregation of information from multiple sources and in multiple data types into a unified model. We also present an algorithm that can automatically detect the variables in the model that are causing changes in opinion over time. This gives our model the capability to explain, in a principled, computational manner, why swings in opinion may be experienced. An example is given based on the 2008 South Carolina Democratic Primary election. We show that our model is able to predict both how the population may vote and why it is voting this way. Our results compare favorably with the election results, and our explanation of the changing trends compares favorably with the explanations given by experts.
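    The variable-detection idea can be illustrated with a toy sketch (the data, variable names, and scoring rule below are invented for illustration, not taken from the paper): given time series for each model variable and for aggregate opinion, rank variables by how strongly their period-to-period changes correlate with changes in opinion.

```python
# Toy sketch: flag which model variable's changes track opinion swings.
# All variable names and data here are hypothetical illustrations.

def detect_drivers(variables, opinion):
    """Rank variables by correlation between their period-to-period
    changes and changes in aggregate opinion."""
    def deltas(xs):
        return [b - a for a, b in zip(xs, xs[1:])]

    def corr(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy) if sx and sy else 0.0

    d_op = deltas(opinion)
    scores = {name: corr(deltas(series), d_op)
              for name, series in variables.items()}
    return sorted(scores.items(), key=lambda kv: -abs(kv[1]))

economy = [0.2, 0.3, 0.5, 0.7, 0.9]   # moves with the opinion swing
weather = [0.5, 0.4, 0.5, 0.4, 0.5]   # oscillating noise
support = [0.1, 0.2, 0.4, 0.6, 0.8]   # candidate support over time

ranked = detect_drivers({"economy": economy, "weather": weather}, support)
print(ranked[0][0])  # economy is flagged as the strongest driver
```

    A real model would of course reason over a probabilistic network rather than raw correlations; the sketch only shows the "which variable explains the swing" question in executable form.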

    Utilizing Bayesian Techniques for User Interface Intelligence

    The purpose of this research is to study the injection of an intelligent agent into modern user interface technology. This agent is intended to manage the complex interactions between the software system and the user, making those complexities transparent to the user. The background study shows that while interesting and promising research exists in the domain of intelligent interface agents, very little published research indicates true success in representing the uncertainty involved in predicting user intent. The interface agent architecture presented in this thesis offers one solution to this problem using a newly developed Bayesian-based agent called the Intelligent Interface Agent (IIA). A proof of concept of this architecture has been implemented in an actual expert system, and this thesis presents the results of that implementation. The conclusions of this thesis show the viability of this new agent architecture, as well as promising future research in the examination of cognitive models, the development of an intelligent interface agent interaction language, the expansion of meta-level interface learning, and the refinement of the PESKI user interface.
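    The core of any Bayesian intent predictor is Bayes' rule over a discrete set of candidate intents. The minimal sketch below is not the IIA architecture itself; the intents, priors, and evidence model are invented purely to show the update step.

```python
# Minimal sketch of Bayesian user-intent prediction. The intents,
# priors, and likelihoods below are hypothetical illustrations.

def posterior(priors, likelihoods, observation):
    """P(intent | observation) via Bayes' rule over discrete intents."""
    unnorm = {intent: priors[intent] * likelihoods[intent].get(observation, 1e-6)
              for intent in priors}
    z = sum(unnorm.values())  # normalizing constant P(observation)
    return {intent: p / z for intent, p in unnorm.items()}

priors = {"save_file": 0.3, "query_kb": 0.7}
likelihoods = {
    "save_file": {"clicked_disk_icon": 0.9, "typed_query": 0.05},
    "query_kb":  {"clicked_disk_icon": 0.1, "typed_query": 0.8},
}

post = posterior(priors, likelihoods, "typed_query")
print(max(post, key=post.get))  # query_kb
```

    An interface agent would chain such updates over a stream of interface events, acting only when the posterior for some intent crosses a confidence threshold.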

    Context Reasoning for Role-Based Models

    In the modern world, software systems are everywhere. They must cope with very complex scenarios, including the ability to be context-aware and self-adaptive. The concept of roles provides the means to model such complex, context-dependent systems. In role-based systems, the relational and context-dependent properties of objects are transferred to the roles that an object plays in a certain context. However, even if the domain can be expressed in a well-structured and modular way, role-based models can still be hard to comprehend due to the sophisticated semantics of roles, contexts, and the various constraints. Hence, unintended implications or inconsistencies may be overlooked, and a feasible logical formalism is required. In this setting, Description Logics (DLs) fit very well as a starting point for further considerations, since as a decidable fragment of first-order logic they have both an underlying formal semantics and decidable reasoning problems. DLs are a well-understood family of knowledge representation formalisms that represent application domains in a well-structured way through DL-concepts, i.e. unary predicates, and DL-roles, i.e. binary predicates. However, classical DLs lack the expressive power to formalise contextual knowledge, which is crucial for formalising role-based systems. We investigate a novel family of contextualised description logics that is capable of expressing contextual knowledge and preserves decidability even in the presence of rigid DL-roles, i.e. relational structures that are context-independent. For these contextualised description logics, we thoroughly analyse the complexity of the consistency problem. Furthermore, we present a mapping algorithm that allows for an automated translation from a formal role-based model, namely a Compartment Role Object Model (CROM), into a contextualised DL ontology. We prove the semantic correctness of this translation and provide ideas for how features extending CROM can be expressed in our contextualised DLs. As a final step towards a completely automated analysis of role-based models, we investigate a practical reasoning algorithm and implement the first reasoner that can process contextual ontologies.
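    The shape of such a CROM-to-ontology translation can be sketched in a few lines. The fragment and the axiom syntax below are deliberately simplified illustrations, not the dissertation's actual translation rules: each role concept is tagged with the compartment context it lives in, and each fills-relation becomes a subsumption between the role and its natural type.

```python
# Toy sketch of mapping a CROM fragment to context-tagged, DL-style
# axioms. The fragment and axiom syntax are simplified illustrations.

crom = {
    "compartments": ["Bank"],
    "roles": {"Bank": ["Customer", "Consultant"]},
    "fills": [("Person", "Customer", "Bank")],  # (natural, role, compartment)
}

def to_axioms(model):
    axioms = []
    for comp, roles in model["roles"].items():
        for role in roles:
            # A role concept is only meaningful inside its compartment context.
            axioms.append(f"[{comp}]: {role} SubClassOf plays_in.{comp}")
    for natural, role, comp in model["fills"]:
        # A fills-relation: whoever plays the role is of the natural type.
        axioms.append(f"[{comp}]: {role} SubClassOf {natural}")
    return axioms

for ax in to_axioms(crom):
    print(ax)
```

    The resulting context-prefixed axioms are what a contextualised DL reasoner would then check for consistency.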

    Combating Fake News: A Gravity Well Simulation to Model Echo Chamber Formation In Social Media

    Fake news has become a serious concern as distributing misinformation has become easier and more impactful, and a solution is critically needed. One option is to ban fake news, but that approach could create more problems than it solves and is problematic from the start, since fake news must first be identified before it can be banned. We initially propose a method to automatically recognize suspected fake news and to provide news consumers with more information about its veracity. We suggest that fake news comprises two components: premises and misleading content. Fake news can be condensed down to a collection of premises, which may or may not be true, and to various forms of misleading material, including biased arguments and language, misdirection, and manipulation. Misleading content can then be exposed. While valuable, this framework's utility may be limited by artificial intelligence, which can be used to alter fake news strategies at a rate exceeding the ability to update the framework. Therefore, we propose a model for identifying echo chambers, which are widely reported to be havens for fake news producers and consumers. We simulate a social media interest group as a gravity well, through which we identify the online groups postured to become echo chambers, and thus a source of fake news consumption and replication. This echo chamber model rests on three pillars related to the social media group: the technology employed, the topic explored, and the confirmation bias of group members. The model is validated by modeling and analyzing 19 subreddits on the Reddit social media platform. Contributions include a working definition of fake news, a framework for recognizing fake news, a generic model of social media echo chambers built on the three pillars central to echo chamber formation, and a gravity well simulation for social media groups, implemented for 19 subreddits.
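    The gravity-well metaphor can be made concrete with a hypothetical sketch: a group's pull grows with the three pillars and falls off with a member's distance from the group's core, by analogy with Newtonian attraction. The formula, weights, and scores below are invented for illustration and are not taken from the thesis.

```python
# Hypothetical gravity-well sketch: pillar scores act as "mass" and
# attraction falls off with squared distance. Numbers are invented.

def well_strength(technology, topic, bias, distance):
    """Newtonian-style attraction: pillar product over squared distance."""
    mass = technology * topic * bias   # each pillar scored in (0, 1]
    return mass / (distance ** 2)

# A tightly focused, high-bias group pulls harder than a diffuse one.
echo_chamber = well_strength(0.9, 0.8, 0.9, distance=1.0)
casual_group = well_strength(0.5, 0.3, 0.4, distance=2.0)
print(echo_chamber > casual_group)  # True
```

    Ranking the wells of many groups by strength would then surface the ones postured to become echo chambers.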

    Decomposable log-linear models


    Application of DEA in benchmarking: a systematic literature review from 2003–2020

    Benchmarking is an effective method for organizations to increase their productivity, the quality of their products, and the reliability of their processes or services. Through benchmarking, an organization can compare its performance with that of its peers and recognize its advantages as well as its disadvantages. The main objective of the present systematic literature review is to study the DEA benchmarking process. It therefore examines and summarizes the various DEA models applied worldwide to improve benchmarking. Accordingly, a list of academic papers published in high-ranking journals between 2003 and February 2020 was collected for a systematic review of DEA benchmarking applications. The selected papers have been classified according to year of publication, purpose of research, outcomes, and results. This study identifies eight major application areas: transportation, the service sector, product planning, maintenance, the hotel industry, education, distribution, and environmental factors. Together these account for 82% of all application-embedded papers. Among these applications, the most recent development has been in the transportation and service sectors. Results show the high potential of DEA as an evaluation method for further benchmarking research in settings where the production function between outputs and inputs is practically absent or very hard to obtain. First published online 4 January 202
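    The basic DEA idea is easy to show in the one-input, one-output case, where the classic CCR model reduces to normalizing each decision-making unit's output/input ratio by the best ratio observed. The depot data below are invented; multi-input, multi-output DEA instead requires solving a linear program per unit.

```python
# Illustrative DEA-style efficiency scores for the one-input,
# one-output case. The depot data are invented for illustration.

def dea_efficiency(units):
    """units: {name: (input, output)} -> {name: efficiency in (0, 1]}"""
    ratios = {name: out / inp for name, (inp, out) in units.items()}
    best = max(ratios.values())          # the benchmark frontier
    return {name: r / best for name, r in ratios.items()}

depots = {"A": (10, 20), "B": (8, 24), "C": (12, 18)}
scores = dea_efficiency(depots)
print(scores["B"])  # the benchmark unit scores 1.0
```

    Units scoring below 1.0 can read their score as the proportional output shortfall relative to the best-practice peer, which is exactly the comparison benchmarking is after.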

    Die erkenntnistheoretischen Grundlagen induktiven Schließens (The Epistemological Foundations of Inductive Inference)

    The book presents various approaches to inductive inference and analyzes the prospects each offers for achieving the epistemological goals of the sciences. Among the approaches discussed are conservative inductive inferences, falsification procedures and eliminative induction, inference to the best explanation, and, above all, Bayesianism. The book also covers the methods of classical statistics as well as modern methods of causal inference. To this end, an epistemological framework is provided within which the various justification procedures can be compared with one another.