
    Automatic Failure Explanation in CPS Models

    Debugging Cyber-Physical System (CPS) models can be extremely complex. Detecting a failure alone is insufficient to know how to correct a faulty model: faults can propagate in time and in space, producing observable misbehaviours in locations completely different from the location of the fault. Understanding the reason for an observed failure is typically a challenging and laborious task left to the experience and domain knowledge of the designer. In this paper, we propose CPSDebug, a novel approach that combines testing, specification mining, and failure analysis to automatically explain failures in Simulink/Stateflow models. We evaluate CPSDebug on two case studies, involving two usage scenarios and several classes of faults, demonstrating the potential value of our approach.
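
    The specification-mining idea can be illustrated with a minimal sketch, not taken from the paper: mine simple per-signal range invariants from the traces of passing tests, then report the earliest point at which a failing trace violates one of them. The trace format (dict of signal name to list of samples) and all names below are assumptions.

```python
# Illustrative sketch (not from the paper): explain a failing trace by
# mining range invariants from passing traces and reporting the first
# signal/time step that violates them. Trace format is an assumption.

def mine_range_invariants(passing_traces):
    """Learn per-signal (min, max) bounds from traces of passing tests."""
    invariants = {}
    for trace in passing_traces:
        for signal, samples in trace.items():
            lo, hi = invariants.get(signal, (min(samples), max(samples)))
            invariants[signal] = (min(lo, *samples), max(hi, *samples))
    return invariants

def explain_failure(failing_trace, invariants):
    """Report the earliest (time, signal, value, bounds) violation."""
    violations = []
    for signal, samples in failing_trace.items():
        lo, hi = invariants[signal]
        for t, value in enumerate(samples):
            if not lo <= value <= hi:
                violations.append((t, signal, value, (lo, hi)))
                break
    return min(violations) if violations else None  # earliest in time

passing = [{"speed": [0.0, 1.0, 2.0], "temp": [20.0, 21.0, 22.0]}]
failing = {"speed": [0.0, 1.0, 2.1], "temp": [20.0, 35.0, 50.0]}
print(explain_failure(failing, mine_range_invariants(passing)))
# -> (1, 'temp', 35.0, (20.0, 22.0)): temp leaves its learned range first
```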

    Beyond Personalization: Research Directions in Multistakeholder Recommendation

    Recommender systems are personalized information access applications; they are ubiquitous in today's online environment, and effective at finding items that meet user needs and tastes. As the reach of recommender systems has extended, it has become apparent that the single-minded focus on the user common to academic research has obscured other important aspects of recommendation outcomes. Properties such as fairness, balance, profitability, and reciprocity are not captured by typical metrics for recommender system evaluation. The concept of multistakeholder recommendation has emerged as a unifying framework for describing and understanding recommendation settings where the end user is not the sole focus. This article describes the origins of multistakeholder recommendation, and the landscape of system designs. It provides illustrative examples of current research, as well as outlining open questions and research directions for the field.
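
    As one concrete illustration of a property that typical user-centric metrics miss, here is a minimal sketch, not from the article, of a provider-side exposure measure computed over the top-k lists served to users; names and the list format are assumptions.

```python
# Illustrative sketch (not from the article): one simple multistakeholder
# metric, provider exposure share, computed over served top-k lists.
# A perfectly balanced system gives each provider an equal share.
from collections import Counter

def provider_exposure(recommendation_lists, item_to_provider):
    """Fraction of all recommended slots occupied by each provider."""
    counts = Counter(
        item_to_provider[item]
        for rec_list in recommendation_lists
        for item in rec_list
    )
    total = sum(counts.values())
    return {provider: n / total for provider, n in counts.items()}

lists = [["a", "b", "c"], ["a", "a", "d"]]          # top-3 lists for 2 users
providers = {"a": "P1", "b": "P1", "c": "P2", "d": "P2"}
print(provider_exposure(lists, providers))          # {'P1': 0.67, 'P2': 0.33}
```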

    Recommendations for web service composition by mining usage logs

    Web service composition has been one of the most researched topics of the past decade. Novel methods of web service composition proposed in the literature include semantics-based composition and WSDL-based composition. Although these methods provide promising results for composition, search, and discovery of web services based on QoS parameters of the network and on the semantics or ontology associated with WSDL, they do not address composition based on the usage of web services. Web service usage logs capture time-series data of web service invocations by business objects, which innately captures the patterns and workflows associated with business operations. Web service composition based on such patterns and workflows can greatly streamline business operations. In this research work, we explore and implement methods for mining web service usage logs. The main objectives include identifying usage associations between services, linking one service invocation with another, and evaluating the causal relationships between associations of services.
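
    A minimal sketch of the log-mining step, not the paper's implementation: count how often one service directly follows another within a session. The (session_id, timestamp, service) log format is an assumption.

```python
# Illustrative sketch (not from the paper): mine "service A is followed by
# service B" associations from a usage log. The log format, a list of
# (session_id, timestamp, service) tuples, is an assumption.
from collections import Counter, defaultdict

def mine_follows(log):
    """Count how often one service directly follows another per session."""
    sessions = defaultdict(list)
    for session_id, timestamp, service in sorted(log):
        sessions[session_id].append(service)
    follows = Counter()
    for calls in sessions.values():
        for a, b in zip(calls, calls[1:]):
            follows[(a, b)] += 1
    return follows

log = [
    ("s1", 1, "GetQuote"), ("s1", 2, "PlaceOrder"), ("s1", 3, "SendInvoice"),
    ("s2", 1, "GetQuote"), ("s2", 2, "PlaceOrder"),
]
print(mine_follows(log).most_common(2))
# [(('GetQuote', 'PlaceOrder'), 2), (('PlaceOrder', 'SendInvoice'), 1)]
```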

    Modeling controversies in the press: the case of the abnormal bees' death

    The controversy about the cause(s) of the abnormal death of bee colonies in France is investigated through an extensive analysis of the French-speaking press. A statistical analysis of textual data is first performed on the lexicon used by journalists to describe the facts and to present associated information during the period 1998-2010. Three states are identified to explain the phenomenon: the first asserts a unique cause, the second focuses on multifactor causes, and the third states the absence of current proof. Assigning each article to one of the three states, we are able to follow the associated opinion dynamics among journalists over 13 years. We then apply the Galam sequential probabilistic model of opinion dynamics to those data. Assuming journalists are either open-minded or inflexible about their respective opinions, the results are reproduced precisely, provided we account for a series of annual changes in the proportions of the respective inflexibles. The results shed a new, counterintuitive light on the various pressures supposedly applied to journalists by chemical industries, beekeepers, experts, or politicians. The obtained dynamics of the respective inflexibles shows the possible effect of lobbying, the inertia of the debate, and the net advantage gained by the first whistleblowers.
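
    For readers unfamiliar with the Galam model, here is a minimal simulation sketch, not the paper's calibrated model: agents are repeatedly shuffled into groups of three, floaters adopt the local majority, and inflexibles never change. The group size and parameter values below are assumptions chosen for the demo.

```python
# Illustrative sketch (not from the paper): a minimal Galam-style majority
# rule dynamic with inflexible agents. Group size 3 and all parameter
# values are assumptions chosen for the demo.
import random

def galam_step(opinions, inflexible):
    """One update: shuffle into groups of 3; floaters adopt the local majority."""
    order = list(range(len(opinions)))
    random.shuffle(order)
    new = opinions[:]
    for i in range(0, len(order) - 2, 3):
        group = order[i:i + 3]
        majority = 1 if sum(opinions[j] for j in group) > 1 else 0
        for j in group:
            if not inflexible[j]:      # inflexibles keep their opinion
                new[j] = majority
    return new

random.seed(0)
n = 999
opinions = [1] * 450 + [0] * 549            # 45% initially hold opinion 1
inflexible = [True] * 100 + [False] * 899   # 100 inflexibles for opinion 1
for _ in range(20):
    opinions = galam_step(opinions, inflexible)
print(sum(opinions) / n)                    # final share holding opinion 1
```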

    Symbolic Methodology in Numeric Data Mining: Relational Techniques for Financial Applications

    Currently, statistical and artificial neural network methods dominate financial data mining. Alternative relational (symbolic) data mining methods have shown their effectiveness in robotics, drug design, and other applications. Traditionally, symbolic methods prevail in areas with significant non-numeric (symbolic) knowledge, such as relative location in robot navigation. At first glance, stock market forecasting looks like a purely numeric area irrelevant to symbolic methods. One of our major goals is to show that financial time series can benefit significantly from relational data mining based on symbolic methods. The paper overviews relational data mining methodology and develops these techniques for financial data mining.
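
    A minimal sketch of the numeric-to-symbolic step such an approach relies on, not the paper's method: discretize consecutive price changes into symbolic facts like up(t), down(t), steady(t) that a relational learner can consume. The 0.5% threshold is an assumption.

```python
# Illustrative sketch (not from the paper): discretize a numeric price
# series into symbolic facts amenable to relational methods. The 0.5%
# threshold is an assumption.
def symbolize(prices, threshold=0.005):
    """Turn consecutive price changes into symbolic facts."""
    facts = []
    for t in range(1, len(prices)):
        change = (prices[t] - prices[t - 1]) / prices[t - 1]
        if change > threshold:
            facts.append(("up", t))
        elif change < -threshold:
            facts.append(("down", t))
        else:
            facts.append(("steady", t))
    return facts

prices = [100.0, 101.2, 101.3, 99.8, 99.9]
print(symbolize(prices))
# [('up', 1), ('steady', 2), ('down', 3), ('steady', 4)]
```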

    Finding Explanations of Entity Relatedness in Graphs: A Survey

    Analysing and explaining relationships between entities in a graph is a fundamental problem associated with many practical applications. For example, a graph of biological pathways can be used for discovering a previously unknown relationship between two proteins. Domain experts, however, may be reluctant to trust such a discovery without a detailed explanation as to why exactly the two proteins are deemed related in the graph. This paper provides an overview of the types of solutions, and their associated methods and strategies, that have been proposed for finding entity relatedness explanations in graphs. The first type of solution relies on information inherent to the paths connecting the entities; it provides entity relatedness explanations in the form of a list of ranked paths, where the rank of a path is measured in terms of importance, uniqueness, novelty, and informativeness. The second type of solution relies on measures of node relevance; in this case, the relevance of nodes is measured w.r.t. the entities of interest, and relatedness explanations are provided in the form of a subgraph that maximises node relevance scores. The paper uses this classification of approaches to discuss and contrast some of the key concepts that guide different solutions to the problem of entity relatedness explanation in graphs.
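
    A minimal sketch of the first type of solution, not from the survey: enumerate the simple paths connecting two entities and rank them, scoring a path higher when its intermediate nodes are more specific (lower degree), a simple proxy for informativeness. The example graph and the scoring function are assumptions.

```python
# Illustrative sketch (not from the survey): rank the paths connecting two
# entities; a path scores higher when its intermediate nodes have lower
# degree, a common proxy for informativeness. Graph and score are assumptions.
import networkx as nx

def ranked_path_explanations(graph, source, target, cutoff=3):
    """Return paths between source and target, most informative first."""
    def informativeness(path):
        inner = path[1:-1]
        if not inner:
            return float("inf")          # direct edge: maximally specific
        return 1.0 / max(graph.degree(n) for n in inner)
    paths = nx.all_simple_paths(graph, source, target, cutoff=cutoff)
    return sorted(paths, key=informativeness, reverse=True)

g = nx.Graph()
g.add_edges_from([("p53", "MDM2"), ("p53", "hub"), ("hub", "MDM2"),
                  ("hub", "x1"), ("hub", "x2"), ("hub", "x3")])
for path in ranked_path_explanations(g, "p53", "MDM2"):
    print(path)   # the direct edge outranks the path through the hub node
```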

    The Grammar of Interactive Explanatory Model Analysis

    The growing need for in-depth analysis of predictive models leads to a series of new methods for explaining their local and global properties. Which of these methods is the best? It turns out that this is an ill-posed question. One cannot sufficiently explain a black-box machine learning model using a single method that gives only one perspective. Isolated explanations are prone to misunderstanding, which inevitably leads to wrong or simplistic reasoning. This problem is known as the Rashomon effect and refers to diverse, even contradictory, interpretations of the same phenomenon. Surprisingly, the majority of methods developed for explainable machine learning focus on a single aspect of model behavior. In contrast, we showcase the problem of explainability as an interactive and sequential analysis of a model. This paper presents how different Explanatory Model Analysis (EMA) methods complement each other and why it is essential to juxtapose them. The introduced process of Interactive EMA (IEMA) derives from the algorithmic side of explainable machine learning and aims to embrace ideas developed in the cognitive sciences. We formalize the grammar of IEMA to describe potential human-model dialogues. IEMA is implemented in a human-centered framework that adopts interactivity, customizability, and automation as its main traits. Combined, these methods enhance a responsible approach to predictive modeling.
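
    A minimal sketch of such a sequential analysis, not the IEMA framework itself: a global view (permutation importance) prompts a local follow-up question about how the top feature affects a single prediction, in the style of a ceteris-paribus profile. The dataset and model choices are assumptions.

```python
# Illustrative sketch (not the IEMA framework): a two-step explanatory
# dialogue; a global view prompts a local follow-up question. Dataset and
# model are assumptions chosen for the demo.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
import numpy as np

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(random_state=0).fit(X, y)

# Step 1 (global): which feature matters most overall?
imp = permutation_importance(model, X, y, n_repeats=5, random_state=0)
top = X.columns[np.argmax(imp.importances_mean)]
print("most important feature:", top)

# Step 2 (local): how does varying that feature change one prediction?
row = X.iloc[[0]].copy()
for value in np.linspace(X[top].min(), X[top].max(), 5):
    row[top] = value
    print(f"{top}={value:+.3f} -> prediction={model.predict(row)[0]:.1f}")
```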

    A Generative Model of Software Dependency Graphs to Better Understand Software Evolution

    Software systems are composed of many interacting elements. A natural way to abstract over software systems is to model them as graphs. In this paper we consider software dependency graphs of object-oriented software and study one topological property: the degree distribution. Based on the analysis of ten software systems written in Java, we show that there exist completely different systems that have the same degree distribution. Then, we propose a generative model of software dependency graphs which synthesizes graphs whose degree distributions are close to the empirical ones observed in real software systems. This model gives us novel insights into the potential fundamental rules of software evolution.
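
    One standard way to synthesize graphs matching an empirical degree distribution, shown here as a sketch rather than the paper's own generative model, is the configuration model: it rewires a given degree sequence into a random graph. The stand-in "observed" graph below is an assumption.

```python
# Illustrative sketch (not the paper's model): the configuration model
# synthesizes a random graph with the same degree sequence as an observed
# one. The observed graph here is a random stand-in, an assumption; in
# practice it would be built from a real dependency analysis.
import networkx as nx
from collections import Counter

observed = nx.gnm_random_graph(200, 600, seed=1)
degree_sequence = [d for _, d in observed.degree()]

synthetic = nx.configuration_model(degree_sequence, seed=1)

print(Counter(degree_sequence))
print(Counter(d for _, d in synthetic.degree()))   # identical distribution
```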

    Explaining Scenarios for Information Personalization

    Personalization customizes information access. The PIPE ("Personalization is Partial Evaluation") modeling methodology represents interaction with an information space as a program. The program is then specialized to a user's known interests or information seeking activity by the technique of partial evaluation. In this paper, we elaborate PIPE by considering requirements analysis in the personalization lifecycle. We investigate the use of scenarios as a means of identifying and analyzing personalization requirements. As our first result, we show how designing a PIPE representation can be cast as a search within a space of PIPE models, organized along a partial order. This allows us to view the design of a personalization system, itself, as a specialized interpretation of an information space. We then exploit the underlying equivalence of explanation-based generalization (EBG) and partial evaluation to realize high-level goals and needs identified in scenarios; in particular, we specialize (personalize) an information space based on the explanation of a user scenario in that information space, just as EBG specializes a theory based on the explanation of an example in that theory. In this approach, personalization becomes the transformation of information spaces to support the explanation of usage scenarios. An example application is described.
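
    A minimal sketch of specialization in the PIPE spirit, not the actual system: an information space written as a nested branching "program" is partially evaluated against a user's known choices, leaving a smaller program over the remaining choices. The dict representation is an assumption.

```python
# Illustrative sketch (not the PIPE system): an information space encoded
# as a nested dict "program" is specialized by fixing the choices a user
# has already made. The representation is an assumption.
def specialize(space, known_choices):
    """Partially evaluate: resolve every branch whose variable is known."""
    if not isinstance(space, dict):
        return space                      # a leaf: an information resource
    variable, branches = space["var"], space["branches"]
    if variable in known_choices:
        return specialize(branches[known_choices[variable]], known_choices)
    return {"var": variable,
            "branches": {v: specialize(b, known_choices)
                         for v, b in branches.items()}}

site = {"var": "language",
        "branches": {"python": {"var": "level",
                                "branches": {"intro": "py-intro.html",
                                             "advanced": "py-adv.html"}},
                     "java": "java.html"}}
print(specialize(site, {"language": "python"}))
# {'var': 'level', 'branches': {'intro': 'py-intro.html', ...}}
```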

    Embedding machine-readable proteins interactions data in scientific articles for easy access and retrieval

    Extraction of protein-protein interaction data from the scientific literature remains a hard, time- and resource-consuming task. This task would be greatly simplified by embedding in the source, i.e. in research articles, a standardized, synthetic, machine-readable codification of protein-protein interaction data, making the identification and retrieval of such valuable information easier, faster, and more reliable than it is now.
    We briefly discuss how this information can be encoded and embedded in research papers with the collaboration of authors and scientific publishers, and we propose an online demonstrative tool that shows how to help authors easily and quickly convert such valuable biological data into an embeddable, accessible, computer-readable codification.
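
    A minimal sketch of what an embeddable, machine-readable interaction record might look like, not the proposed tool's actual output: the field names are assumptions, and a real codification would follow a community standard such as PSI-MI.

```python
# Illustrative sketch (not the proposed tool): a protein-protein interaction
# encoded as a machine-readable record that could be embedded in an article,
# e.g. inside an HTML <script type="application/json"> block. Field names
# are assumptions; a real codification would follow a standard like PSI-MI.
import json

interaction = {
    "interactor_a": {"name": "TP53", "uniprot": "P04637"},
    "interactor_b": {"name": "MDM2", "uniprot": "Q00987"},
    "method": "two hybrid",
    "evidence": "Figure 2B",
}

embedded = ('<script type="application/json" class="ppi-data">\n'
            + json.dumps(interaction, indent=2) + "\n</script>")
print(embedded)   # publishers could insert this block into the article HTML
```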