
    Integrating heterogeneous knowledges for understanding biological behaviors: a probabilistic approach

    Despite recent improvements in molecular techniques, biological knowledge remains incomplete. Reasoning about living systems therefore requires integrating heterogeneous and partial information. Although current investigations successfully focus on the qualitative behaviors of macromolecular networks, other approaches yield partial quantitative information, such as protein concentration variations over time. We consider that both kinds of information, qualitative and quantitative, must be combined in a single modeling method to provide a better understanding of the biological system. We propose here such a method, using a probabilistic-like approach. After describing it exhaustively, we illustrate its advantages by modeling the carbon starvation response in Escherichia coli. For this purpose, we build an original qualitative model based on available observations. After formal verification of its qualitative properties, the probabilistic model yields quantitative results that match biological expectations, confirming the interest of our probabilistic approach.
    Comment: 10 pages
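    The abstract describes weighting qualitative state transitions with probabilities so that qualitative and quantitative information can be combined. A minimal sketch of that general idea, assuming a hypothetical three-level discretization of a protein concentration and illustrative transition probabilities (none of the states or numbers come from the paper):

```python
# Hypothetical sketch: a qualitative state space ("low"/"medium"/"high"
# concentration levels) whose transitions carry quantitative weights.
# All states and probabilities here are illustrative assumptions.
transitions = {
    "low":    {"low": 0.6, "medium": 0.4},
    "medium": {"low": 0.2, "medium": 0.5, "high": 0.3},
    "high":   {"medium": 0.7, "high": 0.3},
}

def step(dist):
    """Propagate a probability distribution over qualitative states one step."""
    out = {}
    for state, p in dist.items():
        for nxt, q in transitions[state].items():
            out[nxt] = out.get(nxt, 0.0) + p * q
    return out

dist = {"low": 1.0}          # start with certainty in the "low" state
for _ in range(3):
    dist = step(dist)
# dist now assigns quantitative probabilities to the qualitative states
```

    The qualitative model fixes which transitions are possible at all; the probabilities refine it with quantitative weight, which is the flavor of combination the abstract argues for.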

    Using Qualitative Hypotheses to Identify Inaccurate Data

    Identifying inaccurate data has long been regarded as a significant and difficult problem in AI. In this paper, we present a new method for identifying inaccurate data on the basis of qualitative correlations among related data. First, we introduce the definitions of related data and of qualitative correlations among related data. Then we put forward a new concept called the support coefficient function (SCF). SCF can be used to extract, represent, and calculate qualitative correlations among related data within a dataset. We propose an approach to determining dynamic shift intervals of inaccurate data, and an approach to calculating the possibility of identifying inaccurate data, respectively. Both approaches are based on SCF. Finally, we present an algorithm for identifying inaccurate data by using qualitative correlations among related data as confirmatory or disconfirmatory evidence. We have developed a practical system for interpreting infrared spectra by applying the method, and have fully tested the system against several hundred real spectra. The experimental results show that the method is significantly better than the conventional methods used in many similar systems.
    Comment: See http://www.jair.org/ for any accompanying file
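    The paper defines its support coefficient function formally; as a rough illustration of the underlying idea (related data acting as confirmatory or disconfirmatory evidence), here is a hypothetical stand-in in which each value's "support" is the fraction of related values that qualitatively agree with it. The agreement tolerance and the flagging threshold are assumptions, not the paper's SCF:

```python
# Illustrative stand-in (NOT the paper's SCF): a value with low support
# from its related data is flagged as possibly inaccurate.
def support(value, related, tol=0.5):
    """Fraction of related values qualitatively agreeing with `value`."""
    agree = sum(1 for r in related if abs(r - value) <= tol)
    return agree / len(related)

readings = [1.0, 1.1, 0.9, 5.0, 1.05]
flags = []
for i, v in enumerate(readings):
    related = readings[:i] + readings[i + 1:]   # all other data points
    flags.append(support(v, related) < 0.5)     # low support => suspect
# only the outlying reading 5.0 receives low support and is flagged
```

    The confirmatory/disconfirmatory framing corresponds to the two cases: related values within tolerance raise the support, values outside it lower it.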

    KR^3: An Architecture for Knowledge Representation and Reasoning in Robotics

    This paper describes an architecture that combines the complementary strengths of declarative programming and probabilistic graphical models to enable robots to represent, reason with, and learn from qualitative and quantitative descriptions of uncertainty and knowledge. An action language is used for the low-level (LL) and high-level (HL) system descriptions in the architecture, and the definition of recorded histories in the HL is expanded to allow prioritized defaults. For any given goal, tentative plans created in the HL using default knowledge and commonsense reasoning are implemented in the LL using probabilistic algorithms, with the corresponding observations used to update the HL history. Tight coupling between the two levels enables automatic selection of relevant variables and generation of suitable action policies in the LL for each HL action, and supports reasoning with violations of defaults, noisy observations, and unreliable actions in large and complex domains. The architecture is evaluated in simulation and on physical robots transporting objects in indoor domains; the benefit on robots is a reduction in task execution time of 39% compared with a purely probabilistic, but still hierarchical, approach.
    Comment: The paper appears in the Proceedings of the 15th International Workshop on Non-Monotonic Reasoning (NMR 2014)

    From Observations to Hypotheses: Probabilistic Reasoning Versus Falsificationism and its Statistical Variations

    Testing hypotheses is an issue of primary importance in scientific research, as well as in many other human activities. Much clarification can be achieved if the process of learning from data is framed in a stochastic model of causes and effects. Formulated in Poincaré's words, the "essential problem of the experimental method" then becomes solving a "problem in the probability of causes", i.e. ranking in credibility the several hypotheses that might be responsible for the observations. This probabilistic approach to the problem (nowadays known as the Bayesian approach) differs from the standard (i.e. frequentist) statistical methods of hypothesis tests. The latter methods might be seen as practical attempts to implement the ideal of falsificationism, which can itself be viewed as an extension of the proof by contradiction of classical logic to the experimental method. Some criticisms concerning conceptual as well as practical aspects of naïve falsificationism and conventional, frequentist hypothesis tests are presented, and the alternative, probabilistic approach is outlined.
    Comment: 17 pages, 4 figures (V2 fixes some typos and adds a reference). Invited talk at the 2004 Vulcano Workshop on Frontier Objects in Astrophysics and Particle Physics, Vulcano (Italy) May 24-29, 2004. This paper and related work are also available at http://www.roma1.infn.it/~dagos/prob+stat.htm
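    The "probability of causes" the abstract refers to is a Bayes'-theorem update: given an observed effect, rank the candidate hypotheses by their posterior probability. A minimal worked example with two hypotheses, where the priors and likelihoods are purely illustrative assumptions:

```python
# Ranking hypotheses in credibility via Bayes' theorem:
#   P(H | E) = P(E | H) * P(H) / P(E),  with  P(E) = sum_H P(E | H) * P(H).
# The numbers below are illustrative, not taken from the paper.
priors = {"H1": 0.5, "H2": 0.5}         # P(H): initial credibility
likelihoods = {"H1": 0.8, "H2": 0.1}    # P(E | H): how well H predicts E

evidence = sum(priors[h] * likelihoods[h] for h in priors)   # P(E)
posteriors = {h: priors[h] * likelihoods[h] / evidence for h in priors}
# posteriors ranks the hypotheses: P(H1|E) = 8/9, P(H2|E) = 1/9
```

    This is the contrast the abstract draws: rather than rejecting a single null hypothesis, the Bayesian approach distributes credibility over all the candidate causes of the observation.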