
    Vers une utilisation pratique des règles implicatives (Towards a practical use of implicative rules)

    Implicative rules, which can model constraints, remain little known. This article proposes guidelines for the practical design of implicative rule systems: construction of suitable partitions, interpretation of inference results, cooperation between several rule systems, and semantics. The methodology is applied to a food process.
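    To make the partition-design step concrete, here is a minimal sketch, not taken from the paper: it builds a strong triangular fuzzy partition (membership degrees summing to 1 at every point of the domain), a property commonly required when designing implicative rule systems. The variable range and breakpoints are invented for the illustration.

```python
# Minimal sketch (not the paper's code): a strong triangular fuzzy
# partition, where membership degrees sum to 1 at every input value.
import numpy as np

def triangular(x, left, peak, right):
    """Triangular membership function; equal endpoints yield a shoulder."""
    x = np.asarray(x, dtype=float)
    rising = np.clip((x - left) / (peak - left), 0.0, 1.0) if peak > left else (x >= peak).astype(float)
    falling = np.clip((right - x) / (right - peak), 0.0, 1.0) if right > peak else (x <= peak).astype(float)
    return np.minimum(rising, falling)

# Made-up breakpoints for a hypothetical process variable on [0, 10].
peaks = [0.0, 2.5, 5.0, 10.0]
x = np.linspace(0.0, 10.0, 101)
partition = np.column_stack([
    triangular(x, peaks[max(i - 1, 0)], p, peaks[min(i + 1, len(peaks) - 1)])
    for i, p in enumerate(peaks)
])
assert np.allclose(partition.sum(axis=1), 1.0)  # strong-partition property holds everywhere
```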

    Data reliability assessment in a data warehouse opened on the Web

    This paper presents an ontology-driven workflow that feeds and queries a data warehouse opened on the Web. Data are extracted from data tables in Web documents. As Web documents are very heterogeneous in nature, a key issue in this workflow is the ability to assess the reliability of the retrieved data. We first recall the main steps of our method for annotating and querying Web data tables, driven by a domain ontology. We then propose an original method to assess the reliability of Web data tables from a set of criteria by means of evidence theory. Finally, we show how we extend the workflow to integrate the reliability assessment step.
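    The abstract does not detail the criteria or the mass assignments, so the following sketch only illustrates the evidence-theory machinery it refers to: two hypothetical reliability criteria, encoded as mass functions over the frame {reliable, not reliable}, are pooled with Dempster's rule of combination. All criterion names and masses are assumptions.

```python
# Minimal sketch (assumptions, not the paper's method): combining two
# reliability criteria with Dempster's rule over the frame
# {R (reliable), N (not reliable)}; 'RN' denotes the full frame (ignorance).
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for mass functions whose focal
    elements are encoded as frozensets of {'R', 'N'}."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass assigned to incompatible focal elements
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

R, N, RN = frozenset("R"), frozenset("N"), frozenset("RN")
# Hypothetical masses elicited from two criteria (e.g. source type, table completeness).
m_source = {R: 0.6, N: 0.1, RN: 0.3}
m_table  = {R: 0.5, N: 0.2, RN: 0.3}
print(dempster_combine(m_source, m_table))  # pooled belief about reliability
```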

    A practical inference method with several implicative gradual rules and a fuzzy input: one and two dimensions

    A general approach to practical inference with gradual implicative rules and fuzzy inputs is presented. Gradual rules represent constraints restricting the outputs of a fuzzy system for each input, and are tailored for interpolative reasoning. Our approach to inference relies on inferential independence. It is based on computing the fuzzy output under an interval-valued input. The fuzzy input is decomposed twice: into alpha-cuts, and by partitioning these cuts according to areas where only a few rules apply. The cases of one- and two-dimensional inputs are considered.
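    The double decomposition can be illustrated on its first half, the alpha-cut decomposition. The sketch below, with an invented triangular fuzzy input and an arbitrary number of levels, turns the fuzzy input into the stack of interval-valued inputs on which the interval-based inference step would then run.

```python
# Minimal sketch (illustrative, not the paper's algorithm): decomposing a
# triangular fuzzy input into alpha-cuts, each an interval on which an
# interval-valued inference step could then be run.
import numpy as np

def alpha_cuts(left, peak, right, levels):
    """Alpha-cuts [a_alpha, b_alpha] of a triangular fuzzy number."""
    return [(left + a * (peak - left), right - a * (right - peak)) for a in levels]

levels = np.linspace(0.0, 1.0, 5)            # 5 alpha levels, an assumption
cuts = alpha_cuts(2.0, 3.0, 5.0, levels)     # hypothetical fuzzy input
for a, (lo, hi) in zip(levels, cuts):
    print(f"alpha={a:.2f}: [{lo:.2f}, {hi:.2f}]")
# The output fuzzy set is rebuilt by stacking the inferred intervals level by level.
```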

    EvaSylv: A user-friendly software to evaluate forestry scenarii including natural risk

    Forest management relies on the evaluation of silviculture practices. The increase in natural risk due to climate change makes it necessary to consider evaluation criteria that take natural risk into account, yet integrating risk into existing software requires advanced programming skills. We propose a user-friendly software tool to simulate even-aged, monospecific forest at the stand level, in order to evaluate and optimize forest management. The software can run management scenarios with or without considering the impact of natural risk. The control variables are the dates and rates of thinning and the cutting age. The risk model is based on a Poisson process. The Faustmann approach, including tree damage risk, is used to evaluate future benefits, whether economic or ecosystem services; it relies on the calculation of expected values, for which a dedicated mathematical development was carried out. The criteria used to compare the various scenarios are the Faustmann value and the averaged yield value. We illustrate the approach and the software on two case studies: economic optimization of a beech stand and carbon-sequestration optimization of a pine stand. The software interface makes it easy for users to write their own growth, tree-damage, and economic models without advanced programming skills. The possibility to run management scenarios with and without the impact of natural risk may help improve silviculture guidelines and adapt them to climate change. We conclude with future lines of research and improvement.
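    As a rough illustration of how Poisson risk enters the expected-value computation (a toy model, not EvaSylv's growth, damage, or economic models), the sketch below estimates by Monte Carlo the expected discounted revenue of one rotation when a stand-destroying event can occur before the cutting age. All rates, values, and the salvage rule are invented.

```python
# Toy sketch (assumptions throughout, not EvaSylv's models): expected
# discounted revenue of one rotation when stand-destroying events follow a
# Poisson process, estimated by Monte Carlo.
import numpy as np

rng = np.random.default_rng(0)
T, rate, discount = 60.0, 0.01, 0.03        # cutting age, event rate, discount rate: illustrative
harvest_value = lambda t: 120.0 * t / T     # hypothetical timber value growth
salvage_fraction = 0.3                      # value recovered after damage (assumption)

def simulate(n=100_000):
    # Time of the first Poisson event ~ Exponential(rate); damage occurs if it precedes T.
    first_event = rng.exponential(1.0 / rate, size=n)
    damaged = first_event < T
    t_end = np.where(damaged, first_event, T)
    value = np.where(damaged,
                     salvage_fraction * harvest_value(t_end),
                     harvest_value(T))
    return np.mean(value * np.exp(-discount * t_end))  # expected discounted revenue

print(f"expected discounted rotation revenue: {simulate():.2f}")
```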

    An iterative approach to build relevant ontology-aware data-driven models

    In many fields involving complex environments or living organisms, data-driven models are useful for running simulations, extrapolating costly experiments, and designing decision-support tools. Learning methods can be used to build interpretable models from data. However, to be really useful, such models must be trusted by their users. From this perspective, domain expert knowledge can be collected and modelled to help guide the learning process and to increase the confidence in the resulting models, as well as their relevance. Another issue is designing relevant ontologies to formalize complex knowledge; interpretable predictive models can help in this matter. In this paper, we propose a generic iterative approach to designing ontology-aware, relevant data-driven models. It is based on an ontology to model the domain knowledge and a learning method to build the interpretable models (decision trees in this paper). Both subjective and objective evaluations are involved in the process. A case study in the food industry demonstrates the value of this approach.
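    One simple way to picture the ontology/learning coupling (a sketch under assumed names and data, not the paper's implementation): raw variables are relabelled with their ontology concepts before training, so the learned decision tree reads in the domain vocabulary and can be reviewed by the expert, whose feedback drives the next iteration.

```python
# Minimal sketch of the idea (mapping, names, and data are assumptions):
# raw columns are relabelled with ontology concepts before learning, so the
# resulting decision tree reads in the domain's vocabulary for expert review.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical ontology mapping: raw column -> domain concept.
concept_of = {"t38": "CookingTemperature", "ph_x": "DoughAcidity"}
X = np.array([[180, 5.1], [160, 4.2], [200, 5.6], [150, 4.0]])
y = np.array([1, 0, 1, 0])                       # toy quality labels
names = [concept_of[c] for c in ("t38", "ph_x")]

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=names))    # rules shown to the expert
# The expert's feedback (drop a variable, refine a concept) then drives the next iteration.
```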

    Building an interpretable fuzzy rule base from data using Orthogonal Least Squares: application to a depollution problem

    In many fields where human understanding plays a crucial role, such as bioprocesses, the capacity to extract knowledge from data is of critical importance. Within this framework, fuzzy learning methods, if properly used, can greatly help human experts. Among these methods, orthogonal transformations, which have been proven to be mathematically robust, build rules from a set of training data and select the most important ones by linear regression or rank-revealing techniques. The OLS algorithm is a good representative of these methods; however, it was originally designed to care only about numerical performance. We therefore propose some modifications of the original method to take interpretability into account. After recalling the original algorithm, this paper presents the changes made to it, then discusses results obtained on benchmark problems. Finally, the algorithm is applied to a real-world depollution fault-detection problem. (Preprint of the final version published in Fuzzy Sets and Systems.)
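    For readers unfamiliar with OLS, the sketch below shows the classical forward-selection core the paper starts from (error reduction ratio plus Gram-Schmidt orthogonalisation), applied to a matrix of rule firing strengths. It does not include the interpretability-oriented modifications the paper contributes, and all data are synthetic.

```python
# Minimal sketch of classical OLS forward selection, here applied to a
# matrix P of rule firing strengths; illustrates the selection principle,
# not the paper's interpretability-preserving variant.
import numpy as np

def ols_select(P, y, n_rules):
    """Greedy forward selection: at each step pick the column of P (a rule
    firing-strength vector) with the largest error reduction ratio, then
    Gram-Schmidt orthogonalise the remaining columns against it."""
    P, y = P.astype(float).copy(), np.asarray(y, dtype=float)
    yy, selected = y @ y, []
    for _ in range(n_rules):
        norms = np.einsum("ij,ij->j", P, P)
        err = np.full(P.shape[1], -1.0)
        ok = norms > 1e-12
        err[ok] = (P[:, ok].T @ y) ** 2 / (norms[ok] * yy)  # error reduction ratio
        err[selected] = -1.0
        k = int(np.argmax(err))
        selected.append(k)
        P -= np.outer(P[:, k], (P.T @ P[:, k]) / norms[k])  # project column k out
    return selected

# Toy usage: 20 samples, 6 candidate rules; rules 1 and 4 actually drive y.
rng = np.random.default_rng(1)
P = rng.random((20, 6))
y = 2.0 * P[:, 1] - P[:, 4] + 0.01 * rng.standard_normal(20)
print(ols_select(P, y, 3))  # expect columns 1 and 4 among the first picks
```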

    A decision support system for eco-efficient biorefinery process comparison using a semantic approach

    Enzymatic hydrolysis of the main components of lignocellulosic biomass is one of the promising routes to upgrading it into biofuels. Biomass pre-treatment is an essential step to reduce cellulose crystallinity, increase surface area and porosity, and separate the major constituents of biomass. Scientific literature in this domain is growing fast and could be a valuable source of data. As these abundant scientific data are mostly textual and heterogeneously structured, using them to compute biomass pre-treatment efficiency is not straightforward. This paper presents the implementation of a Decision Support System (DSS) based on an original pipeline coupling knowledge engineering (KE) grounded in semantic web technologies, soft computing techniques, and environmental factor computation. The DSS makes it possible to use data found in the literature to assess the environmental sustainability of biorefinery systems. The pipeline can: (1) structure and integrate relevant experimental data; (2) assess data source reliability; (3) compute and visualize green indicators that take data imprecision and source reliability into account. This pipeline was made possible by innovative research on coupling ontologies with uncertainty management and propagation. In this first version, data acquisition is done by experts and facilitated by a termino-ontological resource, and data source reliability assessment is based on domain knowledge and performed by experts. The operational prototype has been used by field experts on a realistic use case (rice straw), and the results obtained validated the usefulness of the system. Further work will address a higher level of automation for data acquisition and data source reliability assessment.
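    Step (3) can be pictured with a minimal sketch of interval-based uncertainty propagation; the indicator, the numbers, and the way reliability is attached are all assumptions, not the pipeline's actual computation.

```python
# Minimal sketch (all names and numbers are assumptions): propagating data
# imprecision through a green indicator with interval arithmetic, then
# tagging the result with the source's reliability score.
def div_interval(a, b):
    """[a]/[b] for positive intervals given as (lo, hi) pairs."""
    return (a[0] / b[1], a[1] / b[0])

# Hypothetical values extracted from one publication's data table.
energy_input = (10.0, 14.0)     # MJ consumed by the pre-treatment (imprecise)
sugar_yield  = (0.35, 0.50)     # kg of fermentable sugar released (imprecise)
reliability  = 0.7              # source reliability from the reliability-assessment step

indicator = div_interval(energy_input, sugar_yield)   # MJ per kg of sugar
print(f"energy indicator: [{indicator[0]:.1f}, {indicator[1]:.1f}] MJ/kg "
      f"(source reliability {reliability})")
```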