
    Aspects of dealing with imperfect data in temporal databases

    In reality, some objects or concepts have properties with a time-variant or time-related nature. Modelling such objects or concepts in a (relational) database schema is possible, but time-variant and time-related attributes have an impact on the consistency of the entire database; temporal database models have therefore been proposed to deal with this. Time itself can be a source of imprecision, vagueness and uncertainty, since existing time-measuring devices are inherently imperfect. Accordingly, human beings manage time using temporal indications and temporal notions, which may contain imprecision, vagueness and uncertainty. Whereas the imperfection in human-used temporal indications is resolved by human interpretation, information systems require explicit support for it. Several proposals exist for dealing with such imperfections when modelling temporal aspects. Some of these proposals take as their basis the conversion of the specificity of temporal notions between the temporal expressions used; other proposals consider the temporal indications in those expressions to be the source of imperfection. This chapter gives an overview of the basic concepts and issues related to the modelling of time, as such or in (relational) database models, and of the imperfections that may arise during or as a result of this modelling. In addition, a novel, currently researched technique for handling some of these imperfections is presented.
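
    To make the kind of imperfection discussed here concrete, the following is a minimal Python sketch of one common representation: an imprecise temporal indication modelled as a trapezoidal possibility distribution over the time axis. The class name, field names and the day-numbered axis are illustrative assumptions, not the chapter's actual model.

        # Illustrative only: "around early 1990" as a trapezoidal
        # possibility distribution, with days counted from 1990-01-01.
        from dataclasses import dataclass

        @dataclass
        class FuzzyTimeInterval:
            a: float  # support start: possibility rises from 0 here
            b: float  # core start: possibility reaches 1
            c: float  # core end: possibility is still 1
            d: float  # support end: possibility falls back to 0

            def possibility(self, t: float) -> float:
                """Degree to which instant t belongs to the fuzzy interval."""
                if t < self.a or t > self.d:
                    return 0.0
                if self.b <= t <= self.c:
                    return 1.0
                if t < self.b:
                    return (t - self.a) / (self.b - self.a)  # rising edge
                return (self.d - t) / (self.d - self.c)      # falling edge

        early_1990 = FuzzyTimeInterval(a=-15, b=0, c=60, d=90)
        print(early_1990.possibility(30))  # 1.0: fully possible
        print(early_1990.possibility(75))  # 0.5: only partially possible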

    Beyond probabilities: A possibilistic framework to interpret ensemble predictions and fuse imperfect sources of information

    Ensemble forecasting is widely used in medium-range weather prediction to account for the uncertainty inherent in the numerical prediction of high-dimensional, nonlinear systems with high sensitivity to initial conditions. Ensemble forecasting allows one to sample possible future scenarios in a Monte-Carlo-like approximation through small strategic perturbations of the initial conditions and, in some cases, stochastic parametrization schemes of the atmosphere-ocean dynamical equations. Results are generally interpreted in a probabilistic manner by turning the ensemble into a predictive probability distribution. Yet, due to model bias and dispersion errors, this interpretation is often not reliable, and statistical postprocessing is needed to reach probabilistic calibration. This is all the more true for extreme events which, for dynamical reasons, cannot generally be associated with a significant density of ensemble members. In this work we propose a novel approach: a possibilistic interpretation of ensemble predictions, taking inspiration from possibility theory. This framework allows us to integrate other imperfect sources of information in a consistent manner, such as the insight about the system dynamics provided by the analogue method. We thereby show that probability distributions may not be the best way to extract the valuable information contained in ensemble prediction systems, especially for large lead times. Indeed, shifting to possibility theory provides more meaningful results without the need to resort to additional calibration, while maintaining or improving skill. Our approach is tested on an imperfect version of the Lorenz '96 model, and results for extreme event prediction are compared against those given by a standard probabilistic ensemble dressing.
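
    A minimal sketch of the kind of shift described here, using the classical probability-to-possibility transform of Dubois and Prade on a histogram of ensemble members. The binning, the 50-member Gaussian toy ensemble and the function name are assumptions for illustration, not the paper's exact procedure.

        import numpy as np

        def ensemble_to_possibility(members, n_bins=10):
            """Turn ensemble members into one possibility degree per bin."""
            counts, edges = np.histogram(members, bins=n_bins)
            p = counts / counts.sum()          # bin probabilities
            # Maximally specific possibility distribution dominating p:
            # pi_i = sum of all p_j with p_j <= p_i (Dubois-Prade transform)
            pi = np.array([p[p <= p_i].sum() for p_i in p])
            return pi, edges

        rng = np.random.default_rng(0)
        members = rng.normal(loc=2.0, scale=1.0, size=50)  # toy ensemble
        pi, edges = ensemble_to_possibility(members)
        print(np.round(pi, 2))  # 1.0 at the modal bin, decreasing in the tails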

    Combining Coordination and Organisation Mechanisms for the Development of a Dynamic Context-aware Information System Personalised by means of Logic-based Preference Methods

    The general objective of this thesis is to enhance current ICDs by developing a personalised information system that remains stable in dynamic and open environments, adapts its behaviour to different situations, and handles user preferences so as to effectively provide, by means of a composition of several information services, the content the user is waiting for. The system thus combines two different usage contexts: adaptive behaviour, in which the system adapts to unexpected events (e.g., the sudden failure of a service selected as an information source), and information customisation, in which the system proactively personalises a list of suggestions by considering the user's context and preferences.
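
    As a rough illustration of the adaptive side only (not the thesis's actual coordination or organisation mechanisms), the sketch below tries information services in order of user preference and falls back when the selected source suddenly fails. All service names, scores and the fetch() stub are invented for the example.

        user_prefs = {"service_a": 0.9, "service_b": 0.6, "cache": 0.2}

        def fetch(service):
            # Stub: the most preferred source fails unexpectedly.
            if service == "service_a":
                raise ConnectionError("sudden failure of the selected source")
            return f"content from {service}"

        def personalised_fetch(prefs):
            # Try sources from most to least preferred; adapt on failure.
            for service in sorted(prefs, key=prefs.get, reverse=True):
                try:
                    return fetch(service)
                except ConnectionError:
                    continue  # unexpected event: fall back to next choice
            raise RuntimeError("no information source available")

        print(personalised_fetch(user_prefs))  # -> content from service_b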

    Naive possibilistic classifiers for imprecise or uncertain numerical data

    In real-world problems, input data may be pervaded with uncertainty. In this paper, we investigate the behavior of naive possibilistic classifiers, as a counterpart to naive Bayesian ones, for dealing with classification tasks in the presence of uncertainty. For this purpose, we extend possibilistic classifiers, which have recently been adapted to numerical data, in order to cope with uncertainty in data representation. Here the possibility distributions used are supposed to encode the family of Gaussian probability distributions compatible with the considered dataset. We consider two types of uncertainty: (i) the uncertainty associated with the class in the training set, which is modeled by a possibility distribution over class labels, and (ii) the imprecision pervading attribute values in the testing set, represented in the form of intervals for continuous data. Moreover, the approach takes into account the uncertainty about the estimation of the Gaussian distribution parameters due to the limited amount of available data. We first adapt the possibilistic classification model, previously proposed for the certain case, to accommodate uncertainty about class labels. Then, we propose an algorithm based on the extension principle to deal with imprecise attribute values. The reported experiments show the value of possibilistic classifiers for handling uncertainty in data. In particular, the probability-to-possibility transform-based classifier shows a robust behavior when dealing with imperfect data.
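
    A minimal sketch of this flavour of classifier, under a simplifying assumption: the class-conditional possibility of a value is taken as the Gaussian shape rescaled to peak at 1 (a stand-in for the paper's probability-to-possibility transform), attribute possibilities are combined by min, and an interval-valued attribute is scored by the supremum over the interval, in the spirit of the extension principle. The class names and parameters are invented.

        import math

        def gaussian_possibility(x, mu, sigma):
            return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

        def interval_possibility(lo, hi, mu, sigma):
            # Supremum of the possibility over [lo, hi] (extension principle):
            # 1 if the mode lies inside, else the value at the nearest endpoint.
            if lo <= mu <= hi:
                return 1.0
            nearest = lo if mu < lo else hi
            return gaussian_possibility(nearest, mu, sigma)

        # Per class: one (mu, sigma) per attribute, e.g. fitted on training data.
        params = {
            "class_1": [(5.0, 0.4), (3.4, 0.4)],
            "class_2": [(5.9, 0.5), (2.8, 0.3)],
        }

        def classify(intervals):
            scores = {
                c: min(interval_possibility(lo, hi, mu, s)
                       for (lo, hi), (mu, s) in zip(intervals, ps))
                for c, ps in params.items()
            }
            return max(scores, key=scores.get), scores

        # Imprecise test instance: each attribute known only as an interval.
        print(classify([(4.8, 5.2), (3.2, 3.6)]))  # -> ('class_1', ...)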

    Indoor/outdoor navigation system based on possibilistic traversable area segmentation for visually impaired people

    Autonomous collision avoidance for visually impaired people requires specific processing for an accurate definition of the traversable area, and real-time processing of the image sequence for traversable area segmentation is therefore mandatory. Low-cost systems imply the use of poor-quality cameras, which suffer from great variability in traversable area appearance in indoor as well as outdoor environments. Given the ambiguity affecting object and traversable area appearance induced by reflections, illumination variations, occlusions, etc., an accurate segmentation of the traversable area under such conditions remains a challenge; moreover, the mix of indoor and outdoor environments adds further variability. In this paper, we present a real-time approach for fast traversable area segmentation from an image sequence recorded by a low-cost monocular camera for a navigation system. To account for all these kinds of variability in the image, we apply possibility theory to model information ambiguity. An efficient way of updating the traversable area model under each environment condition is to take traversable area samples from the processed image itself to build its possibility maps; fusing these maps then yields a fair model of the traversable area. The performance of the proposed system was evaluated on public databases with indoor and outdoor environments. Experimental results show that the method is competitive, achieving high segmentation rates.
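
    A rough sketch of the core idea (the strip location, bin count and threshold are assumptions, not the paper's tuning): grey-level histograms of image samples assumed traversable are turned into possibility distributions, every pixel is scored against them, and the resulting maps are fused.

        import numpy as np

        def possibility_map(gray, sample, n_bins=32):
            # Grey-level histogram of the sample, rescaled so its mode has
            # possibility 1, then looked up for every pixel of the image.
            counts, edges = np.histogram(sample, bins=n_bins, range=(0, 256))
            poss = counts / counts.max()
            bins = np.clip(np.digitize(gray, edges) - 1, 0, n_bins - 1)
            return poss[bins]

        rng = np.random.default_rng(1)
        frame = rng.integers(0, 256, size=(120, 160)).astype(float)

        # Two traversable-area samples from the strip in front of the
        # camera, each yielding its own possibility map of the frame.
        h, w = frame.shape
        map_a = possibility_map(frame, frame[int(0.85 * h):, : w // 2])
        map_b = possibility_map(frame, frame[int(0.85 * h):, w // 2:])

        fused = np.maximum(map_a, map_b)   # disjunctive fusion of the maps
        traversable = fused > 0.5          # illustrative decision threshold
        print(traversable.mean())          # fraction of pixels kept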

    Empirical comparison of methods for the hierarchical propagation of hybrid uncertainty in risk assessment in the presence of dependences

    Risk analysis models describing aleatory (i.e., random) events contain parameters (e.g., probabilities, failure rates) that are epistemically uncertain, i.e., known with poor precision. Whereas aleatory uncertainty is always described by probability distributions, epistemic uncertainty may be represented in different ways (e.g., probabilistic or possibilistic), depending on the information and data available. The work presented in this paper addresses the issue of accounting for (in)dependence relationships between epistemically uncertain parameters. When a probabilistic representation of epistemic uncertainty is considered, uncertainty propagation is carried out by a two-dimensional (or double) Monte Carlo (MC) simulation approach; instead, when possibility distributions are used, two approaches are undertaken: the hybrid MC and Fuzzy Interval Analysis (FIA) method and the MC-based Dempster-Shafer (DS) approach employing Independent Random Sets (IRSs). The objectives are: i) studying the effects of (in)dependence between the epistemically uncertain parameters of the aleatory probability distributions (when a probabilistic/possibilistic representation of epistemic uncertainty is adopted) and ii) studying the effect of the probabilistic/possibilistic representation of epistemic uncertainty (when the state of dependence between the epistemic parameters is defined). The Dependency Bound Convolution (DBC) approach is then undertaken within a hierarchical setting of hybrid (probabilistic and possibilistic) uncertainty propagation, in order to account for all kinds of (possibly unknown) dependences between the random variables. The analyses are carried out with reference to two toy examples, built in such a way as to allow a fair quantitative comparison between the methods and an evaluation of their rationale and appropriateness in relation to risk analysis.
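
    A toy sketch of the hybrid Monte Carlo / fuzzy interval analysis step, not the paper's case studies: the aleatory variable X ~ N(mu, 1) is propagated by MC, while the epistemic parameter mu, described by a triangular possibility distribution, is handled through alpha-cuts. Because the risk here is monotone in mu, each cut's endpoints suffice.

        import numpy as np

        rng = np.random.default_rng(0)
        z = rng.standard_normal(100_000)      # common aleatory samples

        def risk(mu, threshold=3.0):
            """MC estimate of P(X > threshold) for X ~ N(mu, 1)."""
            return np.mean(mu + z > threshold)

        # Triangular possibility for mu: support [0.5, 1.5], core 1.0.
        a, m, b = 0.5, 1.0, 1.5
        for alpha in (0.0, 0.5, 1.0):
            lo = a + alpha * (m - a)          # alpha-cut lower bound of mu
            hi = b - alpha * (b - m)          # alpha-cut upper bound of mu
            # Risk is monotone increasing in mu, so the endpoints suffice:
            print(f"alpha={alpha:.1f}: risk in [{risk(lo):.4f}, {risk(hi):.4f}]")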

    Approximate Reasoning in Hydrogeological Modeling

    The accurate determination of hydraulic conductivity is an important element of successful groundwater flow and transport modeling. However, exhaustive measurement of this hydrogeological parameter is quite costly and, as a result, unrealistic. Alternatively, relationships between hydraulic conductivity and other, less costly to measure, hydrogeological variables have been used to estimate this crucial variable whenever needed. Until now, however, the majority of these relationships have been assumed to be crisp and precise, contrary to what intuition dictates. The research presented herein addresses the imprecision inherent in hydraulic conductivity estimation by framing the process in a fuzzy logic framework. Because traditional hydrogeological practices are not suited to handling fuzzy data, various approaches to incorporating fuzzy data at different steps of the groundwater modeling process have previously been developed; such approaches have at times been both redundant and contradictory, with multiple approaches proposed for both fuzzy kriging and groundwater modeling. This research proposes a consistent rubric for the handling of fuzzy data throughout the entire groundwater modeling process: the estimation of fuzzy data from alternative hydrogeological parameters; the sampling of realizations from fuzzy hydraulic conductivity data; most importantly, the appropriate aggregation of expert-provided fuzzy hydraulic conductivity estimates with traditionally derived hydraulic conductivity measurements; and the utilization of this information in the numerical simulation of groundwater flow and transport.
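
    One common way to draw crisp realizations from a fuzzy number, shown here for a triangular fuzzy hydraulic conductivity: draw a membership level alpha at random, then a value uniformly from that alpha-cut. This is an illustrative scheme with invented numbers, not necessarily the sampling rubric proposed in this research.

        import random

        def sample_from_triangular_fuzzy(a, m, b):
            """a/m/b: support start, modal value, support end of fuzzy K."""
            alpha = random.random()            # membership level
            lo = a + alpha * (m - a)           # alpha-cut lower bound
            hi = b - alpha * (b - m)           # alpha-cut upper bound
            return random.uniform(lo, hi)

        random.seed(42)
        # Expert estimate: K "around 10, surely between 2 and 40" m/day.
        realizations = [sample_from_triangular_fuzzy(2.0, 10.0, 40.0)
                        for _ in range(5)]
        print([round(k, 1) for k in realizations])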

    A possibilistic framework for constraint-based metabolic flux analysis

    Background: Constraint-based models allow the calculation of the metabolic flux states that can be exhibited by cells, standing out as a powerful analytical tool, but they do not determine which of these states are likely to exist under given circumstances. Typical methods to perform such predictions are (a) flux balance analysis, which is based on the assumption that cell behaviour is optimal, and (b) metabolic flux analysis, which combines the model with experimental measurements.
    Results: Herein we discuss a possibilistic framework to perform metabolic flux estimations using a constraint-based model and a set of measurements. The methodology is able to handle inconsistencies, by considering sensor errors and model imprecision, to provide rich and reliable flux estimations. It can be cast as a set of linear programming problems able to handle thousands of variables efficiently, so it is suitable for dealing with large-scale networks. Moreover, the possibilistic estimation does not necessarily attempt to predict the actual fluxes with precision, but rather exploits the available data, even if scarce, to distinguish possible from impossible flux states in a gradual way.
    Conclusion: We introduce a possibilistic framework for the estimation of metabolic fluxes, which is shown to be flexible, reliable, usable in scenarios lacking data, and computationally efficient.
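
    A minimal sketch of the linear-programming flavour described here, with a toy network and invented measurements: steady-state constraints are kept hard, deviations from measured fluxes become penalised slack variables, and the possibility of the resulting estimate can then be graded, e.g. as exp(-cost).

        import numpy as np
        from scipy.optimize import linprog

        # Toy network: A -v1-> B, B -v2-> C, B -v3-> D  =>  v1 - v2 - v3 = 0.
        # Measurements: v1 ~= 10, v2 ~= 7 (each with slack variables e+, e-).
        # Decision variables: [v1, v2, v3, e1p, e1m, e2p, e2m], all >= 0.
        c = [0, 0, 0, 1, 1, 1, 1]                 # minimise total slack
        A_eq = [[1, -1, -1, 0, 0, 0, 0],          # steady state at B
                [1,  0,  0, -1, 1, 0, 0],         # v1 - e1p + e1m = 10
                [0,  1,  0, 0, 0, -1, 1]]         # v2 - e2p + e2m = 7
        b_eq = [0, 10, 7]

        res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 7)
        v = res.x[:3]
        print(np.round(v, 3), "possibility ~", np.exp(-res.fun))
        # -> [10. 7. 3.] possibility ~ 1.0 (measurements fully consistent)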