31 research outputs found

    Linear filtering and mathematical morphology on an image: A bridge

    In this paper, we show that a particular fuzzy extension of mathematical morphology coincides with a non-additive extension of linear filtering based on convolution kernels, thus bridging the two approaches.
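As a minimal illustration of the two classical operations being bridged (the function names, the cross-correlation indexing, and the max-plus convention for grey-level dilation are assumptions of this sketch, not the paper's construction), compare a convolution-based linear filter with a morphological dilation on a 1-D signal:

```python
import numpy as np

def linear_filter(x, w):
    """Classical linear filtering: additive weighted aggregation over a
    kernel window (cross-correlation form; identical to convolution for
    symmetric kernels). Out-of-range samples are simply skipped."""
    n, m = len(x), len(w)
    half = m // 2
    y = np.zeros(n)
    for i in range(n):
        for k in range(m):
            j = i + k - half
            if 0 <= j < n:
                y[i] += w[k] * x[j]
    return y

def dilation(x, g):
    """Grey-level (max-plus) dilation with structuring function g: the
    sum is replaced by a max and the product by an addition."""
    n, m = len(x), len(g)
    half = m // 2
    y = np.full(n, -np.inf)
    for i in range(n):
        for k in range(m):
            j = i + k - half
            if 0 <= j < n:
                y[i] = max(y[i], x[j] + g[k])
    return y
```

The sketch only shows the two classical endpoints: the additive sum/product pair of `linear_filter` versus the max/plus pair of `dilation`. The bridge discussed in the paper lies in a non-additive (fuzzy) extension of the former that makes the two operators coincide.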

    De l'utilisation des noyaux maxitifs en traitement de l'information

    In this thesis, we propose and develop new methods in statistics and in signal and image processing based upon possibility theory. These new methods are adapted from usual data processing tools. They aim at handling the defects of the usual methods that stem from the user's lack of knowledge when modeling the observed phenomenon. The precise, punctual outputs of the usual methods become interval-valued, hence imprecise, outputs. The interval outputs thus obtained consistently reflect the arbitrariness in the choice of the parameters of the usual methods. Many algorithms in signal processing and in statistics use, more or less explicitly, the expectation operator associated with a probabilistic representation of the neighborhood of a point, which we call a summative kernel. Thus, we group many data processing methods together under the name of summative extraction of information. Among these methods are measure modeling, linear filtering, sampling, interpolation and derivation of digital signals, probability density and cumulative distribution function estimators, etc. As an alternative to the summative extraction method, we present the maxitive extraction of information, which uses the Choquet integral operator associated with a possibilistic representation of the neighborhood of a point, which we call a maxitive kernel. The lack of knowledge about the summative kernel is handled by the fact that a maxitive kernel encodes a family of summative kernels. Moreover, the interval output of the maxitive extraction method is the set of the punctual outputs of the summative extraction methods obtained with the summative kernels encoded by the chosen maxitive kernel.
On top of this theoretical justification, we present a series of applications of the maxitive extraction method in statistics and signal processing, which constitutes a toolbox, left to be enriched and used on real cases.
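A rough numeric sketch of the interval-valued (maxitive) extraction idea described above, assuming a normalized possibility distribution `pi` as the maxitive kernel over one window of values; the function names and window handling are illustrative, not the thesis's actual code:

```python
import numpy as np

def choquet(values, capacity):
    """Choquet integral of `values` w.r.t. `capacity`, a set function
    mapping an array of indices to a number in [0, 1]."""
    order = np.argsort(values)                 # ascending order
    v = np.concatenate(([0.0], values[order]))
    total = 0.0
    for i in range(1, len(v)):
        A = order[i - 1:]                      # indices with value >= v[i]
        total += (v[i] - v[i - 1]) * capacity(A)
    return total

def maxitive_filter_interval(x, pi):
    """Interval output for one window x under maxitive kernel pi:
    upper bound = Choquet integral w.r.t. the possibility measure,
    lower bound = Choquet integral w.r.t. the dual necessity measure."""
    idx = np.arange(len(x))
    def possibility(A):
        return float(pi[A].max()) if len(A) else 0.0
    def necessity(A):
        comp = np.setdiff1d(idx, A)
        return 1.0 - (float(pi[comp].max()) if len(comp) else 0.0)
    return choquet(x, necessity), choquet(x, possibility)
```

With the vacuous kernel `pi = [1, 1, 1]` the output is the widest interval `[min(x), max(x)]`, while a Dirac kernel collapses it to a single punctual output, matching the idea that the maxitive kernel encodes a family of summative kernels whose punctual outputs the interval collects.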

    Kriging and epistemic uncertainty: a critical discussion

    Geostatistics is a branch of statistics dealing with spatial phenomena modelled by random functions. In particular, it is assumed that, under some well-chosen simplifying hypotheses of stationarity, this probabilistic model, i.e. the random function describing spatial dependencies, can be completely assessed from the dataset by the experts. Kriging is a method for estimating or predicting the spatial phenomenon at non-sampled locations from this estimated random function. In the usual kriging approach, the data are precise and the assessment of the random function is mostly made at a glance by the experts (i.e. geostatisticians) from a thorough descriptive analysis of the dataset. However, it seems more realistic to assume that spatial data are tainted with imprecision due to measurement errors and that information is lacking to properly assess a unique random function model. Thus, it would be natural to handle the epistemic uncertainty appearing in both the data specification and random function estimation steps of the kriging methodology. Epistemic uncertainty consists of some meta-knowledge about the lack of information on data precision or on model variability. The aim of this paper is to discuss the pertinence of the usual random function approach to modeling uncertainty in geostatistics, to survey the existing attempts to introduce epistemic uncertainty in geostatistics, and to propose some perspectives for developing new tractable methods that may handle this kind of uncertainty.

    A fuzzy interval analysis approach to kriging with ill-known variogram and data

    Geostatistics is a branch of statistics dealing with spatial phenomena. Kriging consists in estimating or predicting a spatial phenomenon at non-sampled locations from an estimated random function. It is assumed that, under some well-chosen simplifying hypotheses of stationarity, the probabilistic model, i.e. the random function describing spatial dependencies, can be completely assessed from the dataset. However, in the usual kriging approach, the choice of the random function is mostly made at a glance by the experts (i.e. geostatisticians), via the selection of a variogram from a thorough descriptive analysis of the dataset. Although the information necessary to properly select a unique random function model seems to be partially lacking, geostatistics in general, and the kriging methodology in particular, do not account for the incompleteness of the information that seems to pervade the procedure. The paper proposes an approach to handle epistemic uncertainty appearing in the kriging methodology. On the one hand, the collected data may be tainted with errors that can be modelled by intervals or fuzzy intervals. On the other hand, the choice of parameter values for the theoretical variogram, an essential step, contains some degrees of freedom that are seldom acknowledged. In this paper, we propose to account for the epistemic uncertainty pervading the variogram parameters, and possibly the dataset, and lay bare its impact on the kriging results, improving on previous attempts by Bardossy and colleagues in the late 1980s.
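A crude numeric sketch of the variogram side of this idea, assuming an exponential variogram model and plain interval (not fuzzy) uncertainty on its range parameter; all names are illustrative, and the parameter sweep is a naive discretization rather than a proper optimization over the whole parameter family:

```python
import numpy as np

def variogram(h, sill, rng):
    """Exponential variogram model: gamma(h) = sill * (1 - exp(-h / rng))."""
    return sill * (1.0 - np.exp(-h / rng))

def ordinary_kriging(xs, zs, x0, sill, rng):
    """Ordinary kriging predictor at x0 from 1-D samples (xs, zs),
    solving the standard variogram-based system with the unbiasedness
    (Lagrange multiplier) row appended."""
    n = len(xs)
    G = np.zeros((n + 1, n + 1))
    G[:n, :n] = variogram(np.abs(xs[:, None] - xs[None, :]), sill, rng)
    G[n, :n] = 1.0
    G[:n, n] = 1.0
    g = np.append(variogram(np.abs(xs - x0), sill, rng), 1.0)
    w = np.linalg.solve(G, g)
    return float(w[:n] @ zs)

def kriging_interval(xs, zs, x0, sill, rng_interval, steps=50):
    """Interval prediction obtained by sweeping the ill-known range
    parameter over its interval and keeping the extreme predictions."""
    preds = [ordinary_kriging(xs, zs, x0, sill, r)
             for r in np.linspace(*rng_interval, steps)]
    return min(preds), max(preds)
```

Since ordinary kriging interpolates exactly, the interval collapses to a point at sampled locations and widens in between as the ill-known range parameter varies over its interval.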


    Kriging with ill-known variogram and data

    Kriging consists in estimating or predicting a spatial phenomenon at non-sampled locations from an estimated random function. Although the information necessary to properly select a unique random function model seems to be partially lacking, geostatistics in general, and the kriging methodology in particular, do not account for the incompleteness of the information that seems to pervade the procedure. On the one hand, the collected data may be tainted with errors that can be modelled by intervals or fuzzy intervals. On the other hand, the choice of parameter values for the theoretical variogram, an essential step, contains some degrees of freedom that are seldom acknowledged. In this paper, we propose to account for the epistemic uncertainty pervading the variogram parameters, and possibly the data set, by means of fuzzy interval uncertainty. We lay bare its impact on the kriging results, improving on previous attempts by Bardossy and colleagues in the late 1980s.