
    Beyond probabilities: A possibilistic framework to interpret ensemble predictions and fuse imperfect sources of information

    Ensemble forecasting is widely used in medium-range weather prediction to account for the uncertainty inherent in the numerical prediction of high-dimensional, nonlinear systems that are highly sensitive to initial conditions. Ensemble forecasting samples possible future scenarios in a Monte-Carlo-like approximation through small strategic perturbations of the initial conditions and, in some cases, stochastic parametrization schemes of the atmosphere–ocean dynamical equations. Results are generally interpreted probabilistically by turning the ensemble into a predictive probability distribution. Yet, due to model bias and dispersion errors, this interpretation is often not reliable, and statistical postprocessing is needed to reach probabilistic calibration. This is all the more true for extreme events which, for dynamical reasons, cannot generally be associated with a significant density of ensemble members. In this work we propose a novel approach: a possibilistic interpretation of ensemble predictions, taking inspiration from possibility theory. This framework allows us to integrate, in a consistent manner, other imperfect sources of information, such as the insight into the system dynamics provided by the analogue method. We thereby show that probability distributions may not be the best way to extract the valuable information contained in ensemble prediction systems, especially at large lead times. Indeed, shifting to possibility theory provides more meaningful results without the need for additional calibration, while maintaining or improving skill. Our approach is tested on an imperfect version of the Lorenz '96 model, and results for extreme-event prediction are compared against those given by a standard probabilistic ensemble dressing.
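One way to turn an ensemble's empirical frequencies into possibility degrees is the classical Dubois–Prade probability-to-possibility transform; the sketch below is a generic illustration of that idea, not the construction used in the paper:

```python
import numpy as np

def prob_to_poss(p):
    """Dubois-Prade transform: pi_i = sum of all p_j with p_j <= p_i.

    The most probable outcome receives possibility 1; rarer outcomes
    receive gradually smaller possibility degrees.
    """
    p = np.asarray(p, dtype=float)
    return np.array([p[p <= pi].sum() for pi in p])

# Empirical frequencies of a 10-member ensemble falling into 3 bins
p = np.array([0.5, 0.3, 0.2])
poss = prob_to_poss(p)
# -> [1.0, 0.5, 0.2]
```

Note how the possibility of the dominant bin saturates at 1 while low-density bins (the extreme events mentioned above) are never assigned possibility 0 unless their frequency is exactly 0.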

    Representing archaeological uncertainty in cultural informatics

    This thesis sets out to explore, describe, quantify, and visualise uncertainty in a cultural informatics context, with a focus on archaeological reconstructions. For quite some time, archaeologists and heritage experts have been criticising the often too-realistic appearance of three-dimensional reconstructions. They have been highlighting one of the unique features of archaeology: the information we have on our heritage will always be incomplete. This incompleteness should be reflected in digitised reconstructions of the past. This criticism is the driving force behind this thesis. The research examines archaeological theory and inferential process and provides insight into computer visualisation. It describes how these two areas, archaeology and computer graphics, have formed a useful, but often tumultuous, relationship through the years. By examining the uncertainty background of disciplines such as GIS, medicine, and law, the thesis postulates that archaeological visualisation, in order to mature, must move towards archaeological knowledge visualisation. Three sequential areas are proposed through this thesis for the initial exploration of archaeological uncertainty: identification, quantification, and modelling. The main contributions of the thesis lie in those three areas. Firstly, through the innovative design, distribution, and analysis of a questionnaire, the thesis identifies the importance of uncertainty in archaeological interpretation and discovers potential preferences among different evidence types. Secondly, the thesis uniquely analyses and evaluates, in relation to archaeological uncertainty, three different belief quantification models. The varying ways that these mathematical models work are also evaluated through simulated experiments. Comparison of results indicates significant convergence between the models. Thirdly, a novel approach to archaeological uncertainty and evidence-conflict visualisation is presented, influenced by information visualisation schemes. Lastly, suggestions for future semantic extensions to this research are presented through the design and development of new plugins for a search engine.
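The abstract does not name the three belief quantification models it evaluates; Dempster–Shafer theory is one widely used candidate for combining conflicting evidence, and its combination rule can be sketched as follows (the roof-type example and the mass values are purely illustrative):

```python
def combine_dempster(m1, m2):
    """Dempster's rule of combination for two mass functions over one frame.

    Masses are dicts mapping frozensets of hypotheses to belief mass.
    K is the total conflicting mass; surviving mass is renormalised by 1-K.
    """
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    return {s: v / (1.0 - conflict) for s, v in combined.items()}, conflict

# Two sources of archaeological evidence about a roof type (hypothetical)
theta = frozenset({"tiled", "thatched"})            # the full frame
m1 = {frozenset({"tiled"}): 0.6, theta: 0.4}        # excavation finds
m2 = {frozenset({"thatched"}): 0.5, theta: 0.5}     # textual source
m12, K = combine_dempster(m1, m2)
# K == 0.3 quantifies how strongly the two sources conflict
```

The conflict mass K is exactly the kind of quantity an evidence-conflict visualisation could encode.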

    Open and Closed Systems in Entropy Data Analysis

    This dissertation presents an efficient and integrated approach to data analysis. Entropy data analysis is an evolution of reconstructability analysis, which investigates the relationships between parts and wholes. The central theme of the dissertation is the development of a system to condense the information in a data set into a small number of parameters. The new system is called a k-system. k-system analysis goes beyond traditional data analysis in that changes can be made to a system and the impact of those changes evaluated. The k-system can be useful in designing and evaluating open and closed systems, whose behavior is measured by the k(.) function. The k-system itself is a closed system. The concept of an open system arises when we try to reconstruct the k-system, starting from an empty system that contains no information and adding information until the constructed system adequately reproduces the k-system. The dissertation identifies and isolates the mathematics in the k-system algorithms that determines whether a system is open or closed, and modifies those algorithms to offer the option of an open or closed system.
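The part-whole reconstruction at the heart of reconstructability analysis can be illustrated by rebuilding a joint distribution from its one-variable marginals and measuring the information lost; this is a generic sketch of that idea, not the dissertation's k(.) function:

```python
import numpy as np

def transmission(joint):
    """Information (in bits) lost when a two-variable joint distribution
    is reconstructed from its one-variable marginals.

    For two variables this equals the mutual information: zero means the
    independence model reproduces the whole exactly, i.e. nothing is lost
    by decomposing the system into its parts.
    """
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1, keepdims=True)    # marginal of the first variable
    py = joint.sum(axis=0, keepdims=True)    # marginal of the second
    recon = px * py                          # independence reconstruction
    mask = joint > 0
    return float((joint[mask] * np.log2(joint[mask] / recon[mask])).sum())

independent = np.array([[0.25, 0.25], [0.25, 0.25]])
dependent   = np.array([[0.5, 0.0], [0.0, 0.5]])
# transmission(independent) -> 0.0 ; transmission(dependent) -> 1.0 bit
```

A transmission of zero says the "open" reconstruction from parts is already adequate; a positive value quantifies how much structure the closed whole contains beyond its parts.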

    Identification of pore spaces in 3D CT soil images using a PFCM partitional clustering

    Recent advances in non-destructive imaging techniques, such as X-ray computed tomography (CT), make it possible to analyse pore-space features through direct visualisation of soil structures. A quantitative characterisation of the three-dimensional solid-pore architecture is important for understanding soil mechanics, as it relates to the control of biological, chemical, and physical processes across scales. This analysis technique therefore offers an opportunity to better interpret soil strata, as new and relevant information can be obtained. In this work, we propose an approach to automatically identify the pore structure of a set of 200 2D images that represent slices of an original 3D CT image of a soil sample, accomplished through non-linear enhancement of the pixel grey levels and image segmentation based on a PFCM (Possibilistic Fuzzy C-Means) algorithm. Once the solids and pore spaces have been identified, the set of 200 2D images is used to reconstruct an approximation of the soil sample by projecting only the pore spaces. This reconstruction shows the structure of the soil and its pores, which become more bounded, less bounded, or unbounded with changes in depth. If the soil-sample image quality is sufficiently favourable in terms of contrast, noise, and sharpness, pore identification is less complicated, and the PFCM clustering algorithm can be used without additional processing; otherwise, the images require pre-processing before using this algorithm. Promising results were obtained with four soil samples: the first was used to show the algorithm's validity, and the other three to demonstrate the robustness of our proposal. The methodology we present here can better detect soil solids and pore spaces in CT images, enabling the generation of better 2D-3D representations of pore structures from segmented 2D images.
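PFCM extends standard fuzzy c-means (FCM) with possibilistic typicality terms; a minimal FCM on grey levels conveys the core segmentation step. The sketch below is plain FCM, not the full PFCM, and the grey values and parameters are illustrative:

```python
import numpy as np

def fcm_1d(x, c=2, m=2.0, iters=50, seed=0):
    """Fuzzy c-means on 1-D grey levels.

    Returns cluster centres (c,) and the membership matrix u (n, c),
    where u[i, k] is the degree to which pixel i belongs to cluster k.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    u = rng.random((x.size, c))
    u /= u.sum(axis=1, keepdims=True)            # rows sum to 1
    p = 2.0 / (m - 1.0)
    for _ in range(iters):
        w = u ** m
        centres = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)
        d = np.abs(x[:, None] - centres[None, :]) + 1e-12
        # u[i,k] proportional to d[i,k]^(-2/(m-1)), normalised per pixel
        u = (d ** -p) / (d ** -p).sum(axis=1, keepdims=True)
    return centres, u

# Synthetic slice: dark pore voxels near grey 30, bright solids near 200
grey = np.concatenate([np.full(100, 30.0), np.full(100, 200.0)])
centres, u = fcm_1d(grey)
pores = grey[u[:, np.argmin(centres)] > 0.5]   # high membership in dark cluster
```

In the possibilistic variant, the membership update adds a typicality term so that noisy pixels far from every centre receive low typicality instead of being forced into one of the clusters.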

    Unsupervised tracking of time-evolving data streams and an application to short-term urban traffic flow forecasting

    I am indebted to many people for the help and support I received during my Ph.D. study and research at DIBRIS, University of Genoa. First and foremost, I would like to express my sincere thanks to my supervisors, Prof. Masulli and Prof. Rovetta, for their invaluable guidance, frequent meetings and discussions, and their encouragement and support along my research path. I thank all the members of DIBRIS for their support and kindness during my four-year Ph.D. I would also like to acknowledge the contribution of the projects Piattaforma per la mobilità Urbana con Gestione delle INformazioni da sorgenti eterogenee (PLUG-IN) and COST Action IC1406 High Performance Modelling and Simulation for Big Data Applications (cHiPSet). Last and most importantly, I wish to thank my family: my wife Shaimaa, who stays with me through the joys and pains; my daughter and son, who give me happiness every day; and my parents, for their constant love and encouragement.

    A Software Architecture for Reconstructability Analysis

    Software packages for reconstructability analysis (RA), as well as for related log-linear modeling, generally provide a fixed set of functions. Such packages are suitable for end-users applying RA in various domains, but do not provide a platform for research into the RA methods themselves. A new software system, Occam3, is being developed to address three goals which often conflict with one another: a general and flexible infrastructure for experimentation with RA methods and algorithms; an easily configured system allowing methods to be combined in novel ways, without requiring deep software expertise; and a system which can be easily used by domain researchers who are not computer specialists. Meeting these goals has led to an architecture which strictly separates functions into three layers: the core, which provides representation of data sets, relations, and models; the management layer, which provides extensible objects for the development of new algorithms; and the script layer, which allows the other facilities to be combined in novel ways to address a particular domain analysis problem.

    A possibilistic framework for constraint-based metabolic flux analysis

    Background: Constraint-based models allow the calculation of the metabolic flux states that cells can exhibit, standing out as a powerful analytical tool, but they do not determine which of these states are likely to exist under given circumstances. Typical methods to perform these predictions are (a) flux balance analysis, which is based on the assumption that cell behaviour is optimal, and (b) metabolic flux analysis, which combines the model with experimental measurements.
    Results: Herein we discuss a possibilistic framework to perform metabolic flux estimations using a constraint-based model and a set of measurements. The methodology is able to handle inconsistencies, by considering sensor errors and model imprecision, to provide rich and reliable flux estimations. It can be cast as linear programming problems able to handle thousands of variables efficiently, so it is suitable for large-scale networks. Moreover, the possibilistic estimation does not necessarily attempt to predict the actual fluxes with precision, but rather to exploit the available data, even if scarce, to distinguish possible from impossible flux states in a gradual way.
    Conclusion: We introduce a possibilistic framework for the estimation of metabolic fluxes, which is shown to be flexible, reliable, usable in scenarios lacking data, and computationally efficient.
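Casting a flux estimate as a linear programme can be sketched with SciPy: a toy two-reaction pathway at steady state, where a measured flux is allowed to deviate via non-negative slack variables whose total (a crude stand-in for the paper's possibilistic penalty) is minimised. The network, bounds, and measured value are all illustrative:

```python
from scipy.optimize import linprog

# Toy pathway A --v1--> B --v2--> C ; steady state on B gives v1 - v2 = 0.
# v1 is measured as 5.0; ep/em are slacks absorbing measurement error.
# Variables: [v1, v2, ep, em]; objective: minimise total slack ep + em.
c = [0.0, 0.0, 1.0, 1.0]
A_eq = [
    [1.0, -1.0, 0.0,  0.0],   # steady state: v1 - v2 = 0
    [1.0,  0.0, 1.0, -1.0],   # measurement:  v1 + ep - em = 5.0
]
b_eq = [0.0, 5.0]
bounds = [(0, 10), (0, 10), (0, None), (0, None)]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
v1, v2 = res.x[0], res.x[1]   # model and measurement agree: both 5.0
```

In the possibilistic setting, the slack cost is interpreted as (the negative logarithm of) a possibility degree, so re-solving with different tolerated costs traces out which flux states are more or less possible, rather than a single point estimate.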