
    Data granulation by the principles of uncertainty

    Research in granular modeling has produced a variety of mathematical models, such as intervals, (higher-order) fuzzy sets, rough sets, and shadowed sets, all of which are suitable for characterizing so-called information granules. Modeling the uncertainty of the input data is recognized as a crucial aspect of information granulation. Moreover, uncertainty is a well-studied concept in many mathematical settings, such as those of probability theory, fuzzy set theory, and possibility theory. This fact suggests that an appropriate quantification of the uncertainty expressed by the information granule model could be used to define an invariant property, to be exploited in practical situations of information granulation. From this perspective, a procedure of information granulation is effective if the uncertainty conveyed by the synthesized information granule is in a monotonically increasing relation with the uncertainty of the input data. In this paper, we present a data granulation framework that elaborates on the principles of uncertainty introduced by Klir. Since uncertainty is a mesoscopic descriptor of systems and data, it is possible to apply such principles regardless of the input data type and the specific mathematical setting adopted for the information granules. The proposed framework is conceived (i) to offer a guideline for the synthesis of information granules and (ii) to build a groundwork for comparing and quantitatively judging different data granulation procedures. To provide a suitable case study, we introduce a new data granulation technique based on the minimum sum of distances, which is designed to generate type-2 fuzzy sets. We analyze the procedure by performing different experiments on two distinct data types: feature vectors and labeled graphs. Results show that the uncertainty of the input data is suitably conveyed by the generated type-2 fuzzy set models.
    Comment: 16 pages, 9 figures, 52 references
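
    To make the monotonicity principle concrete, the following minimal Python sketch synthesizes interval granules (a simpler stand-in for the paper's type-2 fuzzy sets) and scores them with a Hartley-like nonspecificity measure, one of the uncertainty measures systematized by Klir. The quantile coverage heuristic and all function names are illustrative assumptions, not the paper's minimum-sum-of-distances procedure.

```python
import numpy as np

def interval_granule(data, coverage=0.9):
    """Synthesize an interval granule covering a given fraction of the data.
    (Hypothetical heuristic; the paper uses minimum sum of distances.)"""
    tail = (1.0 - coverage) / 2.0
    lo, hi = np.quantile(data, [tail, 1.0 - tail])
    return lo, hi

def nonspecificity(lo, hi):
    """Hartley-like nonspecificity of an interval granule: log of (1 + width)."""
    return np.log(1.0 + (hi - lo))

rng = np.random.default_rng(0)
for sigma in (0.5, 1.0, 2.0):  # increasing uncertainty of the input data
    data = rng.normal(0.0, sigma, size=1000)
    lo, hi = interval_granule(data)
    print(f"sigma={sigma:.1f}  granule=[{lo:+.2f}, {hi:+.2f}]  "
          f"nonspecificity={nonspecificity(lo, hi):.3f}")
```

    As the spread of the input data grows, the nonspecificity of the synthesized granule grows with it, which is exactly the monotone relation the framework requires of an effective granulation procedure.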

    Resilience Assignment Framework using System Dynamics and Fuzzy Logic

    This paper is concerned with the development of a conceptual framework that measures the resilience of a transport network under climate-change-related events. Although developed for such events, the conceptual framework could be adapted and quantified to suit each disruption's unique impacts. The proposed resilience framework evaluates the changes in transport network performance as a multi-stage process: pre-, during, and post-disruption. The framework will be of use to decision makers in understanding the dynamic nature of resilience under various events. Furthermore, it could be used as an evaluation tool to gauge transport network performance and highlight weaknesses in the network. In this paper, the system dynamics approach and fuzzy logic theory are integrated and employed to study three characteristics of network resilience. This methodology was selected to overcome two dominant problems in transport modelling, namely complexity and uncertainty. The system dynamics approach is intended to overcome the double-counting effect of extreme events on various resilience characteristics because of its ability to model feedback processes and time delays. Fuzzy logic, on the other hand, is used to model the relationships among variables that are difficult to express in numerical form, such as redundancy and mobility.
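
    A minimal sketch of the fuzzy-logic side of such a framework, assuming hand-rolled triangular membership functions and a two-rule Mamdani-style inference that maps qualitative redundancy and mobility levels to a resilience score. The rule base, universes, and thresholds are hypothetical, not the paper's.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b over support [a, c];
    degenerate edges (a == b or c == b) act as shoulders."""
    x = np.asarray(x, dtype=float)
    left  = np.where(b > a, (x - a) / (b - a if b > a else 1.0), (x >= b).astype(float))
    right = np.where(c > b, (c - x) / (c - b if c > b else 1.0), (x <= b).astype(float))
    return np.clip(np.minimum(left, right), 0.0, 1.0)

def resilience_score(redundancy, mobility):
    """Toy two-rule Mamdani inference with centroid defuzzification."""
    hi_r = tri(redundancy, 0.4, 1.0, 1.0)   # redundancy is high
    lo_r = tri(redundancy, 0.0, 0.0, 0.6)   # redundancy is low
    hi_m = tri(mobility,   0.4, 1.0, 1.0)   # mobility is high
    lo_m = tri(mobility,   0.0, 0.0, 0.6)   # mobility is low
    w_high = np.minimum(hi_r, hi_m)  # Rule 1: high AND high -> resilience high
    w_low  = np.maximum(lo_r, lo_m)  # Rule 2: low OR low    -> resilience low
    y = np.linspace(0.0, 1.0, 101)   # output universe of discourse
    agg = np.maximum(np.minimum(w_high, tri(y, 0.4, 1.0, 1.0)),
                     np.minimum(w_low,  tri(y, 0.0, 0.0, 0.6)))
    return float((y * agg).sum() / (agg.sum() + 1e-12))

print(resilience_score(redundancy=0.8, mobility=0.7))  # high inputs -> high score
print(resilience_score(redundancy=0.2, mobility=0.9))  # weak redundancy drags it down
```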

    On the dialog between experimentalist and modeler in catchment hydrology

    The dialog between experimentalist and modeler in catchment hydrology has been minimal to date. The experimentalist often has a highly detailed yet highly qualitative understanding of dominant runoff processes; thus there is often much more information content on the catchment than we use for the calibration of a model. While modelers often appreciate the need for 'hard data' in the model calibration process, little thought has been given to how modelers might access this 'soft' process knowledge. We present a new method in which soft data (i.e., qualitative knowledge from the experimentalist that cannot be used directly as exact numbers) are made useful through fuzzy measures of model-simulation and parameter-value acceptability. We developed a three-box lumped conceptual model for the Maimai catchment in New Zealand, a particularly well-studied process-hydrological research catchment. The boxes represent the key hydrological reservoirs, which are known to have distinct groundwater dynamics, isotopic composition, and solute chemistry. The model was calibrated against hard data (runoff and groundwater levels) as well as a number of criteria derived from the soft data (e.g., percent new water and reservoir volume). We achieved very good fits for the three-box model when optimizing the parameter values against runoff alone (Reff = 0.93). However, parameter sets obtained in this way generally showed a poor goodness-of-fit for other criteria, such as the simulated new-water contributions to peak runoff. Including the soft-data criteria in the model calibration process resulted in lower Reff values (around 0.84 when including all criteria) but led to better overall performance, as judged against the experimentalist's view of catchment runoff dynamics. Model performance with respect to the soft data (for instance, the new-water ratio) increased significantly, and parameter uncertainty was reduced by 60% on average with the introduction of the soft-data multi-criteria calibration. We argue that accepting lower model efficiencies for runoff is 'worth it' if one can develop a more 'real' model of catchment behavior. The use of soft data is an approach to formalize this exchange between experimentalist and modeler and to more fully utilize the information content from experimental catchments.
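
    A minimal sketch of the soft-data idea, assuming a trapezoidal fuzzy acceptability measure for one soft criterion (the simulated new-water fraction) combined with the hard runoff criterion (the Nash-Sutcliffe efficiency, the Reff of the abstract). The acceptable ranges and the min-combination rule are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe model efficiency: the hard runoff criterion (Reff)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def fuzzy_acceptability(value, full_lo, full_hi, zero_lo, zero_hi):
    """Trapezoidal acceptability: 1 inside [full_lo, full_hi], tapering to 0
    at zero_lo / zero_hi -- a fuzzy encoding of a soft, qualitative range."""
    if value <= zero_lo or value >= zero_hi:
        return 0.0
    if full_lo <= value <= full_hi:
        return 1.0
    if value < full_lo:
        return (value - zero_lo) / (full_lo - zero_lo)
    return (zero_hi - value) / (zero_hi - full_hi)

# Hard criterion: runoff fit; soft criterion: simulated new-water fraction,
# with a hypothetical experimentalist range of roughly 20-35% new water.
reff = nash_sutcliffe(obs=[1.0, 2.5, 4.0, 2.0], sim=[1.1, 2.3, 3.8, 2.2])
soft = fuzzy_acceptability(0.30, full_lo=0.20, full_hi=0.35,
                           zero_lo=0.10, zero_hi=0.50)
overall = min(reff, soft)  # conservative multi-criteria combination
print(f"Reff={reff:.3f}  soft acceptability={soft:.2f}  overall={overall:.3f}")
```

    A parameter set that maximizes Reff alone can still score zero on the soft criterion; the combined measure rejects it, which is how the soft data constrain the calibration.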

    Uncertainty Analysis of the Adequacy Assessment Model of a Distributed Generation System

    Due to the inherent aleatory uncertainties in renewable generators, reliability/adequacy assessments of distributed generation (DG) systems have focused particularly on the probabilistic modeling of random behaviors, given sufficiently informative data. However, another type of uncertainty (epistemic uncertainty) must be accounted for in the modeling, due to incomplete knowledge of the phenomena and imprecise evaluation of the related characteristic parameters. When informative data are scarce, this type of uncertainty calls for alternative methods of representation, propagation, analysis, and interpretation. In this study, we make a first attempt to identify, model, and jointly propagate aleatory and epistemic uncertainties in the context of DG system modeling for adequacy assessment. Probability and possibility distributions are used to model the aleatory and epistemic uncertainties, respectively. Evidence theory is used to incorporate the two uncertainties within a single framework. Based on the plausibility and belief functions of evidence theory, a hybrid propagation approach is introduced. A demonstration is given on a DG system adapted from the IEEE 34-node distribution test feeder. Compared to a purely probabilistic approach, the hybrid propagation is shown to explicitly carry the imprecision in the knowledge of the DG parameters through to the final assessed adequacy values. It also effectively captures the growth of uncertainty at higher DG penetration levels.
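
    A minimal sketch of hybrid propagation under stated assumptions: an outer Monte Carlo loop samples the probabilistic (aleatory) input, an inner loop propagates alpha-cuts of a triangular possibility distribution for the epistemic input, and averaging over alpha levels approximates the belief and plausibility of an adequacy event. The toy margin model and all numbers are hypothetical, not the paper's DG system.

```python
import numpy as np

def alpha_cut(a, m, b, alpha):
    """Alpha-cut interval of a triangular possibility distribution (a, m, b)."""
    return a + alpha * (m - a), b - alpha * (b - m)

def margin(wind_cap, demand):
    """Toy adequacy indicator: generation margin (hypothetical DG model)."""
    return wind_cap - demand

rng = np.random.default_rng(1)
alphas = np.linspace(0.0, 1.0, 21)
bel_hits, pl_hits = [], []
for _ in range(2000):                          # outer loop: aleatory (Monte Carlo)
    demand = rng.normal(10.0, 1.5)             # probabilistic (aleatory) input
    for a in alphas:                           # inner loop: epistemic (alpha-cuts)
        lo, hi = alpha_cut(9.0, 12.0, 15.0, a) # possibilistic wind capacity
        out_lo, out_hi = margin(lo, demand), margin(hi, demand)
        bel_hits.append(out_lo >= 0.0)         # whole output interval in the event
        pl_hits.append(out_hi >= 0.0)          # output interval meets the event

print("Bel(margin >= 0) ~", np.mean(bel_hits))  # pessimistic bound
print("Pl(margin >= 0)  ~", np.mean(pl_hits))   # optimistic bound
```

    The gap between the belief and plausibility estimates is the explicit expression of epistemic imprecision that a single probability number would hide.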

    Locality Uncertainty and the Differential Performance of Four Common Niche-Based Modeling Techniques

    We address a poorly understood aspect of ecological niche modeling: its sensitivity to different levels of geographic uncertainty in organism occurrence data. Our primary interest was to assess how accuracy degrades under increasing uncertainty, with performance measured indirectly through model consistency. We used Monte Carlo simulations and a similarity measure to assess model sensitivity across three variables: locality accuracy, niche modeling method, and species. Randomly generated data sets with known levels of locality uncertainty were compared to an original prediction using Fuzzy Kappa. Data sets with low locality uncertainty were expected to produce distribution maps similar to the original; in contrast, data sets with high locality uncertainty were expected to produce less similar maps. BIOCLIM, DOMAIN, Maxent, and GARP were used to predict the distributions for 1,200 simulated data sets (3 species × 4 buffer sizes × 100 randomized data sets). Our experimental design thus produced a total of 4,800 similarity measures, with each simulated distribution compared to the prediction of the original data set under the corresponding modeling method. A general linear model (GLM) analysis was performed, which enabled us to simultaneously measure the effects of buffer size, modeling method, and species, as well as the interactions among all variables. Our results show that modeling method has the largest effect on similarity scores, uniquely accounting for 40% of the total variance in the model. The second most important factor was buffer size, which uniquely accounts for only 3% of the variation. The newer and currently more popular methods, GARP and Maxent, produced more inconsistent predictions than the earlier and simpler methods, BIOCLIM and DOMAIN. Understanding the performance of different niche modeling methods under varying levels of geographic uncertainty is an important step toward more productive applications of historical biodiversity collections.
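
    A minimal sketch of the Monte Carlo design, assuming a BIOCLIM-style rectangular envelope as the niche model, uniform jitter within a buffer to simulate locality uncertainty, and plain Cohen's kappa as a stand-in for the Fuzzy Kappa used in the study; the grid, localities, and buffer sizes are synthetic.

```python
import numpy as np

rng = np.random.default_rng(42)

def bioclim_envelope(points, grid):
    """BIOCLIM-style envelope: grid cells within the min/max of the points."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    return ((grid >= lo) & (grid <= hi)).all(axis=1)

def cohen_kappa(a, b):
    """Plain Cohen's kappa between two binary maps (stand-in for Fuzzy Kappa)."""
    po = np.mean(a == b)
    pe = a.mean() * b.mean() + (1 - a.mean()) * (1 - b.mean())
    return (po - pe) / (1 - pe + 1e-12)

grid = rng.uniform(0, 10, size=(5000, 2))      # toy 2-variable climate space
localities = rng.uniform(3, 7, size=(30, 2))   # "true" occurrence records
baseline = bioclim_envelope(localities, grid)  # prediction from original data

for buffer in (0.1, 0.5, 1.0, 2.0):            # increasing locality uncertainty
    kappas = []
    for _ in range(100):                       # Monte Carlo randomizations
        jitter = rng.uniform(-buffer, buffer, localities.shape)
        jittered = bioclim_envelope(localities + jitter, grid)
        kappas.append(cohen_kappa(baseline, jittered))
    print(f"buffer={buffer:.1f}  mean kappa={np.mean(kappas):.3f}")
```

    Declining mean kappa with growing buffer size mirrors the study's expectation that higher locality uncertainty yields less consistent predictions; rerunning the loop with a different envelope function would expose the method effect the GLM analysis quantifies.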