Relational Hidden Variables and Non-Locality
We use a simple relational framework to develop the key notions and results
on hidden variables and non-locality. The extensive literature on these topics
in the foundations of quantum mechanics is couched in terms of probabilistic
models, and properties such as locality and no-signalling are formulated
probabilistically. We show that to a remarkable extent, the main structure of
the theory, through the major No-Go theorems and beyond, survives intact under
the replacement of probability distributions by mere relations.
Comment: 42 pages in journal style. To appear in Studia Logica.
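A standard example from this literature (not necessarily the one used in the paper) makes the relational setting concrete. The sketch below, a minimal Python illustration with hypothetical names, replaces probabilities by a mere possibility relation for the Popescu-Rohrlich box and checks that no deterministic hidden-variable assignment is consistent with its support:

```python
from itertools import product

# Support of the PR box: outcome (a, b) is possible for inputs (x, y)
# iff a XOR b == x AND y.  No probabilities -- only a relation.
def pr_possible(x, y, a, b):
    return (a ^ b) == (x & y)

# A deterministic hidden variable fixes Alice's outputs a0, a1 (for
# x = 0, 1) and Bob's outputs b0, b1.  A relational local hidden-variable
# model would require some assignment consistent with every input pair.
consistent = [
    (a0, a1, b0, b1)
    for a0, a1, b0, b1 in product((0, 1), repeat=4)
    if all(pr_possible(x, y, (a0, a1)[x], (b0, b1)[y])
           for x, y in product((0, 1), repeat=2))
]
print(consistent)  # [] -- no deterministic assignment is consistent
```

The parity argument behind the empty result: summing a_x XOR b_y over the four contexts always gives 0 mod 2, while the PR relation demands 1, so the no-go survives at the purely relational level.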
Uncertainty Analysis of the Adequacy Assessment Model of a Distributed Generation System
Due to the inherent aleatory uncertainties in renewable generators, the
reliability/adequacy assessments of distributed generation (DG) systems have
been particularly focused on the probabilistic modeling of random behaviors,
given sufficient informative data. However, another type of uncertainty
(epistemic uncertainty) must be accounted for in the modeling, due to
incomplete knowledge of the phenomena and imprecise evaluation of the related
characteristic parameters. In circumstances of few informative data, this type
of uncertainty calls for alternative methods of representation, propagation,
analysis and interpretation. In this study, we make a first attempt to
identify, model, and jointly propagate aleatory and epistemic uncertainties in
the context of DG systems modeling for adequacy assessment. Probability and
possibility distributions are used to model the aleatory and epistemic
uncertainties, respectively. Evidence theory is used to incorporate the two
uncertainties under a single framework. Based on the plausibility and belief
functions of evidence theory, the hybrid propagation approach is introduced. A
demonstration is given on a DG system adapted from the IEEE 34 nodes
distribution test feeder. Compared to the pure probabilistic approach, the
hybrid propagation is shown to be capable of explicitly carrying the
imprecision in the knowledge of the DG parameters through to the final
assessed adequacy values. It also effectively captures the growth of
uncertainties with higher DG penetration levels.
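The joint propagation scheme can be sketched in a few lines. The example below is a minimal illustration under assumed inputs (the Weibull wind model, the triangular possibility distribution for the conversion efficiency, and the demand threshold are all hypothetical, not taken from the paper): aleatory uncertainty is sampled by Monte Carlo, epistemic uncertainty is pushed through as alpha-cut intervals, and the lower/upper interval bounds yield belief and plausibility of adequacy in the evidence-theory reading:

```python
import random

random.seed(0)

# Aleatory: wind speed, Monte Carlo samples from a hypothetical Weibull.
wind = [random.weibullvariate(8.0, 2.0) for _ in range(5000)]

# Epistemic: conversion efficiency as a triangular possibility
# distribution (0.30, 0.40, 0.50), represented by its alpha-cut intervals.
def alpha_cut(alpha, lo=0.30, mode=0.40, hi=0.50):
    return (lo + alpha * (mode - lo), hi - alpha * (hi - mode))

alphas = [i / 10 for i in range(11)]
threshold = 3.0  # hypothetical power demand

# Hybrid propagation: for each aleatory sample, push every alpha-cut
# interval through the (toy) generation model; the interval lower bound
# drives belief, the upper bound drives plausibility.
bel = pl = n = 0
for w in wind:
    for a in alphas:
        e_lo, e_hi = alpha_cut(a)
        out_lo, out_hi = w * e_lo, w * e_hi
        bel += out_lo >= threshold   # surely adequate
        pl += out_hi >= threshold    # possibly adequate
        n += 1
print(f"Belief={bel/n:.3f} <= Plausibility={pl/n:.3f}")
```

The gap between the two bounds is exactly the imprecision that a pure probabilistic propagation would hide inside a single adequacy number.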
Land cover classification using fuzzy rules and aggregation of contextual information through evidence theory
Land cover classification using multispectral satellite images is a very
challenging task with numerous practical applications. We propose a multi-stage
classifier that involves fuzzy rule extraction from the training data and then
generation of a possibilistic label vector for each pixel using the fuzzy rule
base. To exploit the spatial correlation of land cover types we propose four
different information aggregation methods which use the possibilistic class
label of a pixel and those of its eight spatial neighbors for making the final
classification decision. Three of the aggregation methods use Dempster-Shafer
theory of evidence while the remaining one is modeled after the fuzzy k-NN
rule. The proposed methods are tested with two benchmark seven channel
satellite images and the results are found to be quite satisfactory. They are
also compared with a Markov random field (MRF) model-based contextual
classification method and found to perform consistently better.
Comment: 14 pages, 2 figures.
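Dempster's rule of combination, which drives three of the four aggregation methods, reduces to a normalised product when the mass functions are concentrated on singleton class labels. The sketch below is a minimal illustration with hypothetical label masses (not the paper's data), fusing a pixel's labels with one of its eight neighbours:

```python
def dempster_combine(m1, m2):
    """Dempster's rule for mass functions over singleton class labels:
    combined mass is the product of the inputs, renormalised by 1 - K,
    where K is the mass assigned to conflicting (disjoint) labels."""
    joint = {c: m1.get(c, 0.0) * m2.get(c, 0.0) for c in set(m1) | set(m2)}
    k = 1.0 - sum(joint.values())  # conflict between the two sources
    if k >= 1.0:
        raise ValueError("total conflict: sources share no possible label")
    return {c: v / (1.0 - k) for c, v in joint.items()}

# Hypothetical possibilistic labels (normalised to masses) for a pixel
# and one of its spatial neighbours.
pixel     = {"water": 0.6, "forest": 0.3, "urban": 0.1}
neighbour = {"water": 0.5, "forest": 0.4, "urban": 0.1}

fused = dempster_combine(pixel, neighbour)
print(max(fused, key=fused.get))  # "water"
```

Because agreement between neighbours is multiplicatively reinforced, spatially coherent labels survive the fusion while isolated misclassifications are suppressed, which is the point of exploiting the eight-neighbour context.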
Analysis of fuzzy clustering and a generic fuzzy rule-based image segmentation technique
Many fuzzy clustering based techniques, when applied to image segmentation, do not incorporate the spatial relationships of the pixels, while fuzzy rule-based image segmentation techniques are generally application dependent. Moreover, for most of these techniques the structure of the membership functions is predefined, and the parameters have to be derived either automatically or manually. This paper addresses some of these issues by introducing a new generic fuzzy rule-based image segmentation (GFRIS) technique, which is both application independent and able to incorporate the spatial relationships of the pixels. A qualitative comparison is presented between the segmentation results obtained using this method and the popular fuzzy c-means (FCM) and possibilistic c-means (PCM) algorithms, using an empirical discrepancy method. The results demonstrate that this approach exhibits significant improvements over these popular fuzzy clustering algorithms for a wide range of differing image types.
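The FCM membership update used as the baseline here is purely distance-based, which is exactly why it ignores spatial relationships. A minimal sketch (1-D points and centres chosen only for illustration):

```python
def fcm_memberships(x, centers, m=2.0):
    """One FCM membership update for a point x against cluster centres:
    u_i = 1 / sum_j (d_i / d_j)^(2/(m-1)).  Only feature-space distances
    enter -- no spatial context, the limitation GFRIS addresses."""
    d = [abs(x - c) or 1e-12 for c in centers]  # guard against d == 0
    return [1.0 / sum((d[i] / dj) ** (2.0 / (m - 1)) for dj in d)
            for i in range(len(centers))]

u = fcm_memberships(2.0, centers=[0.0, 3.0, 10.0])
print([round(v, 3) for v in u])  # memberships sum to 1; nearest centre wins
```

Note the constraint that memberships sum to one across clusters; PCM relaxes exactly this constraint, which is why the two baselines can behave quite differently on noisy pixels.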
Application of spectral and spatial indices for specific class identification in Airborne Prism EXperiment (APEX) imaging spectrometer data for improved land cover classification
Hyperspectral remote sensing's ability to capture spectral information of targets in very narrow bandwidths gives rise to many intrinsic applications. However, the major disadvantage limiting its applicability is its dimensionality, known as the Hughes phenomenon. Traditional classification and image processing approaches fail to process data along many contiguous bands due to inadequate training samples. Another challenge to successful classification is the real-world scenario of mixed pixels, i.e. the presence of more than one class within a single pixel. An attempt has been made to deal with the problems of dimensionality and mixed pixels, with the objective of improving the accuracy of class identification. In this paper, we discuss the application of indices to cope with the dimensionality of the Airborne Prism EXperiment (APEX) hyperspectral Open Science Dataset (OSD) and to improve the classification accuracy using the possibilistic c-means (PCM) algorithm. Spectral and spatial indices were formulated to describe the information in the dataset with a lower dimensionality, and this reduced representation was used for classification, with the aim of improving the accuracy of determination of specific classes. Spectral indices are compiled from the spectral signatures of the targets, and spatial indices are defined using texture analysis over defined neighbourhoods. The classification of 20 classes of varying spatial distributions was considered in order to evaluate the applicability of spectral and spatial indices for the extraction of specific class information. The classification was performed in two stages: spectral indices alone, and a combination of spectral and spatial indices, each used individually as input to the PCM classifier.
In addition to the reduction in entropy, the spectral-spatial indices approach achieved an overall classification accuracy of 80.50%, against 65% (spectral indices only) and 59.50% (optimally determined principal components).
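PCM suits the mixed-pixel problem because its typicality values are computed per class and need not sum to one. A minimal sketch of the standard PCM typicality formula, with hypothetical distances and bandwidth (not values from the APEX experiment):

```python
def pcm_typicality(d2, eta, m=2.0):
    """Possibilistic c-means typicality: t = 1 / (1 + (d^2/eta)^(1/(m-1))).
    Unlike FCM memberships, typicalities are independent across classes,
    so a mixed pixel can score low for every pure class."""
    return 1.0 / (1.0 + (d2 / eta) ** (1.0 / (m - 1)))

# Hypothetical squared index-space distances of one pixel to three class
# prototypes, with bandwidth eta = 1.0 for each class.
dists2 = [0.5, 4.0, 9.0]
t = [pcm_typicality(d2, eta=1.0) for d2 in dists2]
print([round(v, 3) for v in t])  # [0.667, 0.2, 0.1] -- sums to < 1
```

Working in the reduced index space rather than the full band space also keeps the distances d^2 meaningful despite the Hughes phenomenon, which is the motivation for the index formulation above.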
A note on the effectiveness of some de-fuzzification measures in a fuzzy pure factors portfolio
There are several methods to convert fuzzy or stochastic LP models to conventional LP models. In this simple paper we evaluate the effectiveness of three proposed methods, using a numerical example from a pure factors portfolio.
Keywords: fuzzy; stochastic; linear programming; pure factors portfolio.
The Inflation Technique for Causal Inference with Latent Variables
The problem of causal inference is to determine if a given probability
distribution on observed variables is compatible with some causal structure.
The difficult case is when the causal structure includes latent variables. We
here introduce the inflation technique for tackling this problem. An
inflation of a causal structure is a new causal structure that can contain
multiple copies of each of the original variables, but where the ancestry of
each copy mirrors that of the original. To every distribution of the observed
variables that is compatible with the original causal structure, we assign a
family of marginal distributions on certain subsets of the copies that are
compatible with the inflated causal structure. It follows that compatibility
constraints for the inflation can be translated into compatibility constraints
for the original causal structure. Even if the constraints at the level of
inflation are weak, such as observable statistical independences implied by
disjoint causal ancestry, the translated constraints can be strong. We apply
this method to derive new inequalities whose violation by a distribution
witnesses that distribution's incompatibility with the causal structure (of
which Bell inequalities and Pearl's instrumental inequality are prominent
examples). We describe an algorithm for deriving all such inequalities for the
original causal structure that follow from ancestral independences in the
inflation. For three observed binary variables with pairwise common causes, it
yields inequalities that are stronger in at least some aspects than those
obtainable by existing methods. We also describe an algorithm that derives a
weaker set of inequalities but is more efficient. Finally, we discuss which
inflations are such that the inequalities one obtains from them remain valid
even for quantum (and post-quantum) generalizations of the notion of a causal
model.
Comment: Minor final corrections; updated to match the published version as closely as possible.
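The core move, copying variables while preserving ancestry, can be illustrated numerically. The sketch below (the triangle response functions are hypothetical, chosen only for illustration) builds the simplest "parallel" inflation of the triangle scenario and checks that two copies with disjoint ancestry, A1 and B2, come out statistically independent while each retains the original marginal; the paper's nontrivial constraints arise the same way from less trivial inflations:

```python
import random

random.seed(1)

# Triangle scenario: three latent sources, each shared by two of the
# observed binary variables A, B, C (response functions are hypothetical).
def triangle():
    lam_ab, lam_bc, lam_ca = (random.random() for _ in range(3))
    a = int(lam_ab + lam_ca > 1.0)
    b = int(lam_ab > lam_bc)
    c = int(lam_bc + lam_ca > 1.0)
    return a, b, c

# Parallel inflation: two independent copies of every latent.  A1 is
# computed from copy-1 latents only and B2 from copy-2 latents only, so
# their ancestries are disjoint and the inflated model must render them
# independent, with marginals matching the original model.
def inflated_a1_b2():
    ab1, bc1, ca1 = (random.random() for _ in range(3))
    ab2, bc2, ca2 = (random.random() for _ in range(3))
    a1 = int(ab1 + ca1 > 1.0)   # same response function as A
    b2 = int(ab2 > bc2)         # same response function as B
    return a1, b2

n = 200_000
samples = [inflated_a1_b2() for _ in range(n)]
p_a1 = sum(a for a, _ in samples) / n
p_b2 = sum(b for _, b in samples) / n
p_joint = sum(a and b for a, b in samples) / n
print(abs(p_joint - p_a1 * p_b2))  # ~0: A1 and B2 are independent
```

For this parallel inflation the translated constraint is trivially satisfied by any triangle-compatible distribution; the strength of the technique comes from inflations in which copies share some, but not all, ancestors.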
Techniques for clustering gene expression data
Many clustering techniques have been proposed for the analysis of gene expression data obtained from microarray experiments. However, the choice of suitable method(s) for a given experimental dataset is not straightforward. Common approaches do not translate well between datasets and fail to take account of the data profile. This review paper surveys state-of-the-art applications that recognise these limitations and implement procedures to overcome them. It provides a framework for the evaluation of clustering in gene expression analyses. The nature of microarray data is discussed briefly, and selected examples are presented for the clustering methods considered.