Dynamic Distance Measures on Spaces of Isospectral Mixed Quantum States
Distance measures are indispensable tools in quantum information processing
and quantum computing, since they can be used to quantify to what extent
information is preserved, or altered, by quantum processes. In this paper we
propose a new distance measure for mixed quantum states, which we call the
dynamic distance measure, and show that it is a proper distance measure. The
dynamic distance measure is defined in terms of a measurable quantity, which
makes it very suitable for applications. In a final section we compare the
dynamic distance measure with the well-known Bures distance.
Comment: 8 pages, no figures
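The Bures distance used as the benchmark above has a standard fidelity-based definition that is easy to evaluate numerically. The sketch below computes it for density matrices with NumPy/SciPy; it implements only the textbook Bures distance, not the paper's dynamic distance measure.

```python
import numpy as np
from scipy.linalg import sqrtm

def fidelity(rho, sigma):
    """Uhlmann fidelity F(rho, sigma) = (tr sqrt(sqrt(rho) sigma sqrt(rho)))^2."""
    srho = sqrtm(rho)
    inner = sqrtm(srho @ sigma @ srho)
    return np.real(np.trace(inner)) ** 2

def bures_distance(rho, sigma):
    """Bures distance D_B = sqrt(2 (1 - sqrt(F))); clamp guards tiny negatives."""
    return np.sqrt(max(0.0, 2.0 * (1.0 - np.sqrt(fidelity(rho, sigma)))))

# Example: two commuting single-qubit mixed states
rho = np.array([[0.75, 0.0], [0.0, 0.25]])
sigma = np.array([[0.5, 0.0], [0.0, 0.5]])
print(bures_distance(rho, sigma))
```

For commuting states the fidelity reduces to the classical Bhattacharyya overlap of the eigenvalue distributions, which makes the example easy to check by hand.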
Different distance measures for fuzzy linear regression with Monte Carlo methods
The aim of this study was to determine the best distance measure for estimating the fuzzy linear regression model parameters with Monte Carlo (MC) methods. It is pointed out that only one distance measure has been used for fuzzy linear regression with MC methods in the literature. Therefore, three different definitions of the distance between two fuzzy numbers are introduced. The estimation accuracies of the existing and proposed distance measures are explored in a simulation study. Distance measures are compared to each other in terms of estimation accuracy; this study demonstrates that the best distance measures for estimating fuzzy linear regression model parameters with MC methods are those defined by Kaufmann and Gupta (Introduction to Fuzzy Arithmetic: Theory and Applications. Van Nostrand Reinhold, New York, 1991), Heilpern-2 (Fuzzy Sets Syst 91(2):259–268, 1997), and Chen and Hsieh (Aust J Intell Inf Process Syst 6(4):217–229, 2000). On the other hand, the worst is the distance measure used by Abdalla and Buckley (Soft Comput 11:991–996, 2007; Soft Comput 12:463–468, 2008). These results should be useful for enriching studies of fuzzy linear regression models.
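To make the notion of a distance between two fuzzy numbers concrete, the sketch below implements one simple vertex-style distance for triangular fuzzy numbers. It is purely illustrative and is not necessarily any of the specific measures compared in the study above.

```python
import numpy as np

# A triangular fuzzy number is written (l, m, r): support [l, r], peak at m.
def vertex_distance(a, b):
    """Vertex-style distance between triangular fuzzy numbers: root mean
    of squared differences of the three defining points (illustrative only;
    not necessarily one of the measures compared in the study above)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.sqrt(np.mean((a - b) ** 2))

print(vertex_distance((1, 2, 3), (2, 3, 4)))  # uniformly shifted by 1 -> 1.0
```

Any such definition must satisfy the usual metric axioms (non-negativity, symmetry, triangle inequality) to be usable for parameter estimation by MC search.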
Linking a distance measure of entanglement to its convex roof
An important problem in quantum information theory is the quantification of
entanglement in multipartite mixed quantum states. In this work, a connection
between the geometric measure of entanglement and a distance measure of
entanglement is established. We present a new expression for the geometric
measure of entanglement in terms of the maximal fidelity with a separable
state. A direct application of this result provides a closed expression for the
Bures measure of entanglement of two qubits. We also prove that the number of
elements in an optimal decomposition with respect to the geometric measure of
entanglement is bounded from above by the Carathéodory bound, and we find
necessary conditions for the structure of an optimal decomposition.
Comment: 11 pages, 4 figures
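For reference, the connection described above builds on the standard textbook definition of the geometric measure of entanglement for a pure state, extended to mixed states by the convex roof:

```latex
% Geometric measure of a pure state: distance (in fidelity) to the
% closest separable pure state
E_G(\lvert\psi\rangle) \;=\; 1 \;-\; \max_{\lvert\varphi\rangle \,\in\, \mathrm{SEP}} \bigl\lvert \langle\varphi\vert\psi\rangle \bigr\rvert^{2}

% Convex-roof extension to a mixed state \rho over all pure-state
% decompositions \rho = \sum_i p_i \lvert\psi_i\rangle\langle\psi_i\rvert
E_G(\rho) \;=\; \min_{\{p_i,\,\lvert\psi_i\rangle\}} \sum_i p_i \, E_G(\lvert\psi_i\rangle)
```

The abstract's Carathéodory bound limits how many terms an optimal decomposition in the second formula needs.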
Generalized trace distance measure connecting quantum and classical non-Markovianity
We establish a direct connection of quantum Markovianity of an open quantum
system to its classical counterpart by generalizing the criterion based on the
information flow. Here, the flow is characterized by the time evolution of
Helstrom matrices, given by the weighted difference of statistical operators,
under the action of the quantum dynamical evolution. It turns out that the
introduced criterion is equivalent to P-divisibility of a quantum process,
namely divisibility in terms of positive maps, which provides a direct
connection to classical Markovian stochastic processes. Moreover, it is shown
that mathematical representations similar to those found for the original
trace-distance-based measure hold true for the associated generalized measure
of quantum non-Markovianity. That is, we prove orthogonality of the optimal
states showing maximal information backflow and establish a local and universal
representation of the measure. We illustrate some properties of the generalized
criterion by means of examples.
Comment: 11 pages, 3 figures
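The Helstrom matrix mentioned above is the weighted difference of two statistical operators, and the information flow is tracked through its trace norm. A minimal numerical sketch, assuming the common convention Δ = p₁ρ₁ − p₂ρ₂:

```python
import numpy as np

def trace_norm(A):
    """Trace norm ||A||_1 = sum of the singular values of A."""
    return np.sum(np.linalg.svd(A, compute_uv=False))

def helstrom_norm(p1, rho1, p2, rho2):
    """||p1*rho1 - p2*rho2||_1 for a Helstrom matrix with weights (p1, p2).
    Monitoring this quantity in time reveals information backflow."""
    return trace_norm(p1 * np.asarray(rho1) - p2 * np.asarray(rho2))

# Equal weights and orthogonal states: the norm is maximal
rho1 = np.diag([1.0, 0.0])
rho2 = np.diag([0.0, 1.0])
print(helstrom_norm(0.5, rho1, 0.5, rho2))  # -> 1.0
```

With equal weights p₁ = p₂ = ½ this reduces to half the ordinary trace distance, recovering the original trace-distance criterion as a special case.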
Boosting Nearest Neighbor Classifiers for Multiclass Recognition
This paper introduces an algorithm that uses boosting to learn a distance measure for multiclass k-nearest neighbor classification. Given a family of distance measures as input, AdaBoost is used to learn a weighted distance measure that is a linear combination of the input measures. The proposed method can be seen both as a novel way to learn a distance measure from data and as a novel way to apply boosting to multiclass recognition problems that does not require output codes. In our approach, multiclass recognition of objects is reduced to a single binary recognition task defined on triples of objects. Preliminary experiments with eight UCI datasets yield no clear winner among our method, boosting using output codes, and k-nearest neighbor classification using an unoptimized distance measure. Our algorithm did achieve lower error rates on some of the datasets, which indicates that, in some domains, it may lead to better results than existing methods.
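The core object learned above is just a weighted linear combination of base distance measures. A minimal sketch of that combination (the boosting loop that chooses the weights is omitted; the weights here are hypothetical):

```python
import numpy as np

def weighted_distance(weights, base_distances):
    """Return d(x, y) = sum_j w_j * d_j(x, y), a linear combination of
    base distance measures, as learned by the boosting procedure above."""
    def d(x, y):
        return sum(w * dj(x, y) for w, dj in zip(weights, base_distances))
    return d

# Two toy base measures on vectors: L1 and L2 distance
l1 = lambda x, y: float(np.sum(np.abs(np.asarray(x) - np.asarray(y))))
l2 = lambda x, y: float(np.linalg.norm(np.asarray(x) - np.asarray(y)))

# Hypothetical weights, standing in for AdaBoost's learned coefficients
d = weighted_distance([0.7, 0.3], [l1, l2])
print(d([0, 0], [3, 4]))  # 0.7*7 + 0.3*5 = 6.4
```

The learned d then plugs directly into a standard k-NN classifier in place of a fixed metric.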
Retinal metric: a stimulus distance measure derived from population neural responses
The ability of the organism to distinguish between various stimuli is limited
by the structure and noise in the population code of its sensory neurons. Here
we infer a distance measure on the stimulus space directly from the recorded
activity of 100 neurons in the salamander retina. In contrast to previously
used measures of stimulus similarity, this "neural metric" tells us how
distinguishable a pair of stimulus clips is to the retina, given the noise in
the neural population response. We show that the retinal distance strongly
deviates from Euclidean, or any static metric, yet has a simple structure: we
identify the stimulus features that the neural population is jointly sensitive
to, and show the SVM-like kernel function relating the stimulus and neural
response spaces. We show that the non-Euclidean nature of the retinal distance
has important consequences for neural decoding.
Comment: 5 pages, 4 figures; to appear in Phys Rev Lett
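The idea of a stimulus distance set by response noise can be illustrated with a toy discriminability index between two sets of noisy population responses. This is only a sketch of the general principle (a d′-style pooled-noise measure), not the neural metric actually inferred in the paper.

```python
import numpy as np

def neural_discriminability(r1, r2):
    """Toy noise-aware stimulus distance: d'-style separation between two
    sets of noisy population responses (arrays of shape trials x neurons),
    using the pooled per-neuron noise variance. Illustrative only --
    not the metric inferred in the paper above."""
    r1, r2 = np.asarray(r1, float), np.asarray(r2, float)
    mu = r1.mean(axis=0) - r2.mean(axis=0)              # mean-response difference
    pooled_var = 0.5 * (r1.var(axis=0) + r2.var(axis=0)) + 1e-12
    return float(np.sqrt(np.sum(mu ** 2 / pooled_var)))

# Two stimuli, one recorded neuron, two trials each
print(neural_discriminability([[0], [2]], [[4], [6]]))  # -> 4.0
```

Two stimuli evoking identical response distributions are indistinguishable (distance ≈ 0) no matter how different they look in raw stimulus space, which is the sense in which such a metric departs from a static Euclidean one.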
ACO for continuous function optimization: a performance analysis
The performance of meta-heuristic algorithms often depends on their parameter settings, and appropriate tuning of the underlying parameters can drastically improve performance. The Ant Colony Optimization (ACO) algorithm, a population-based meta-heuristic inspired by the foraging behavior of ants, is no different. Fundamentally, the ACO depends on the construction of new solutions, variable by variable, using Gaussian sampling of the selected variables from an archive of solutions. A comprehensive performance analysis of the underlying parameters, namely the selection strategy, the distance-measure metric, and the pheromone evaporation rate, suggests that the Roulette Wheel Selection strategy enhances the performance of the ACO due to its ability to provide non-uniformity and adequate diversity in the selection of a solution. The squared Euclidean distance metric, in turn, offers better performance than other distance metrics. It is also observed that the ACO is sensitive to the evaporation rate. An experimental comparison between the classical ACO and other meta-heuristics suggests that the performance of a well-tuned ACO surpasses its counterparts.
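The variable-by-variable Gaussian construction described above can be sketched as follows. This follows the ACO_R-style scheme for continuous domains (roulette-wheel guide selection, Gaussian sampling around the guide); the archive, weights, and the spread factor xi are illustrative choices, not the tuned values from the analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def aco_sample(archive, weights, xi=0.85):
    """One ACO_R-style solution construction: pick a guide solution by
    roulette wheel on the (rank-based) weights, then sample each variable
    from a Gaussian centred on the guide, with sigma set by the mean
    distance to the other archive members along that dimension."""
    k, n = archive.shape
    g = rng.choice(k, p=weights / weights.sum())   # roulette-wheel selection
    x = np.empty(n)
    for j in range(n):
        sigma = xi * np.mean(np.abs(archive[:, j] - archive[g, j]))
        x[j] = rng.normal(archive[g, j], sigma + 1e-12)
    return x

archive = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])  # 3 archived solutions in 2-D
weights = np.array([0.5, 0.3, 0.2])                        # hypothetical rank weights
print(aco_sample(archive, weights))
```

Each newly constructed solution is evaluated and, if good enough, replaces the worst archive member; the evaporation-rate sensitivity noted above governs how quickly old archive information is discarded.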