Transductive-Inductive Cluster Approximation Via Multivariate Chebyshev Inequality
Approximating the adequate number of clusters in multidimensional data is an
open area of research, given a level of compromise made on the quality of
acceptable results. The manuscript addresses the issue by formulating a
transductive-inductive learning algorithm which uses the multivariate
Chebyshev inequality. Considering the clustering problem in imaging,
theoretical proofs for a particular level of compromise are derived to show
the convergence of the reconstruction error to a finite value with an
increasing (a) number of unseen examples and (b) number of clusters,
respectively. Upper bounds for these error rates are also proved.
Non-parametric estimates of these errors from a random sample of sequences
empirically point to a stable number of clusters. Lastly, the generalization
of the algorithm can be applied to multidimensional data sets from different
fields.
Comment: 16 pages, 5 figures
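The abstract's central tool, the multivariate Chebyshev inequality, can be
illustrated with a short numerical sketch (this is not the paper's algorithm;
the distribution, dimension, and threshold below are illustrative choices):
for a d-dimensional random vector X with mean mu and covariance Sigma,
P((X - mu)^T Sigma^{-1} (X - mu) >= t^2) <= d / t^2.

```python
import numpy as np

# Empirically check the multivariate Chebyshev bound
#   P((X - mu)^T Sigma^{-1} (X - mu) >= t^2) <= d / t^2
# on a sample of Gaussian vectors (an illustrative choice of distribution).
rng = np.random.default_rng(0)
d, n, t = 3, 100_000, 3.0
mu = np.zeros(d)
Sigma = np.eye(d)

X = rng.multivariate_normal(mu, Sigma, size=n)
# Squared Mahalanobis distance of each sample from the mean.
m2 = np.einsum("ij,jk,ik->i", X - mu, np.linalg.inv(Sigma), X - mu)

empirical = np.mean(m2 >= t**2)  # observed tail probability
bound = d / t**2                 # distribution-free Chebyshev bound
assert empirical <= bound
```

The bound is distribution-free, which is what makes it usable when the
cluster-generating distribution is unknown; for well-behaved data it is
typically loose, as the gap between `empirical` and `bound` here shows.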
Structure of Probabilistic Information and Quantum Laws
In quantum experiments the acquisition and representation of basic
experimental information is governed by the multinomial probability
distribution. There exist unique random variables, whose standard deviation
becomes asymptotically invariant of physical conditions. Representing all
information by means of such random variables gives the quantum mechanical
probability amplitude and a real alternative. For predictions, the linear
evolution law (Schrodinger or Dirac equation) turns out to be the only way to
extend the invariance property of the standard deviation to the predicted
quantities. This indicates that quantum theory originates in the structure of
gaining pure, probabilistic information, without any mechanical underpinning.
Comment: RevTeX, 6 pages incl. 2 figures. Contribution to the conference
"Foundations of Probability and Physics", Växjö, Sweden, 27 Nov. - 1 Dec.
200
Affine arithmetic-based methodology for energy hub operation-scheduling in the presence of data uncertainty
In this study, the role of self-validated computing for solving the energy hub-scheduling problem in the presence of multiple and heterogeneous sources of data uncertainty is explored, and a new solution paradigm based on affine arithmetic is conceptualised. The benefits deriving from the application of this methodology are analysed in detail, and several numerical results are presented and discussed.
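The self-validated computing the abstract refers to can be sketched with a
minimal affine-arithmetic class (illustrative only; the paper's tooling and
the load/supply figures below are assumptions): an uncertain quantity is
represented as x0 + sum_i x_i * eps_i with each noise symbol eps_i in [-1, 1],
and linear operations propagate the noise coefficients exactly.

```python
class Affine:
    """Minimal affine form: center + sum of coeff * eps_i, eps_i in [-1, 1]."""

    def __init__(self, center, noise=None):
        self.center = center
        self.noise = dict(noise or {})  # noise-symbol id -> coefficient

    def __add__(self, other):
        noise = dict(self.noise)
        for k, v in other.noise.items():
            noise[k] = noise.get(k, 0.0) + v
        return Affine(self.center + other.center, noise)

    def scale(self, a):
        return Affine(a * self.center, {k: a * v for k, v in self.noise.items()})

    def interval(self):
        # Guaranteed enclosure: radius is the sum of |coefficients|.
        rad = sum(abs(v) for v in self.noise.values())
        return (self.center - rad, self.center + rad)

# Hypothetical hub balance: electrical load 100 +/- 10 and local supply
# 60 +/- 5 carry independent noise symbols; net import = load - supply.
load = Affine(100.0, {"e1": 10.0})
supply = Affine(60.0, {"e2": 5.0})
net = load + supply.scale(-1.0)
assert net.interval() == (25.0, 55.0)  # 40 +/- 15
```

Because correlated uncertainties share noise symbols, affine arithmetic
cancels them where interval arithmetic would not, which is the usual argument
for it in scheduling under heterogeneous data uncertainty.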
Factory of realities: on the emergence of virtual spatiotemporal structures
The ubiquitous nature of modern Information Retrieval and Virtual Worlds
gives rise to new realities. To what extent are these "realities" real? Which
"physics" should be applied to describe them quantitatively? In this essay I
dwell on a few examples. The first is adaptive neural networks, which are
neither networks nor neural, but still provide a service similar to classical
ANNs in an extended fashion. The second is the emergence of objects
resembling Einsteinian spacetime, which describe the behavior of an Internet
surfer as geodesic motion. The third is the demonstration of nonclassical,
and even stronger-than-quantum, probabilities in Information Retrieval, and
their use. Immense operable datasets provide new operationalistic
environments, which become to a greater and greater extent "realities". In
this essay, I consider the overall Information Retrieval process as an
objective physical process, representing it, following Melucci's metaphor, in
terms of physical-like experiments. Various semantic environments are treated
as analogs of various realities. The readers' attention is drawn to the topos
approach to physical theories, which provides a natural conceptual and
technical framework to cope with the newly emerging realities.
Comment: 21 p
Randomized Constraints Consensus for Distributed Robust Linear Programming
In this paper we consider a network of processors aiming at cooperatively
solving linear programming problems subject to uncertainty. Each node knows
only a common cost function and its local uncertain constraint set. We
propose a randomized, distributed algorithm working under a time-varying,
asynchronous and directed communication topology. The algorithm is based on a
local computation and communication paradigm. At each communication round,
nodes perform two updates: (i) a verification step in which they check, in a
randomized setup, the robust feasibility (and hence optimality) of the
candidate optimal point, and (ii) an optimization step in which they exchange
their candidate bases (minimal sets of active constraints) with neighbors and
locally solve an optimization problem whose constraint set includes: a
sampled constraint violating the candidate optimal point (if one exists), the
agent's current basis, and the collection of the neighbors' bases. As the
main result, we show that if a processor successfully performs the
verification step for a sufficient number of communication rounds, it can
stop the algorithm, since a consensus has been reached. The common solution
is, with high confidence, feasible (and hence optimal) for the entire
uncertainty set except a subset of arbitrarily small probability measure. We
show the effectiveness of the proposed distributed algorithm on a multi-core
platform in which the nodes communicate asynchronously.
Comment: Accepted for publication in the 20th World Congress of the
International Federation of Automatic Control (IFAC)
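The verification step described in the abstract can be sketched as follows
(a hedged illustration, not the authors' implementation: the function names,
the sampling model for the uncertain constraint, and the test points are all
assumptions). A node draws M random realizations of its uncertain constraint
a^T x <= b and checks whether its candidate point satisfies them all.

```python
import numpy as np

def verify(candidate, sample_constraint, M, rng):
    """Return a violated (a, b) pair if one is found among M samples, else None.

    A returned violator would be added to the node's local LP in the
    optimization step; None means the candidate passed this round's check.
    """
    for _ in range(M):
        a, b = sample_constraint(rng)
        if a @ candidate > b + 1e-9:
            return (a, b)
    return None

rng = np.random.default_rng(1)

# Hypothetical uncertainty model: a is drawn around [1, 1], b is fixed at 2.
def sample_constraint(rng):
    a = np.array([1.0, 1.0]) + 0.1 * rng.standard_normal(2)
    return a, 2.0

x_safe = np.array([0.5, 0.5])  # comfortably inside a^T x <= 2
assert verify(x_safe, sample_constraint, 200, rng) is None
```

Passing the check for enough consecutive rounds is what lets a node stop:
by a scenario-style argument, the candidate is then feasible for all but an
arbitrarily small-probability subset of the uncertainty.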