Dual Representation of Quasiconvex Conditional Maps
We provide a dual representation of quasiconvex maps between two lattices of
random variables in terms of conditional expectations. This generalizes the
dual representation of quasiconvex real-valued functions and the dual
representation of conditional convex maps.
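For orientation, a hedged sketch of the scalar prototype being generalized (regularity conditions omitted; notation illustrative): for a monotone quasiconvex functional $\pi$ on a lattice of random variables, the dual representation takes the form
\[
\pi(X) = \sup_{Q} R\bigl(E_Q[X],\, Q\bigr), \qquad R(t, Q) = \inf\{\pi(Y) : E_Q[Y] \ge t\},
\]
and the conditional case replaces $E_Q[X]$ by the conditional expectation $E_Q[X \mid \mathcal{G}]$, with the supremum and infimum taken in the lattice of $\mathcal{G}$-measurable variables.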
Vector Quantile Regression: An Optimal Transport Approach
We propose a notion of conditional vector quantile function and a vector
quantile regression. A \emph{conditional vector quantile function} (CVQF) of a
random vector $Y$, taking values in $\mathbb{R}^d$ given covariates $Z=z$,
taking values in $\mathbb{R}^k$, is a map $u \mapsto Q_{Y|Z}(u,z)$,
which is monotone, in the sense of being a gradient of a convex function, and
such that given that vector $U$ follows a reference non-atomic distribution
$F_U$, for instance the uniform distribution on a unit cube in $\mathbb{R}^d$, the
random vector $Q_{Y|Z}(U,z)$ has the distribution of $Y$ conditional on
$Z=z$. Moreover, we have a strong representation, $Y = Q_{Y|Z}(U,Z)$ almost
surely, for some version of $U$. The \emph{vector quantile regression} (VQR) is
a linear model for the CVQF of $Y$ given $Z$. Under correct specification, the
notion produces a strong representation, $Y = \beta(U)^\top f(Z)$, for $f(Z)$
denoting a known set of transformations of $Z$, where $u \mapsto \beta(u)^\top f(Z)$ is a monotone map, the gradient of a convex function, and
the quantile regression coefficients $u \mapsto \beta(u)$ have
interpretations analogous to those of standard scalar quantile regression.
As $f(Z)$ becomes a richer class of transformations of $Z$, the model becomes
nonparametric, as in series modelling. A key property of VQR is the embedding
of the classical Monge-Kantorovich optimal transportation problem at its core
as a special case. In the classical case, where $Y$ is scalar, VQR reduces to a
version of the classical QR, and the CVQF reduces to the scalar conditional
quantile function. An application to multiple Engel curve estimation is
considered.
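To make the optimal-transport core concrete, here is a minimal numerical sketch, not the authors' algorithm: the sample sizes, the uniform reference, and the use of scipy.optimize.linprog are illustrative assumptions. It solves the discrete Monge-Kantorovich problem that sits inside VQR, coupling reference ranks $U$ with observations $Y$ so as to maximize correlation.

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance (sizes and distributions are illustrative assumptions)
rng = np.random.default_rng(0)
n, d = 8, 2
U = rng.uniform(size=(n, d))   # reference ranks, uniform on the unit square
Y = rng.normal(size=(n, d))    # observed responses

# Monge-Kantorovich: maximize sum_ij pi_ij <u_i, y_j>
# subject to uniform marginals; linprog minimizes, hence the sign flip.
c = -(U @ Y.T).ravel()
A_eq, b_eq = [], []
for i in range(n):             # row marginals: each u_i carries mass 1/n
    row = np.zeros((n, n)); row[i, :] = 1
    A_eq.append(row.ravel()); b_eq.append(1.0 / n)
for j in range(n):             # column marginals: each y_j receives mass 1/n
    col = np.zeros((n, n)); col[:, j] = 1
    A_eq.append(col.ravel()); b_eq.append(1.0 / n)

res = linprog(c, A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, None))
pi = res.x.reshape(n, n)       # optimal coupling of ranks and responses
```

In the scalar case this coupling simply sorts $Y$ against $U$, recovering the classical quantile transform.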
A unified approach to time consistency of dynamic risk measures and dynamic performance measures in discrete time
In this paper we provide a flexible framework allowing for a unified study of
time consistency of risk measures and performance measures (also known as
acceptability indices). The proposed framework not only integrates existing
forms of time consistency, but also provides a comprehensive toolbox for
analysis and synthesis of the concept of time consistency in decision making.
In particular, it allows for an in-depth comparative analysis of (most of) the
existing types of time consistency -- a feat that has not been possible before
and which is done in the companion paper [BCP2016] to this one. In our approach
the time consistency is studied for a large class of maps that are postulated
to satisfy only two properties -- monotonicity and locality. The time
consistency is defined in terms of an update rule. The form of the update rule
introduced here is novel, and is perfectly suited for developing the unifying
framework that is worked out in this paper. As an illustration of the
applicability of our approach, we show how to recover almost all concepts of
weak time consistency by means of constructing appropriate update rules.
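As a toy illustration of evaluation through an update rule (a sketch under simplifying assumptions that are not the paper's general setting: a binomial tree and the entropic certainty-equivalent update), the one-step map below is monotone and local, and applying it recursively makes the resulting assessment time consistent by construction.

```python
import numpy as np

def entropic_update(up, down, p, gamma):
    """One-step entropic update: rho_t = -(1/gamma) * log E[exp(-gamma * rho_{t+1})]."""
    return -np.log(p * np.exp(-gamma * up) + (1 - p) * np.exp(-gamma * down)) / gamma

# Two-step recombining binomial tree of terminal values (illustrative numbers)
payoff = {2: -1.0, 0: 1.0, -2: 3.0}
p, gamma = 0.5, 1.0

# Backward recursion: the same update rule applied at every node
v1 = {1: entropic_update(payoff[2], payoff[0], p, gamma),
      -1: entropic_update(payoff[0], payoff[-2], p, gamma)}
v0 = entropic_update(v1[1], v1[-1], p, gamma)
print(v0)   # time-consistent value at time 0
```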
Quasi-probability representations of quantum theory with applications to quantum information science
This article comprises a review of both the quasi-probability representations
of infinite-dimensional quantum theory (including the Wigner function) and the
more recently defined quasi-probability representations of finite-dimensional
quantum theory. We focus on both the characteristics and applications of these
representations with an emphasis toward quantum information theory. We discuss
the recently proposed unification of the set of possible quasi-probability
representations via frame theory and then discuss the practical relevance of
negativity in such representations as a criterion for quantumness.
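As a concrete instance of negativity as a quantumness witness, here is a minimal sketch using Wootters' single-qubit discrete Wigner function; the choice of state is illustrative.

```python
import numpy as np

# Single-qubit Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def phase_point(q, p):
    """Wootters phase-point operator A(q,p) for one qubit."""
    return 0.5 * (I2 + (-1)**q * Z + (-1)**p * X + (-1)**(q + p) * Y)

def discrete_wigner(rho):
    """Quasi-probabilities W(q,p) = Tr(rho A(q,p)) / 2 on the 2x2 phase space."""
    return np.array([[np.trace(rho @ phase_point(q, p)).real / 2
                      for p in (0, 1)] for q in (0, 1)])

# Illustrative pure state: Bloch vector -(1,1,1)/sqrt(3), anti-aligned with A(0,0)
r = -1 / np.sqrt(3)
rho = 0.5 * (I2 + r * (X + Y + Z))
W = discrete_wigner(rho)
print(W, W.sum())   # entries sum to 1; W[0,0] = (1 - sqrt(3))/4 < 0 signals quantumness
```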
Dynamic Assessment Indices
This paper provides a unified framework, which allows, in particular, to
study the structure of dynamic monetary risk measures and dynamic acceptability
indices. The main mathematical tool, which we use here and which allows us to
significantly generalize existing results, is the theory of $L^0$-modules. In
the first part of the paper we develop the general theory and provide a robust
representation of conditional assessment indices, and in the second part we
apply this theory to dynamic acceptability indices acting on stochastic
processes.
Active Mean Fields for Probabilistic Image Segmentation: Connections with Chan-Vese and Rudin-Osher-Fatemi Models
Segmentation is a fundamental task for extracting semantically meaningful
regions from an image. The goal of segmentation algorithms is to accurately
assign object labels to each image location. However, image noise, shortcomings
of algorithms, and image ambiguities cause uncertainty in label assignment.
Estimating the uncertainty in label assignment is important in multiple
application domains, such as segmenting tumors from medical images for
radiation treatment planning. One way to estimate these uncertainties is
through the computation of posteriors of Bayesian models, which is
computationally prohibitive for many practical applications. On the other hand,
most computationally efficient methods fail to estimate label uncertainty. We
therefore propose in this paper the Active Mean Fields (AMF) approach, a
technique based on Bayesian modeling that uses a mean-field approximation to
efficiently compute a segmentation and its corresponding uncertainty. Based on
a variational formulation, the resulting convex model combines any
label-likelihood measure with a prior on the length of the segmentation
boundary. A specific implementation of that model is the Chan-Vese segmentation
model (CV), in which the binary segmentation task is defined by a Gaussian
likelihood and a prior regularizing the length of the segmentation boundary.
Furthermore, the Euler-Lagrange equations derived from the AMF model are
equivalent to those of the popular Rudin-Osher-Fatemi (ROF) model for image
denoising. Solutions to the AMF model can thus be implemented by directly
utilizing highly-efficient ROF solvers on log-likelihood ratio fields. We
qualitatively assess the approach on synthetic data as well as on real natural
and medical images. For a quantitative evaluation, we apply our approach to the
icgbench dataset.
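A minimal sketch of the last point, assuming scikit-image's generic TV/ROF solver denoise_tv_chambolle as a stand-in solver (the class means, noise level, and TV weight are illustrative assumptions): smooth the Gaussian log-likelihood ratio field with ROF, then read off a soft labeling and a hard segmentation.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle  # generic TV/ROF solver

# Synthetic image: bright disk on dark background, plus Gaussian noise
rng = np.random.default_rng(0)
yy, xx = np.mgrid[:64, :64]
truth = ((xx - 32)**2 + (yy - 32)**2 < 15**2).astype(float)
img = truth + 0.5 * rng.normal(size=truth.shape)

# Gaussian log-likelihood ratio field for two hypothetical class means
mu_fg, mu_bg, sigma = 1.0, 0.0, 0.5
llr = ((img - mu_bg)**2 - (img - mu_fg)**2) / (2 * sigma**2)

# ROF/TV smoothing of the LLR field stands in for the mean-field step
smoothed = denoise_tv_chambolle(llr, weight=1.0)

# Sigmoid of the smoothed field: soft label probabilities (uncertainty);
# thresholding at 0.5 gives the hard segmentation
prob_fg = 1.0 / (1.0 + np.exp(-smoothed))
seg = prob_fg > 0.5
```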
Quantum and classical resources for unitary design of open-system evolutions
A variety of tasks in quantum control, ranging from purification and cooling to quantum stabilisation and open-system simulation, rely on the ability to implement a target quantum channel over a specified time interval within prescribed accuracy. This can be achieved by engineering a suitable unitary dynamics of the system of interest along with its environment, which, depending on the available level of control, is fully or partly exploited as a coherent quantum controller. After formalising a controllability framework for completely positive trace-preserving quantum dynamics, we provide sufficient conditions on the environment state and dimension that allow for the realisation of relevant classes of quantum channels, including extreme channels, stochastic unitaries or simply any channel. The results hinge on generalisations of Stinespring's dilation via a subsystem principle. In the process, we show that a conjecture by Lloyd on the minimal dimension of the environment required for arbitrary open-system simulation, albeit formally disproved, can in fact be salvaged, provided that classical randomisation is included among the available resources. Existing measurement-based feedback protocols for universal simulation, dynamical decoupling and dissipative state preparation are recast within the proposed coherent framework as concrete applications, and the resources they employ are discussed in the light of the general results.
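For a concrete instance of the unitary-dilation idea (a toy sketch, not the paper's construction): a single-qubit amplitude-damping channel realised by one joint unitary acting on the system and a qubit environment, checked against its Kraus form. The damping strength and test state are arbitrary.

```python
import numpy as np

gamma = 0.3                        # hypothetical damping strength
s, g = np.sqrt(1 - gamma), np.sqrt(gamma)

# Stinespring dilation: unitary on system (qubit) x environment (qubit),
# basis ordered |s,e> = |00>, |01>, |10>, |11>
U = np.array([[1, 0, 0, 0],
              [0, s, g, 0],
              [0, -g, s, 0],
              [0, 0, 0, 1]], dtype=complex)

def channel(rho):
    """Channel realised by coupling to a fresh |0> environment and tracing it out."""
    env0 = np.array([[1, 0], [0, 0]], dtype=complex)
    full = U @ np.kron(rho, env0) @ U.conj().T
    return np.trace(full.reshape(2, 2, 2, 2), axis1=1, axis2=3)  # partial trace

# Check against the Kraus form of amplitude damping
K0 = np.array([[1, 0], [0, s]], dtype=complex)
K1 = np.array([[0, g], [0, 0]], dtype=complex)
rho = np.array([[0.25, 0.3], [0.3, 0.75]], dtype=complex)  # toy density matrix
assert np.allclose(channel(rho), K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T)
```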