First order algorithms in variational image processing
Variational methods in imaging have developed into a quite universal and
flexible tool, allowing for highly successful approaches to tasks like
denoising, deblurring, inpainting, segmentation, super-resolution, disparity,
and optical flow estimation. The overall structure of such approaches is of
the form D(Ku) + α R(u) → min_u, where the functional D is a data fidelity
term, depending on some input data f and measuring the deviation of Ku from f,
and R is a regularization functional. Moreover, K is an (often linear) forward
operator modeling the dependence of the data on an underlying image, and α is
a positive regularization parameter. While D is often smooth and (strictly)
convex, current practice almost exclusively uses nonsmooth regularization
functionals. The majority of successful techniques use nonsmooth convex
functionals such as the total variation and generalizations thereof, or
ℓ¹-norms of coefficients arising from scalar products with some frame system.
The efficient solution of such variational problems in imaging demands
appropriate algorithms. Taking into account the specific structure as a sum of
two very different terms to be minimized, splitting algorithms are a quite
canonical choice. Consequently, this field has revived interest in techniques
like operator splittings and augmented Lagrangians. Here we provide an
overview of currently developed methods and recent results, as well as some
computational studies comparing different methods and illustrating their
success in applications.

Comment: 60 pages, 33 figures
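As a concrete illustration of the splitting idea, a minimal forward-backward (proximal gradient) sketch for the ℓ¹-regularized problem min_u ½‖Ku − f‖² + α‖u‖₁ might look as follows. This is not code from the paper; the operator K, data f, and step size are placeholders, and the step size should stay below 2/‖KᵀK‖ for convergence:

```python
import numpy as np

def forward_backward(K, f, alpha, step, n_iter=500):
    """Forward-backward splitting for min_u 0.5*||K u - f||^2 + alpha*||u||_1.

    The smooth data term is handled by a gradient (forward) step, the
    nonsmooth l1 term by its proximal map, i.e. soft-thresholding.
    """
    u = np.zeros(K.shape[1])
    for _ in range(n_iter):
        grad = K.T @ (K @ u - f)           # gradient of the smooth data term
        v = u - step * grad                # forward (gradient) step
        # backward step: prox of step*alpha*||.||_1 is soft-thresholding
        u = np.sign(v) * np.maximum(np.abs(v) - step * alpha, 0.0)
    return u
```

For K equal to the identity this reduces to a single soft-thresholding step, which is the exact solution of the corresponding denoising problem.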
Italian Senate apportionment: is the 2007 proposal fair?
Since the political collapse of the 1990s, and in particular since the bicameral commission experience of 1997, Italian governments have always tried to face the need for wide constitutional reform. Reductions in the number of deputies and senators have been planned on several occasions. The purpose of this paper is to analyze whether or not the proposed reform of the apportionment of seats in the Italian Senate is fair. We use the theory of power indices to compare different scenarios. We show that the intended reform produces an outcome that is worse than both the ideal situation and the actual situation.

Keywords: power index, Banzhaf, Italian Senate
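For small weighted voting games, the normalized Banzhaf index mentioned in the keywords can be computed by brute-force coalition enumeration. The sketch below uses illustrative toy weights, not the actual Senate data, and its enumeration is exponential in the number of voters, so it only serves small examples:

```python
from itertools import combinations

def banzhaf(weights, quota):
    """Normalized Banzhaf index of a weighted voting game.

    For each voter i, count coalitions of the other voters that lose
    without i but win once i joins (i.e. i is a swing voter), then
    normalize the swing counts to sum to one.
    """
    n = len(weights)
    swings = [0] * n
    for i in range(n):
        others = [w for j, w in enumerate(weights) if j != i]
        for r in range(len(others) + 1):
            for combo in combinations(others, r):
                s = sum(combo)
                if s < quota <= s + weights[i]:  # i is decisive here
                    swings[i] += 1
    total = sum(swings)
    return [s / total for s in swings]
```

For example, with weights (3, 2, 1) and quota 4, voter 1 is decisive in three coalitions and the others in one each, giving the index (0.6, 0.2, 0.2).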
The Many Faces of Heterogeneous Ice Nucleation: Interplay Between Surface Morphology and Hydrophobicity
What makes a material a good ice nucleating agent? Despite the importance of
heterogeneous ice nucleation to a variety of fields, from cloud science to
microbiology, major gaps in our understanding of this ubiquitous process still
prevent us from answering this question. In this work, we have examined the
ability of generic crystalline substrates to promote ice nucleation as a
function of the hydrophobicity and the morphology of the surface. Nucleation
rates have been obtained by brute-force molecular dynamics simulations of
coarse-grained water on top of different surfaces of a model fcc crystal,
varying the water-surface interaction and the surface lattice parameter. It
turns out that the lattice mismatch of the surface with respect to ice,
customarily regarded as the most important requirement for a good ice
nucleating agent, is at most desirable but not a requirement. On the other
hand, the balance between the morphology of the surface and its hydrophobicity
can significantly alter the ice nucleation rate and can also lead to the
formation of up to three different faces of ice on the same substrate. We have
pinpointed three circumstances where heterogeneous ice nucleation can be
promoted by the crystalline surface: (i) the formation of a water overlayer
that acts as an in-plane template; (ii) the emergence of a contact layer
buckled in an ice-like manner; and (iii) nucleation on compact surfaces with
very high interaction strength. We hope that this extensive systematic study
will foster future experimental work aimed at testing the physicochemical
understanding presented herein.

Comment: Main + S
Disparity and Optical Flow Partitioning Using Extended Potts Priors
This paper addresses the problems of disparity and optical flow partitioning
based on the brightness invariance assumption. We investigate new variational
approaches to these problems with Potts priors and possibly box constraints.
For the optical flow partitioning, our model includes vector-valued data and an
adapted Potts regularizer. Using the notion of asymptotically level stable
functions, we prove the existence of global minimizers of our functionals. We
propose a modified alternating direction method of multipliers. This iterative
algorithm requires the computation of global minimizers of classical univariate
Potts problems, which can be done efficiently by dynamic programming. We prove
that the algorithm converges for both the constrained and unconstrained
problems. Numerical examples demonstrate the very good performance of our
partitioning method.
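The univariate Potts subproblems that arise inside such iterations admit the classical O(n²) dynamic program. The sketch below, independent of the paper's implementation, handles the squared (L2) data term min_u γ·(#jumps of u) + Σᵢ (uᵢ − fᵢ)², using prefix sums for the per-interval approximation errors:

```python
import numpy as np

def potts_1d(f, gamma):
    """Exact solver for the univariate Potts problem with L2 data term.

    Dynamic program over right endpoints: B[r] is the optimal value for
    the prefix f[:r]; eps(l, r) is the squared error of the best constant
    fit on f[l:r], evaluated in O(1) via prefix sums.
    """
    f = np.asarray(f, dtype=float)
    n = len(f)
    s1 = np.concatenate([[0.0], np.cumsum(f)])       # prefix sums
    s2 = np.concatenate([[0.0], np.cumsum(f * f)])   # prefix sums of squares

    def eps(l, r):
        return s2[r] - s2[l] - (s1[r] - s1[l]) ** 2 / (r - l)

    B = np.empty(n + 1)
    jump = np.zeros(n + 1, dtype=int)
    B[0] = -gamma  # so the first segment carries no jump penalty
    for r in range(1, n + 1):
        best, arg = np.inf, 0
        for l in range(r):
            c = B[l] + gamma + eps(l, r)
            if c < best:
                best, arg = c, l
        B[r], jump[r] = best, arg
    # backtrack the segment boundaries, fill each segment with its mean
    u = np.empty(n)
    r = n
    while r > 0:
        l = jump[r]
        u[l:r] = (s1[r] - s1[l]) / (r - l)
        r = l
    return u
```

Small jump penalties recover the piecewise-constant structure of the data exactly, while large penalties collapse the solution to a single constant segment (the global mean).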
The results of Italy’s 2012 labour-market reforms – no solution to unemployment
Gabriele Piazza and Martin Myant of the European Trade Union Institute criticise recent labour market reforms in Italy, which aim to tackle unemployment by cutting protection for workers on permanent contracts. There is no evidence that this approach works, and Italy would be better off addressing the structural problems in its economy.
Individual claims reserving using the Aalen--Johansen estimator
We propose an individual claims reserving model based on the conditional
Aalen--Johansen estimator, as developed in Bladt and Furrer (2023b). In our
approach, we formulate a multi-state problem, where the underlying variable is
the individual claim size, rather than time. The states in this model represent
development periods, and we estimate the cumulative distribution function of
individual claim costs using the conditional Aalen--Johansen method as
transition probabilities to an absorbing state. Our methodology reinterprets
the concept of multi-state models and offers a strategy for modeling the
complete curve of individual claim costs. To illustrate our approach, we apply
our model to both simulated and real datasets. Having access to the entire
dataset enables us to support the use of our approach by comparing the
predicted total final cost with the actual amount, as well as evaluating it in
terms of the continuous ranked probability score, as discussed in Gneiting
and Raftery (2007).
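The continuous ranked probability score used for this evaluation has a simple sample-based estimator, CRPS(F, y) = E|X − y| − ½ E|X − X′| with X, X′ drawn iid from the predictive distribution F. A minimal sketch, unrelated to the paper's code:

```python
import numpy as np

def crps_empirical(samples, obs):
    """Sample-based CRPS estimate for a predictive distribution.

    Uses CRPS(F, y) = E|X - y| - 0.5 * E|X - X'|, approximating both
    expectations by averages over the given samples.
    """
    x = np.asarray(samples, dtype=float)
    term1 = np.mean(np.abs(x - obs))                       # E|X - y|
    term2 = 0.5 * np.mean(np.abs(x[:, None] - x[None, :])) # 0.5 * E|X - X'|
    return term1 - term2
```

For a point-mass forecast the second term vanishes and the CRPS reduces to the absolute error, which is why it is often described as a probabilistic generalization of the mean absolute error.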
Three-dimensional N=2 supergravity theories: From superspace to components
For general off-shell N=2 supergravity-matter systems in three spacetime
dimensions, a formalism is developed to reduce the corresponding actions from
superspace to components. The component actions are explicitly computed in the
cases of Type I and Type II minimal supergravity formulations. We describe the
models for topologically massive supergravity which correspond to all the known
off-shell formulations for three-dimensional N=2 supergravity. We also present
a universal setting to construct supersymmetric backgrounds associated with
these off-shell supergravities.

Comment: 79 pages; V3: minor corrections, version published in PR
Reducing the computational effort of MPC with closed-loop optimal sequences of affine laws
We consider the classical infinite-horizon constrained linear-quadratic regulator (CLQR) problem and its receding-horizon variant used in model predictive control (MPC). If the terminal constraints are inactive for the current initial condition, the optimal input sequence that results for the open-loop CLQR problem is equal to the closed-loop optimal sequence that results for MPC. Consequently, the closed-loop optimal solution is available from solving only one CLQR problem instead of the usual infinite number of CLQR problems solved on the receding horizon. In the presence of disturbances or plant-model mismatch, the system will eventually leave the predicted optimal trajectory; the solution of the single open-loop CLQR problem is then no longer optimal, and the receding-horizon problem must resume. We show, however, that the open-loop solution is also robust. Robustness essentially holds because the solution of the CLQR problem provides not only the sequence of nominally optimal input signals but also a sequence of optimal affine laws along with their polytopes of validity. We analyze the degree of robustness by computational experiments; the results indicate that it is practically relevant.
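The nominal equivalence of open-loop and closed-loop sequences can be illustrated in the unconstrained case, where the finite-horizon LQR feedback laws are linear (rather than affine with validity polytopes, as in the constrained setting of the paper) and follow from a backward Riccati recursion. All matrices below are illustrative, not taken from the paper:

```python
import numpy as np

def finite_horizon_lqr(A, B, Q, R, N):
    """Backward Riccati recursion for the finite-horizon LQR problem.

    Returns the time-varying feedback gains K_0, ..., K_{N-1}, so that
    u_k = -K_k @ x_k is closed-loop optimal over the remaining horizon.
    """
    P = Q.copy()  # terminal cost-to-go
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
        gains.append(K)
    gains.reverse()  # gains[k] now corresponds to time step k
    return gains

# Example: double integrator. Rolling out the gains once from x0 yields
# the open-loop sequence; by the dynamic programming principle, re-solving
# with the shrunk horizon at each visited state reproduces the same inputs
# (the claim of the abstract, here in the unconstrained nominal case).
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)
gains = finite_horizon_lqr(A, B, Q, R, 10)
x = np.array([[1.0], [0.0]])
open_loop = []
for K in gains:
    u = -K @ x
    open_loop.append(u)
    x = A @ x + B @ u
```

Under disturbances the state leaves the nominal trajectory, and it is precisely the feedback form u_k = -K_k x_k (rather than the precomputed input values) that supplies the robustness discussed in the abstract.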