Social Security Incidence under Uncertainty Assessing Italian Reforms
This paper analyzes the welfare effects of the Italian social security system in an economy with uncertainty in wages, financial market returns, and life expectancy. The introduction of a pension system reproducing the Italian statutory scheme turns out to decrease ex-ante individual welfare, unless restrictions are assumed on retirement behavior. Overall, the risk-insurance effects of social security play a minor role in determining welfare variations. The new Italian NDC pension system is shown to yield a slight ex-ante welfare improvement from a purely risk-insurance perspective. This relative gain stems from risk diversification across working-life wages in computing benefits.
Keywords: social security reforms, uncertainty, risk insurance
Zeta-function regularization and one-loop renormalization of field fluctuations in curved space-times
A method to regularize and renormalize the fluctuations of a quantum field in a curved background in the zeta-function approach is presented. The method produces finite quantities directly and, at most, finite scale-parametrized counterterms. These finite counterterms are related to the presence of a particular pole of the effective-action zeta function as well as to the heat kernel coefficients. The method is checked in several examples, obtaining known or reasonable results. Finally, comments are given concerning the recent proposal by Frolov et al. to obtain the finite Bekenstein-Hawking entropy from Sakharov's induced gravity theory.
Comment: 9 pages, standard LaTeX, no figures
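For context, the textbook form of one-loop zeta-function regularization that this approach builds on can be sketched as follows (standard definitions, not the paper's specific construction; \(A\) is the fluctuation operator, \(\mu\) the renormalization scale):

```latex
% Generalized zeta function of the fluctuation operator A with eigenvalues \lambda_n
\zeta_A(s) = \sum_n \lambda_n^{-s}
% One-loop effective action, rendered finite by analytic continuation of \zeta_A to s = 0
W = -\tfrac{1}{2}\,\zeta_A'(0) - \tfrac{1}{2}\,\zeta_A(0)\,\ln\mu^2
```

The scale-dependent term proportional to \(\zeta_A(0)\) is where the finite, scale-parametrized counterterms mentioned in the abstract enter.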
Dense semantic labeling of sub-decimeter resolution images with convolutional neural networks
Semantic labeling (or pixel-level land-cover classification) in ultra-high resolution imagery (< 10 cm) requires statistical models able to learn high-level concepts from spatial data with large appearance variations. Convolutional Neural Networks (CNNs) achieve this goal by discriminatively learning a hierarchy of representations of increasing abstraction. In this paper we present a CNN-based system relying on a downsample-then-upsample architecture. Specifically, it first learns a rough spatial map of high-level representations by means of convolutions and then learns to upsample them back to the original resolution by deconvolutions. By doing so, the CNN learns to densely label every pixel at the original resolution of the image. This results in many advantages, including i) state-of-the-art numerical accuracy, ii) improved geometric accuracy of predictions, and iii) high efficiency at inference time.
We test the proposed system on the Vaihingen and Potsdam sub-decimeter resolution datasets, involving semantic labeling of aerial images of 9 cm and 5 cm resolution, respectively. These datasets are composed of many large, fully annotated tiles, allowing an unbiased evaluation of models making use of spatial information. We do so by comparing two standard CNN architectures to the proposed one: standard patch classification, prediction of local label patches employing only convolutions, and full patch labeling employing deconvolutions. All systems compare favorably to or outperform a state-of-the-art baseline relying on superpixels and powerful appearance descriptors. The proposed full patch labeling CNN outperforms these models by a large margin, while also showing a very appealing inference time.
Comment: Accepted in IEEE Transactions on Geoscience and Remote Sensing, 201
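The downsample-then-upsample idea can be illustrated with a toy shape-level sketch. The functions below are crude NumPy stand-ins (average pooling for the strided convolutions, nearest-neighbour expansion for the learned deconvolutions), not the authors' architecture; the point is only that the coarse feature map is brought back to one prediction per input pixel:

```python
import numpy as np

def downsample(x, stride=2):
    # Stand-in for strided convolutions: average-pool stride x stride blocks
    h, w = x.shape
    return x.reshape(h // stride, stride, w // stride, stride).mean(axis=(1, 3))

def upsample(x, factor=2):
    # Stand-in for a learned deconvolution: nearest-neighbour expansion
    return np.repeat(np.repeat(x, factor, axis=0), factor, axis=1)

img = np.arange(64, dtype=float).reshape(8, 8)
coarse = downsample(img)   # (4, 4) map of "high-level" representations
dense = upsample(coarse)   # back to (8, 8): one value per original pixel
assert dense.shape == img.shape
```

In the real system both stages are learned end to end, so the upsampling recovers sharp object boundaries rather than blocky copies.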
Kernel Manifold Alignment
We introduce a kernel method for manifold alignment (KEMA) and domain adaptation that can match an arbitrary number of data sources without needing corresponding pairs, just a few labeled examples in all domains. KEMA has interesting properties: 1) it generalizes other manifold alignment methods; 2) it can align manifolds of very different complexities, performing a sort of manifold unfolding plus alignment; 3) it can define a domain-specific metric to cope with multimodal specificities; 4) it can align data spaces of different dimensionality; 5) it is robust to strong nonlinear feature deformations; and 6) it is closed-form invertible, which allows cross-domain transfer and data synthesis. We also present a reduced-rank version for computational efficiency and discuss the generalization performance of KEMA under Rademacher principles of stability. KEMA exhibits very good performance over competing methods in synthetic examples, visual object recognition, and facial expression recognition tasks.
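The flavor of label-driven alignment without corresponding pairs can be sketched in the linear special case that kernel methods like KEMA generalize: stack the domains block-diagonally, build same-class and different-class graphs from the labels, and solve a generalized eigenproblem that pulls same-class samples together across domains. This is a simplified illustration, not the KEMA algorithm itself:

```python
import numpy as np
from scipy.linalg import eigh

def laplacian(W):
    # Graph Laplacian L = D - W
    return np.diag(W.sum(axis=1)) - W

def align(X1, y1, X2, y2, dim=2):
    n1, d1 = X1.shape
    n2, d2 = X2.shape
    # Block-diagonal data matrix: each domain gets its own projection block
    Z = np.zeros((d1 + d2, n1 + n2))
    Z[:d1, :n1] = X1.T
    Z[d1:, n1:] = X2.T
    y = np.concatenate([y1, y2])
    Ws = (y[:, None] == y[None, :]).astype(float)  # same-class similarity
    Wd = 1.0 - Ws                                  # different-class dissimilarity
    A = Z @ laplacian(Ws) @ Z.T
    B = Z @ laplacian(Wd) @ Z.T
    reg = 1e-6 * np.eye(d1 + d2)                   # keep B positive definite
    # Smallest generalized eigenvectors minimize same-class scatter
    # relative to different-class scatter across the stacked domains
    _, vecs = eigh(A + reg, B + reg)
    V = vecs[:, :dim]
    return X1 @ V[:d1], X2 @ V[d1:]
```

Replacing the raw features with kernel feature maps is what lets the kernelized version handle nonlinear deformations and domains of different dimensionality.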
Estimating the Capacity of Escalators in the London Underground
In this paper we discuss a deterministic model for computing the capacity of escalators in the London Underground. We develop this model from fundamental engineering principles by separating the capacities of the standing and walking sides of the escalator. By collecting real-world data, we assess the accuracy of this capacity computation. We also develop a multiple regression model that relates the rise of the escalator to its capacity. We discuss the technical and behavioural reasons for the differences between the capacities obtained from the two methods.
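A two-sided deterministic model of the kind described treats each side as a flow: effective passenger speed divided by the headway (spacing) between passengers. The function below is an illustrative parameterization under that assumption; the paper's fitted coefficients and regression model are not reproduced here:

```python
def escalator_capacity(belt_speed, stand_headway, walk_headway, walk_speed):
    """Total persons per hour, standing side plus walking side.

    Flow per side = effective speed (m/s) / headway (m), scaled to an hour.
    All parameter values used below are hypothetical examples.
    """
    standing = 3600.0 * belt_speed / stand_headway                 # standing side
    walking = 3600.0 * (belt_speed + walk_speed) / walk_headway    # walking side
    return standing + walking

# e.g. 0.75 m/s belt, 0.8 m standing headway, 1.2 m walking headway,
# passengers walking at 0.5 m/s relative to the steps -> 7125 persons/hour
total = escalator_capacity(0.75, 0.8, 1.2, 0.5)
```

The walking side can carry more or fewer passengers than the standing side depending on how the looser headway trades off against the added walking speed, which is exactly the comparison the two-sided split makes visible.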
Non-convex regularization in remote sensing
In this paper, we study the effect of different regularizers and their implications in high-dimensional image classification and sparse linear unmixing. Although kernelization and sparse methods are globally accepted solutions for processing data in high dimensions, we present here a study on the impact of the form of regularization used and its parametrization. We consider regularization via the traditional squared (ℓ2) and sparsity-promoting (ℓ1) norms, as well as more unconventional nonconvex regularizers (ℓp and the Log-Sum Penalty). We compare their properties and advantages on several classification and linear unmixing tasks and provide advice on the choice of the best regularizer for the problem at hand. Finally, we also provide a fully functional toolbox for the community.
Comment: 11 pages, 11 figures
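The four penalty families compared in the abstract can be written down in a few lines; the parameter values (p = 0.5, eps = 0.1) are illustrative choices, not the paper's settings:

```python
import numpy as np

def l2(w):              # squared l2 norm: shrinks all weights, no sparsity
    return np.sum(w ** 2)

def l1(w):              # l1 norm: convex sparsity-promoting penalty
    return np.sum(np.abs(w))

def lp(w, p=0.5):       # nonconvex lp "norm", 0 < p < 1
    return np.sum(np.abs(w) ** p)

def log_sum(w, eps=0.1):  # nonconvex Log-Sum Penalty
    return np.sum(np.log(1.0 + np.abs(w) / eps))

w = np.array([0.0, 0.01, 1.0])
# Relative to l1, the nonconvex penalties charge small nonzero entries
# nearly as much as large ones, which pushes them all the way to zero.
```

That last property is why the nonconvex choices promote sparsity more aggressively than ℓ1, at the cost of a harder optimization problem.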