10,508 research outputs found
A remark on the radial minimizer of the Ginzburg-Landau functional
Denote by $E_\varepsilon$ the Ginzburg-Landau functional in the plane and let
$\tilde u_\varepsilon$ be the radial solution to the Euler equation associated
to the problem $\min_v E_\varepsilon(v, B_1)$. Let $\Omega$ be a smooth, bounded domain with the same area as $B_1$. Denoting by
$\mathcal{K}$ a suitable class of admissible maps on $\Omega$, we prove $\min_{v \in \mathcal{K}} E_\varepsilon (v,\Omega)\le
E_\varepsilon (\tilde u_\varepsilon,B_1).$
A case of thyroid cancer
Diagnosis and timely therapeutic intervention permitted a correct surgical approach in a case of papillary carcinoma with lymph node metastases.
Finite-time influence systems and the Wisdom of Crowd effect
Recent contributions have studied how an influence system may affect the
wisdom of crowd phenomenon. In the so-called naive learning setting, a crowd of
individuals holds opinions that are statistically independent estimates of an
unknown parameter; the crowd is wise when the average opinion converges to the
true parameter in the limit of infinitely many individuals. Unfortunately, even
starting from wise initial opinions, a crowd subject to certain influence
systems may lose its wisdom. It is of great interest to characterize when an
influence system preserves the crowd wisdom effect. In this paper we introduce
and characterize numerous wisdom preservation properties of the basic
French-DeGroot influence system model. Instead of requiring complete
convergence to consensus as in the previous naive learning model by Golub and
Jackson, we study finite-time executions of the French-DeGroot influence
process and establish in this novel context the notion of prominent families
(as a group of individuals with outsize influence). Surprisingly, finite-time
wisdom preservation of the influence system is strictly distinct from its
infinite-time version. We provide a comprehensive treatment of various
finite-time wisdom preservation notions, counterexamples to meaningful
conjectures, and a complete characterization of equal-neighbor influence
systems.
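A finite-time execution of the French-DeGroot process is simply repeated averaging, $x(t+1) = W x(t)$, with $W$ row-stochastic. The sketch below is a hypothetical illustration (an equal-neighbor ring with made-up sizes, not an example from the paper) of a crowd whose average opinion, and hence its wisdom, survives finitely many rounds of influence:

```python
import numpy as np

np.random.seed(0)

n = 1000                            # crowd size (illustrative value)
theta = 0.0                         # unknown true parameter
x0 = theta + np.random.randn(n)     # independent, unbiased initial opinions

# Equal-neighbor influence on a ring: each agent averages itself and its
# two neighbors. This matrix is row-stochastic (and here doubly stochastic).
W = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i, i + 1):
        W[i, j % n] = 1.0 / 3.0

T = 5                               # finite-time horizon
x = x0.copy()
for _ in range(T):
    x = W @ x                       # one round of French-DeGroot influence

# The crowd is "wise" when the average opinion stays close to theta; for a
# doubly stochastic W the average is preserved exactly at every round.
print(abs(x0.mean() - theta), abs(x.mean() - theta))
```

Because this particular $W$ is doubly stochastic, the average opinion is invariant at every finite horizon; general row-stochastic influence matrices need not preserve it, which is exactly the distinction the paper studies.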
Scalable Greedy Algorithms for Transfer Learning
In this paper we consider the binary transfer learning problem, focusing on
how to select and combine sources from a large pool to yield a good performance
on a target task. Restricting ourselves to a realistic scenario, we do not assume
direct access to the source data, but rather employ the source hypotheses
trained from them. We propose an efficient algorithm that selects relevant
source hypotheses and feature dimensions simultaneously, building on the
literature on the best subset selection problem. Our algorithm achieves
state-of-the-art results on three computer vision datasets, substantially
outperforming both transfer learning and popular feature selection baselines in
a small-sample setting. We also present a randomized variant that achieves the
same results with a computational cost independent of the number of source
hypotheses and feature dimensions. Moreover, we theoretically prove that, under
reasonable assumptions on the source hypotheses, our algorithm can learn
effectively from few examples.
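Greedy subset selection of this kind can be sketched as forward selection over the source hypotheses' predictions: at each step, add the source whose inclusion most reduces the fit error on the target data. The code below is a minimal illustration of that generic idea with synthetic data, not the paper's exact algorithm; all names and values are hypothetical.

```python
import numpy as np

def greedy_select(H, y, k):
    """Greedily pick k columns of H (source-hypothesis predictions on the
    target sample) minimizing the least-squares residual to the labels y."""
    selected = []
    best_residual = np.inf
    for _ in range(k):
        best_j = None
        for j in range(H.shape[1]):
            if j in selected:
                continue
            A = H[:, selected + [j]]
            w, *_ = np.linalg.lstsq(A, y, rcond=None)
            r = np.linalg.norm(A @ w - y)
            if r < best_residual:
                best_j, best_residual = j, r
        if best_j is None:
            break                      # no candidate improves the fit
        selected.append(best_j)
    return selected

rng = np.random.default_rng(0)
H = rng.standard_normal((100, 20))     # scores of 20 source hypotheses
y = H[:, 3] - 0.5 * H[:, 7]            # target depends on sources 3 and 7
print(greedy_select(H, y, 2))
```

Each greedy step refits the combination weights from scratch, which is what makes the best-subset-selection literature (and its approximation guarantees) applicable.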
Multiclass latent locally linear support vector machines
Kernelized Support Vector Machines (SVMs) have gained the status of off-the-shelf classifiers, able to deliver state-of-the-art performance on almost any problem. Still, their practical use is constrained by their computational and memory complexity, which grows super-linearly with the number of training samples. In order to retain the low training and testing complexity of linear classifiers and the flexibility of non-linear ones, a growing, promising alternative is represented by methods that learn non-linear classifiers through local combinations of linear ones. In this paper we propose a new multiclass local classifier, based on a latent SVM formulation. The proposed classifier makes use of a set of linear models that are linearly combined using sample- and class-specific weights. Thanks to the latent formulation, the combination coefficients are modeled as latent variables. We allow soft combinations and provide a closed-form solution for their estimation, resulting in an efficient prediction rule. This novel formulation makes it possible to learn, in a principled way, the sample-specific weights and the linear classifiers within a single optimization problem, using a CCCP optimization procedure. Extensive experiments on ten standard UCI machine learning datasets, one large binary dataset, three character and digit recognition databases, and a visual place categorization dataset show the power of the proposed approach.
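The shape of such a prediction rule can be sketched as follows: each class score is a sample-specific combination of several linear models. This is only an illustrative skeleton under my own assumptions; in particular the softmax gating used for the combination weights is a stand-in, not the paper's closed-form latent-variable estimate.

```python
import numpy as np

def predict(x, V, U):
    """Local linear prediction sketch.
    V: (C, K, d) array, K linear models per class.
    U: (C, K, d) array, gating vectors producing sample- and
       class-specific combination weights (softmax gating is an assumption).
    """
    C, K, d = V.shape
    scores = np.empty(C)
    for c in range(C):
        g = U[c] @ x                   # (K,) gating activations for class c
        w = np.exp(g - g.max())
        w /= w.sum()                   # soft combination weights, sum to 1
        scores[c] = w @ (V[c] @ x)     # weighted sum of local linear scores
    return scores.argmax()

# Tiny hypothetical usage: two classes, one linear model each (K = 1).
x = np.array([1.0, 0.0])
V = np.array([[[ 1.0, 0.0]],           # class 0 scores +1 on this x
              [[-1.0, 0.0]]])          # class 1 scores -1 on this x
U = np.ones((2, 1, 2))                 # trivial gating
pred = predict(x, V, U)
```

With soft weights that depend on the sample, the decision boundary is piecewise-smooth even though every component model is linear, which is the appeal of local combinations of linear classifiers.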
Transfer learning through greedy subset selection
We study the binary transfer learning problem, focusing on how to select sources from a large pool and how to combine them to yield a good performance on a target task. In particular, we consider the transfer learning setting where one does not have direct access to the source data, but rather employs the source hypotheses trained from them. Building on the literature on the best subset selection problem, we propose an efficient algorithm that selects relevant source hypotheses and feature dimensions simultaneously. On three computer vision datasets we achieve state-of-the-art results, substantially outperforming transfer learning and popular feature selection baselines in a small-sample setting. Moreover, we theoretically prove that, under reasonable assumptions on the source hypotheses, our algorithm can learn effectively from few examples.
Macroeconomic Modelling and the Effects of Policy Reforms: an Assessment for Italy using ITEM and QUEST
In this paper we compare the dynamic properties of the Italian Treasury Econometric Model (ITEM) with those of QUEST III, the endogenous growth model of the European Commission (DG ECFIN) in the version calibrated for Italy. We consider an array of shocks often examined in policy simulations and investigate their implications for macro variables. In doing so, we analyse the main transmission channels in the two models and provide a comparative assessment of the magnitude and the persistence of the effects, trying to ascertain whether the responses to shocks are consistent with the predictions of economic theory. We show that, despite substantial differences between the two models, the responses of the key variables are qualitatively similar when we consider competition-enhancing policies and labour productivity improvements. On the other hand, we observe quantitative disparities between the two models, mainly due to the forward-looking behaviour and the endogenous growth mechanism incorporated into the QUEST model but not in ITEM. The simulation results show that QUEST III is a powerful tool to capture the effects of structural economic reforms, like competition-enhancing policies or innovation-promoting policies. On the other hand, owing to the breakdown of fiscal variables into a large number of components, ITEM is arguably more suitable for the quantitative evaluation of fiscal policy and the study of the impact of reforms on the public sector balance sheet.
Economic Modelling, DSGE, Structural Reforms, Italy