Data Fine-tuning
In real-world applications, commercial off-the-shelf systems are utilized for
performing automated facial analysis including face recognition, emotion
recognition, and attribute prediction. However, a majority of these commercial
systems act as black boxes due to the inaccessibility of the model parameters,
which makes it challenging to fine-tune the models for specific applications.
Stimulated by the advances in adversarial perturbations, this research proposes
the concept of Data Fine-tuning to improve the classification accuracy of a
given model without changing the parameters of the model. This is accomplished
by modeling it as a data (image) perturbation problem. A small amount of "noise"
is added to the input with the objective of minimizing the classification loss
without affecting the (visual) appearance. Experiments performed on three
publicly available datasets, LFW, CelebA, and MUCT, demonstrate the
effectiveness of the proposed concept.
Comment: Accepted in AAAI 201
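Below is a minimal PyTorch sketch of the idea: the model weights stay frozen and only a small, bounded perturbation of the input is optimized. This is an illustration, not the paper's algorithm; it assumes white-box gradient access and images scaled to [0,1], whereas the paper targets black-box systems.

```python
import torch
import torch.nn.functional as F

def data_finetune(model, images, labels, eps=0.03, steps=100, lr=0.01):
    """Learn one small additive perturbation, shared across all inputs,
    that lowers the classification loss. Illustrative sketch only: the
    paper's black-box setting cannot backpropagate through the model."""
    model.eval()
    for p in model.parameters():
        p.requires_grad_(False)          # model parameters are never updated
    noise = torch.zeros_like(images[0:1], requires_grad=True)  # broadcasts over the batch
    opt = torch.optim.Adam([noise], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        perturbed = (images + noise).clamp(0.0, 1.0)
        loss = F.cross_entropy(model(perturbed), labels)
        loss.backward()                  # gradient flows to the data, not the weights
        opt.step()
        with torch.no_grad():
            noise.clamp_(-eps, eps)      # L-infinity bound keeps the change imperceptible
    return noise.detach()
```

The eps bound plays the role of the "without affecting the (visual) appearance" constraint: the learned noise is capped at a few intensity levels per pixel.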
Naturalness and GUT Scale Yukawa Coupling Ratios in the CMSSM
We analyse the fine-tuning in the Constrained Minimal Supersymmetric Standard
Model (CMSSM) in the light of the present and expected ATLAS and CMS SUSY
searches. Even with 10/fb of data and no discovery of SUSY, valid regions might
remain with fine-tuning less than 20. Moreover, we investigate the fine-tuning
price of GUT scale Yukawa coupling relations. Considering a 2σ
constraint and fine-tuning less than 30 yields an allowed range for the
GUT scale ratio y_tau/y_b, which points towards the alternative GUT
prediction y_tau/y_b = 3/2. Relaxing the constraint to 5σ
extends the possible region to y_tau/y_b in [1.02,1.70], allowing for
approximate Yukawa coupling unification.
Comment: 13 pages, 3 figures; version published in PR
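For context, fine-tuning values such as "less than 20" in this literature are conventionally computed with the Barbieri-Giudice sensitivity measure over the high-scale parameters; the abstract does not define its measure, so this is an assumption:

```latex
\Delta \;=\; \max_i \left| \frac{\partial \ln m_Z^2}{\partial \ln \gamma_i} \right| ,
\qquad \gamma_i \in \{ m_0,\ m_{1/2},\ A_0,\ \mu,\ \dots \} .
```

On this reading, Delta < 20 means that a 1% change in any high-scale parameter shifts m_Z^2 by at most 20%.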
The fine-tuning cost of the likelihood in SUSY models
In SUSY models, the fine-tuning of the electroweak (EW) scale with respect to
their parameters gamma_i={m_0, m_{1/2}, mu_0, A_0, B_0,...} and the maximal
likelihood L to fit the experimental data are usually regarded as two different
problems. We show that, if one regards the EW minimum conditions as constraints
that fix the EW scale, this commonly held view is not correct and that the
likelihood contains all the information about fine-tuning. In this case we show
that the corrected likelihood is equal to the ratio L/Delta of the usual
likelihood L and the traditional fine-tuning measure Delta of the EW scale. A
similar result is obtained for the integrated likelihood over the set
{gamma_i}, that can be written as a surface integral of the ratio L/Delta, with
the surface in gamma_i space determined by the EW minimum constraints. As a
result, a large likelihood actually demands a large ratio L/Delta or
equivalently, a small chi^2_{new}=chi^2_{old}+2*ln(Delta). This shows the
fine-tuning cost to the likelihood (chi^2_{new}) of the EW scale stability
enforced by SUSY, which is ignored in data fits. A good
chi^2_{new}/d.o.f. ≈ 1 thus demands that SUSY models have a fine-tuning amount
Delta << exp(d.o.f./2), which provides a model-independent criterion for
acceptable fine-tuning. If this criterion is not met, one can rule out
SUSY models without a further chi^2/d.o.f. analysis. Numerical methods to fit
the data can easily be adapted to account for this effect.
Comment: 10 pages (v3: small comment added
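Written out, the correction described above is (with L the usual likelihood and Delta the traditional fine-tuning measure, both as in the abstract):

```latex
L_{\text{new}} \;=\; \frac{L}{\Delta}
\quad\Longrightarrow\quad
\chi^2_{\text{new}} \;=\; -2 \ln L_{\text{new}}
\;=\; \chi^2_{\text{old}} + 2 \ln \Delta .
```

Since chi^2_{old} >= 0, demanding chi^2_{new}/d.o.f. ≈ 1 caps the tolerable fine-tuning: with d.o.f. = 10, for instance, the penalty 2 ln Delta alone exhausts the chi^2 budget at Delta = exp(5) ≈ 148, which is exactly the criterion Delta << exp(d.o.f./2).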
Transductive data-selection algorithms for fine-tuning neural machine translation
Machine Translation models are trained to translate a variety of documents from one language into another. However, models trained specifically for the particular characteristics of the documents tend to perform better. Fine-tuning is a technique for adapting an NMT model to a particular domain. In this work, we use this technique to adapt the model to a given test set. In particular, we use transductive data-selection algorithms, which take advantage of the information in the test set to retrieve sentences from a larger parallel set.
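As a concrete illustration, one simple transductive selection heuristic scores every candidate pair in the parallel pool by TF-IDF similarity to the test set and keeps the closest ones. This is a sketch of the general idea, not necessarily one of the specific algorithms evaluated in the work:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def select_finetuning_data(test_sents, pool_src, pool_tgt, k=10000):
    """Return the k parallel pairs whose source side is closest (by
    TF-IDF cosine similarity) to any sentence in the test set."""
    vec = TfidfVectorizer()
    pool_mat = vec.fit_transform(pool_src)        # fit vocabulary on the candidate pool
    test_mat = vec.transform(test_sents)          # project the test set into the same space
    sims = cosine_similarity(pool_mat, test_mat)  # pool-by-test similarity matrix
    scores = sims.max(axis=1)                     # closeness to the nearest test sentence
    top = np.argsort(-scores)[:k]                 # indices of the k best candidates
    return [(pool_src[i], pool_tgt[i]) for i in top]

# The NMT model would then be fine-tuned on select_finetuning_data(...)
# before translating the test set the data was selected for.
```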