Cost Functions and Model Combination for VaR-based Asset Allocation using Neural Networks
We introduce an asset-allocation framework based on the active control of the value-at-risk of the portfolio. Within this framework, we compare two paradigms for making the allocation using neural networks. The first uses the network to forecast asset behavior, in conjunction with a traditional mean-variance allocator for constructing the portfolio. The second uses the network to make the portfolio allocation decisions directly. We consider a method for performing soft input-variable selection, and show its considerable utility. We use model-combination (committee) methods to systematize the choice of hyperparameters during training. We show that committees using both paradigms significantly outperform the benchmark market performance.
Value-at-risk, asset allocation, financial performance criterion, model combination, recurrent multilayer neural networks
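The VaR-controlled allocation idea can be sketched with a parametric (Gaussian, 95%) value-at-risk: compute mean-variance weights from forecast returns, then scale the risky position so the portfolio's one-period VaR matches a target, holding the remainder in cash. This is a hypothetical sketch under those assumptions, not the paper's allocator; `mu` and `sigma` stand in for the network's forecasts.

```python
import numpy as np

def var_target_weights(mu, sigma, var_target=0.02, z=1.645):
    """Mean-variance weights scaled so the parametric (Gaussian, 95%)
    one-period value-at-risk of the portfolio matches var_target.
    mu: forecast expected returns; sigma: forecast return covariance."""
    raw = np.linalg.solve(sigma, mu)        # unnormalised mean-variance direction
    w = raw / raw.sum()                     # fully invested weights
    port_sd = np.sqrt(w @ sigma @ w)
    var_full = z * port_sd - w @ mu         # parametric VaR if fully invested
    scale = min(1.0, var_target / var_full) if var_full > 0 else 1.0
    return scale * w                        # the remainder is held in cash

# Hypothetical two-asset forecasts (stand-ins for the network's outputs).
mu = np.array([0.01, 0.006])
sigma = np.array([[0.04, 0.01], [0.01, 0.02]])
w = var_target_weights(mu, sigma)
```

Because the parametric VaR is linear in the overall scale of the risky position, scaling the fully invested weights hits the target exactly whenever the unscaled VaR exceeds it.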
Top-quark mass measurements: review and perspectives
The top quark is the heaviest elementary particle known, and its mass (m_t) is a fundamental parameter of the Standard Model (SM). The
m_t value affects theory predictions of particle production cross-sections required
for exploring Higgs-boson properties and searching for New Physics (NP). Its
precise determination is essential for testing the overall consistency of the
SM and for constraining NP models through precision electroweak fits, and it has an
extraordinary impact on the Higgs sector and on the SM extrapolation to
high-energies. The methodologies, the results, and the main theoretical and
experimental challenges related to the m_t measurements and
combinations at the Large Hadron Collider (LHC) and at the Tevatron are
reviewed and discussed. Finally, the prospects for the improvement of the
precision during the upcoming LHC runs are briefly outlined.
Comment: 18 pages, 2 figures. Preprint submitted to Reviews in Physics (REVIP)
A Multivariate Training Technique with Event Reweighting
An event reweighting technique incorporated in a multivariate training
algorithm has been developed and tested using Artificial Neural Networks
(ANN) and Boosted Decision Trees (BDT). Training with event reweighting is
compared to conventional equal event weighting on the basis of ANN and
BDT performance. The comparison is performed in the context of the physics
analysis of the ATLAS experiment at the Large Hadron Collider (LHC), which will
explore the fundamental nature of matter and the basic forces that shape our
universe. We demonstrate that the event reweighting technique provides an
unbiased method of multivariate training for event pattern recognition.
Comment: 20 pages, 8 figures
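The contrast between reweighted and equal event weighting can be sketched with a minimal weighted classifier in place of the paper's ANN/BDT setups. The per-event weights `w` below are hypothetical stand-ins for physics event weights; setting them all to 1 recovers conventional equal event weighting.

```python
import numpy as np

def train_weighted_logistic(X, y, w, lr=0.1, epochs=200):
    """Logistic classifier trained with per-event weights w in the loss;
    setting w = 1 for every event recovers equal event weighting."""
    Xb = np.c_[X, np.ones(len(X))]             # add a bias column
    theta = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(Xb @ theta)))
        grad = Xb.T @ (w * (p - y)) / w.sum()  # weighted cross-entropy gradient
        theta -= lr * grad
    return theta

rng = np.random.default_rng(0)
# Toy "events": two Gaussian classes standing in for signal and background.
X = np.vstack([rng.normal(1, 1, (300, 2)), rng.normal(-1, 1, (300, 2))])
y = np.r_[np.ones(300), np.zeros(300)]
w = rng.uniform(0.5, 2.0, 600)                 # hypothetical per-event weights
theta = train_weighted_logistic(X, y, w)
acc = ((np.c_[X, np.ones(600)] @ theta > 0).astype(float) == y).mean()
```

The only change relative to equal weighting is the factor `w` in the gradient; the same substitution carries over to ANN backpropagation or BDT node-splitting criteria.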
Search for Higgs Bosons in e+e- Collisions at 183 GeV
The data collected by the OPAL experiment at sqrts=183 GeV were used to
search for Higgs bosons which are predicted by the Standard Model and various
extensions, such as general models with two Higgs field doublets and the
Minimal Supersymmetric Standard Model (MSSM). The data correspond to an
integrated luminosity of approximately 54pb-1. None of the searches for neutral
and charged Higgs bosons have revealed an excess of events beyond the expected
background. This negative outcome, in combination with similar results from
searches at lower energies, leads to new limits for the Higgs boson masses and
other model parameters. In particular, the 95% confidence level lower limit for
the mass of the Standard Model Higgs boson is 88.3 GeV. Charged Higgs bosons
can be excluded for masses up to 59.5 GeV. In the MSSM, m_h > 70.5 GeV and m_A >
72.0 GeV are obtained for tan β > 1, no and maximal scalar top mixing and
soft SUSY-breaking masses of 1 TeV. The range 0.8 < tan β < 1.9 is excluded for
minimal scalar top mixing and m_top < 175 GeV. More general scans of the MSSM
parameter space are also considered.
Comment: 49 pages, LaTeX, including 33 eps figures, submitted to European
Physical Journal
Artificial neural networks for selection of pulsar candidates from the radio continuum surveys
Pulsar searching with time-domain observations is computationally very expensive,
and the data volume will be enormous with next-generation telescopes such as
the Square Kilometre Array. We apply artificial neural networks (ANNs), a
machine learning method, for efficient selection of pulsar candidates from
radio continuum surveys, which are much cheaper than time-domain observation.
With observed quantities such as radio fluxes, sky position and compactness as
inputs, our ANNs output the "score" that indicates the degree of likeliness of
an object to be a pulsar. We demonstrate ANNs using existing survey data from
the TIFR GMRT Sky Survey (TGSS) and the NRAO VLA Sky Survey (NVSS) and test
their performance. Precision, the ratio of the number of pulsars
correctly classified as pulsars to the number of all objects classified as pulsars,
is about 96%. Finally, we apply the trained ANNs to unidentified radio
sources and our fiducial ANN with five inputs (the galactic longitude and
latitude, the TGSS and NVSS fluxes and compactness) generates 2,436 pulsar
candidates from 456,866 unidentified radio sources. These candidates need to be
confirmed as true pulsars by time-domain observations. More
information, such as polarization, will narrow the candidates down further.
Comment: 11 pages, 13 figures, 3 tables, accepted for publication in MNRAS
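The scoring setup can be sketched with a small one-hidden-layer network on toy data. The five inputs and the labelling rule below are hypothetical stand-ins for the survey quantities (galactic coordinates, TGSS and NVSS fluxes, compactness), not the paper's trained ANN.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins for the five inputs (galactic longitude and latitude,
# TGSS flux, NVSS flux, compactness); real survey values would replace these.
n = 400
X = rng.normal(size=(n, 5))
# Hypothetical labelling rule for the toy data: steep-spectrum, compact
# sources are the "pulsar-like" class.
y = (X[:, 2] - X[:, 3] + X[:, 4] > 0).astype(float)

# One-hidden-layer network: 5 inputs -> 8 tanh units -> sigmoid "score".
W1 = rng.normal(scale=0.5, size=(5, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.5
for _ in range(500):
    H = np.tanh(X @ W1 + b1)
    s = 1 / (1 + np.exp(-(H @ W2 + b2)))
    d2 = (s - y[:, None]) / n                  # d(cross-entropy)/d(logit)
    dH = (d2 @ W2.T) * (1 - H ** 2)            # backprop through tanh layer
    W2 -= lr * (H.T @ d2); b2 -= lr * d2.sum(0)
    W1 -= lr * (X.T @ dH); b1 -= lr * dH.sum(0)

H = np.tanh(X @ W1 + b1)
score = (1 / (1 + np.exp(-(H @ W2 + b2))))[:, 0]
candidates = np.flatnonzero(score > 0.5)       # sources scored as pulsar-like
acc = ((score > 0.5) == (y > 0.5)).mean()
```

The sigmoid output plays the role of the "score" in the abstract: thresholding it on unidentified sources yields the candidate list.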
Validation of nonlinear PCA
Linear principal component analysis (PCA) can be extended to a nonlinear PCA
by using artificial neural networks. But the benefit of curved components
requires a careful control of the model complexity. Moreover, standard
techniques for model selection, including cross-validation and more generally
the use of an independent test set, fail when applied to nonlinear PCA because
of its inherently unsupervised character. This paper presents a new
approach for validating the complexity of nonlinear PCA models by using the
error in missing data estimation as a criterion for model selection. It is
motivated by the idea that only the model of optimal complexity is able to
predict missing values with the highest accuracy. While standard test set
validation usually favours over-fitted nonlinear PCA models, the proposed model
validation approach correctly selects the optimal model complexity.
Comment: 12 pages, 5 figures
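The selection criterion can be sketched with linear PCA standing in for the nonlinear network: artificially mask entries, reconstruct them with models of increasing complexity (here, PCA rank), and prefer the complexity that predicts the masked values best. The data are synthetic with a known low dimensionality; this illustrates the criterion, not the paper's autoencoder implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data of true dimensionality 3, plus noise.
n, d, true_rank = 200, 10, 3
X = rng.normal(size=(n, true_rank)) @ rng.normal(size=(true_rank, d))
X += 0.1 * rng.normal(size=(n, d))

# Artificially hide 10% of the entries; their reconstruction error is
# the model-selection criterion.
mask = rng.random((n, d)) < 0.1

def missing_data_error(X, mask, k, iters=50):
    """RMS error of a rank-k PCA reconstruction of the masked entries,
    obtained by iterative imputation."""
    col_mean = np.nanmean(np.where(mask, np.nan, X), axis=0)
    Xi = np.where(mask, col_mean, X)           # start from column-mean imputation
    for _ in range(iters):
        mu = Xi.mean(axis=0)
        U, s, Vt = np.linalg.svd(Xi - mu, full_matrices=False)
        recon = mu + (U[:, :k] * s[:k]) @ Vt[:k]
        Xi = np.where(mask, recon, X)          # keep observed entries fixed
    return np.sqrt(np.mean((Xi[mask] - X[mask]) ** 2))

errors = {k: missing_data_error(X, mask, k) for k in range(1, 7)}
best_k = min(errors, key=errors.get)           # complexity predicting best
```

Underfitted models miss signal at the masked entries while overfitted ones add spurious noise components, so the missing-data error is minimized near the true complexity, which is the idea the paper carries over to nonlinear PCA.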
Weight function method for precise determination of top quark mass at Large Hadron Collider
We propose a new method to measure a theoretically well-defined top quark
mass at the LHC. This method is based on the "weight function method," which we
proposed in our preceding paper. It requires only lepton energy distribution
and is basically independent of the production process of the top quark. We
perform a simulation analysis of the top quark mass reconstruction with
top-quark pair production and the lepton+jets decay channel at leading order.
The estimated statistical error of the top quark mass is about GeV with
an integrated luminosity of fb at TeV. We also
estimate some of the major systematic uncertainties and find that they are
under good control.
Comment: 8 pages, 7 figures, version to appear in PL
Measurement of the forward-backward asymmetry in the distribution of leptons in $t\bar{t}$ events in the lepton+jets channel
We present measurements of the forward-backward asymmetry in the angular
distribution of leptons from decays of top quarks and antiquarks produced in
proton-antiproton collisions. We consider the final state containing a lepton
and at least three jets. The entire sample of data collected by the D0
experiment during Run II of the Fermilab Tevatron Collider, corresponding to
9.7 inverse fb of integrated luminosity, is used. The asymmetry measured for
reconstructed leptons is %. When corrected for efficiency and resolution
effects within the lepton rapidity coverage of , the asymmetry is
found to be %.
Combination with the asymmetry measured in the dilepton final state yields
%. We examine the
dependence of the asymmetry on the transverse momentum and rapidity of the lepton.
The results are in agreement with predictions from the next-to-leading-order
QCD generator MC@NLO, which predicts an asymmetry of % for .
Comment: submitted to Phys. Rev.