First measurement of time-dependent CP violation in B0s→K+K− decays
Direct and mixing-induced CP-violating asymmetries in B0s → K+K− decays are measured for the first time using a data sample of pp collisions, corresponding to an integrated luminosity of 1.0 fb−1, collected with the LHCb detector at a centre-of-mass energy of 7 TeV. The results are C_KK = 0.14 ± 0.11 ± 0.03 and S_KK = 0.30 ± 0.12 ± 0.04, where the first uncertainties are statistical and the second systematic. The corresponding quantities are also determined for B0 → π+π− decays to be C_ππ = −0.38 ± 0.15 ± 0.02 and S_ππ = −0.71 ± 0.13 ± 0.02, in good agreement with existing measurements.
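For context, C_KK and S_KK are the coefficients of the standard time-dependent CP asymmetry. In one common convention (sign conventions vary between references), the asymmetry for a Bs decay to a CP eigenstate f reads

\[
A_{CP}(t) = \frac{-C_f \cos(\Delta m_s t) + S_f \sin(\Delta m_s t)}{\cosh(\Delta\Gamma_s t/2) + A^{\Delta\Gamma}_f \sinh(\Delta\Gamma_s t/2)},
\]

where \(\Delta m_s\) and \(\Delta\Gamma_s\) are the mass and width differences of the Bs mass eigenstates and \(A^{\Delta\Gamma}_f\) is a third, related observable.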
Testing Invisible Momentum Ansatze in Missing Energy Events at the LHC
We consider SUSY-like events with two decay chains, each terminating in an
invisible particle, whose true energy and momentum are not measured in the
detector. Nevertheless, a useful educated guess about the invisible momenta can
still be obtained by optimizing a suitable invariant mass function. We review
and contrast several proposals in the literature for such ansatze: four
versions of the M_T2-assisted on-shell reconstruction (MAOS), as well as
several variants of the on-shell constrained M_2 variables. We compare the
performance of these methods with regard to the mass determination of a new
particle resonance along the decay chain from the peak of the reconstructed
invariant mass distribution. For concreteness, we consider the event topology
of dilepton ttbar events and study each of the three possible subsystems, in
both a ttbar and a SUSY example. We find that the M_2 variables generally
provide sharper peaks and therefore better ansatze for the invisible momenta.
We show that the performance can be further improved by preselecting events
near the kinematic endpoint of the corresponding variable from which the
momentum ansatz originates.
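As a rough illustration of the idea behind such ansatze, the Python sketch below computes M_T2 numerically by minimizing the larger of the two transverse masses over all splittings of the missing transverse momentum; the minimizing momenta then serve as a MAOS-style transverse ansatz. All names are illustrative, not taken from the paper, SciPy is assumed, and the longitudinal components (fixed by on-shell conditions in the MAOS and M_2 schemes) are omitted.

    import numpy as np
    from scipy.optimize import minimize

    def transverse_mass(vis, qx, qy, m_inv=0.0):
        # vis = (px, py, m): visible system; (qx, qy): trial invisible momentum.
        px, py, m = vis
        et_vis = np.sqrt(m**2 + px**2 + py**2)
        et_inv = np.sqrt(m_inv**2 + qx**2 + qy**2)
        mt_sq = m**2 + m_inv**2 + 2.0 * (et_vis * et_inv - px * qx - py * qy)
        return np.sqrt(max(mt_sq, 0.0))

    def mt2_with_ansatz(vis1, vis2, metx, mety, m_inv=0.0):
        # Minimize max(MT_1, MT_2) over splittings q1 + q2 = MET;
        # the objective is convex in q1, so a simplex search suffices here.
        def worst(q):
            qx1, qy1 = q
            return max(transverse_mass(vis1, qx1, qy1, m_inv),
                       transverse_mass(vis2, metx - qx1, mety - qy1, m_inv))
        res = minimize(worst, x0=[0.5 * metx, 0.5 * mety], method="Nelder-Mead")
        qx1, qy1 = res.x
        # M_T2 value, plus the transverse momentum ansatz for each invisible.
        return res.fun, (qx1, qy1), (metx - qx1, mety - qy1)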
Measurement of the ttbar production cross section in pbarp collisions at sqrt(s) = 1.96 TeV using secondary vertex b tagging
We report a new measurement of the ttbar production cross section in pbarp
collisions at a center-of-mass energy of 1.96 TeV using events with one charged
lepton (electron or muon), missing transverse energy, and jets. Using 425
pb^{-1} of data collected using the D0 detector at the Fermilab Tevatron
Collider, and enhancing the ttbar content of the sample by tagging b jets with
a secondary vertex tagging algorithm, the ttbar production cross section is
measured to be 6.6 ± 0.9 (stat+syst) ± 0.4 (lum) pb. This cross section is
the most precise D0 measurement to date for ttbar production and is in good
agreement with standard model expectations.
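Schematically (a generic counting-experiment formula, not the paper's exact likelihood), such a cross section is extracted as

\[
\sigma_{t\bar{t}} = \frac{N_{\text{obs}} - N_{\text{bkg}}}{\epsilon \int \mathcal{L}\,dt},
\]

where \(N_{\text{obs}}\) is the number of selected (b-tagged) events, \(N_{\text{bkg}}\) the estimated background, \(\epsilon\) the signal efficiency times branching fraction, and \(\int \mathcal{L}\,dt\) the integrated luminosity (425 pb−1 here).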
Performance of photon reconstruction and identification with the CMS detector in proton-proton collisions at √s = 8 TeV
A description is provided of the performance of the CMS detector for photon reconstruction and identification in proton-proton collisions at a centre-of-mass energy of 8 TeV at the CERN LHC. Details are given on the reconstruction of photons from energy deposits in the electromagnetic calorimeter (ECAL) and the extraction of photon energy estimates. The reconstruction of electron tracks from photons that convert to electrons in the CMS tracker is also described, as is the optimization of the photon energy reconstruction and its accurate modelling in simulation, in the analysis of the Higgs boson decay into two photons. In the barrel section of the ECAL, an energy resolution of about 1% is achieved for unconverted or late-converting photons from H → γγ decays. Different photon identification methods are discussed and their corresponding selection efficiencies in data are compared with those found in simulated events.
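For reference, calorimeter energy resolution is customarily parameterized as (S, N, and C below are generic stochastic, noise, and constant terms, not the CMS-measured values):

\[
\frac{\sigma_E}{E} = \frac{S}{\sqrt{E}} \oplus \frac{N}{E} \oplus C,
\]

where \(\oplus\) denotes addition in quadrature; at high energy the constant term dominates, which is why a resolution near 1% can be quoted for photons from H → γγ.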
An Introduction to Recursive Partitioning: Rationale, Application and Characteristics of Classification and Regression Trees, Bagging and Random Forests
Recursive partitioning methods have become popular and widely used tools for nonparametric regression and classification in many scientific fields. Especially random forests, which can deal with large numbers of predictor variables even in the presence of complex interactions, have been applied successfully in genetics, clinical medicine and bioinformatics within the past few years.
High-dimensional problems are common not only in genetics, but also in some areas of psychological research, where only a few subjects can be measured due to time or cost constraints, yet a large amount of data is generated for each subject. Random forests have been shown to achieve a high prediction accuracy in such applications, and provide descriptive variable importance measures reflecting the impact of each variable in both main effects and interactions.
The aim of this work is to introduce the principles of the standard recursive partitioning methods as well as recent methodological improvements, to illustrate their usage for low- and high-dimensional data exploration, and to point out limitations of the methods and potential pitfalls in their practical application.
Application of the methods is illustrated using freely available implementations in the R system for statistical computing.
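The paper's own examples use R; as an illustrative analogue (not the authors' code), scikit-learn in Python offers the same workflow, including the permutation-based variable importance the authors discuss:

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Random forest: an ensemble of trees grown on bootstrap samples,
    # each split chosen from a random subset of the predictors.
    forest = RandomForestClassifier(n_estimators=500, random_state=0)
    forest.fit(X_train, y_train)
    print("test accuracy:", forest.score(X_test, y_test))

    # Permutation importance: drop in accuracy when a predictor is shuffled,
    # capturing its contribution through main effects and interactions.
    imp = permutation_importance(forest, X_test, y_test, n_repeats=20,
                                 random_state=0)
    top = imp.importances_mean.argsort()[::-1][:5]
    print("top predictors:", top, imp.importances_mean[top])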
Feature-driven Emergence of Model Graphs for Object Recognition and Categorization
An important requirement for the expression of cognitive structures
is the ability to form mental objects by rapidly binding together
constituent parts. In this sense, one may conceive the brain's data
structure to have the form of graphs whose nodes are labeled with
elementary features. These provide a versatile data format with the
additional ability to render the structure of any mental object.
Because of the multitude of possible object variations, the graphs
are required to be dynamic. Upon presentation of an image, a
so-called model graph should rapidly emerge by binding together
memorized subgraphs, derived from earlier learning examples, driven by
the image features. In this model, the richness and flexibility of the
mind is made possible by a combinatorial game of immense
complexity. Consequently, the emergence of model graphs is a
laborious task which, in computer vision, has most often been
disregarded in favor of employing model graphs tailored to specific
object categories like, for instance, faces in frontal pose.
Recognition or categorization of arbitrary objects, however, demands
dynamic graphs.
In this work we propose a form of graph dynamics, which proceeds in
two steps. In the first step component classifiers, which decide
whether a feature is present in an image, are learned from training
images. For processing arbitrary objects, features are small
localized grid graphs, so-called parquet graphs, whose nodes are
attributed with Gabor amplitudes. By combining these
classifiers into a linear discriminant that conforms to Linsker's
infomax principle, a weighted majority voting scheme is implemented.
It allows for preselection of salient learning examples, so-called
model candidates, and likewise for preselection of categories the
object in the presented image presumably belongs to. Each model
candidate is verified in a second step using a variant of elastic
graph matching, a standard correspondence-based technique for face
and object recognition. To further differentiate between model
candidates with similar features, it is required that the features be
in a similar spatial arrangement for the model to be selected. Model
graphs are constructed dynamically by assembling model features into
larger graphs according to their spatial arrangement. From the
viewpoint of pattern recognition, the presented technique is a
combination of a discriminative (feature-based) and a generative
(correspondence-based) classifier while the majority voting scheme
implemented in the feature-based part is an extension of existing
multiple feature subset methods.
We report the results of experiments on standard databases for
object recognition and categorization. The method achieved high
recognition rates on identity, object category, pose, and
illumination type. Unlike many other models, the presented
technique can also cope with varying background, multiple objects,
and partial occlusion.
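To make the first, feature-based stage concrete, here is a minimal Python sketch of a weighted majority voting scheme over component classifiers; all names, data structures, and weights are hypothetical placeholders, not the implementation described above:

    import numpy as np

    def preselect_candidates(image_features, model_feature_sets, weights,
                             top_k=5):
        """Rank stored model candidates by weighted votes of shared features.

        image_features: set of feature ids detected in the input image
        model_feature_sets: list of feature-id sets, one per stored model
        weights: dict mapping feature id -> linear-discriminant weight
        """
        scores = np.array([
            sum(weights.get(f, 0.0) for f in image_features & feats)
            for feats in model_feature_sets
        ])
        # Highest-scoring candidates go on to elastic graph matching (step 2).
        return np.argsort(scores)[::-1][:top_k]

    # Toy usage: three stored models, three detected features in the image.
    models = [{1, 2, 3}, {2, 4}, {1, 4, 5}]
    weights = {1: 0.9, 2: 0.5, 3: 0.7, 4: 0.2, 5: 0.4}
    print(preselect_candidates({1, 2, 4}, models, weights, top_k=2))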
Normal-Mixture-of-Inverse-Gamma Priors for Bayesian Regularization and Model Selection in Structured Additive Regression Models
In regression models with many potential predictors, choosing an appropriate subset of covariates and their interactions, at the same time as determining whether linear or more flexible functional forms are required, is a challenging and important task. We propose a spike-and-slab prior structure in order to include or exclude single coefficients as well as blocks of coefficients associated with factor variables, random effects, or basis expansions of smooth functions. Structured additive models with this prior structure are estimated with Markov chain Monte Carlo using a redundant multiplicative parameter expansion. We discuss the shrinkage properties of the novel prior induced by the redundant parameterization, investigate its sensitivity to hyperparameter settings, and compare the performance of the proposed method, in terms of model selection, sparsity recovery, and estimation error for Gaussian, binomial, and Poisson responses on real and simulated data sets, with that of component-wise boosting and other approaches.
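As a guide to the title, one common parameterization of a normal-mixture-of-inverse-gamma (NMIG) spike-and-slab prior for a coefficient \(\beta_j\) is (hyperparameter names here are generic, not necessarily those of the paper):

\[
\beta_j \mid \gamma_j, \tau_j^2 \sim N(0, \gamma_j \tau_j^2), \qquad
\gamma_j \mid w \sim w\,\delta_1(\gamma_j) + (1 - w)\,\delta_{v_0}(\gamma_j), \qquad
\tau_j^2 \sim IG(a_\tau, b_\tau),
\]

with \(v_0 \ll 1\), so that \(\gamma_j = v_0\) shrinks \(\beta_j\) strongly toward zero (the spike) while \(\gamma_j = 1\) leaves it essentially unpenalized (the slab); marginally, \(\beta_j\) follows a mixture of scaled t distributions.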