Searches for Lepton Flavour Violation and Lepton Number Violation in Hadron Decays
In the Standard Model of particle physics, lepton flavour and lepton number
are conserved quantities, although no fundamental symmetry demands their
conservation. I present recent results of searches for lepton flavour and
lepton number violating hadron decays performed at the B factories and LHCb.
In addition, the LHCb collaboration has recently performed a search for the
lepton flavour violating decay \tau^- \to \mu^-\mu^-\mu^+. The upper limit
obtained, presented in this talk for the first time, is of the same order of
magnitude as the limits set at the B factories. This is the
first search for a lepton flavour violating \tau decay at a hadron collider.
Comment: Presented at Flavor Physics and CP Violation (FPCP 2012), Hefei,
China, May 21-25, 2012
The search for tau -> mu mu mu at LHCb
The charged lepton flavour violating decay tau -> mu mu mu is searched for using the LHCb experiment. Violation of lepton flavour in the charged lepton sector has not been observed to date. Within the Standard Model of particle physics extended by neutrino oscillations, the branching fraction is expected to be unmeasurably small, so an observation would be an unambiguous sign of physics beyond the Standard Model.
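(As a rough illustration of why this rate is negligible, assuming only the neutrino masses and mixings of the oscillation-extended Standard Model: the decay proceeds through a loop with virtual neutrinos and is GIM-suppressed by the ratio of neutrino mass-squared differences to the W mass squared, schematically BR(tau -> mu mu mu) ~ |sum_i U*_{tau i} U_{mu i} Delta m^2_i / m_W^2|^2 ~ (10^-3 eV^2 / (80 GeV)^2)^2 ~ 10^-50, far below any conceivable experimental sensitivity. The precise prediction depends on the neutrino parameters and additional loop and coupling factors.)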
Over 10^11 tau leptons have been produced in proton-proton collisions at LHCb during the first run of the LHC, most of them in decays of Ds mesons. Compared to previous experiments at electron-positron colliders, the signature of tau -> mu mu mu is harder to identify in hadronic collisions, and background processes are more abundant.
A multivariate event classification has been developed to distinguish a possible signal from background events. The number of tau leptons produced in the LHCb acceptance is estimated by measuring the yield of Ds -> phi(mu mu) pi decays. The sensitivity reached by analysing LHCb data corresponding to 3 fb^-1 is sufficient to constrain the branching fraction of tau -> mu mu mu to be smaller than 7.1 x 10^-8 at 90% confidence level.
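(Schematically, the normalisation behind this limit can be written as BR(tau -> mu mu mu) = [BR(Ds -> phi(mu mu) pi) / BR(Ds -> tau nu)] x f(Ds -> tau) x (eps_cal / eps_sig) x (N_sig / N_cal), where f(Ds -> tau) is the fraction of tau leptons that originate from Ds decays, eps_cal and eps_sig are the selection efficiencies of the calibration and signal channels, and N_cal and N_sig are the corresponding observed yields. The symbols here are an illustrative sketch; the full analysis treats the remaining tau production sources, such as b-hadron decays, separately.)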
Search for the lepton-flavor-violating decays Bs0→e±μ∓ and B0→e±μ∓
A search for the lepton-flavor-violating decays Bs0→e±μ∓ and B0→e±μ∓ is performed with a data sample, corresponding to an integrated luminosity of 1.0 fb-1 of pp collisions at √s=7 TeV, collected by the LHCb experiment. The observed numbers of Bs0→e±μ∓ and B0→e±μ∓ candidates are consistent with background expectations. Upper limits on the branching fractions of both decays are determined; the corresponding lower bounds on the leptoquark mass are MLQ(Bs0→e±μ∓)>101 TeV/c2 and MLQ(B0→e±μ∓)>126 TeV/c2 at 95% C.L., a factor of two higher than the previous bounds.
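(To make the translation explicit: for tree-level leptoquark exchange the branching fraction scales as B ∝ 1/MLQ^4 up to coupling and renormalisation factors, so a limit Blimit implies MLQ ≳ (C/Blimit)^(1/4) for a model-dependent constant C; a factor of two gain in the mass bound therefore corresponds to roughly a factor of sixteen improvement in the branching-fraction limit. This scaling is quoted here only to illustrate the logic, not as the paper's own derivation.)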
Analysis of the real EADGENE data set: Comparison of methods and guidelines for data normalisation and selection of differentially expressed genes (Open Access publication)
A large variety of methods has been proposed in the literature for microarray data analysis. The aim of this paper was to present the techniques used by the EADGENE (European Animal Disease Genomics Network of Excellence) WP1.4 participants for data quality control, normalisation and statistical detection of differentially expressed genes, in order to provide more general data analysis guidelines. All the workshop participants were given a real data set obtained in an EADGENE-funded microarray study looking at the gene expression changes following artificial infection with two different mastitis-causing bacteria: Escherichia coli and Staphylococcus aureus. It was reassuring to see that most of the teams found the same main biological results. In fact, most of the differentially expressed genes were found for infection by E. coli, between uninfected and 24 h challenged udder quarters. Very little transcriptional variation was observed for the bacterium S. aureus. Lists of differentially expressed genes found by the different research teams were, however, quite dependent on the method used, especially concerning the data quality control step. These analyses also emphasised a biological problem of cross-talk between infected and uninfected quarters, which will have to be dealt with in further microarray studies.
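As an illustration of the kind of pipeline compared in the workshop (not the method of any particular participant), a minimal Python sketch of one common approach is given below: quantile normalisation of log2 intensities followed by per-gene Welch t-tests with Benjamini-Hochberg false-discovery-rate control. The array layout, group labels and thresholds are assumptions made for the example.

import numpy as np
from scipy import stats

def quantile_normalize(x):
    # Quantile-normalise a (genes x arrays) matrix of log2 intensities:
    # every array is forced onto the same empirical intensity distribution.
    ranks = np.argsort(np.argsort(x, axis=0), axis=0)
    reference = np.sort(x, axis=0).mean(axis=1)
    return reference[ranks]

def benjamini_hochberg(p):
    # Convert raw p-values into Benjamini-Hochberg adjusted q-values.
    m = len(p)
    order = np.argsort(p)
    raw = p[order] * m / np.arange(1, m + 1)
    adjusted = np.minimum.accumulate(raw[::-1])[::-1]
    q = np.empty(m)
    q[order] = np.minimum(adjusted, 1.0)
    return q

def differential_expression(log_intensities, infected):
    # Per-gene Welch t-test: challenged vs. uninfected udder quarters.
    a = log_intensities[:, infected]
    b = log_intensities[:, ~infected]
    t, p = stats.ttest_ind(a, b, axis=1, equal_var=False)
    return t, p, benjamini_hochberg(p)

# Toy usage: 1000 genes on 8 arrays, 4 challenged and 4 control quarters.
rng = np.random.default_rng(0)
data = quantile_normalize(rng.normal(8.0, 1.0, size=(1000, 8)))
infected = np.array([True] * 4 + [False] * 4)
t, p, q = differential_expression(data, infected)
print((q < 0.05).sum(), "genes called differentially expressed at 5% FDR")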
Disciplinary law in the civil services: "unification", "harmonisation" or "distanciation". On the law of 26 April 2016 on ethics and the rights and obligations of civil servants
The production of tt‾, W+bb‾ and W+cc‾ is studied in the forward region of proton–proton collisions collected at a centre-of-mass energy of 8 TeV by the LHCb experiment, corresponding to an integrated luminosity of 1.98±0.02 fb−1. The W bosons are reconstructed in the decays W→ℓν, where ℓ denotes muon or electron, while the b and c quarks are reconstructed as jets. All measured cross-sections are in agreement with next-to-leading-order Standard Model predictions.
Measurement of the track reconstruction efficiency at LHCb
The determination of track reconstruction efficiencies at LHCb using J/ψ→μ+μ- decays is presented. Efficiencies above 95% are found for the data taking periods in 2010, 2011, and 2012. The ratio of the track reconstruction efficiency of muons in data and simulation is compatible with unity and is measured with an uncertainty of 0.8% for data taking in 2010, and with a precision of 0.4% for data taking in 2011 and 2012. For hadrons, an additional 1.4% uncertainty due to material interactions is assumed. This result is crucial for accurate cross section and branching fraction measurements in LHCb.
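(In practice the measured data/simulation ratio is applied as a per-track correction to simulated efficiencies: for a decay with n reconstructed tracks, eps_data ≈ eps_sim × r1 × ... × rn, with each ri compatible with unity. Taking the quoted per-track uncertainties purely as an illustration, a three-muon final state in 2011-2012 data would carry a tracking systematic of about 3 × 0.4% = 1.2% if the per-track uncertainties are treated as fully correlated, or sqrt(3) × 0.4% ≈ 0.7% if uncorrelated, and each hadron track adds a further 1.4% for material interactions. The correlation treatment is an assumption of this example, not part of the quoted result.)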
Physics case for an LHCb Upgrade II - Opportunities in flavour physics, and beyond, in the HL-LHC era
The LHCb Upgrade II will fully exploit the flavour-physics opportunities of the HL-LHC, and study additional physics topics that take advantage of the forward acceptance of the LHCb spectrometer. The LHCb Upgrade I will begin operation in 2020. Consolidation will occur, and modest enhancements of the Upgrade I detector will be installed, in Long Shutdown 3 of the LHC (2025); these are discussed here. The main Upgrade II detector will be installed in Long Shutdown 4 of the LHC (2030) and will build on the strengths of the current LHCb experiment and the Upgrade I. It will operate at a luminosity up to 2×10^34 cm^-2 s^-1, ten times that of the Upgrade I detector. New detector components will improve the intrinsic performance of the experiment in certain key areas. An Expression of Interest proposing Upgrade II was submitted in February 2017. The physics case for the Upgrade II is presented here in more depth. CP-violating phases will be measured with precisions unattainable at any other envisaged facility. The experiment will probe b → sl+l− and b → dl+l− transitions in both muon and electron decays in modes not accessible at Upgrade I. Minimal flavour violation will be tested with a precision measurement of the ratio B(B0 → μ+μ−)/B(Bs → μ+μ−). Probing charm CP violation at the 10^-5 level may result in its long-sought discovery. Major advances in hadron spectroscopy will be possible, providing powerful probes of low-energy QCD. Upgrade II potentially will have the highest sensitivity of all the LHC experiments to the coupling of the Higgs boson to charm quarks. Generically, the new-physics mass scale probed, for fixed couplings, will almost double compared with the pre-HL-LHC era; this extended reach for flavour physics is similar to that which would be achieved by the HE-LHC proposal at the energy frontier.
LHCb upgrade software and computing: technical design report
This document reports the Research and Development activities carried out in the software and computing domains in view of the upgrade of the LHCb experiment. The implementation of a full software trigger implies major changes in the core software framework, in the event data model, and in the reconstruction algorithms. The increase of the data volumes for both real and simulated datasets requires a corresponding scaling of the distributed computing infrastructure. An implementation plan in both domains is presented, together with a risk assessment analysis.