
    Minimal violation of flavour and custodial symmetries in a vectophobic Two-Higgs-Doublet-Model

    Tree-level accidental symmetries are known to play a fundamental role in the phenomenology of the Standard Model (SM) of electroweak interactions. So far, no significant deviations from the theory have been observed in precision, flavour and collider physics. Consequently, these global symmetries are expected to remain quite efficient in any attempt to go beyond the SM. Yet, they do not forbid rather unorthodox phenomena within the reach of current LHC experiments. This is illustrated with a vectophobic Two-Higgs-Doublet-Model (2HDM) where effects of a light, flavour-violating and custodian (pseudo)scalar might be observed in the $B_s \to \mu^+\mu^-$ decay rate and in the diphoton invariant mass spectrum at around 125 GeV.

    Comment: 13 pages, 3 figures, v2: constraints from $B_s \to \mu^+\mu^-$ updated, references added, to appear in Phys. Lett.
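    For orientation only, and not taken from the paper itself: in a generic 2HDM the leptonic decay rate can be written in terms of effective scalar and pseudoscalar amplitudes $S$ and $P$ (notation assumed here), which makes explicit how a light pseudoscalar can shift the rate away from its SM value, where $S \simeq 0$:

    ```latex
    \mathcal{B}(B_s \to \mu^+\mu^-) \;\propto\; f_{B_s}^2\, m_{B_s}^3\,
    \sqrt{1 - \tfrac{4 m_\mu^2}{m_{B_s}^2}}
    \left[ \left(1 - \tfrac{4 m_\mu^2}{m_{B_s}^2}\right) |S|^2 \;+\; |P|^2 \right]
    ```

    The SM contributes only to $P$, through the helicity-suppressed axial amplitude, while scalar and pseudoscalar exchanges feed $S$ and $P$ respectively.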

    Survey on the chemical composition of several tropical wood species

    Variability in the chemical composition of 614 species is described in a database containing measurements of wood polymers (cellulose, lignin and pentosan) as well as overall extraneous components (ethanol-benzene or hot-water extracts, and ash, with a focus on silica content). These measurements were taken between 1945 and 1990 using the same standard protocol. In all, 1,194 trees belonging to 614 species, 358 genera and 89 families were measured. At species level, variability (quantified by the coefficient of variation) was rather high for density (27%), much lower for lignin and cellulose (14% and 10%) and much higher for ethanol-benzene extractives, hot-water extractives and ash content (81%, 60% and 76%). Considering trees with at least five specimens each, and species with at least 10 trees, it was possible to investigate within-tree and within-species variability. Large differences were found between trees of a given species for extraneous components, so more than one tree should be sampled per species. For density, lignin, pentosan and cellulose, the distribution of values was nearly symmetrical, with mean values of 720 kg/m³ for density, 29.1% for lignin, 15.8% for pentosan, and 42.4% for cellulose. There were clear differences between species for lignin content. For extraneous components, the distribution was very dissymmetrical, with a minority of woods rich in these components composing the high-value tail. A high value for any extraneous component, even in only one tree, is sufficient to classify the species with respect to that component. Siliceous woods, identified by silica bodies in their anatomy, have a very high silica content, and only those species warrant a dedicated silica study.
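    For reference, the coefficient of variation reported above is the standard deviation divided by the mean. A minimal sketch of how it could be computed per species from such a database (column names and values are hypothetical, not from the survey's actual data):

    ```python
    import pandas as pd

    # Hypothetical measurements table: one row per tree, with the species
    # label and each measured trait (names assumed for illustration).
    df = pd.DataFrame({
        "species": ["A", "A", "A", "B", "B", "B"],
        "density": [690, 750, 720, 810, 795, 830],       # kg/m3
        "lignin":  [28.5, 30.1, 29.0, 31.2, 30.8, 31.9], # % of dry mass
    })

    # Coefficient of variation per species and trait: std / mean,
    # expressed as a percentage, as in the survey's variability figures.
    cv = df.groupby("species").agg(lambda x: 100 * x.std() / x.mean())
    print(cv.round(1))
    ```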

    Parametric modeling of photometric signals

    This paper studies a new model for photometric signals under a high-flux assumption. Photometric signals are modeled by Gaussian autoregressive processes having the same mean and variance, denoted Constrained Gaussian Autoregressive Processes (CGARPs). The estimation of the CGARP parameters is discussed. The Cramér–Rao lower bounds for these parameters are studied and compared to the estimator mean square errors. The CGARP is intended to model the signal received by a satellite designed for extrasolar planet detection. A transit of a planet in front of a star results in an abrupt change in the mean and variance of the CGARP. The Neyman–Pearson detector for this changepoint detection problem is derived when the abrupt-change parameters are known. Closed-form expressions for the Receiver Operating Characteristics (ROC) are provided. The Neyman–Pearson detector combined with the maximum likelihood estimator of the CGARP parameters makes it possible to study the generalized likelihood ratio detector. ROC curves are then determined using computer simulations.
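    The equal-mean-and-variance constraint mirrors the Gaussian limit of photon counting at high flux. A minimal simulation sketch, assuming the simplest AR(1) special case (the paper treats general autoregressive orders; all parameter values here are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def cgarp(n, mu, a):
        """Simulate a constrained Gaussian AR(1) whose stationary mean
        and variance both equal mu ('a' is the AR coefficient, |a| < 1)."""
        sigma_e = np.sqrt(mu * (1 - a**2))  # innovation std so Var[x] = mu
        x = np.empty(n)
        x[0] = mu + rng.normal(0, np.sqrt(mu))  # draw from stationary law
        for t in range(1, n):
            x[t] = mu + a * (x[t - 1] - mu) + rng.normal(0, sigma_e)
        return x

    # A transit dims the star: an abrupt drop in flux changes the mean
    # and, by the CGARP constraint, the variance at the same instant.
    signal = np.concatenate([cgarp(500, mu=100.0, a=0.6),
                             cgarp(500, mu=95.0, a=0.6)])
    ```

    A Neyman–Pearson test for a known transit would then compare the log-likelihoods of the data under the two hypotheses (no change versus a change of mu at a known instant) against a threshold set by the desired false-alarm probability.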

    Consistency of random forests

    Random forests are a learning algorithm proposed by Breiman [Mach. Learn. 45 (2001) 5-32] that combines several randomized decision trees and aggregates their predictions by averaging. Despite its wide usage and outstanding practical performance, little is known about the mathematical properties of the procedure. This disparity between theory and practice originates in the difficulty of simultaneously analyzing both the randomization process and the highly data-dependent tree structure. In the present paper, we take a step forward in forest exploration by proving a consistency result for Breiman's [Mach. Learn. 45 (2001) 5-32] original algorithm in the context of additive regression models. Our analysis also sheds an interesting light on how random forests can nicely adapt to sparsity.

    1. Introduction. Random forests are an ensemble learning method for classification and regression that constructs a number of randomized decision trees during the training phase and predicts by averaging the results. Since its publication in the seminal paper of Breiman (2001), the procedure has become a major data analysis tool that performs well in practice in comparison with many standard methods. What has greatly contributed to the popularity of forests is the fact that they can be applied to a wide range of prediction problems and have few parameters to tune. Aside from being simple to use, the method is generally recognized for its accuracy and its ability to deal with small sample sizes, high-dimensional feature spaces and complex data structures. The random forest methodology has been successfully involved in many practical problems, including air quality prediction (winning code of the EMC data science global hackathon in 2012, see http://www.kaggle.com/c/dsg-hackathon), chemoinformatics [Svetnik et al. (2003)] and ecology [Prasad, Iverson and Liaw (2006), Cutler et al. (2007)].
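    As a concrete illustration of the setting studied in the paper (a sketch using scikit-learn's random forest implementation rather than any code from the paper; the additive model and dimensions below are arbitrary choices):

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    # Additive regression model: Y = sum_j m_j(X_j) + noise, with only
    # two informative coordinates out of ten (a sparse setting).
    n, d = 2000, 10
    X = rng.uniform(size=(n, d))
    y = (np.sin(np.pi * X[:, 0])
         + 2 * (X[:, 1] - 0.5) ** 2
         + rng.normal(0, 0.1, n))

    forest = RandomForestRegressor(n_estimators=200, random_state=0)
    forest.fit(X, y)

    # Feature importances hint at how the forest adapts to sparsity:
    # most of the mass concentrates on the two informative coordinates.
    print(forest.feature_importances_.round(3))
    ```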