24,097 research outputs found

    Perturbative QCD effects and the search for a H->WW->l nu l nu signal at the Tevatron

    The Tevatron experiments have recently excluded a Standard Model Higgs boson in the mass range 160 - 170 GeV at the 95% confidence level. This result is based on sophisticated analyses designed to maximize the ratio of signal to background cross-sections. In this paper we study the production of a Higgs boson of mass 160 GeV in the gg -> H -> WW -> l nu l nu channel. We choose a set of cuts similar to those adopted in the experimental analysis and compare kinematical distributions of the final state leptons computed in NNLO QCD to lower-order calculations and to those obtained with the event generators PYTHIA, HERWIG and MC@NLO. We also show that the distribution of the output from an Artificial Neural Network obtained with the different tools does not show significant differences. However, the final acceptance computed with PYTHIA is smaller than those obtained at NNLO and with HERWIG and MC@NLO. We also investigate the impact of the underlying event and hadronization on our results. Comment: Extra discussion and references added.

    A System for Induction of Oblique Decision Trees

    This article describes a new system for induction of oblique decision trees. This system, OC1, combines deterministic hill-climbing with two forms of randomization to find a good oblique split (in the form of a hyperplane) at each node of a decision tree. Oblique decision tree methods are tuned especially for domains in which the attributes are numeric, although they can be adapted to symbolic or mixed symbolic/numeric attributes. We present extensive empirical studies, using both real and artificial data, that analyze OC1's ability to construct oblique trees that are smaller and more accurate than their axis-parallel counterparts. We also examine the benefits of randomization for the construction of oblique decision trees. Comment: See http://www.jair.org/ for an online appendix and other files accompanying this article.
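The core idea the abstract describes, randomized hill-climbing over hyperplane coefficients to find a low-impurity oblique split, can be sketched as follows. This is an illustrative toy, not OC1's actual coefficient-by-coefficient perturbation scheme: the Gini impurity measure, Gaussian perturbations, and restart/step counts are assumptions made for the sketch.

```python
import numpy as np

def impurity(labels):
    # Gini impurity of a label array; an empty partition contributes zero.
    if len(labels) == 0:
        return 0.0
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def split_score(X, y, w, b):
    # Weighted impurity of the two sides of the hyperplane w.x + b = 0
    # (lower is better; 0 means a perfectly pure split).
    left = (X @ w + b) <= 0
    n = len(y)
    return (left.sum() / n) * impurity(y[left]) + \
           ((~left).sum() / n) * impurity(y[~left])

def random_restart_hill_climb(X, y, restarts=20, steps=50, rng=None):
    # Toy analogue of OC1's search: random restarts, each followed by
    # hill-climbing via random perturbations of the hyperplane coefficients.
    rng = np.random.default_rng(rng)
    best_w, best_b, best = None, None, np.inf
    for _ in range(restarts):
        w = rng.normal(size=X.shape[1])
        b = rng.normal()
        for _ in range(steps):
            w2 = w + rng.normal(scale=0.1, size=w.shape)
            b2 = b + rng.normal(scale=0.1)
            if split_score(X, y, w2, b2) < split_score(X, y, w, b):
                w, b = w2, b2
        s = split_score(X, y, w, b)
        if s < best:
            best_w, best_b, best = w, b, s
    return best_w, best_b, best
```

On data separable only by an oblique boundary, an axis-parallel split cannot reach zero impurity, while a search like this one can; that gap is the motivation for oblique trees.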

    History-based action selection bias in posterior parietal cortex.

    Making decisions based on choice-outcome history is a crucial, adaptive ability in life. However, the neural circuit mechanisms underlying history-dependent decision-making are poorly understood. In particular, history-related signals have been found in many brain areas during various decision-making tasks, but the causal involvement of these signals in guiding behavior is unclear. Here we addressed this issue utilizing behavioral modeling, two-photon calcium imaging, and optogenetic inactivation in mice. We report that a subset of neurons in the posterior parietal cortex (PPC) closely reflect the choice-outcome history and history-dependent decision biases, and PPC inactivation diminishes the history dependency of choice. Specifically, many PPC neurons show history- and bias-tuning during the inter-trial interval (ITI), and the history dependency of choice is affected by PPC inactivation during the ITI but not during trials. These results indicate that PPC is a critical region mediating the subjective use of history in biasing action selection.
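The choice-outcome history dependence the authors quantify can be illustrated with a toy win-stay/lose-stay model. This is our sketch only: the generative model, parameter names, and the win-stay-minus-lose-stay summary statistic are illustrative assumptions, not the paper's fitted behavioral model or task structure.

```python
import numpy as np

def simulate_history_biased_choices(n_trials, w_winstay=1.0, w_losestay=-0.5,
                                    bias=0.0, rng=None):
    # Toy generative model: the log-odds of the next choice depend on the
    # previous choice (+1/-1) and whether it was rewarded. Outcomes are
    # random coin flips here, so any stay/switch structure comes from history.
    rng = np.random.default_rng(rng)
    choices, outcomes = [], []
    prev_c, prev_r = 1, 1
    for _ in range(n_trials):
        logit = bias + (w_winstay if prev_r else w_losestay) * prev_c
        p_right = 1.0 / (1.0 + np.exp(-logit))
        c = 1 if rng.random() < p_right else -1
        r = int(rng.random() < 0.5)
        choices.append(c)
        outcomes.append(r)
        prev_c, prev_r = c, r
    return np.array(choices), np.array(outcomes)

def history_dependence(choices, outcomes):
    # "Win-stay minus lose-stay": fraction of trials repeating a previously
    # rewarded choice minus the fraction repeating an unrewarded one.
    prev_c, prev_r = choices[:-1], outcomes[:-1]
    stay = choices[1:] == prev_c
    return stay[prev_r == 1].mean() - stay[prev_r == 0].mean()
```

A statistic like `history_dependence` going toward zero under inactivation is the kind of effect the abstract reports for ITI-period PPC silencing.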

    Discrimination of Semi-Quantitative Models by Experiment Selection: Method and Application in Population Biology

    Modeling an experimental system often results in a number of alternative models that are justified equally well by the experimental data. In order to discriminate between these models, additional experiments are needed. We present a method for the discrimination of models in the form of semi-quantitative differential equations. The method is a generalization of previous work in model discrimination. It is based on an entropy criterion for selecting the most informative experiment, which can handle cases where the models predict multiple qualitative behaviors. The applicability of the method is demonstrated on a real-life example, the discrimination of a set of competing models of the growth of phytoplankton in a bioreactor.
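An entropy criterion for experiment selection can be sketched as expected information gain over a discrete set of candidate models and predicted outcomes. This is an illustrative Bayesian reading of such a criterion with hypothetical inputs; the paper's semi-quantitative simulation machinery and its handling of multiple qualitative behaviors per model are not reproduced.

```python
import numpy as np

def entropy(p):
    # Shannon entropy in bits, ignoring zero-probability entries.
    p = np.asarray(p, float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def expected_information_gain(prior, likelihoods):
    # prior: P(model), shape (M,).
    # likelihoods: P(outcome | model) for one candidate experiment, shape (M, K).
    # Returns prior entropy minus expected posterior entropy over outcomes.
    prior = np.asarray(prior, float)
    L = np.asarray(likelihoods, float)
    p_outcome = prior @ L                           # P(outcome), shape (K,)
    gain = entropy(prior)
    for k in range(L.shape[1]):
        if p_outcome[k] > 0:
            post = prior * L[:, k] / p_outcome[k]   # P(model | outcome k)
            gain -= p_outcome[k] * entropy(post)
    return gain

def most_informative_experiment(prior, experiments):
    # experiments: list of likelihood matrices, one per candidate experiment.
    gains = [expected_information_gain(prior, L) for L in experiments]
    return int(np.argmax(gains)), gains
```

An experiment whose predicted outcome distributions coincide across all models yields zero gain; one whose predictions fully separate the models yields the entire prior entropy.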

    Statistical inference optimized with respect to the observed sample for single or multiple comparisons

    The normalized maximum likelihood (NML) is a recent penalized likelihood that has properties that justify defining the amount of discrimination information (DI) in the data supporting an alternative hypothesis over a null hypothesis as the logarithm of an NML ratio, namely, the alternative hypothesis NML divided by the null hypothesis NML. The resulting DI, like the Bayes factor but unlike the p-value, measures the strength of evidence for an alternative hypothesis over a null hypothesis such that the probability of misleading evidence vanishes asymptotically under weak regularity conditions and such that evidence can support a simple null hypothesis. Unlike the Bayes factor, the DI does not require a prior distribution and is minimax optimal in a sense that does not involve averaging over outcomes that did not occur. Replacing a (possibly pseudo-) likelihood function with its weighted counterpart extends the scope of the DI to models for which the unweighted NML is undefined. The likelihood weights leverage side information, either in data associated with comparisons other than the comparison at hand or in the parameter value of a simple null hypothesis. Two case studies, one involving multiple populations and the other involving multiple biological features, indicate that the DI is robust to the type of side information used when that information is assigned the weight of a single observation. Such robustness suggests that very little adjustment for multiple comparisons is warranted if the sample size is at least moderate. Comment: Typo in equation (7) of v2 corrected in equation (6) of v3; clarity improved.
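The DI as defined here, the log of a ratio of NMLs, can be made concrete for a finite sample space, where NML is the maximized likelihood of the observed data divided by the sum of maximized likelihoods over all possible datasets. The sketch below uses Bernoulli sequences with a gridded alternative and a simple point null; the grid, sequence length, and restriction to a finite parameter set are illustrative assumptions, and the paper's weighted-likelihood extension is omitted.

```python
import math
from itertools import product

def nml(x, models):
    # NML of binary sequence x under a finite set of Bernoulli parameters:
    # max-likelihood of x divided by the sum of max-likelihoods over every
    # sequence of the same length (the model-complexity normalizer).
    def prob(seq, th):
        k = sum(seq)
        return th ** k * (1 - th) ** (len(seq) - k)
    def max_lik(seq):
        return max(prob(seq, th) for th in models)
    denom = sum(max_lik(s) for s in product([0, 1], repeat=len(x)))
    return max_lik(x) / denom

def discrimination_information(x, alt_models, null_models):
    # DI = log( NML under the alternative / NML under the null ).
    # For a simple null {theta0}, the normalizer is exactly 1, so the null
    # NML reduces to the ordinary likelihood at theta0.
    return math.log(nml(x, alt_models) / nml(x, null_models))
```

As the abstract's definition suggests, data typical of the null (a balanced sequence under theta0 = 0.5) can produce negative DI, i.e. evidence supporting the simple null, which a p-value cannot express.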

    Speaker-independent emotion recognition exploiting a psychologically-inspired binary cascade classification schema

    In this paper, a psychologically-inspired binary cascade classification schema is proposed for speech emotion recognition. Performance is enhanced because commonly confused pairs of emotions become distinguishable from one another. Extracted features are related to statistics of pitch, formants, and energy contours, as well as spectrum, cepstrum, perceptual and temporal features, autocorrelation, MPEG-7 descriptors, Fujisaki's model parameters, voice quality, jitter, and shimmer. Selected features are fed as input to a K-nearest-neighbor classifier and to support vector machines. Two kernels are tested for the latter: linear and Gaussian radial basis function. The recently proposed speaker-independent experimental protocol is tested on the Berlin emotional speech database for each gender separately. The best emotion recognition accuracy, achieved by support vector machines with a linear kernel, equals 87.7%, outperforming state-of-the-art approaches. Statistical analysis is carried out first with respect to the classifiers' error rates and then to evaluate the information expressed by the classifiers' confusion matrices. © Springer Science+Business Media, LLC 2011
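A binary cascade schema, where each stage separates two groups of emotions until a single label remains, can be sketched as a recursive tree of binary classifiers. In this sketch a nearest-centroid classifier stands in for the paper's SVMs, and the label hierarchy in the usage example is hypothetical, not the psychologically-inspired grouping used by the authors.

```python
import numpy as np

class CentroidBinary:
    # Stand-in for a per-stage binary classifier: predicts 1 if a sample is
    # closer to the class-1 centroid than to the class-0 centroid.
    def fit(self, X, y):
        self.c0 = X[y == 0].mean(axis=0)
        self.c1 = X[y == 1].mean(axis=0)
        return self

    def predict(self, X):
        d0 = np.linalg.norm(X - self.c0, axis=1)
        d1 = np.linalg.norm(X - self.c1, axis=1)
        return (d1 < d0).astype(int)

def flatten(node):
    # A tree node is either a label (str) or a (left, right) pair of nodes.
    return [node] if isinstance(node, str) else flatten(node[0]) + flatten(node[1])

def fit_cascade(X, y, tree):
    # Train one binary classifier per internal node, using only the samples
    # whose labels fall under that node, then recurse into both subtrees.
    if isinstance(tree, str):
        return tree
    left, right = tree
    ll, rl = flatten(left), flatten(right)
    mask = np.isin(y, ll + rl)
    yb = np.isin(y[mask], rl).astype(int)
    clf = CentroidBinary().fit(X[mask], yb)
    return (clf, fit_cascade(X, y, left), fit_cascade(X, y, right))

def predict_cascade(node, x):
    # Walk the cascade: each stage discards one group of candidate labels.
    while not isinstance(node, str):
        clf, left, right = node
        node = right if clf.predict(x[None, :])[0] == 1 else left
    return node
```

The benefit claimed in the abstract comes from the structure itself: emotions that a flat multi-class classifier confuses can be placed on opposite sides of an early stage, so each later stage solves an easier two-group problem.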