1,169 research outputs found

    Nuclear physics insights for new-physics searches using nuclei: Neutrinoless ββ decay and dark matter direct detection

    Experiments using nuclei to probe new physics beyond the Standard Model, such as neutrinoless ββ decay searches testing whether neutrinos are their own antiparticles, and direct detection experiments aiming to identify the nature of dark matter, require accurate nuclear physics input to optimize their discovery potential and to interpret their results correctly. This demands detailed knowledge of the nuclear structure relevant to these processes. For instance, neutrinoless ββ decay nuclear matrix elements are very sensitive to the nuclear correlations in the initial and final nuclei, and the spin-dependent nuclear structure factors of dark matter scattering depend on the subtle distribution of the nuclear spin among all nucleons. In addition, nucleons are composite and strongly interacting, which implies that many-nucleon processes are necessary for a correct description of nuclei and their interactions. It is thus crucial that theoretical studies and experimental analyses consider β decays and dark matter interactions with a coupling to two nucleons, called two-nucleon currents.
    Comment: 11 pages, 5 figures, invited parallel talk at the XIIth Quark Confinement & the Hadron Spectrum conference, Thessaloniki, Greece, 201

    Gamow-Teller and double-beta decays of heavy nuclei within an effective theory

    We study β decays within an effective theory that treats nuclei as a spherical collective core with an even number of neutrons and protons that can couple to an additional neutron and/or proton. First we explore Gamow-Teller β decays of parent odd-odd nuclei into low-lying ground-, one-, and two-phonon states of the daughter even-even system. The low-energy constants of the effective theory are adjusted to data on β decays to ground states or to Gamow-Teller strengths. The corresponding theoretical uncertainty is estimated based on the power counting of the effective theory. For a variety of medium-mass and heavy isotopes the theoretical matrix elements are in good agreement with experiment within the theoretical uncertainties. We then study the two-neutrino double-β decay into ground and excited states. The results are remarkably consistent with experiment within theoretical uncertainties, without the need to adjust any low-energy constants.
    Comment: 17 pages, 5 figures, results extended to two-neutrino double beta-minus decays and two-neutrino double electron-capture decays to excited 2+ states, matches published version

    Estimating ability from item isomorphs: effects on the reliability of the test scores

    This article focuses on errors in the item parameter estimates and their effect on the reliability of the scores of adaptive tests. In particular, we consider the errors introduced by the process of creating item isomorphs for the item bank. The research is conducted by means of a Monte Carlo simulation that includes several conditions regarding the size of the errors and the number of isomorphs in the item pool of an adaptive test. The results show that the errors due to the isomorphing process do not compromise the psychometric status of the test scores, except in the condition with the highest errors; isomorph creation can thus be a viable alternative, provided the error introduced by the process is estimated beforehand.
    This research was partially funded by DGICYT project PB 97-004
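    A minimal sketch of this kind of Monte Carlo design, assuming a 2PL item response model. The paper's exact model, sample sizes, and error conditions are not given here, so all names and values below are illustrative assumptions: responses are generated under the true item parameters, while scoring uses difficulties perturbed by an isomorph-style error of varying size, and a squared correlation with the true ability serves as a reliability proxy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative test and sample sizes (assumptions, not the paper's design)
n_persons, n_items = 1000, 30
theta = rng.normal(size=n_persons)              # true abilities
a_true = rng.uniform(0.8, 2.0, size=n_items)    # discriminations
b_true = rng.normal(size=n_items)               # difficulties

def p_correct(th, a, b):
    """2PL item response function."""
    return 1.0 / (1.0 + np.exp(-a * (th - b)))

# Responses are generated under the TRUE item parameters
x = (rng.uniform(size=(n_persons, n_items))
     < p_correct(theta[:, None], a_true, b_true)).astype(float)

grid = np.linspace(-4.0, 4.0, 161)

def estimate_theta(resp, a, b):
    """Grid-search ML ability estimates under (possibly perturbed) parameters."""
    p = p_correct(grid[:, None], a, b)                          # (grid, items)
    ll = resp @ np.log(p).T + (1.0 - resp) @ np.log(1.0 - p).T  # (persons, grid)
    return grid[np.argmax(ll, axis=1)]

# Scoring uses difficulties perturbed by an isomorph-induced error of size sd
rel_by_sd = {}
for sd in (0.0, 0.25, 0.5):
    b_used = b_true + rng.normal(scale=sd, size=n_items)
    theta_hat = estimate_theta(x, a_true, b_used)
    rel_by_sd[sd] = np.corrcoef(theta, theta_hat)[0, 1] ** 2  # reliability proxy
```

    Comparing `rel_by_sd` across error sizes shows how much of the score precision survives the parameter imprecision; the `sd = 0.0` condition gives the no-isomorph-error baseline.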

    Logistic response models with item interactions

    Items that are clustered according to shared content may violate the principle of conditional independence commonly assumed in item response theory. This paper investigates the capabilities of a logistic item response model in relation to locally dependent item responses. The model includes main-effect and interaction parameters that are computed as linear functions of the latent trait. The paper explains the interpretation of the parameters, the maximum likelihood estimation algorithm, the information matrix, and some results concerning parameter identifiability. The problem of over-fitting the data is addressed in a simulation study, and two real data examples are described to illustrate the approach, one from the context of a sample survey and the other from ability testing using testlets.
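    The model structure described above can be sketched for a single locally dependent item pair. This is a hypothetical sketch, not the paper's implementation: the joint probability of the two binary responses is log-linear in a main-effect parameter per item plus a pairwise interaction parameter, with each parameter a linear function (intercept plus slope) of the latent trait θ. An interaction term of zero recovers conditional independence.

```python
import numpy as np
from itertools import product

def pair_probs(theta, beta_i, beta_j, gamma_ij):
    """Joint probabilities of a locally dependent binary item pair.

    Each parameter is an (intercept, slope) pair evaluated linearly
    at the latent trait: value(theta) = b0 + b1 * theta.
    beta_i, beta_j: main-effect parameters of the two items
    gamma_ij: interaction parameter capturing the local dependence
    """
    def lin(p):
        return p[0] + p[1] * theta

    # Unnormalized log-probability of each of the four response patterns
    logits = {}
    for xi, xj in product((0, 1), repeat=2):
        logits[(xi, xj)] = (xi * lin(beta_i) + xj * lin(beta_j)
                            + xi * xj * lin(gamma_ij))
    z = sum(np.exp(v) for v in logits.values())   # normalizing constant
    return {k: np.exp(v) / z for k, v in logits.items()}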

    Bayesian dimensionality assessment for the multidimensional nominal response model

    This article introduces Bayesian estimation and evaluation procedures for the multidimensional nominal response model. The utility of this model is to perform a nominal factor analysis of items that consist of a finite number of unordered response categories. The key aspect of the model, in comparison with the traditional factorial model, is that there is a slope for each response category on the latent dimensions, instead of slopes associated with the items. The extended parameterization of the multidimensional nominal response model requires large samples for estimation. When the sample is of moderate or small size, some of these parameters may be weakly empirically identifiable and the estimation algorithm may run into difficulties. We propose a Bayesian MCMC inferential algorithm to estimate the parameters and the number of dimensions underlying the multidimensional nominal response model. Two Bayesian approaches to model evaluation were compared: discrepancy statistics (DIC, WAIC, and LOO), which provide an indication of the relative merit of different models, and the standardized generalized discrepancy measure, which requires resampling data and is computationally more involved. A simulation study was conducted to compare these two approaches, and the results show that the standardized generalized discrepancy measure can be used to reliably estimate the dimensionality of the model, whereas the discrepancy statistics are questionable. The paper also includes an example with real data in the context of learning styles, in which the model is used to conduct an exploratory factor analysis of nominal data.
    This research was partially supported by grants PSI2012-31958 and PSI2015-66366-P from the Ministerio de Economía y Competitividad (Spain).
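    The key aspect of the model, category-specific slopes on the latent dimensions, can be sketched as a softmax over categories. This is a minimal illustrative sketch, assuming a standard multinomial-logit form of the nominal response model; the function name, shapes, and parameter values are this sketch's assumptions, not the paper's code.

```python
import numpy as np

def mnrm_probs(theta, slopes, intercepts):
    """Category probabilities for one item under a multidimensional
    nominal response model (multinomial-logit form, an assumption here).

    Each unordered category k has its OWN slope vector a_k on the latent
    dimensions, rather than one slope per item:
        P(X = k | theta) = softmax_k(a_k . theta + c_k)

    theta: (D,) latent trait vector
    slopes: (K, D) category-by-dimension slope matrix
    intercepts: (K,) category intercepts
    """
    z = slopes @ theta + intercepts
    z -= z.max()                 # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()
```

    With all slopes equal to zero and equal intercepts the categories become equiprobable, which makes clear that it is the category-specific slope vectors that let the latent dimensions separate unordered response options.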