
    First- and Second-Order Hypothesis Testing for Mixed Memoryless Sources with General Mixture

    The first- and second-order optimum achievable exponents in the simple hypothesis testing problem are investigated. The optimum achievable exponent for the type II error probability, under the constraint that the type I error probability is asymptotically allowed up to epsilon, is called the epsilon-optimum exponent. In this paper, we first give the second-order epsilon-optimum exponent in the case where the null hypothesis is a mixed memoryless source and the alternative hypothesis is a stationary memoryless source. We next generalize this setting to the case where the alternative hypothesis is also a mixed memoryless source, and address the first-order epsilon-optimum exponent in this setting. In addition, we discuss an extension of our results to more general settings, such as hypothesis testing with a mixed general source, and the relationship with the general compound hypothesis testing problem.
    Comment: 23 pages
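For stationary memoryless sources, the classical first-order benchmark behind results like these is Stein's lemma: with the type I error probability held below epsilon, the best achievable type II error exponent is the Kullback-Leibler divergence D(P||Q) between the two sources. A minimal sketch (the binary distributions P and Q are hypothetical, not from the paper):

```python
import math

def kl_divergence(p, q):
    """KL divergence D(P || Q) in nats for discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical null source P and alternative source Q on a binary alphabet.
P = [0.5, 0.5]
Q = [0.8, 0.2]

# By Stein's lemma, the first-order optimum type II error exponent
# (for any fixed type I error level epsilon in (0, 1)) is D(P || Q).
exponent = kl_divergence(P, Q)
```

The paper's contribution is what happens when P (and later Q) is a mixture of memoryless sources, where this single-divergence picture no longer applies directly.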

    Towards a formalism for mapping the spacetimes of massive compact objects: Bumpy black holes and their orbits

    Observations have established that extremely compact, massive objects are common in the universe. It is generally accepted that these objects are black holes. As observations improve, it becomes possible to test this hypothesis in ever greater detail. In particular, it is or will be possible to measure the properties of orbits deep in the strong field of a black hole candidate (using x-ray timing or gravitational waves) and to test whether they have the characteristics of black hole orbits in general relativity. Such measurements can be used to map the spacetime of a massive compact object, testing whether the object's multipoles satisfy the strict constraints of the black hole hypothesis. Such a test requires that we compare against objects with the "wrong" multipole structure. In this paper, we present tools for constructing bumpy black holes: objects that are almost black holes, but that have some multipoles with the wrong value. The spacetimes we present are good deep into the strong field of the object -- we do not use a large-r expansion, except to make contact with weak-field intuition. Also, our spacetimes reduce to the black hole spacetimes of general relativity when the "bumpiness" is set to zero. We propose bumpy black holes as the foundation for a null experiment: if black hole candidates are the black holes of general relativity, their bumpiness should be zero. By comparing orbits in a bumpy spacetime with those of an astrophysical source, observations should be able to test this hypothesis, stringently testing whether the candidates are the black holes of general relativity. (Abridged)
    Comment: 16 pages + 2 appendices + 3 figures. Submitted to PR

    Message framing and source credibility in product advertisements with high consumer involvement

    The general objective of this research was to analyze the appropriate use of message framing and source credibility in advertisements for products with high consumer involvement. The experimental design used in this study was a lab experiment. The factorial designs were 2x1 between subjects for testing hypotheses 1 and 2, and 2x2 between subjects for testing hypothesis 3. Hypotheses 1 and 2 were tested using one-way ANOVA, and hypothesis 3 using n-way ANOVA with main effects and an interaction effect. First, the results showed significant differences in performance, psychological, financial, and social risk perception between advertisements using positive and negative message framing. Consumers perceived lower risk in advertisements with positive message framing; thus, advertisements for products with high consumer involvement will be more effective when using positive message framing. Second, the results also showed significant differences in risk perception between advertisements using high and low source credibility. Consumers perceived lower risk in advertisements with high source credibility; therefore, advertisements for products with high consumer involvement will be more effective when using high source credibility. Finally, the hypothesis testing showed no significant difference in risk perception across the combinations of positive and negative message framing with high and low source credibility.
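The one-way ANOVA used for hypotheses 1 and 2 can be sketched with SciPy; the group sizes and risk-perception scores below are hypothetical stand-ins, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical risk-perception scores (e.g., on a 1-7 scale) from two
# between-subjects framing conditions, 40 participants each.
positive_framing = rng.normal(3.0, 1.0, size=40)  # lower perceived risk
negative_framing = rng.normal(4.2, 1.0, size=40)

# One-way ANOVA: do mean risk perceptions differ across framing groups?
f_stat, p_value = stats.f_oneway(positive_framing, negative_framing)
```

With two groups, this F-test is equivalent to a two-sample t-test; the 2x2 design of hypothesis 3 would instead use a two-way ANOVA with an interaction term.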

    US Monetary Policy Rules: the Case for Asymmetric Preferences

    This paper investigates the empirical relevance of a new framework for monetary policy analysis in which decision makers are allowed to weight positive and negative deviations of inflation and output from their target values differently. The specification of the central bank objective is general enough to nest the symmetric quadratic form as a special case, thereby making the derived policy rule potentially nonlinear. This forms the basis of our identification strategy, which is used to develop a formal hypothesis test for the presence of asymmetric preferences. Reduced-form estimates of postwar US policy rules indicate that the preferences of the Fed have been highly asymmetric with respect to both inflation and output gaps, with the latter being the dominant source of nonlinearity after 1983.
    Keywords: nonlinear optimal monetary policy rules, asymmetric loss function, linearized central bank Euler equation

    Are The Poverty Effects of Trade Policies Invisible?

    With the advent of the WTO’s Doha Development Agenda, as well as the Millennium Development Goals aiming to reduce poverty by 50 percent by 2015, the poverty impacts of trade reforms have attracted increasing attention. This has been particularly true of agricultural trade reform, due to the importance of food in the diets of the poor, relatively higher protection in agriculture, and the heavy concentration of global poverty in rural areas where agriculture is the main source of income. Yet some in this debate have argued that, given the extreme volatility in agricultural commodity markets, the additional price and poverty impacts due to trade liberalization might well be undetectable. This paper formally tests this “invisibility hypothesis” via stochastic simulation of a computable general equilibrium framework. The hypothesis test is based on the comparison of two sets of price and poverty distributions. The first originates solely from the inherent variability in global staple grains markets, while the second combines the effects of this inherent variability and trade reform. Results indicate that the short-run impacts of trade liberalization on poverty are not distinguishable from market volatility in the majority of the fifteen focus countries, suggesting that the poverty impacts of agricultural trade liberalization may indeed be invisible.
    Keywords: trade policy reform, agricultural trade, computable general equilibrium, developing countries, poverty headcount, volatility, stochastic simulation, non-parametric hypothesis testing, Financial Economics, Risk and Uncertainty; JEL: C68, F17, I32, Q17, R20
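The comparison of the two simulated poverty distributions can be sketched with a standard non-parametric two-sample test. The draws below are hypothetical stand-ins for the paper's stochastic CGE output, and the Kolmogorov-Smirnov test is one generic choice of non-parametric test, not necessarily the one the authors used:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical poverty-headcount draws (percent): baseline market
# volatility only vs. volatility plus trade reform, where the reform
# shifts the mean by much less than the noise around it.
baseline = rng.normal(30.0, 3.0, size=500)
with_reform = rng.normal(29.8, 3.0, size=500)

# Two-sample KS test: can the two distributions be told apart?
ks_stat, p_value = stats.ks_2samp(baseline, with_reform)
```

A large p-value here would illustrate the "invisibility" result: the reform's effect is swamped by inherent market volatility.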

    Diverse correlation structures in gene expression data and their utility in improving statistical inference

    It is well known that correlations in microarray data represent a serious nuisance that deteriorates the performance of gene selection procedures. This paper is intended to demonstrate that the correlation structure of microarray data provides a rich source of useful information. We discuss distinct correlation substructures revealed in microarray gene expression data by an appropriate ordering of genes. These substructures include stochastic proportionality of expression signals in a large percentage of all gene pairs, negative correlations hidden in ordered gene triples, and a long sequence of weakly dependent random variables associated with ordered pairs of genes. The reported striking regularities are of general biological interest, and they also have far-reaching implications for the theory and practice of statistical methods of microarray data analysis. We illustrate the latter point with a method for testing differential expression of nonoverlapping gene pairs. While designed for testing a different null hypothesis, this method provides an order of magnitude more accurate control of the type I error rate compared to conventional methods of individual gene expression profiling. In addition, this method is robust to technical noise. Quantitative inference of the correlation structure has the potential to extend the analysis of microarray data far beyond currently practiced methods.
    Comment: Published in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org), http://dx.doi.org/10.1214/07-AOAS120
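The kind of pervasive positive correlation between gene pairs described above can be illustrated with a toy expression matrix in which a shared array-level factor induces proportionality across genes; all numbers here are hypothetical, not microarray data:

```python
import numpy as np

rng = np.random.default_rng(2)

n_genes, n_arrays = 100, 30

# Hypothetical expression matrix: a common array-level effect (e.g.,
# normalization or sample-level variation) plus gene-specific noise.
array_effect = rng.normal(0.0, 1.0, size=n_arrays)
expr = array_effect + rng.normal(0.0, 0.5, size=(n_genes, n_arrays))

# Gene-by-gene correlation matrix and the fraction of positively
# correlated gene pairs (upper triangle, excluding the diagonal).
corr = np.corrcoef(expr)
upper = corr[np.triu_indices(n_genes, k=1)]
frac_positive = float(np.mean(upper > 0))
```

In this toy model, nearly all gene pairs come out positively correlated, mimicking the stochastic proportionality of expression signals that the paper exploits.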

    Evaluation of the London Measure of Unplanned Pregnancy in a United States population of women

    Copyright @ 2012 Morof et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
    Objective: To evaluate the reliability and validity of the London Measure of Unplanned Pregnancy (a U.K.-developed measure of pregnancy intention), in English and in Spanish translation, in a U.S. population of women. Methods: A psychometric evaluation study of the London Measure of Unplanned Pregnancy (LMUP), a six-item, self-completion paper measure, was conducted with 346 women aged 15–45 who presented to San Francisco General Hospital for termination of pregnancy or antenatal care. The two language versions were analyzed separately. Reliability (internal consistency) was assessed using Cronbach’s alpha and item-total correlations. Test-retest reliability (stability) was assessed using weighted Kappa. Construct validity was assessed using principal components analysis and hypothesis testing. Results: Psychometric testing demonstrated that the LMUP was reliable and valid in both U.S. English (alpha = 0.78, all item-total correlations >0.20, weighted Kappa = 0.72, unidimensionality confirmed, hypotheses met) and Spanish translation (alpha = 0.84, all item-total correlations >0.20, weighted Kappa = 0.77, unidimensionality confirmed, hypotheses met). Conclusion: The LMUP was reliable and valid in U.S. English and in Spanish translation, and may therefore now be used with U.S. women. The study was funded by an anonymous donation.
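Cronbach's alpha, the internal-consistency statistic reported above, can be computed directly from an item-score matrix. A minimal sketch with simulated six-item responses driven by one latent factor (the data are hypothetical, not the LMUP sample):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

rng = np.random.default_rng(3)

# Hypothetical six-item responses from 200 respondents, all loading on
# a single latent factor (consistent with a unidimensional scale).
latent = rng.normal(0.0, 1.0, size=(200, 1))
scores = latent + rng.normal(0.0, 0.8, size=(200, 6))

alpha = cronbach_alpha(scores)
```

Because all six items share the same latent factor, alpha comes out high, analogous to the 0.78 and 0.84 values reported for the two language versions.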