
    Camera for QUasars in EArly uNiverse (CQUEAN)

    We describe the overall characteristics and performance of an optical CCD camera system, the Camera for QUasars in EArly uNiverse (CQUEAN), which has been in use at the 2.1 m Otto Struve Telescope of McDonald Observatory since August 2010. CQUEAN was developed for follow-up imaging observations of red sources such as high-redshift quasar candidates (z >= 5), gamma-ray bursts, brown dwarfs, and young stellar objects. For efficient observation of these red objects, CQUEAN has a science camera with a deep-depletion CCD chip, which attains a higher quantum efficiency at 0.7-1.1 um than conventional CCD chips. The camera was developed on a short time scale (~ one year) and has been working reliably. By employing an auto-guiding system and a focal reducer to enhance the field of view at the classical Cassegrain focus, we achieve stable guiding in 20-minute exposures, imaging quality with FWHM >= 0.6" over the whole field (4.8' x 4.8'), and a limiting magnitude of z = 23.4 AB mag at 5-sigma with one hour of total integration time.
    Comment: Accepted for publication in PASP. 26 pages including 5 tables and 24 figures
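    The quoted depth scales with integration time in a predictable way. As a minimal sketch (assuming background-limited imaging, where S/N grows as the square root of exposure time; the function name and second exposure time are illustrative, with the 23.4 AB mag in 1 hr reference point taken from the abstract):

        # Scale a 5-sigma limiting magnitude with integration time, assuming
        # background-limited imaging: S/N ~ sqrt(t), so the limiting flux goes
        # as 1/sqrt(t) and the limiting magnitude deepens by 1.25*log10(t/t_ref).
        import math

        def limiting_mag(m_ref: float, t_ref_s: float, t_s: float) -> float:
            return m_ref + 1.25 * math.log10(t_s / t_ref_s)

        # CQUEAN reference point from the abstract: z = 23.4 AB mag (5-sigma) in 1 hr.
        print(limiting_mag(23.4, 3600.0, 2 * 3600.0))  # ~23.78 mag in a 2 hr stack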

    Concomitant patterns of tuberculosis and sarcoidosis


    Likelihood inferences in animal breeding under selection: a missing-data theory view point

    Data available in animal breeding are often subject to selection, and such data can be viewed as data with missing values. In this paper, inferences based on likelihoods derived from statistical models for missing data are applied to production records subject to selection. Conditions for ignoring the selection process are discussed.
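    The ignorability condition the abstract alludes to can be sketched in standard missing-data notation (a hedged reconstruction following Rubin-style theory, not the paper's own derivation). Write the complete data as $Y = (Y_{obs}, Y_{mis})$, let $M$ be the selection (missingness) indicator, with parameter $\theta$ for the data model and $\phi$ for the selection process:

        $$ f(Y_{obs}, M \mid \theta, \phi) = \int f(Y_{obs}, Y_{mis} \mid \theta)\, f(M \mid Y_{obs}, Y_{mis}, \phi)\, dY_{mis}. $$

    If selection depends only on observed values, $f(M \mid Y_{obs}, Y_{mis}, \phi) = f(M \mid Y_{obs}, \phi)$ (missing at random), and $\theta$ and $\phi$ are distinct, the selection factor pulls out of the integral:

        $$ f(Y_{obs}, M \mid \theta, \phi) = f(M \mid Y_{obs}, \phi)\, f(Y_{obs} \mid \theta), $$

    so likelihood inference about $\theta$ can be based on $f(Y_{obs} \mid \theta)$ alone, i.e., the selection process is ignorable.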

    Stochastic theory of log-periodic patterns

    We introduce an analytical model based on birth-death clustering processes to help explain the empirical log-periodic corrections to power-law scaling and the finite-time singularity reported in several domains, including rupture, earthquakes, world population, and financial systems. In our stochastic theory, log-periodicities are a consequence of transient clusters induced by an entropy-like term that may reflect the amount of cooperative information carried by the state of a large system of different species. The clustering completion rates for the system are assumed to be given by a simple linear death process. The singularity at t_{o} is derived in terms of birth-death clustering coefficients.
    Comment: LaTeX, 1 ps figure - To appear in J. Phys. A: Math. Gen.
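    The abstract does not reproduce the fitted functional form; for context, the log-periodic correction to power-law scaling near a finite-time singularity at $t_o$ is commonly parametrized (a standard empirical form, not taken from this paper) as

        $$ F(t) \approx A + B\,(t_o - t)^{m}\left[1 + C \cos\big(\omega \ln(t_o - t) + \phi\big)\right], $$

    where the oscillatory factor is equivalent to a power law with complex exponent $m + i\omega$, so the pattern repeats at geometrically spaced times approaching $t_o$.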

    Mean Field and the Single Homopolymer

    We develop a statistical model for a confined chain molecule based on a monomer grand canonical ensemble. The molecule is subject to an external chemical potential, a backbone interaction, and an attractive interaction between all monomers. Using a Gaussian variable formalism and a mean-field approximation, we analytically derive a minimum principle from which we can obtain relevant physical quantities, such as the monomer density, and we explore the limit in which the chain is subject to tight confinement. Through a numerical implementation of the minimization process we show how to obtain density profiles in three dimensions for arbitrary potentials, and we test the limits of validity of the theory.
    Comment: 15 pages, 7 figures
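    The numerical minimization step can be illustrated with a toy one-dimensional analogue (a sketch only: the free-energy functional below and the parameters mu, v, kappa, and Vext are generic placeholders, not the functional derived in the paper). The density is parametrized as rho = exp(phi) so it stays positive, and the discretized functional is minimized numerically:

        import numpy as np
        from scipy.optimize import minimize

        n, h = 64, 0.2                         # grid points and spacing
        x = np.arange(n) * h
        mu, v, kappa = 1.0, 0.5, 0.1           # toy chemical potential, repulsion, gradient penalty
        Vext = 0.1 * (x - x.mean()) ** 2       # toy confining external potential

        def free_energy(phi):
            rho = np.exp(phi)                  # positive density via exponential parametrization
            grad = np.gradient(rho, h)
            f = (rho * (np.log(rho) - 1.0) - mu * rho + Vext * rho
                 + 0.5 * v * rho ** 2 + 0.5 * kappa * grad ** 2)
            return np.sum(f) * h               # discretized integral of the free-energy density

        res = minimize(free_energy, np.zeros(n), method="L-BFGS-B")
        rho_star = np.exp(res.x)
        print(rho_star.max(), rho_star.sum() * h)  # peak density and total monomer number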

    Empirical Bayes estimation of parameters for n polygenic binary traits

    The conditional probability of an observation in subpopulation i (a combination of levels of explanatory variables) falling into one of 2^n mutually exclusive and exhaustive categories is modelled using a normal integral in n dimensions. The mean of subpopulation i is written as a linear combination of an unknown vector θ, which can include "fixed" effects (e.g., nuisance environmental effects, genetic group effects) and "random" effects such as additive genetic value or producing ability. Conditionally on θ, the normal integral depends on an unknown matrix R comprising residual correlations on a multivariate standard normal conceptual scale. The random variables in θ have dispersion matrix G ⊗ A, where usually A is a known matrix of additive genetic relationships and G is a matrix of unknown genetic variances and covariances. It is assumed a priori that θ follows a multivariate normal distribution f(θ | G), which does not depend on R, and the likelihood function is taken as product multinomial. The point estimator of θ is the mode of the posterior distribution f(θ | Y, G = G*, R = R*), where Y is the data and G* and R* are the components of the mode of the marginal posterior distribution f(G, R | Y) using "flat" priors for G and R. The matrices G* and R* correspond to the marginal maximum likelihood estimators of the corresponding matrices, so the point estimator of θ is of the empirical Bayes type. Overall, computations involve solving three non-linear systems in θ, G, and R. G* can be computed with an expectation-maximization (EM) type algorithm; an estimator of R* is suggested, and this is related to results published elsewhere on maximum likelihood estimation in contingency tables. Problems discussed include non-linearity, the size of the system to be solved, the rate of convergence, the approximations made, and the possible use of informative priors for the dispersion parameters.
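    The n-dimensional normal integral in this model is an orthant probability, so the probability of any one of the 2^n response categories can be evaluated directly. A minimal sketch (the zero-threshold convention, the liability means eta, and the example numbers are illustrative assumptions, not values from the paper):

        import numpy as np
        from scipy.stats import multivariate_normal

        def category_prob(y, eta, R):
            """P(binary outcome vector y) under a multivariate threshold model.

            Liabilities l ~ N(eta, R) on the underlying normal scale; trait j is
            observed as 1 when l_j exceeds the threshold 0.
            """
            s = 1 - 2 * np.asarray(y)              # +1 where y_j = 0, -1 where y_j = 1
            b = -s * np.asarray(eta, dtype=float)  # upper limits after flipping signs
            cov = np.outer(s, s) * np.asarray(R)   # residual correlations with flipped signs
            return multivariate_normal(mean=np.zeros(len(y)), cov=cov).cdf(b)

        # Two binary traits: liability means (0.3, -0.1), residual correlation 0.4.
        R = np.array([[1.0, 0.4], [0.4, 1.0]])
        print(category_prob([1, 0], eta=[0.3, -0.1], R=R))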