
    Instrumental and Analytic Methods for Bolometric Polarimetry

    We discuss instrumental and analytic methods that have been developed for the first generation of bolometric cosmic microwave background (CMB) polarimeters. The design, characterization, and analysis of data obtained using Polarization Sensitive Bolometers (PSBs) are described in detail. This is followed by a brief study of the effect of various polarization modulation techniques on the recovery of sky polarization from scanning polarimeter data. Having been successfully implemented on the sub-orbital Boomerang experiment, PSBs are currently operational in two terrestrial CMB polarization experiments (QUaD and the Robinson Telescope). We investigate two approaches to the analysis of data from these experiments, using realistic simulations of time-ordered data to illustrate the impact of instrumental effects on the fidelity of the recovered polarization signal. We find that the analysis of difference time streams takes full advantage of the high degree of common-mode rejection afforded by the PSB design. In addition to the observational efforts currently underway, this discussion is directly applicable to the PSBs that constitute the polarized capability of the Planck HFI instrument. (Comment: 23 pages, 11 figures; for submission to A&A.)

    Iterative destriping and photometric calibration for Planck-HFI, polarized, multi-detector map-making

    We present an iterative scheme designed to recover calibrated I, Q, and U maps from Planck-HFI data using the orbital dipole due to the satellite motion with respect to the Solar System frame. It combines a map reconstruction, based on a destriping technique, with an absolute calibration algorithm. We evaluate systematic and statistical uncertainties incurred during both these steps with the help of realistic, Planck-like simulations containing CMB, foreground components and instrumental noise, and assess the accuracy of the sky map reconstruction by considering the maps of the residuals and their spectra. In particular, we discuss destriping residuals for polarization sensitive detectors similar to those of Planck-HFI under different noise hypotheses and show that these residuals are negligible (for intensity maps) or smaller than the white noise level (for Q and U Stokes maps), for l > 50. We also demonstrate that the combined level of residuals of this scheme remains comparable to that of the destriping-only case, except at very low l where residuals from the calibration appear. For all the considered noise hypotheses, the relative calibration precision is of the order of a few times 10^-4, with a systematic bias of the same order of magnitude. (Comment: 18 pages, 21 figures; matches published version.)
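    The alternating structure described above (a destriping step followed by an absolute-calibration step against a known template) can be sketched in a deliberately simplified, single-detector toy. Nothing here reproduces the Planck-HFI pipeline: the "dipole" is a fast-varying sine template chosen so the toy converges in a few iterations, the 1/f noise is modelled as per-chunk offsets, and every amplitude is invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_chunk, chunk_len = 40, 250
    n = n_chunk * chunk_len
    t = np.arange(n)

    # Fast-varying stand-in for the orbital-dipole calibrator (toy numbers).
    dipole = 3.0 * np.sin(2 * np.pi * 20 * t / n)
    sky = 0.5 * np.sin(2 * np.pi * 7 * t / n)      # fixed toy sky signal
    true_gain = 1.07
    true_offsets = rng.normal(0.0, 1.0, n_chunk)   # 1/f noise as chunk offsets

    tod = (true_gain * (sky + dipole)
           + np.repeat(true_offsets, chunk_len)
           + rng.normal(0.0, 0.05, n))

    gain = 1.0
    for _ in range(30):
        # Destriping step: one offset per chunk of the dipole-subtracted data.
        resid = tod - gain * dipole
        offsets = resid.reshape(n_chunk, chunk_len).mean(axis=1)
        cleaned = tod - np.repeat(offsets, chunk_len)
        # Calibration step: least-squares gain of the cleaned data on the template.
        gain = np.dot(cleaned, dipole) / np.dot(dipole, dipole)
    ```

    The two steps are partially degenerate (chunk offsets can absorb part of a slowly varying template), which is why the toy iterates rather than solving each step once; the abstract's low-l calibration residuals reflect the same degeneracy.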

    Map-making in small field modulated CMB polarisation experiments: approximating the maximum-likelihood method

    Map-making presents a significant computational challenge to the next generation of kilopixel CMB polarisation experiments. Years' worth of time ordered data (TOD) from thousands of detectors will need to be compressed into maps of the T, Q and U Stokes parameters. Fundamental to the science goal of these experiments, the observation of B-modes, is the ability to control noise and systematics. In this paper, we consider an alternative to the maximum-likelihood method, called destriping, where the noise is modelled as a set of discrete offset functions and then subtracted from the time-stream. We compare our destriping code (Descart: the DEStriping CARTographer) to a full maximum-likelihood map-maker, applying them to 200 Monte-Carlo simulations of time-ordered data from a ground-based, partial-sky polarisation modulation experiment. In these simulations, the noise is dominated by either detector or atmospheric 1/f noise. Using prior information on the power spectrum of this noise, we produce destriped maps of T, Q and U which are negligibly different from optimal. The method does not filter the signal or bias the E or B-mode power spectra. Depending on the length of the destriping baseline, the method delivers between 5 and 22 times improvement in computation time over the maximum-likelihood algorithm. We find that, for the specific case of single-detector maps, it is essential to destripe the atmospheric 1/f in order to detect B-modes, even though the Q and U signals are modulated by a half-wave plate spinning at 5 Hz. (Comment: 18 pages, 17 figures; MNRAS accepted. v2: content added (including Table 2), typos corrected.)
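    The destriping model summarised above (TOD = pointing x map + baseline offsets + white noise) fits in a few lines of NumPy. This is an illustrative toy, not the Descart code: the scan pattern is random pixel hits, and the offsets are solved from the standard destriping normal equations with the binned-map component projected out. All sizes and amplitudes are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_pix, n_base, base_len = 50, 20, 100
    n = n_base * base_len

    pix = rng.integers(0, n_pix, n)              # toy scan: random pixel hits
    m_true = rng.normal(0.0, 1.0, n_pix)         # input (toy) T map
    a_true = rng.normal(0.0, 2.0, n_base)        # 1/f noise as baseline offsets
    d = m_true[pix] + np.repeat(a_true, base_len) + rng.normal(0.0, 0.1, n)

    def bin_map(tod):
        """Simple binned map: (P^T P)^{-1} P^T tod."""
        return (np.bincount(pix, weights=tod, minlength=n_pix)
                / np.bincount(pix, minlength=n_pix))

    def Z(tod):
        """Project the binned-map component out of a time stream."""
        return tod - bin_map(tod)[pix]

    # Baseline system (F^T Z F) a = F^T Z d, with F spreading one offset per
    # baseline. It is singular (a constant offset trades off against the map
    # mean), so take the minimum-norm least-squares solution.
    F = np.repeat(np.eye(n_base), base_len, axis=0)
    ZF = np.column_stack([Z(F[:, j]) for j in range(n_base)])
    a_hat = np.linalg.lstsq(F.T @ ZF, F.T @ Z(d), rcond=None)[0]

    map_out = bin_map(d - F @ a_hat)             # destriped map
    ```

    The baseline length (`base_len`) is the tuning knob the abstract refers to: longer baselines mean a smaller offset system and faster solves, at the cost of a coarser 1/f model.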

    Selection of Ordinally Scaled Independent Variables

    Ordinal categorical variables are a common case in regression modeling. Although the case of ordinal response variables has been well investigated, less work has been done concerning ordinal predictors. This article deals with the selection of ordinally scaled independent variables in the classical linear model, where the ordinal structure is taken into account by use of a difference penalty on adjacent dummy coefficients. It is shown how the Group Lasso can be used for the selection of ordinal predictors, and an alternative blockwise Boosting procedure is proposed. Emphasis is placed on the application of the presented methods to the (Comprehensive) ICF Core Set for chronic widespread pain. The paper is a preprint of an article accepted for publication in the Journal of the Royal Statistical Society Series C (Applied Statistics). Please use the journal version for citation.

    Penalized Regression with Ordinal Predictors

    Ordered categorical predictors are a common case in regression modeling. In contrast to the case of ordinal response variables, ordinal predictors have been largely neglected in the literature. In this article penalized regression techniques are proposed. Based on dummy coding, two types of penalization are explicitly developed; the first imposes a difference penalty, the second is a ridge-type refitting procedure. A Bayesian motivation as well as alternative ways of derivation are provided. Simulation studies and real world data serve for illustration and to compare the approach to methods often seen in practice, namely linear regression on the group labels and pure dummy coding. The proposed regression techniques turn out to be highly competitive. On the basis of GLMs the concept is generalized to the case of non-normal outcomes by performing penalized likelihood estimation. The paper is a preprint of an article published in the International Statistical Review. Please use the journal version for citation.
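    The difference penalty on adjacent dummy coefficients that both abstracts above rely on can be sketched for a single ordinal predictor. This is an illustrative toy, not the papers' exact estimator: the penalized criterion is ||y - Xb||^2 + lambda ||Db||^2, with level 1 as the reference category (b_0 = 0) and all data invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    K, n = 6, 200                          # K ordered categories, n observations
    x = rng.integers(1, K + 1, n)          # ordinal predictor, levels 1..K
    beta_true = np.array([0.0, 0.2, 0.4, 0.5, 0.9, 1.0])  # smooth in the ordering
    y = beta_true[x - 1] + rng.normal(0.0, 0.3, n)

    # Dummy coding with level 1 as reference: column j indicates x == j + 2.
    X = (x[:, None] == np.arange(2, K + 1)[None, :]).astype(float)

    # First-order difference matrix on coefficients b_1..b_{K-1}, with b_0 := 0
    # anchoring the reference level: rows give b_1, b_2 - b_1, b_3 - b_2, ...
    D = np.eye(K - 1) - np.eye(K - 1, k=-1)

    def fit(lam):
        """Penalized least squares: minimize ||y - X b||^2 + lam * ||D b||^2."""
        return np.linalg.solve(X.T @ X + lam * D.T @ D, X.T @ y)

    b_ols = fit(0.0)     # plain dummy coding
    b_pen = fit(10.0)    # adjacent coefficients shrunk toward each other
    ```

    Unlike a plain ridge penalty on the coefficients themselves, `D` penalizes jumps between neighbouring levels, so the fit is pulled toward a monotone-looking step pattern without forcing the equal spacing that regression on the group labels would impose.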

    Foreground separation using a flexible maximum-entropy algorithm: an application to COBE data

    A flexible maximum-entropy component separation algorithm is presented that accommodates anisotropic noise, incomplete sky coverage and uncertainties in the spectral parameters of foregrounds. The capabilities of the method are determined by first applying it to simulated spherical microwave data sets emulating the COBE-DMR, COBE-DIRBE and Haslam surveys. Using these simulations, we find it very difficult to determine unambiguously the spectral parameters of the galactic components for this data set, due to their high level of noise. Nevertheless, we show that it is possible to find a robust CMB reconstruction, especially at high galactic latitudes. The method is then applied to these real data sets to obtain reconstructions of the CMB component and galactic foreground emission over the whole sky. The best reconstructions are found for values of the spectral parameters T_d = 19 K, alpha_d = 2, beta_ff = -0.19 and beta_syn = -0.8. The CMB map is recovered with an estimated statistical error of ~22 muK on an angular scale of 7 degrees outside the galactic cut, whereas the low-galactic-latitude region remains contaminated by foreground emission. (Comment: 29 pages, 25 figures; version accepted for publication in MNRAS. One subsection and 6 figures added; main results unchanged.)
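    The maximum-entropy machinery itself (the entropic prior and its regularising constant) is not reproduced here; as a stripped-down analogue, the sketch below separates components pixel by pixel with plain least squares, given assumed frequency scalings. Channel frequencies, component amplitudes and noise levels are all invented; only the spectral indices beta_syn = -0.8 and beta_ff = -0.19 are taken from the abstract.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    freqs = np.array([31.5, 53.0, 90.0])     # DMR-like channels in GHz (toy)
    beta_syn, beta_ff = -0.8, -0.19          # spectral indices quoted in the text

    # Mixing matrix: columns scale (CMB, synchrotron, free-free) into each
    # channel, amplitudes defined at a 53 GHz reference (toy units).
    A = np.stack([np.ones(3),
                  (freqs / 53.0) ** beta_syn,
                  (freqs / 53.0) ** beta_ff], axis=1)

    n_pix = 1000
    comps_true = rng.normal(0.0, 1.0, (n_pix, 3)) * np.array([1.0, 0.5, 0.3])
    data = comps_true @ A.T + rng.normal(0.0, 0.002, (n_pix, 3))

    # Pixel-by-pixel least-squares unmixing: comps = (A^T A)^{-1} A^T data.
    comps_hat = np.linalg.lstsq(A, data.T, rcond=None)[0].T
    cmb_hat = comps_hat[:, 0]
    ```

    The mixing matrix here is badly conditioned because the two foreground spectra are similar over this frequency range, which is the linear-algebra face of the abstract's finding that the spectral parameters are hard to pin down; the entropic prior in the actual method is one way of stabilising exactly this kind of inversion.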

    On the predictive content of nonlinear transformations of lagged autoregression residuals and time series observations

    This study focuses on the question of whether nonlinear transformations of lagged time series values and residuals are able to systematically improve the average forecasting performance of simple Autoregressive models. It further investigates whether a nonlinear Threshold model yields superior forecasting results. To this end, a large-scale comparison is made over almost 400 time series spanning 1996:3 to 2008:12 (production indices, price indices, unemployment rates, exchange rates, money supply) from 10 European countries. The average forecasting performance is appraised by means of Mean Group statistics and simple t-tests. Autoregressive models are extended by transformed first lags of residuals and time series values. Whereas additional transformations of lagged time series values are able to reduce the ex-ante forecast uncertainty and provide better directional accuracy, transformations of lagged residuals also lead to smaller forecast errors. Furthermore, the nonlinear Threshold model is able to capture certain types of economic behavior in the data and provides superior forecasting results to those of a simple Autoregressive model. These findings are largely independent of the economic variables considered.
    Keywords: time series modeling, forecasting comparison, nonlinear transformations, Threshold Autoregressive modeling, average forecasting performance
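    The kind of regime-switching the abstract attributes to the Threshold model can be made concrete with a self-exciting threshold AR(1), where the autoregressive slope depends on the sign of the previous value. This is an illustrative toy, not the study's specification: the threshold, the regime slopes and the sample size are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 600
    y = np.zeros(n)
    for t in range(1, n):
        # Two AR(1) regimes, switching on the sign of the previous value.
        phi = 0.8 if y[t - 1] <= 0.0 else -0.3
        y[t] = phi * y[t - 1] + rng.normal(0.0, 1.0)

    train = y[:500]

    def ar1_slope(vals, lags):
        """OLS slope of vals on lags (no intercept)."""
        return np.dot(vals, lags) / np.dot(lags, lags)

    lag, cur = train[:-1], train[1:]
    phi_lin = ar1_slope(cur, lag)            # one global AR(1) slope
    lo = lag <= 0.0
    phi_lo = ar1_slope(cur[lo], lag[lo])     # lower-regime slope
    phi_hi = ar1_slope(cur[~lo], lag[~lo])   # upper-regime slope

    # One-step-ahead forecast with each model from the last training value.
    y_hat_lin = phi_lin * train[-1]
    y_hat_tar = (phi_lo if train[-1] <= 0.0 else phi_hi) * train[-1]
    ```

    The global AR(1) slope is a weighted average of the two regime slopes, so when the true dynamics switch regimes the linear model is systematically misspecified in both, which is the mechanism behind the Threshold model's superior forecasts reported above.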