
    Optimization of Planck/LFI on-board data handling

    To assess stability against 1/f noise, the Low Frequency Instrument (LFI) on board the Planck mission will acquire data at a rate much higher than the rate allowed by its telemetry bandwidth of 35.5 kbps. The data are processed by an onboard pipeline, followed on ground by a reversing step. This paper illustrates the LFI scientific onboard processing used to fit within the allowed data rate. This is a lossy process, tuned by a set of five parameters (Naver, r1, r2, q, O) for each of the 44 LFI detectors. The paper quantifies the level of distortion introduced by the onboard processing, EpsilonQ, as a function of these parameters, and describes the method used to optimize the onboard processing chain. The tuning procedure is based on an optimization algorithm applied to unprocessed and uncompressed raw data provided by simulations, by prelaunch tests, or by LFI operating in diagnostic mode. All the needed optimization steps are performed by an automated tool, OCA2, which ends with optimized parameters and produces a set of statistical indicators, among them the compression rate Cr and EpsilonQ. For Planck/LFI the requirements are Cr = 2.4 and EpsilonQ <= 10% of the rms of the instrumental white noise. To speed up the process, an analytical model is developed that extracts most of the relevant information on EpsilonQ and Cr as a function of the signal statistics and the processing parameters; this model will also be of interest for the instrument data analysis. The method was applied during ground tests, when the instrument was operating in conditions representative of flight. Optimized parameters were obtained and the performance was verified: the required data rate of 35.5 kbps was achieved while keeping EpsilonQ at 3.8% of the white noise rms, well within the requirements. Comment: 51 pages, 13 figures, 3 tables, pdflatex, needs JINST.cls, graphicx, txfonts, rotating; Issue 1.0, 10 Nov 2009; submitted to JINST 23 Jun 2009, accepted 10 Nov 2009, published 29 Dec 2009. This is a preprint, not the final version.
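    A minimal sketch of the average-quantize-compress loop described above, in Python. The parameter names Naver, q, and O follow the abstract; the r1/r2 mixing step and the actual LFI lossless coder are omitted (zlib stands in for it), so the numbers only illustrate how EpsilonQ and Cr trade off against the quantization step, not the flight pipeline.

```python
import numpy as np
import zlib

def process(raw, naver=4, q=0.5, offset=0.0):
    """Average naver consecutive samples, then quantize with step q and offset O."""
    n = len(raw) // naver * naver
    averaged = raw[:n].reshape(-1, naver).mean(axis=1)
    codes = np.round((averaged - offset) / q).astype(np.int16)
    return averaged, codes

def metrics(raw, noise_rms, naver=4, q=0.5, offset=0.0):
    averaged, codes = process(raw, naver, q, offset)
    restored = codes.astype(float) * q + offset      # on-ground "reversing" step
    eps_q = (restored - averaged).std() / noise_rms  # distortion vs. white-noise rms
    packed = zlib.compress(codes.tobytes())          # stand-in lossless coder
    cr = codes.nbytes / len(packed)                  # compression rate
    return eps_q, cr

# Example: scan q until EpsilonQ <= 0.1 while Cr stays above the target 2.4.
rng = np.random.default_rng(0)
raw = rng.normal(0.0, 1.0, 200_000)
for q in (0.1, 0.3, 0.5):
    print(q, metrics(raw, noise_rms=1.0, q=q))
```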

    Combining Undersampled Dithered Images

    Undersampled images, such as those produced by the HST WFPC2, misrepresent fine-scale structure intrinsic to the astronomical sources being imaged. Analyzing such images is difficult on scales close to their resolution limits and may produce erroneous results. A set of "dithered" images of an astronomical source, however, generally contains more information about its structure than any single undersampled image, and may permit reconstruction of a "superimage" with Nyquist sampling. I present a tutorial on a method of image reconstruction that builds a superimage from a complex linear combination of the Fourier transforms of a set of undersampled dithered images. This method works by algebraically eliminating the high-order satellites in the periodic transforms of the aliased images. The reconstructed image is an exact representation of the data set, with no loss of resolution at the Nyquist scale. The algorithm is derived directly from the theoretical properties of aliased images and involves no arbitrary parameters, requiring only that the dithers are purely translational and constant in pixel-space over the domain of the object of interest. I show examples of its application to WFC and PC images. I argue for its use when the best recovery of point sources or morphological information at the HST diffraction limit is of interest. Comment: 22 pages, 9 EPS figures, submitted to PASP.
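    As a concrete illustration of the algebra, here is a one-dimensional toy version in Python (hypothetical, not the paper's code): two copies of a scene, each undersampled by a factor of two and dithered by one fine pixel, are combined in the Fourier domain. Solving the resulting 2x2 system per frequency eliminates the aliased satellite and recovers the Nyquist-sampled signal exactly; the paper's method is the 2-D analogue.

```python
import numpy as np

N = 64                                       # coarse-grid length
n = np.arange(2 * N)
truth = np.exp(-0.5 * ((n - 60.0) / 3.0) ** 2)   # fine-grid "scene"

g0 = truth[0::2]                             # dither s = 0 fine pixels
g1 = truth[1::2]                             # dither s = 1 fine pixel
G0, G1 = np.fft.fft(g0), np.fft.fft(g1)

v = np.arange(N)
phase = np.exp(-1j * np.pi * v / N)          # undoes the dither's phase ramp
F = np.empty(2 * N, dtype=complex)
F[:N] = G0 + phase * G1                      # low-frequency half of the truth
F[N:] = G0 - phase * G1                      # aliased satellite, now separated
recovered = np.fft.ifft(F).real

print(np.abs(recovered - truth).max())       # ~1e-15: exact reconstruction
```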

    HST Photometry and Keck Spectroscopy of the Rich Cluster MS1054-03: Morphologies, Butcher-Oemler Effect and the Color-Magnitude Relation at z=0.83

    We present a study of 81 I-selected, spectroscopically confirmed members of the X-ray cluster MS1054-03 at z=0.83. Redshifts and spectral types were determined from Keck spectroscopy. Morphologies and accurate colors were determined from a large mosaic of HST WFPC2 images in F606W and F814W. Early-type galaxies constitute only 44% of this galaxy population; 39% are spiral galaxies, and 17% are mergers. The early-type galaxies follow a tight and well-defined color-magnitude (CM) relation, with the exception of a few outliers. The observed scatter is 0.029 +- 0.005 magnitudes in restframe U-B. Most of the mergers lie close to the CM relation defined by the early-type galaxies: they are bluer by only 0.07 +- 0.02 magnitudes, and the scatter in their colors is 0.07 +- 0.04 magnitudes. Spiral galaxies in MS1054-03 exhibit a large range in their colors. The bluest spiral galaxies are 0.7 magnitudes bluer than the early-type galaxies, but the majority are within +- 0.2 magnitudes of the early-type galaxy sequence. The red colors of the mergers and of the majority of the spiral galaxies are reflected in the fairly low Butcher-Oemler blue fraction of MS1054-03: f_B=0.22 +- 0.05. The slope and scatter of the CM relation of early-type galaxies are roughly constant with redshift, confirming previous studies that were based on ground-based color measurements and very limited membership information. However, the scatter in the combined sample of early-type galaxies and mergers is twice as high as the scatter of the early-type galaxies alone. This is a direct demonstration of the "progenitor bias": high-redshift early-type galaxies seem to form a homogeneous, old population because the progenitors of the youngest present-day early-type galaxies are not included in the sample. Comment: Accepted for publication in the ApJ. Color figures can be obtained at http://astro.caltech.edu/~pgd/cm1054/
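    For readers who want the flavour of the measurement, here is a hedged sketch of how one might fit a color-magnitude relation and quote its scatter. The plain least-squares fit and synthetic data below are illustrative assumptions; the paper's actual fitting and outlier rejection may differ.

```python
import numpy as np

def cm_scatter(mag, color):
    """Fit color = slope * mag + zp, return the fit and rms scatter about it."""
    slope, zeropoint = np.polyfit(mag, color, 1)
    residuals = color - (slope * mag + zeropoint)
    return slope, zeropoint, residuals.std(ddof=2)  # ddof=2: two fitted params

# Synthetic red sequence: shallow negative slope, ~0.03 mag intrinsic scatter.
rng = np.random.default_rng(1)
mag = rng.uniform(19.0, 23.0, 40)
color = -0.05 * mag + 2.5 + rng.normal(0.0, 0.03, mag.size)
print(cm_scatter(mag, color))   # recovers slope ~ -0.05, scatter ~ 0.03 mag
```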

    Hubble Space Telescope weak lensing study of the z=0.83 cluster MS 1054-03

    We have measured the weak gravitational lensing signal of MS 1054-03, a rich and X-ray luminous cluster of galaxies at a redshift of z=0.83, using a two-colour mosaic of deep WFPC2 images. The small corrections for the size of the PSF and the high number density of background galaxies obtained in these observations result in an accurate and well-calibrated measurement of the lensing-induced distortion. The strength of the lensing signal depends on the redshift distribution of the background galaxies. We used photometric redshift distributions from the Northern and Southern Hubble Deep Fields to relate the lensing signal to the mass. The predicted variations of the signal as a function of apparent source magnitude and colour agree well with the observed lensing signal. We determine a mass of (1.2+-0.2)x10^15 Msun within an aperture of radius 1 Mpc. Under the assumption of an isothermal mass distribution, the corresponding velocity dispersion is 1311^{+83}_{-89} km/s. For the mass-to-light ratio we find 269+-37 Msun/Lsun. The errors in the mass and mass-to-light ratio include the contribution from the random intrinsic ellipticities of the source galaxies, but not the (systematic) error due to the uncertainty in the redshift distribution. However, the estimates for the mass and mass-to-light ratio of MS 1054-03 agree well with other estimators, suggesting that the mass calibration works well. The reconstruction of the projected mass surface density shows a complex mass distribution, consistent with the light distribution. The results indicate that MS 1054-03 is a young system: the timescale for relaxation is estimated to be at least 1 Gyr. Averaging the tangential shear around the cluster galaxies, we find that the velocity dispersion of an Lstar galaxy is 203+-33 km/s. Comment: 21 pages, LaTeX, with 27 figures (3 figures bitmapped), ApJ, in press. A version with non-bitmapped figures is available at http://www.astro.rug.nl/~hoekstra/papers.htm
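    As a quick consistency check on the isothermal assumption, the quoted aperture mass and velocity dispersion can be related through the projected mass of a singular isothermal sphere, M_2D(<R) = pi * sigma^2 * R / G. This is a back-of-envelope sketch using only numbers from the abstract, not the paper's full lensing analysis.

```python
import numpy as np

G = 4.301e-9      # gravitational constant in Mpc (km/s)^2 / Msun
M_ap = 1.2e15     # Msun inside the R = 1 Mpc aperture (from the abstract)
R = 1.0           # Mpc

# Invert M_2D(<R) = pi * sigma^2 * R / G for the velocity dispersion.
sigma = np.sqrt(G * M_ap / (np.pi * R))
print(f"sigma = {sigma:.0f} km/s")   # ~1282 km/s, close to the quoted 1311 km/s
```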

    The lens and source of the optical Einstein ring gravitational lens ER 0047-2808

    (Abridged) We perform a detailed analysis of the optical gravitational lens ER 0047-2808, imaged with WFPC2 on the Hubble Space Telescope. Using software specifically designed for the analysis of resolved gravitational lens systems, we focus on how the image alone can constrain the mass distribution in the lens galaxy. We find the data are of sufficient quality to strongly constrain the lens model with no a priori assumptions about the source. Using a variety of mass models, we find statistically acceptable results for elliptical isothermal-like models with an Einstein radius of 1.17''. An elliptical power-law model (Sigma ∝ R^-beta) for the surface mass density favours a slope slightly steeper than isothermal, with beta = 1.08 +/- 0.03. Other models, including a constant M/L, a pure NFW halo and (surprisingly) an isothermal sphere with external shear, are ruled out by the data. We find the galaxy light profile can only be fit with a Sersic plus point source model. The resulting total M/L_B contained within the images is (4.7 +/- 0.3) h_65. In addition, we find the luminous matter is aligned with the total mass distribution to within a few degrees. The source, reconstructed by the software, is revealed to have two bright regions, with an unresolved component inside the caustic and a resolved component straddling a fold caustic. The angular size of the entire source is approx. 0.1'' and its (unlensed) Lyman-alpha flux is 3 x 10^-17 erg/s/cm^2. Comment: 13 pages, 5 figures. Revised version accepted for publication in MNRAS.
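    To make the power-law model concrete: for a circular Sigma ∝ R^-beta profile, the mean convergence inside radius R also scales as R^-beta, and the Einstein radius is where that mean equals unity. The circular version below is an assumption for clarity; the paper fits elliptical models.

```python
# Mean convergence inside R for kappa(R) = k0 * (R/R_E)^-beta:
#   kappa_bar(<R) = [2 k0 / (2 - beta)] * (R/R_E)^-beta.
# The Einstein-ring condition kappa_bar(R_E) = 1 fixes k0 = (2 - beta)/2,
# which leaves the simple form below.
def kappa_bar(R, R_E=1.17, beta=1.08):
    return (R / R_E) ** (-beta)

# The best-fit beta = 1.08 falls off slightly faster than the isothermal
# case beta = 1 away from the 1.17'' Einstein radius:
for R in (0.5, 1.17, 2.0):
    print(R, kappa_bar(R, beta=1.08), kappa_bar(R, beta=1.0))
```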

    A novel method for subjective picture quality assessment and further studies of HDTV formats

    This paper proposes a novel method for the assessment of picture quality, called the triple stimulus continuous evaluation scale (TSCES), to allow the direct comparison of different HDTV formats. The method uses an upper picture quality anchor and a lower picture quality anchor with defined impairments. The HDTV format under test is evaluated in a subjective comparison with the upper and lower anchors. The method utilizes three displays in a particular vertical arrangement. In an initial series of tests with the novel method, the HDTV formats 1080p/50, 1080i/25, and 720p/50 were compared at various bit-rates and with seven different content types on three identical 1920 x 1080 pixel displays. It was found that the new method provided stable and consistent results. The method was tested with 1080p/50, 1080i/25, and 720p/50 HDTV images that had been coded with the H.264/AVC High profile. The result of the assessment was that the progressive HDTV formats were rated more highly by the assessors than the interlaced HDTV format. A system chain proposal is given for future media production and delivery to take advantage of this outcome. Recommendations for future research conclude the paper.
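    By way of illustration, subjective scores from a method like TSCES are typically reduced to a mean opinion score with a confidence interval per test condition. The sketch below assumes a continuous 0-100 scale and a normal-approximation interval, which are common in ITU-R BT.500-style testing but are not necessarily the paper's exact procedure.

```python
import numpy as np

def mos(scores):
    """Mean opinion score and 95% confidence interval for one condition."""
    scores = np.asarray(scores, dtype=float)
    mean = scores.mean()
    ci = 1.96 * scores.std(ddof=1) / np.sqrt(scores.size)
    return mean, ci

# Hypothetical assessor ratings for one format/bit-rate combination.
ratings_1080p50 = [78, 82, 75, 80, 85, 79, 77, 83]
print("1080p/50: %.1f +/- %.1f" % mos(ratings_1080p50))
```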

    Design of a digital compression technique for shuttle television

    The performance and hardware complexity of data compression algorithms applicable to color television signals were studied to assess the feasibility of digital compression techniques for shuttle communications applications. For return-link communications, it is shown that a non-adaptive two-dimensional DPCM technique compresses the bandwidth of field-sequential color TV to about 13 Mbps and requires less than 60 watts of secondary power. For forward-link communications, a facsimile coding technique is recommended which provides high-resolution slow-scan television on a 144 kbps channel. The onboard decoder requires about 19 watts of secondary power.
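    A minimal sketch of non-adaptive two-dimensional DPCM of the kind recommended for the return link: each pixel is predicted from its already-reconstructed left and upper neighbors, and only the quantized prediction error is transmitted. The predictor weights and step size below are assumptions for illustration, not the study's design values.

```python
import numpy as np

def dpcm2d_encode(img, step=8):
    """Return quantized residuals (what would be sent) and the decoder's view."""
    img = img.astype(float)
    h, w = img.shape
    codes = np.zeros((h, w), dtype=np.int32)
    recon = np.zeros((h, w))   # decoder-side reconstruction, tracked by encoder
    for y in range(h):
        for x in range(w):
            left = recon[y, x - 1] if x > 0 else 128.0
            up = recon[y - 1, x] if y > 0 else 128.0
            pred = 0.5 * (left + up)                      # simple 2-D predictor
            codes[y, x] = int(np.rint((img[y, x] - pred) / step))
            recon[y, x] = pred + codes[y, x] * step       # matches the decoder
    return codes, recon

rng = np.random.default_rng(2)
frame = rng.integers(0, 256, (32, 32))
codes, recon = dpcm2d_encode(frame)
print(np.abs(recon - frame).max() <= 4.0)   # error bounded by step / 2
```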