
    A Sunyaev-Zel'Dovich-Selected Sample of the Most Massive Galaxy Clusters in the 2500 deg^2 South Pole Telescope Survey

    The South Pole Telescope (SPT) is currently surveying 2500 deg^2 of the southern sky to detect massive galaxy clusters out to the epoch of their formation using the Sunyaev-Zel'dovich (SZ) effect. This paper presents a catalog of the 26 most significant SZ cluster detections in the full survey region. The catalog includes 14 clusters that were previously identified and 12 that are new discoveries. These clusters were identified in fields observed to two different noise depths: 1500 deg^2 at the final SPT survey depth of 18 μK arcmin at 150 GHz and 1000 deg^2 at a depth of 54 μK arcmin. Clusters were selected on the basis of their SZ signal-to-noise ratio (S/N) in SPT maps, a quantity that has been demonstrated to correlate tightly with cluster mass. The S/N thresholds were chosen to achieve a comparable mass selection across survey fields of both depths. Cluster redshifts were obtained with optical and infrared imaging and spectroscopy from a variety of ground- and space-based facilities. The redshifts span 0.098 ≤ z ≤ 1.132 with a median of z_(med) = 0.40. The measured SZ S/N and redshifts lead to unbiased mass estimates ranging from 9.8 × 10^(14) M_☉ h^(−1)_(70) ≤ M_(200(ρ_mean)) ≤ 3.1 × 10^(15) M_☉ h^(−1)_(70). Based on the SZ mass estimates, we find that none of the clusters are individually in significant tension with the ΛCDM cosmological model. We also test for evidence of non-Gaussianity based on the cluster sample and find that the data show no preference for non-Gaussian perturbations.

    The water-cryogen heat exchanger

    A heat exchanger using water as the heat medium converts liquid hydrogen to gaseous hydrogen at a very high rate. Possible applications include treatment of liquefied natural gas in cities to bring the gas on-line quickly, conversion of liquid oxygen and liquid nitrogen for steel mills, and high-volume inert purging.

    Variable-Length Coding with Feedback: Finite-Length Codewords and Periodic Decoding

    Theoretical analysis has long indicated that feedback improves the error exponent but not the capacity of single-user memoryless channels. Recently, Polyanskiy et al. studied the benefit of variable-length feedback with termination (VLFT) codes in the non-asymptotic regime. In that work, achievability is based on an infinite-length random code, and decoding is attempted at every symbol. The coding-rate backoff from capacity due to channel dispersion is greatly reduced with feedback, allowing capacity to be approached with surprisingly small expected latency. This paper is mainly concerned with VLFT codes based on finite-length codes and decoding attempts only at certain specified decoding times. The penalties of using a finite blocklength N and a sequence of specified decoding times are studied. This paper shows that properly scaling N with the expected latency achieves the same performance, up to constant terms, as N = ∞. The penalty introduced by periodic decoding times is linear in the interval between decoding times, and hence the performance approaches capacity as the expected latency grows, provided the interval between decoding times grows sub-linearly with the expected latency.
    Comment: 8 pages. A shortened version was submitted to ISIT 201
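
    As a toy illustration (not the paper's analysis) of why the periodic-decoding penalty is linear in the interval between decoding times: if decoding would first succeed at a geometrically distributed symbol time T, restricting attempts to multiples of an interval I rounds the latency up to I·⌈T/I⌉, and the expected latency can be computed directly.

```python
import math

def expected_latency(p, interval, t_max=20000):
    """Expected latency when decoding first succeeds at a geometric time T
    (success probability p per symbol) but decoding is attempted only at
    multiples of `interval`: latency = interval * ceil(T / interval)."""
    e = 0.0
    for t in range(1, t_max + 1):
        prob = (1.0 - p) ** (t - 1) * p          # P(T = t)
        e += prob * interval * math.ceil(t / interval)
    return e
```

    For p = 0.1 the unconstrained expected latency is 1/p = 10 symbols, while an interval of 8 gives the closed form 8/(1 − 0.9^8) ≈ 14.0: a penalty that grows roughly linearly with the interval, matching the qualitative claim in the abstract.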

    A Rate-Compatible Sphere-Packing Analysis of Feedback Coding with Limited Retransmissions

    Recent work by Polyanskiy et al. and Chen et al. has excited new interest in using feedback to approach capacity with low latency. Polyanskiy showed that feedback identifying the first symbol at which decoding is successful allows capacity to be approached with surprisingly low latency. This paper uses Chen's rate-compatible sphere-packing (RCSP) analysis to study what happens when symbols must be transmitted in packets, as in a traditional hybrid ARQ system, and limited to relatively few (six or fewer) incremental transmissions. Numerical optimizations find the series of progressively growing cumulative block lengths that enable RCSP to approach capacity with the minimum possible latency. RCSP analysis shows that five incremental transmissions are sufficient to achieve 92% of capacity with an average block length of fewer than 101 symbols on the AWGN channel at an SNR of 2.0 dB. The RCSP analysis provides a decoding error trajectory that specifies the decoding error rate for each cumulative block length. Though RCSP is an idealization, an example tail-biting convolutional code matches the RCSP decoding error trajectory and achieves 91% of capacity with an average block length of 102 symbols on the AWGN channel at an SNR of 2.0 dB. We also show how RCSP analysis can be used in cases where packets have deadlines associated with them (leading to an outage probability).
    Comment: To be published at the 2012 IEEE International Symposium on Information Theory, Cambridge, MA, USA. Updated to incorporate reviewers' comments and add a new figure.
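
    The role of the cumulative block lengths can be sketched with a small hedged computation (the schedule and residual error probabilities below are illustrative, not the paper's optimized values): the expected number of transmitted symbols charges each increment only when decoding has failed on all earlier attempts.

```python
def expected_blocklength(cum_lengths, p_undecoded):
    """Expected number of transmitted symbols in an idealized
    incremental-redundancy scheme.  cum_lengths[i] is the cumulative
    blocklength after transmission i; p_undecoded[i] is the probability
    decoding is still unsuccessful after transmission i (the final
    transmission ends the process either way)."""
    e = cum_lengths[0]
    for i in range(1, len(cum_lengths)):
        # The i-th increment is sent only if decoding has failed so far.
        e += (cum_lengths[i] - cum_lengths[i - 1]) * p_undecoded[i - 1]
    return e

# Hypothetical five-transmission schedule (numbers are made up for illustration).
ns = [40, 60, 80, 100, 120]
p = [0.3, 0.1, 0.03, 0.01]
avg = expected_blocklength(ns, p)   # 40 + 20*(0.3 + 0.1 + 0.03 + 0.01) = 48.8
```

    Minimizing this expectation over the schedule, for a fixed number of information bits, is the kind of numerical optimization the abstract describes; the achieved rate is the number of information bits divided by the expected blocklength.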

    Increasing Flash Memory Lifetime by Dynamic Voltage Allocation for Constant Mutual Information

    The read channel in Flash memory systems degrades over time because the Fowler-Nordheim tunneling used to apply charge to the floating gate eventually compromises the integrity of the cell through tunnel-oxide degradation. While degradation is commonly measured in the number of program/erase cycles experienced by a cell, the degradation is proportional to the number of electrons forced into the floating gate and later released by the erasing process. By managing the amount of charge written to the floating gate so as to maintain a constant read-channel mutual information, Flash lifetime can be extended. This paper proposes an overall system approach, based on information theory, to extend the lifetime of a Flash memory device. Using the instantaneous storage capacity of a noisy Flash memory channel, our approach allocates the read voltage of each Flash cell dynamically as the cell gradually wears out over time. A practical estimate of the instantaneous capacity is also proposed, based on soft information obtained via multiple reads of the memory cells.
    Comment: 5 pages, 5 figures.
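
    A minimal sketch of the idea, assuming a hypothetical binary-symmetric read channel whose crossover probability falls with the applied voltage and rises with wear (the wear model below is invented for illustration, not the paper's channel model): allocate the smallest voltage that keeps the mutual information at a target level.

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def crossover(v, wear):
    """Hypothetical read channel: a higher voltage v lowers the bit-flip
    probability, while accumulated wear raises it."""
    return 0.5 * math.exp(-v / (1.0 + 0.1 * wear))

def min_voltage(wear, target_info, grid):
    """Smallest voltage on the grid keeping the BSC mutual information
    I = 1 - h2(p) at or above target_info bits per cell read."""
    for v in grid:
        if 1.0 - h2(crossover(v, wear)) >= target_info:
            return v
    return None
```

    Under this toy model the allocated voltage rises as the cell wears, so charge (and hence oxide damage) is spent only as the channel actually requires it, which is the constant-mutual-information principle the abstract describes.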

    On gravity from SST, geoid from Seasat, and plate age and fracture zones in the Pacific

    A composite map produced by combining 90 passes of SST data shows good agreement with conventional GEM models. The SEASAT altimeter data were reduced and found to agree with both the SST and GEM fields. The maps are dominated (especially in the east) by a pattern of roughly east-west anomalies with a transverse wavelength of about 2000 km. Comparison with regional bathymetric data shows a remarkably close correlation with plate age. Most anomalies in the east half of the Pacific could be partly caused by regional differences in plate age. The amplitude of these geoid or gravity anomalies caused by age differences should decrease with absolute plate age, and large anomalies (approximately 3 m) over old, smooth sea floor may indicate a further, deeper source within or perhaps below the lithosphere. The plume size and ascent velocity necessary to supply deep-mantle material to the upper mantle without complete thermal equilibration were also considered. A plume emanating from a buoyant layer 100 km thick and 10,000 times less viscous than the surrounding mantle should have a diameter of about 400 km and must ascend at about 10 cm/yr to arrive still anomalously hot in the uppermost mantle.

    Feedback Communication Systems with Limitations on Incremental Redundancy

    This paper explores feedback systems using incremental redundancy (IR) with noiseless transmitter confirmation (NTC). For IR-NTC systems based on finite-length codes (with blocklength N) and decoding attempts only at certain specified decoding times, this paper presents the asymptotic expansion achieved by random coding, provides rate-compatible sphere-packing (RCSP) performance approximations, and presents simulation results for tail-biting convolutional codes. The information-theoretic analysis shows that values of N relatively close to the expected latency yield the same random-coding achievability expansion as N = ∞. However, the penalty introduced in the expansion by limiting decoding times is linear in the interval between decoding times. For binary symmetric channels, the RCSP approximation provides an efficiently computed approximation of performance that shows excellent agreement with a family of rate-compatible, tail-biting convolutional codes in the short-latency regime. For the additive white Gaussian noise channel, bounded-distance decoding simplifies the computation of the marginal RCSP approximation and produces results similar to analysis based on maximum-likelihood decoding for latencies greater than 200. The efficiency of the marginal RCSP approximation facilitates optimization of the lengths of the incremental transmissions when the number of incremental transmissions is constrained to be small or the lengths are constrained to be uniform after the first transmission. Finally, an RCSP-based decoding error trajectory is introduced that provides target error rates for the design of rate-compatible code families for use in feedback communication systems.
    Comment: 23 pages, 15 figures.

    A CLEAN-based Method for Deconvolving Interstellar Pulse Broadening from Radio Pulses

    Multipath propagation in the interstellar medium distorts radio pulses, an effect predominant for distant pulsars observed at low frequencies. Typically, broadened pulses are analyzed to determine the amount of propagation-induced pulse broadening, but with little interest in determining the undistorted pulse shapes. In this paper we develop and apply a method that recovers both the intrinsic pulse shape and the pulse broadening function that describes the scattering of an impulse. The method resembles the CLEAN algorithm used in synthesis imaging applications, although we search for the best pulse broadening function and perform a true deconvolution to recover intrinsic pulse structure. As figures of merit to optimize the deconvolution, we use the positivity and symmetry of the deconvolved result along with the mean square residual and the number of points below a given threshold. Our method makes no prior assumptions about the intrinsic pulse shape and can be used with a range of scattering functions for the interstellar medium. It can therefore be applied to a wider variety of measured pulse shapes and degrees of scattering than previous approaches. We apply the technique to both simulated data and data from Arecibo observations.
    Comment: 9 pages, 6 figures. Accepted for publication in the Astrophysical Journal.
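
    A heavily simplified sketch of such a CLEAN-style deconvolution (a one-sided exponential pulse-broadening function and a pure-Python peak-subtraction loop; the paper's PBF families and figures of merit are richer than the single negativity score used here): trial PBFs are scored by the negativity of the deconvolved residual, since a too-wide trial PBF over-subtracts and drives the residual negative.

```python
import math

def exp_pbf(tau, n):
    """One-sided exponential pulse-broadening function, unit area."""
    h = [math.exp(-t / tau) for t in range(n)]
    s = sum(h)
    return [v / s for v in h]

def convolve(x, h):
    """Linear convolution truncated to len(x) samples."""
    y = [0.0] * len(x)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            if i + j < len(y):
                y[i + j] += xi * hj
    return y

def clean(obs, h, gain=0.1, max_iter=2000, tol=1e-6):
    """CLEAN-style deconvolution: repeatedly subtract a scaled, shifted
    copy of the trial PBF at the residual peak, accumulating delta
    components that approximate the intrinsic pulse."""
    res = list(obs)
    comp = [0.0] * len(obs)
    peak0 = max(obs)
    for _ in range(max_iter):
        i = max(range(len(res)), key=lambda k: res[k])
        step = gain * res[i]
        if step <= tol * peak0:
            break
        scale = step / h[0]
        comp[i] += scale
        for j, hj in enumerate(h):
            if i + j < len(res):
                res[i + j] -= scale * hj
    return comp, res

def negativity(res):
    """Figure of merit: total squared negative residual."""
    return sum(min(r, 0.0) ** 2 for r in res)
```

    With a simulated impulse scattered by a tau = 3 exponential, cleaning with the true PBF recovers the delta component and leaves an essentially non-negative residual, while wider trial PBFs (tau = 6, 12) accumulate negative residual and score worse.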

    Observed tidal braking in the earth/moon/sun system

    The low-degree and low-order terms in the spherical harmonic model of the tidal potential were observed through the perturbations they induce on near-earth satellite orbital motions. Tracking observations from 17 satellites and the GEM-T1 geopotential model were used in the tidal recovery, which was made in the presence of over 600 long-wavelength coefficients from 32 major and minor tides. Wahr's earth tidal model was used as a basis for the recovery of the ocean tidal terms. Using this tidal model, the secular change in the moon's mean motion due to tidal dissipation was found to be −25.27 ± 0.61 arcsec/century^2. This estimate of the lunar acceleration agrees with that observed from lunar laser ranging techniques (−24.9 ± 1.0 arcsec/century^2), with the corresponding tidal braking of the earth's rotation being (−5.98 ± 0.22) × 10^(−22) rad/s^2. If the nontidal braking of the earth due to the observed secular change in the earth's second zonal harmonic is included, satellite techniques yield a total secular change of the earth's rotation rate of (−4.69 ± 0.36) × 10^(−22) rad/s^2.
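
    The quoted rates are mutually consistent. A quick arithmetic check (assuming, as the abstract implies, that the tidal and nontidal contributions simply add, and that the two lunar-acceleration estimates should agree within their combined 1-sigma errors):

```python
# Values from the abstract (rad/s^2): tidal braking of the earth's rotation,
# and the total secular change once nontidal braking is included.
tidal = -5.98e-22
total = -4.69e-22

# Implied nontidal contribution, assuming simple additivity.
nontidal = total - tidal          # ~ +1.29e-22 rad/s^2

# Satellite-derived lunar acceleration (-25.27 +/- 0.61 arcsec/century^2)
# versus lunar laser ranging (-24.9 +/- 1.0 arcsec/century^2):
diff = abs(-25.27 - (-24.9))      # 0.37 arcsec/century^2
combined_sigma = (0.61 ** 2 + 1.0 ** 2) ** 0.5   # ~1.17, so well within 1 sigma
```

    The implied nontidal term (about +1.29 × 10^(−22) rad/s^2) acts to spin the earth up, which is why the total braking is smaller in magnitude than the tidal braking alone.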

    Estimating Small Area Income Deprivation: An Iterative Proportional Fitting Approach

    Small area estimation, and in particular the estimation of small-area income deprivation, has potential value in the development of new or alternative components of multiple deprivation indices. These new approaches enable the development of income-distribution-threshold-based, as opposed to benefit-count-based, measures of income deprivation, and so allow regional and national measures such as Households Below Average Income (HBAI) to be aligned with small-area measures. This paper briefly reviews a number of approaches to small area estimation before describing in some detail an iterative proportional fitting based spatial microsimulation approach. This approach is then applied to the estimation of small-area HBAI rates in Wales in 2003-5. The paper discusses the results of this approach, contrasts them with contemporary ‘official’ income deprivation measures for the same areas, and describes a range of ways to assess the robustness of the results.
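
    The iterative proportional fitting step at the heart of such a spatial microsimulation can be sketched generically (this is a textbook two-margin IPF, not the authors' implementation): a seed table is alternately rescaled so its row and column sums match known margins, e.g. small-area counts against income-band totals.

```python
def ipf(seed, row_targets, col_targets, iterations=50):
    """Iterative proportional fitting: rescale a seed table so its row
    and column sums match the target margins, preserving the seed's
    interaction structure."""
    table = [row[:] for row in seed]
    for _ in range(iterations):
        for i, row in enumerate(table):            # match row totals
            s = sum(row)
            if s > 0:
                table[i] = [v * row_targets[i] / s for v in row]
        for j, target in enumerate(col_targets):   # match column totals
            s = sum(table[i][j] for i in range(len(table)))
            if s > 0:
                for i in range(len(table)):
                    table[i][j] *= target / s
    return table
```

    For convergence the row and column targets must share the same grand total; the fitted table then reproduces both margins while retaining the interaction pattern of the seed, which is what lets survey microdata be reweighted to small-area constraints.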