Trellis-Based Equalization for Sparse ISI Channels Revisited
Sparse intersymbol-interference (ISI) channels are encountered in a variety
of high-data-rate communication systems. Such channels have a large channel
memory length, but only a small number of significant channel coefficients. In
this paper, trellis-based equalization of sparse ISI channels is revisited. Due
to the large channel memory length, the complexity of maximum-likelihood
detection, e.g., by means of the Viterbi algorithm (VA), is normally
prohibitive. In the first part of the paper, a unified framework based on
factor graphs is presented for complexity reduction without loss of optimality.
In this new context, two known reduced-complexity algorithms for sparse ISI
channels are recapitulated: The multi-trellis VA (M-VA) and the
parallel-trellis VA (P-VA). It is shown that the M-VA, contrary to prior
claims, does not lead to a reduced computational complexity. The P-VA, on the other hand,
leads to a significant complexity reduction, but can only be applied for a
certain class of sparse channels. In the second part of the paper, a unified
approach is investigated to tackle general sparse channels: It is shown that
the use of a linear filter at the receiver renders the application of standard
reduced-state trellis-based equalizer algorithms feasible, without significant
loss of optimality. Numerical results verify the efficiency of the proposed
receiver structure.
Comment: To be presented at the 2005 IEEE Int. Symp. Inform. Theory (ISIT 2005), September 4-9, 2005, Adelaide, Australia
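To make the complexity problem the abstract describes concrete, here is a minimal sketch of plain Viterbi (maximum-likelihood) equalization for BPSK over an ISI channel. This is a generic textbook VA, not the paper's M-VA or P-VA; the function name, the preamble convention, and the example channel are illustrative assumptions. The point it shows: the trellis has 2^L states for memory length L, even when only two taps are nonzero.

```python
# Hypothetical sketch (not the paper's algorithms): plain Viterbi equalization
# of BPSK over an ISI channel y[k] = sum_i h[i]*x[k-i] (+ noise).
# The trellis holds 2^L states for memory length L, regardless of sparsity --
# which is exactly why long sparse channels make the VA prohibitive.
import itertools
import numpy as np

def viterbi_equalize(y, h, preamble):
    """ML detection of BPSK symbols; L known preamble symbols fix the start state."""
    L = len(h) - 1
    states = list(itertools.product((-1, 1), repeat=L))   # 2^L trellis states
    init = tuple(reversed(preamble))                      # most recent symbol first
    cost = {s: (0.0 if s == init else float("inf")) for s in states}
    path = {init: []}
    for yk in y:
        new_cost = {s: float("inf") for s in states}
        new_path = {}
        for s, c in cost.items():
            if s not in path:
                continue                                  # unreachable state
            for xk in (-1, 1):
                # Expected channel output for input xk given past symbols s
                out = h[0] * xk + sum(h[i + 1] * s[i] for i in range(L))
                ns = (xk,) + s[:-1]
                nc = c + (yk - out) ** 2
                if nc < new_cost[ns]:
                    new_cost[ns], new_path[ns] = nc, path[s] + [xk]
        cost, path = new_cost, new_path
    best = min(cost, key=cost.get)
    return path[best]

# Sparse example channel: memory length 4, but only two nonzero taps.
h = [1.0, 0.0, 0.0, 0.0, 0.5]
L = len(h) - 1
preamble, data = [-1] * L, [1, -1, -1, 1, 1, -1]
x = preamble + data
y = np.convolve(x, h)[L:len(x)]   # noiseless outputs for k = L .. len(x)-1
print(viterbi_equalize(y, h, preamble))
```

Even in this toy, doubling the memory length squares the state count; the paper's point is that a receive filter can shorten the effective channel so that standard reduced-state equalizers apply.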
Harold Jeffreys's Theory of Probability Revisited
Published exactly seventy years ago, Jeffreys's Theory of Probability (1939)
has had a unique impact on the Bayesian community and is now considered to be
one of the main classics in Bayesian Statistics as well as the initiator of the
objective Bayes school. In particular, its advances on the derivation of
noninformative priors as well as on the scaling of Bayes factors have had a
lasting impact on the field. However, the book reflects the characteristics of
the time, especially in terms of mathematical rigor. In this paper we point out
the fundamental aspects of this reference work, especially the thorough
coverage of testing problems and the construction of both estimation and
testing noninformative priors based on functional divergences. Our major aim
here is to help modern readers navigate this difficult text and concentrate
on the passages that are still relevant today.
Comment: This paper is commented on in: [arXiv:1001.2967], [arXiv:1001.2968],
[arXiv:1001.2970], [arXiv:1001.2975], [arXiv:1001.2985], [arXiv:1001.3073].
Rejoinder in [arXiv:0909.1008]. Published in Statistical Science
(http://www.imstat.org/sts/) by the Institute of Mathematical Statistics
(http://www.imstat.org) at http://dx.doi.org/10.1214/09-STS284.
The Frost Multidimensional Perfectionism Scale revisited: More perfect with four (instead of six) dimensions
The Frost Multidimensional Perfectionism Scale (FMPS; Frost, Marten, Lahart & Rosenblate, 1990) provides six subscales for a multidimensional assessment of perfectionism: Concern over Mistakes (CM), Personal Standards (PS), Parental Expectations (PE), Parental Criticism (PC), Doubts about Actions (D), and Organization (O). Despite its increasing popularity in personality and clinical research, the FMPS has also drawn criticism for its factorial instability across samples. The present article argues that this instability may be due to an overextraction of components. Whereas all previous analyses presented six-factor solutions for the FMPS items, a reanalysis with Horn's parallel analysis suggested only four or five underlying factors. To investigate the nature of these factors, item responses from N = 243 participants were subjected to principal component analysis. Again, parallel analysis retained only four components. Varimax rotation replicated PS and O as separate factors, while merging CM with D and PE with PC. Consequently, the present article suggests a reduction to four (instead of six) FMPS subscales. Differential correlations with anxiety, depression, parental representations, and action tendencies underscore the advantage of this solution.
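Horn's parallel analysis, the retention criterion the article applies, is easy to state: keep a component only if its eigenvalue exceeds the corresponding eigenvalue obtained from random data of the same shape. A minimal sketch (the function name, simulation count, and synthetic two-factor data below are illustrative assumptions, not the FMPS data):

```python
# Sketch of Horn's parallel analysis: retain components whose observed
# correlation-matrix eigenvalues exceed a reference quantile of eigenvalues
# from uncorrelated random data of the same (n, p) shape.
import numpy as np

def parallel_analysis(data, n_sims=100, quantile=0.95, seed=0):
    """Return the number of principal components to retain."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    sims = np.empty((n_sims, p))
    for i in range(n_sims):
        r = rng.standard_normal((n, p))      # pure-noise reference data
        sims[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(r, rowvar=False)))[::-1]
    ref = np.quantile(sims, quantile, axis=0)
    return int(np.sum(obs > ref))

# Synthetic check: 243 "participants", 10 items driven by two latent factors.
rng = np.random.default_rng(1)
F = rng.standard_normal((243, 2))
W = np.vstack([np.repeat([1.0, 0.0], 5), np.repeat([0.0, 1.0], 5)])
X = F @ W + 0.3 * rng.standard_normal((243, 10))
print(parallel_analysis(X))
```

Because sampling noise inflates the leading eigenvalues even of uncorrelated data, the Kaiser "eigenvalue > 1" rule tends to overextract; comparing against simulated noise eigenvalues is what guards against the overextraction the article diagnoses.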
MDL Denoising Revisited
We refine and extend an earlier MDL denoising criterion for wavelet-based
denoising. We start by showing that the denoising problem can be reformulated
as a clustering problem, where the goal is to obtain separate clusters for
informative and non-informative wavelet coefficients, respectively. This
suggests two refinements, adding a code-length for the model index, and
extending the model in order to account for subband-dependent coefficient
distributions. A third refinement is derivation of soft thresholding inspired
by predictive universal coding with weighted mixtures. We propose a practical
method incorporating all three refinements, which is shown to achieve good
performance and robustness in denoising both artificial and natural signals.
Comment: Submitted to IEEE Transactions on Information Theory, June 200
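The soft-thresholding step the abstract refines can be sketched generically. Note the hedge: the paper derives its threshold from MDL and universal-coding arguments, whereas the stand-in below uses the classic universal threshold sigma*sqrt(2 log n) with a median-based noise estimate; the function names are illustrative.

```python
# Generic wavelet soft thresholding (a stand-in for the paper's MDL-derived
# rule): shrink every coefficient toward zero by t, zeroing those below t.
import numpy as np

def soft_threshold(coeffs, t):
    """Soft-threshold an array of wavelet coefficients at level t."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

def denoise(detail_coeffs):
    """Classic universal-threshold denoising with a robust noise estimate."""
    sigma = np.median(np.abs(detail_coeffs)) / 0.6745   # MAD noise estimate
    t = sigma * np.sqrt(2 * np.log(len(detail_coeffs)))
    return soft_threshold(detail_coeffs, t)

c = np.array([3.0, -0.2, 1.5, 0.1])
print(soft_threshold(c, 0.5))   # large coefficients shrink, small ones vanish
```

The clustering view in the abstract corresponds to the two regimes here: coefficients above the threshold are treated as informative (kept, shrunken), those below as non-informative noise (set to zero).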
Distortion Metrics of Composite Channels with Receiver Side Information
We consider transmission of stationary ergodic sources over non-ergodic composite channels with channel state information at the receiver (CSIR). Previously, we introduced alternative capacity definitions to Shannon capacity, including outage and expected capacity. These generalized definitions relax the constraint of Shannon capacity that all transmitted information must be decoded at the receiver. In this work, alternative end-to-end distortion metrics such as outage and expected distortion are introduced to relax the constraint that a single distortion level has to be maintained for all channel states. Through the example of transmission of a Gaussian source over a slow-fading Gaussian channel, we illustrate that the end-to-end distortion metrics dictate whether the source and channel coding can be separated for a communication system. We also show that the source and channel need to exchange information through an appropriate interface to facilitate separate encoding and decoding.
Gaussianity revisited: Exploring the Kibble-Zurek mechanism with superconducting rings
In this paper we use spontaneous flux production in annular superconductors
to shed light on the Kibble-Zurek scenario. In particular, we examine the
effects of finite size and external fields, neither of which is directly
amenable to the KZ analysis. Supported by 1D and 3D simulations, the properties
of a superconducting ring are seen to be well represented by analytic Gaussian
approximations which encode the KZ scales indirectly. Experimental results for
annuli in the presence of external fields corroborate these findings.
Comment: 20 pages, 10 figures; submitted to J. Phys.: Condens. Matter for the special issue 'Condensed Matter Analogues of Cosmology'; v2: considerably reduced length, incorporation of experimental details into main text, discussion improved, references added; version accepted for publication.
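The near-Gaussianity of spontaneous flux can be illustrated with a toy model that is much cruder than the paper's 1D/3D simulations: take N independently chosen order-parameter phases around a ring, close the loop with geodesic (wrapped) phase steps, and read off the winding number. Its variance then grows like N, so the central limit theorem drives the distribution toward a Gaussian. All names and sizes below are illustrative assumptions.

```python
# Toy Kibble-Zurek-style illustration (not the paper's simulations): winding
# number of a ring of n_domains independent uniform phases. Each wrapped phase
# step is uniform on (-pi, pi], so Var(winding) = n_domains/12 and the
# distribution is close to Gaussian by the CLT.
import numpy as np

def winding_numbers(n_domains, n_rings, seed=0):
    rng = np.random.default_rng(seed)
    phases = rng.uniform(-np.pi, np.pi, (n_rings, n_domains))
    steps = np.diff(phases, axis=1, append=phases[:, :1])   # close the ring
    wrapped = (steps + np.pi) % (2 * np.pi) - np.pi         # geodesic steps
    return np.rint(wrapped.sum(axis=1) / (2 * np.pi)).astype(int)

w = winding_numbers(64, 100_000)
print(w.mean(), w.std())   # mean near 0, std near sqrt(64/12) ~ 2.31
```

This captures only the combinatorics of phase choice; the paper's point is that finite size and external fields modify the picture in ways a Gaussian approximation encoding the KZ scales can still track.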