
    The theoretical limits of source and channel coding

    The theoretical relationship among signal power, distortion, and bandwidth is presented for several source and channel models. The work is intended as a reference for evaluating the performance of specific data compression algorithms.
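    The abstract states the trade-off only qualitatively. As an illustration of the kind of limit meant, here is the classic Gaussian instance (an assumed example, not necessarily one of the paper's models), combining the rate-distortion function of a memoryless Gaussian source with the capacity of a band-limited AWGN channel:

    ```latex
    % Illustrative Gaussian source / AWGN channel case (assumption, not the paper's model).
    % R(D): rate-distortion function of a Gaussian source with variance \sigma^2;
    % C: capacity of an AWGN channel of bandwidth W_c at signal-to-noise ratio P/N.
    \begin{align}
      R(D) &= \frac{1}{2}\log_2\frac{\sigma^2}{D}, \qquad 0 < D \le \sigma^2, \\
      C    &= W_c \log_2\!\left(1 + \frac{P}{N}\right).
    \end{align}
    % A source of bandwidth W_s produces 2W_s samples/s; matching the source rate
    % to the channel capacity, 2 W_s R(D) = C, gives the minimum achievable distortion
    \begin{equation}
      D_{\min} = \sigma^2\left(1 + \frac{P}{N}\right)^{-W_c/W_s},
    \end{equation}
    % i.e. distortion decays exponentially in the bandwidth ratio W_c/W_s.
    ```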

    Sparse-Based Estimation Performance for Partially Known Overcomplete Large-Systems

    We assume a direct-sum decomposition of the signal subspace. After measurement, a number of operational contexts presuppose a priori knowledge of the L_B-dimensional "interfering" subspace, and the goal is to estimate the L_A amplitudes corresponding to the complementary signal subspace. Taking into account the knowledge of the orthogonal complement of the "interfering" subspace, the Bayesian estimation lower bound is derived for the L_A-sparse vector in the doubly asymptotic scenario, i.e. N, L_A, L_B → ∞ with a finite asymptotic ratio. By jointly exploiting the Compressed Sensing (CS) and Random Matrix Theory (RMT) frameworks, closed-form expressions for the lower bound on the estimation of the non-zero entries of a sparse vector of interest are derived and studied. The derived closed-form expressions enjoy several interesting features: (i) a simple, interpretable expression, (ii) a very low computational cost, especially in the doubly asymptotic scenario, (iii) an accurate prediction of the mean-square error (MSE) of popular sparse-based estimators, and (iv) validity of the lower bound for any prior on the amplitude vector. Finally, several idealized scenarios are compared to the derived bound at a common output signal-to-noise ratio (SNR), which shows the interest of the joint estimation/rejection methodology derived herein.
    Comment: 10 pages, 5 figures, Journal of Signal Processing
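    The paper's closed-form RMT expressions are not reproduced in the abstract; the following is a minimal numerical sketch of the underlying estimation/rejection idea, assuming a linear Gaussian model y = H_A x_A + H_B x_B + n with a known interfering subspace span(H_B). All matrix names and sizes here are illustrative, not the paper's:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative oracle bound (NOT the paper's closed form): y = H_A x_A + H_B x_B + n,
    # where span(H_B) -- the "interfering" subspace -- is known a priori.
    N, L_A, L_B = 200, 10, 40
    H_A = rng.standard_normal((N, L_A)) / np.sqrt(N)
    H_B = rng.standard_normal((N, L_B)) / np.sqrt(N)
    sigma2 = 0.1  # noise variance

    # Projector onto the orthogonal complement of the interfering subspace
    # (the "rejection" step).
    Q, _ = np.linalg.qr(H_B)
    P_perp = np.eye(N) - Q @ Q.T

    # Oracle Cramer-Rao-type bound on the total MSE of the L_A non-zero
    # amplitudes after interference rejection: sigma^2 * tr[(H_A^T P_perp H_A)^{-1}].
    fim = H_A.T @ P_perp @ H_A / sigma2
    bound = np.trace(np.linalg.inv(fim))
    print(f"oracle MSE lower bound: {bound:.4f}")
    ```

    Projecting out span(H_B) before bounding the L_A amplitudes is the rejection half of the methodology; the doubly asymptotic N, L_A, L_B → ∞ analysis is what the paper adds on top of this finite-size picture.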

    Some Recent Advances in Measurement Error Models and Methods

    A measurement error model is a regression model with (substantial) measurement errors in the variables. Disregarding these measurement errors when estimating the regression parameters results in asymptotically biased estimators. Several methods have been proposed to eliminate, or at least reduce, this bias, and the relative efficiency and robustness of these methods have been compared. The paper gives an account of these endeavors. In another context, when data are of a categorical nature, classification errors play a role similar to that of measurement errors in continuous data. The paper also reviews some recent advances in this field.
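    As a minimal sketch of the asymptotic bias in question, and of one standard method-of-moments correction, consider simple linear regression with classical measurement error. The variances chosen and the assumption that the error variance is known (e.g. from replicate measurements) are illustrative only:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Classical errors-in-variables: we observe w = x + u instead of the true x.
    n = 100_000
    beta = 2.0
    x = rng.normal(0.0, 1.0, n)    # true covariate, sigma_x^2 = 1
    u = rng.normal(0.0, 0.5, n)    # measurement error, sigma_u^2 = 0.25
    w = x + u                      # observed, mismeasured covariate
    y = beta * x + rng.normal(0.0, 1.0, n)

    # Naive OLS on w is attenuated by the reliability ratio
    # lambda = sigma_x^2 / (sigma_x^2 + sigma_u^2) = 1 / 1.25 = 0.8.
    beta_naive = np.cov(w, y)[0, 1] / np.var(w)

    # Method-of-moments correction, assuming sigma_u^2 is known:
    # subtract the error variance from the denominator.
    sigma_u2 = 0.25
    beta_corrected = np.cov(w, y)[0, 1] / (np.var(w) - sigma_u2)

    print(f"naive:     {beta_naive:.3f}")      # ~ 1.6 (asymptotically biased)
    print(f"corrected: {beta_corrected:.3f}")  # ~ 2.0
    ```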

    Bibliometric indicators: the origin of their log-normal distribution and why they are not a reliable proxy for an individual scholar’s talent

    There is now compelling evidence that the statistical distributions of extensive individual bibliometric indicators collected by a scholar, such as the number of publications or the total number of citations, are well represented by a Log-Normal function when homogeneous samples are considered. A Log-Normal distribution function is the normal distribution for the logarithm of the variable; on a linear scale it is a highly skewed distribution with a long tail on the high-productivity side. We are still lacking a detailed and convincing ab initio model able to explain the observed Log-Normal distributions; this is the gap this paper sets out to fill. Here, we propose a general explanation of the observed evidence by developing a straightforward model based on the following simple assumptions: (1) the materialist principle of the natural equality of human intelligence, (2) the success-breeds-success effect, described by Merton as the Matthew effect, which can be traced back to the Gospel parables of the Talents (Matthew) and the Minas (Luke), and (3) the recognition and reputation mechanism. Building on these assumptions, we propose a distribution function that, although mathematically not identical to a Log-Normal distribution, shares with it all its main features. Our model reproduces the empirical distributions well, so the hypotheses at the basis of the model are not falsified. Therefore, the observed distributions of bibliometric parameters might be the result of chance and noise (chaos) related to multiplicative phenomena connected to a publish-or-perish inflationary mechanism, driven by scholars' recognition and reputations. In short, being a scholar in the right or the left tail of the distribution could have very little connection to her/his merit and achievements. This interpretation might cast some doubt on the use of the number of papers and/or citations as a measure of scientific achievement. A tricky issue seems to emerge: what, then, do bibliometric indicators really measure? This calls for deeper investigation into the meaning of bibliometric indicators, an interesting and intriguing topic for further research within a wider interdisciplinary investigation of the science of science, which may include elements and investigation tools from philosophy, psychology and sociology.
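    The mechanism invoked here, multiplicative growth acting on initially equal agents, is essentially Gibrat's law, whose limit distribution is Log-Normal. A toy simulation (a caricature of the paper's model, with hypothetical growth parameters) shows how equal starting conditions plus multiplicative chance alone produce the skewed, long-tailed linear-scale distribution described above:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # All scholars start equal (assumption 1); each year their cumulative output
    # grows by a random multiplicative factor, standing in for the success-breeds-
    # success and reputation mechanisms (assumptions 2-3) reduced to pure chance.
    scholars, years = 10_000, 30
    output = np.ones(scholars)
    for _ in range(years):
        output *= np.exp(rng.normal(0.05, 0.3, scholars))

    # By the central limit theorem applied to log(output), the result is
    # approximately Log-Normal: roughly Gaussian in log scale...
    log_out = np.log(output)
    print(f"mean(log) = {log_out.mean():.2f}, std(log) = {log_out.std():.2f}")

    # ...and strongly right-skewed in linear scale (the long productivity tail).
    z = (output - output.mean()) / output.std()
    print(f"linear-scale skewness = {(z**3).mean():.2f}")
    ```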

    Unconfused Ultraconservative Multiclass Algorithms

    We tackle the problem of learning linear classifiers from noisy datasets in a multiclass setting. The two-class version of this problem was studied a few years ago by, e.g., Bylander (1994) and Blum et al. (1996): in these contributions, the proposed approaches to fighting the noise revolve around a Perceptron learning scheme fed with peculiar examples computed through a weighted average of points from the noisy training set. We build upon these approaches and introduce a new algorithm called UMA (for Unconfused Multiclass additive Algorithm), which may be seen as a generalization of the previous approaches to the multiclass setting. In order to characterize the noise, we use the confusion matrix as a multiclass extension of the classification noise studied in the aforementioned literature. Theoretically well-founded, UMA furthermore displays very good empirical noise robustness, as evidenced by numerical simulations conducted on both synthetic and real data.
    Keywords: Multiclass classification, Perceptron, Noisy labels, Confusion Matrix
    Comment: ACML, Australia (2013)
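    UMA's backbone is the ultraconservative additive update family of Crammer and Singer, which the paper extends with confusion-matrix-based example construction. Below is a minimal sketch of that noise-free backbone only; the update coefficients shown are one standard member of the family, not necessarily the paper's exact choice, and UMA's noise handling is omitted:

    ```python
    import numpy as np

    def ultraconservative_step(W, x, y, lr=1.0):
        """One ultraconservative multiclass update: only the true class row and
        the rows of classes that score at least as high are modified, with
        update coefficients summing to zero."""
        scores = W @ x
        E = [r for r in range(W.shape[0]) if r != y and scores[r] >= scores[y]]
        if E:
            for r in E:                      # demote every offending class equally
                W[r] -= lr * x / len(E)
            W[y] += lr * x                   # promote the true class
        return W

    # Toy usage: K classes, d features, a random stream of labelled points.
    rng = np.random.default_rng(3)
    K, d = 4, 10
    W = np.zeros((K, d))
    for _ in range(1000):
        x = rng.standard_normal(d)
        y = int(rng.integers(K))
        W = ultraconservative_step(W, x, y)
    ```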