
    A kepstrum approach to filtering, smoothing and prediction

    The kepstrum (or complex cepstrum) method is revisited and applied to the problem of spectral factorization, where the spectrum is estimated directly from observations. The solution to this problem in turn leads to a new approach to optimal filtering, smoothing and prediction using Wiener theory. Unlike previous approaches to adaptive and self-tuning filtering, the technique, when implemented, requires no a priori information on the type or order of the signal-generating model; and unlike other approaches, with the exception of spectral subtraction, no state-space or polynomial model is necessary. In this first paper, results are restricted to stationary signals and additive white noise.
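    As a concrete illustration of the cepstral route to spectral factorization, here is a minimal sketch (not the paper's implementation): assuming a sampled power spectrum on a uniform frequency grid, the inverse FFT of the log-spectrum gives the cepstrum, and keeping only its causal part before exponentiating yields the minimum-phase spectral factor. All function names are illustrative.

```python
import numpy as np

def minimum_phase_factor(S):
    """Cepstral spectral factorization: given a sampled power spectrum
    S(w_k) > 0 on a uniform N-point frequency grid, return the
    minimum-phase factor H(w_k) with |H|^2 = S."""
    N = len(S)
    # Cepstrum of the log power spectrum (real and even for real spectra).
    c = np.fft.ifft(np.log(S)).real
    # Fold: keep the causal part, halving the endpoints, so that
    # exp(FFT(c_plus)) is minimum phase.
    c_plus = np.zeros(N)
    c_plus[0] = 0.5 * c[0]
    c_plus[1:N // 2] = c[1:N // 2]
    if N % 2 == 0:
        c_plus[N // 2] = 0.5 * c[N // 2]
    return np.exp(np.fft.fft(c_plus))

# Example: spectrum of an AR(1) process, S(w) = 1 / |1 - a e^{-jw}|^2.
N, a = 512, 0.8
w = 2 * np.pi * np.arange(N) / N
S = 1.0 / np.abs(1 - a * np.exp(-1j * w)) ** 2
H = minimum_phase_factor(S)
assert np.allclose(np.abs(H) ** 2, S)   # |H|^2 reproduces the spectrum
```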

    The velocity distribution of nearby stars from Hipparcos data I. The significance of the moving groups

    We present a three-dimensional reconstruction of the velocity distribution of nearby stars (within ~100 pc) using a maximum likelihood density estimation technique applied to the two-dimensional tangential velocities of stars. The underlying distribution is modeled as a mixture of Gaussian components. The algorithm reconstructs the error-deconvolved distribution function, even when the individual stars have unique error and missing-data properties. We apply this technique to the tangential velocity measurements of a kinematically unbiased sample of 11,865 main-sequence stars observed by the Hipparcos satellite. We explore various methods for validating the complexity of the resulting velocity distribution function, including criteria based on Bayesian model selection and on how accurately our reconstruction predicts the radial velocities of a sample of stars from the Geneva-Copenhagen survey (GCS). Using this very conservative external validation test based on the GCS, we find little evidence for structure in the distribution function beyond the moving groups established prior to the Hipparcos mission. This is in sharp contrast with internal tests performed here and in previous analyses, which point consistently to maximal structure in the velocity distribution. We quantify the information content of the radial velocity measurements and find that the mean amount of new information gained from a radial velocity measurement of a single star is significant. This argues for radial velocity surveys complementary to upcoming astrometric surveys.
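    A minimal sketch of the kind of error-deconvolved Gaussian-mixture EM the paper applies (extreme deconvolution), here assuming fully observed data with known per-point noise covariances; the paper's version additionally handles the missing-data projections needed for tangential velocities. Names and initialization are illustrative.

```python
import numpy as np
from scipy.stats import multivariate_normal

def xd_em(w, S, K, n_iter=200, seed=0):
    """Minimal extreme-deconvolution EM: fit a K-component Gaussian
    mixture to data w[i] observed with known noise covariances S[i]."""
    rng = np.random.default_rng(seed)
    N, D = w.shape
    alpha = np.full(K, 1.0 / K)                    # mixture weights
    m = w[rng.choice(N, K, replace=False)].copy()  # component means
    V = np.tile(np.cov(w.T), (K, 1, 1))            # component covariances
    for _ in range(n_iter):
        # E-step: responsibilities under the noise-convolved components.
        q = np.empty((N, K))
        for j in range(K):
            for i in range(N):
                q[i, j] = alpha[j] * multivariate_normal.pdf(
                    w[i], m[j], V[j] + S[i])
        q /= q.sum(axis=1, keepdims=True)
        # M-step: per-point posterior means/covariances of the true values.
        for j in range(K):
            b = np.empty((N, D)); B = np.empty((N, D, D))
            for i in range(N):
                T_inv = np.linalg.inv(V[j] + S[i])
                b[i] = m[j] + V[j] @ T_inv @ (w[i] - m[j])
                B[i] = V[j] - V[j] @ T_inv @ V[j]
            qj = q[:, j]
            alpha[j] = qj.mean()
            m[j] = qj @ b / qj.sum()
            d = b - m[j]
            V[j] = (qj[:, None, None] *
                    (B + d[:, :, None] * d[:, None, :])).sum(0) / qj.sum()
    return alpha, m, V
```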

    A mean field method with correlations determined by linear response

    We introduce a new mean-field approximation based on the reconciliation of maximum entropy and linear response for correlations in the cluster variation method. Within a general formalism that includes previous mean-field methods, we derive formulas improving upon, e.g., the Bethe approximation and the Sessak-Monasson result at high temperature. Applying the method to direct and inverse Ising problems, we find improvements over standard implementations. Comment: 15 pages, 8 figures, 9 appendices, significant expansion on versions v1 and v
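    For context, a minimal sketch of the baseline this kind of method improves upon: naive mean-field magnetizations from a fixed-point iteration, with correlations obtained by linear response via C^{-1} = diag(1/(1 - m_i^2)) - J (inverse temperature absorbed into J and h). Function names are illustrative.

```python
import numpy as np

def mf_linear_response(J, h, n_iter=1000, damping=0.5):
    """Naive mean-field magnetizations for an Ising model with symmetric,
    zero-diagonal couplings J and fields h, plus connected correlations
    from the linear-response relation C^{-1} = diag(1/(1 - m^2)) - J."""
    m = np.zeros(len(h))
    for _ in range(n_iter):
        # Damped fixed-point iteration of m_i = tanh(h_i + sum_j J_ij m_j).
        m = (1 - damping) * m + damping * np.tanh(h + J @ m)
    C = np.linalg.inv(np.diag(1.0 / (1.0 - m ** 2)) - J)
    return m, C

# Example: small random couplings, i.e. the high-temperature regime.
rng = np.random.default_rng(1)
n = 10
J = rng.normal(0, 0.1 / np.sqrt(n), (n, n)); J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)
m, C = mf_linear_response(J, h=np.zeros(n))
```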

    On the Fundamental Limits of Random Non-orthogonal Multiple Access in Cellular Massive IoT

    Machine-to-machine (M2M) communication constitutes the paradigm at the basis of the Internet of Things (IoT) vision. M2M solutions allow billions of multi-role devices to communicate with each other, or with the underlying data transport infrastructure, with minimal or no human intervention. Current wireless transmission solutions, originally designed for human-based applications, thus require a substantial shift to cope with the capacity issues of managing a huge number of M2M devices. In this paper, we consider multiple access techniques as promising solutions to support a large number of devices in cellular systems with limited radio resources. We focus on non-orthogonal multiple access (NOMA), where, with the aim of increasing channel efficiency, the devices share the same radio resources for their data transmission; this has been shown to provide optimal throughput from an information-theoretic point of view. We consider a realistic system model and characterise the system performance in terms of throughput and energy efficiency in a NOMA scenario with a random packet arrival model, where we also derive the stability condition for the system to guarantee the performance. Comment: To appear in IEEE JSAC Special Issue on Non-Orthogonal Multiple Access for 5G System
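    A minimal sketch of the information-theoretic point invoked here, under an idealized two-user uplink model rather than the paper's system model: with superposition on a shared resource and successive interference cancellation (SIC), the individual rates sum to the multiple-access sum capacity. Names and parameters are illustrative.

```python
import numpy as np

def noma_sic_rates(p, g, N0=1.0):
    """Achievable rates (bits/s/Hz) for uplink NOMA with successive
    interference cancellation: users are decoded in index order, each
    treating not-yet-decoded users as interference and then being
    cancelled. p: transmit powers, g: channel gains."""
    rx = p * g                            # received powers
    rates = np.empty(len(p))
    for k in range(len(p)):
        interference = rx[k + 1:].sum()   # not-yet-decoded users
        rates[k] = np.log2(1 + rx[k] / (N0 + interference))
    return rates

# Two devices sharing one resource block: the SIC sum rate equals the
# multiple-access sum capacity log2(1 + (p1*g1 + p2*g2)/N0).
r = noma_sic_rates(p=np.array([1.0, 1.0]), g=np.array([2.0, 0.5]))
assert np.isclose(r.sum(), np.log2(1 + (2.0 + 0.5)))
```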

    Inference in particle tracking experiments by passing messages between images

    Methods to extract information from the tracking of mobile objects/particles are of broad interest in the biological and physical sciences. Techniques based on simple criteria of proximity in time-consecutive snapshots are useful for identifying the trajectories of the particles. However, they become problematic as the motility and/or the density of the particles increases, due to uncertainty about the trajectories the particles followed during the images' acquisition time. Here, we report an efficient method for learning parameters of the dynamics of the particles from their positions in time-consecutive images. Our algorithm belongs to the class of message-passing algorithms known in computer science, information theory and statistical physics as Belief Propagation (BP). The algorithm is distributed, thus allowing parallel implementation suitable for computations on multiple machines without significant inter-machine overhead. We test our method on the model example of particle tracking in turbulent flows, which is particularly challenging due to the strong transport that those flows produce. Our numerical experiments show that the BP algorithm compares in quality with exact Markov Chain Monte Carlo algorithms, yet BP is far superior in speed. We also suggest and analyze a random-distance model that provides theoretical justification for BP accuracy. The methods developed here systematically formulate the problem of particle tracking and provide fast and reliable tools for its extensive range of applications. Comment: 18 pages, 9 figures
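    A minimal sketch of message passing for the frame-to-frame assignment step, here in the max-product style of Bayati-Shah-Sharma for maximum-weight bipartite matching (the paper's sum-product BP additionally yields marginals and learns the dynamics parameters): weights favour small displacements, and each particle picks its best incoming message after the iteration. All names are illustrative.

```python
import numpy as np

def _max_excluding_row(M):
    """E[j, i] = max over k != j of M[k, i], via a top-2 trick per column."""
    cols = np.arange(M.shape[1])
    idx = M.argmax(axis=0)
    top1 = M[idx, cols]
    M2 = M.copy(); M2[idx, cols] = -np.inf
    top2 = M2.max(axis=0)
    E = np.broadcast_to(top1, M.shape).copy()
    E[idx, cols] = top2
    return E

def bp_track(pos_a, pos_b, n_iter=50):
    """Match particles between two frames by max-product message passing:
    iterate m_{i->j} = w_ij - max_{k != j} m_{k->i} on the complete
    bipartite graph, with weights w_ij = -|displacement|^2."""
    w = -((pos_a[:, None, :] - pos_b[None, :, :]) ** 2).sum(-1)
    msg_ab = w.copy()        # msg_ab[i, j]: A-particle i -> B-particle j
    msg_ba = w.T.copy()      # msg_ba[j, i]: B-particle j -> A-particle i
    for _ in range(n_iter):
        msg_ab = w - _max_excluding_row(msg_ba).T
        msg_ba = (w - _max_excluding_row(msg_ab)).T
    return msg_ba.argmax(axis=0)   # best incoming message per A-particle

# Sanity check: small displacements recover the identity assignment.
rng = np.random.default_rng(2)
a = rng.uniform(size=(20, 2))
b = a + 0.01 * rng.normal(size=a.shape)
assert (bp_track(a, b) == np.arange(20)).all()
```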

    Throughput-based Design for Polar Coded-Modulation

    Typically, forward error correction (FEC) codes are designed to minimize the error rate at a given code rate. However, for applications that incorporate the hybrid automatic repeat request (HARQ) protocol and adaptive modulation and coding, throughput is a more important performance metric than the error rate. Polar codes, a new class of FEC codes with simple rate matching, can be optimized efficiently for throughput maximization. In this paper, we aim to design HARQ schemes using multilevel polar coded-modulation (MLPCM). We first develop a method to determine a set-partitioning-based bit-to-symbol mapping for high-order QAM constellations. We simplify the log-likelihood ratio (LLR) estimation of set-partitioned QAM constellations for a multistage decoder, and we introduce a set of algorithms to design throughput-maximizing MLPCM for successive cancellation decoding (SCD). These codes are specifically useful for non-combining (NC) and Chase-combining (CC) HARQ protocols. Furthermore, since codes optimized for SCD are not optimal for successive cancellation list decoders (SCLD), we propose a rate-matching algorithm that finds the best rate for SCLD while using the polar codes optimized for SCD. The resulting codes provide throughput close to capacity with low decoding complexity when used with NC or CC HARQ.
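    A minimal sketch of the throughput metric driving such a design, assuming a renewal-reward model of HARQ with at most M transmissions per packet: throughput is R x P(success) / E[transmissions], and the design picks the rate that maximizes it rather than the rate that minimizes the error rate. The BLER numbers below are placeholders standing in for values one would obtain by simulating the polar-coded modulation.

```python
import numpy as np

def harq_throughput(R, bler, M):
    """Renewal-reward throughput of HARQ at rate R (bits/use) with at most
    M transmissions, given per-attempt block error rates bler[t]
    (probability decoding still fails after attempt t+1; roughly constant
    over attempts for non-combining, decreasing for Chase combining)."""
    # P(attempt t+1 occurs) = product of the first t failure probabilities.
    p_reach = np.concatenate(([1.0], np.cumprod(bler)))
    expected_tx = p_reach[:M].sum()
    p_success = 1.0 - p_reach[M]
    return R * p_success / expected_tx

# Rate selection: pick the rate maximizing throughput, not minimizing BLER.
rates = [0.5, 0.75, 0.9]
blers = {0.5: [1e-3] * 4, 0.75: [5e-2] * 4, 0.9: [4e-1] * 4}   # placeholders
best = max(rates, key=lambda R: harq_throughput(R, blers[R], M=4))
# Here best == 0.75: the lowest-BLER rate (0.5) wastes capacity, while the
# highest rate (0.9) loses too many slots to retransmissions.
```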