
    Polynomial spline-approximation of Clarke's model

    We investigate polynomial spline approximation of stationary random processes on a uniform grid, applied to Clarke's model of time variations of path amplitudes in multipath fading channels with Doppler scattering. The integral mean square error (MSE) for optimal and interpolation splines is presented as a series of spectral moments. The optimal splines outperform the interpolation splines; however, as the sampling factor increases, optimal and interpolation splines of even order tend to provide the same accuracy. To build such splines, the process to be approximated needs to be known for all time, which is impractical. Local splines, on the other hand, may be used where the process is known only over a finite interval. We first consider local splines with quasi-optimal spline coefficients. Then we derive optimal spline coefficients and investigate the error for different sets of samples used to calculate the spline coefficients. In practice, approximation with a low processing delay is of interest; we investigate local spline extrapolation with zero processing delay. The results of our investigation show that local spline approximation is attractive for implementation from the viewpoints of both low processing delay and small approximation error; the error can be very close to the minimum error provided by optimal splines. Thus, local splines can be effectively used for channel estimation in multipath fast-fading channels.
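    For concreteness, the sketch below shows only the interpolation-spline variant (not the paper's optimal or local splines): a sum-of-sinusoids realisation of Clarke's model is sampled on a uniform grid and reconstructed with a cubic spline. The Doppler frequency, the number of scatterers, and the sampling factor are illustrative assumptions.

```python
# A minimal sketch (not the paper's code): Clarke's model generated as a
# sum of sinusoids, sampled on a uniform grid and reconstructed with a
# cubic interpolation spline.
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(0)
f_d = 100.0                    # maximum Doppler frequency, Hz (assumed)
N_paths = 64                   # number of scatterers (assumed)
t_dense = np.linspace(0.0, 0.1, 4001)   # dense "ground truth" time axis

# Clarke's model: paths arrive from uniformly distributed angles, giving
# Doppler shifts f_d*cos(theta) and i.i.d. uniform phases.
theta = rng.uniform(0.0, 2.0 * np.pi, N_paths)
phi = rng.uniform(0.0, 2.0 * np.pi, N_paths)

def clarke(t):
    phases = 2.0 * np.pi * f_d * np.cos(theta)[:, None] * t[None, :] + phi[:, None]
    return np.sum(np.exp(1j * phases), axis=0) / np.sqrt(N_paths)

h_true = clarke(t_dense)

# Uniform sampling grid: Nyquist rate of the Doppler spectrum times a sampling factor.
sampling_factor = 4
T_s = 1.0 / (2.0 * f_d * sampling_factor)
t_grid = np.arange(0.0, 0.1 + T_s, T_s)
h_grid = clarke(t_grid)

# Cubic interpolation spline fitted separately to real and imaginary parts.
h_hat = CubicSpline(t_grid, h_grid.real)(t_dense) + 1j * CubicSpline(t_grid, h_grid.imag)(t_dense)

mse = np.mean(np.abs(h_true - h_hat) ** 2) / np.mean(np.abs(h_true) ** 2)
print(f"normalised interpolation MSE: {mse:.2e}")
```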

    Compressive Estimation of a Stochastic Process with Unknown Autocorrelation Function

    In this paper, we study the prediction of a circularly symmetric zero-mean stationary Gaussian process from a window of observations consisting of finitely many samples. This is a prevalent problem in a wide range of applications in communication theory and signal processing. Due to stationarity, when the autocorrelation function, or equivalently the power spectral density (PSD), of the process is available, the Minimum Mean Squared Error (MMSE) predictor is readily obtained. In particular, it is given by a linear operator that depends on the autocorrelation of the process as well as the noise power in the observed samples. The prediction becomes, however, quite challenging when the PSD of the process is unknown. In this paper, we propose a blind predictor that does not require a priori knowledge of the PSD of the process and compare its performance with that of an MMSE predictor that has full knowledge of the PSD. To design such a blind predictor, we use the random spectral representation of a stationary Gaussian process. We apply the well-known atomic-norm minimization technique to the observed samples to obtain a discrete quantization of the underlying random spectrum, which we use to predict the process. Our simulation results show that this estimator performs comparably to the MMSE estimator. Comment: 6 pages, 4 figures. Accepted for presentation at ISIT 2017, Aachen, Germany.
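    As a reference point, the sketch below implements the known-PSD baseline described above, i.e. the linear MMSE predictor built from the process autocorrelation and the noise power; the blind atomic-norm predictor proposed in the paper is not reproduced, and the AR(1) autocorrelation is an illustrative assumption.

```python
# A minimal sketch (assumed notation, not the paper's code): one-step MMSE
# prediction of a stationary Gaussian process from a finite window of noisy samples.
import numpy as np

def mmse_predictor(r, p, noise_var):
    """Return weights w such that x_hat[t] = w @ y[t-p:t].

    r         : callable, even autocorrelation r(k) = E[x[t] x[t+k]]
    p         : window length (number of observed noisy samples)
    noise_var : variance of the additive observation noise
    """
    # Covariance of the observed window y = x + n (Toeplitz plus noise on the diagonal).
    R_yy = np.array([[r(i - j) for j in range(p)] for i in range(p)]) + noise_var * np.eye(p)
    # Cross-covariance between the window samples and the sample to be predicted.
    r_xy = np.array([r(p - i) for i in range(p)])
    return np.linalg.solve(R_yy, r_xy)

# Illustrative example: AR(1)-type autocorrelation r(k) = rho^|k|.
rho, noise_var, p = 0.95, 0.1, 16
w = mmse_predictor(lambda k: rho ** abs(k), p, noise_var)

rng = np.random.default_rng(1)
n = 10_000
x = np.zeros(n)
for t in range(1, n):                       # simulate the AR(1) process
    x[t] = rho * x[t - 1] + np.sqrt(1 - rho ** 2) * rng.standard_normal()
y = x + np.sqrt(noise_var) * rng.standard_normal(n)

pred = np.array([w @ y[t - p:t] for t in range(p, n)])
print("empirical prediction MSE:", np.mean((x[p:] - pred) ** 2))
```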

    Phase Transitions of the Typical Algorithmic Complexity of the Random Satisfiability Problem Studied with Linear Programming

    Here we study the NP-complete K-SAT problem. Although the worst-case complexity of NP-complete problems is conjectured to be exponential, there exist parametrized random ensembles of problems where solutions can typically be found in polynomial time for suitable ranges of the parameter. In fact, random K-SAT, with the clause-to-variable ratio $\alpha = M/N$ as control parameter, can be solved quickly for small enough values of $\alpha$. It exhibits a phase transition between a satisfiable phase and an unsatisfiable phase. For branch-and-bound algorithms, which operate in the space of feasible Boolean configurations, the empirically hardest problems are located only close to this phase transition. Here we study K-SAT ($K = 3, 4$) and the related optimization problem MAX-SAT by a linear programming approach, which is widely used for practical problems and allows for polynomial run time. In contrast to branch and bound, it operates outside the space of feasible configurations; on the other hand, finding a solution within polynomial time is not guaranteed. We investigated several variants, including artificial objective functions, so-called cutting-plane approaches, and a mapping to the NP-complete vertex-cover problem. We observed several easy-hard transitions, from regions where the problems are typically solvable in polynomial time by the given algorithms to regions where they are not. For the related vertex-cover problem on random graphs, these easy-hard transitions can be identified with structural properties of the graphs, such as percolation transitions. For the present random K-SAT problem we have investigated numerous structural properties that also exhibit clear transitions, but they appear not to be correlated with the easy-hard transitions observed here. This renders the behaviour of random K-SAT more complex than that of, e.g., the vertex-cover problem. Comment: 11 pages, 5 figures
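    To illustrate the kind of approach described above, the sketch below solves the plain LP relaxation of a random 3-SAT instance with an arbitrary artificial objective and checks whether the resulting vertex is integral (and hence a satisfying assignment); the cutting-plane and vertex-cover variants mentioned in the abstract are not shown, and all parameters are illustrative.

```python
# A minimal sketch (illustrative, not the paper's pipeline): LP relaxation of
# a random 3-SAT instance solved with scipy's linprog.
import numpy as np
from scipy.optimize import linprog

def random_ksat(n_vars, alpha, k=3, rng=None):
    """Clauses as lists of signed literals: +i means x_i, -i means NOT x_i (1-based)."""
    rng = rng or np.random.default_rng()
    clauses = []
    for _ in range(int(alpha * n_vars)):
        vars_ = rng.choice(n_vars, size=k, replace=False) + 1
        signs = rng.choice([-1, 1], size=k)
        clauses.append(list(signs * vars_))
    return clauses

def lp_relaxation(n_vars, clauses):
    """Relax each clause to: sum_pos x_i + sum_neg (1 - x_i) >= 1, with 0 <= x_i <= 1."""
    A_ub, b_ub = [], []
    for clause in clauses:
        row, n_neg = np.zeros(n_vars), 0
        for lit in clause:
            if lit > 0:
                row[lit - 1] -= 1.0       # -x_i  (constraint rewritten as <=)
            else:
                row[-lit - 1] += 1.0      # +x_i
                n_neg += 1
        A_ub.append(row)
        b_ub.append(n_neg - 1)
    # Arbitrary "artificial" objective so the solver returns a vertex of the polytope.
    return linprog(c=np.ones(n_vars), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                   bounds=[(0.0, 1.0)] * n_vars, method="highs")

def satisfies(clauses, assignment):
    """assignment[i-1] is the Boolean value of x_i."""
    return all(any(assignment[abs(l) - 1] == (l > 0) for l in clause) for clause in clauses)

rng = np.random.default_rng(0)
n, alpha = 200, 3.5
clauses = random_ksat(n, alpha, rng=rng)
res = lp_relaxation(n, clauses)
x_rounded = res.x >= 0.5
print("LP vertex integral:", bool(np.all(np.isclose(res.x, np.round(res.x)))))
print("rounded assignment satisfies all clauses:", satisfies(clauses, x_rounded))
```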

    Model-based asymptotically optimal dispersion measure correction for pulsar timing

    In order to reach the sensitivity required to detect gravitational waves, pulsar timing array experiments need to mitigate as much noise as possible in timing data. A dominant contribution to the noise is likely due to variations in the dispersion measure. To correct for such variations, we develop a statistical method inspired by the maximum likelihood estimator and optimal filtering. Our method consists of two major steps. First, the spectral index and amplitude of the dispersion measure variations are measured via a time-domain spectral analysis. Second, a linear optimal filter is constructed based on the model parameters found in the first step and is used to extract the dispersion measure variation waveforms. Compared to existing methods, this method has better time resolution for the study of short-timescale dispersion variations and generally produces smaller errors in waveform estimates. Because of its time-domain nature, it can process irregularly sampled data without any interpolation. Furthermore, it offers the possibility to interpolate or extrapolate the waveform estimate to regions where no data are available. Examples using simulated data sets are included for demonstration. Comment: 15 pages, 15 figures, submitted 15th Sept. 2013, accepted 2nd April 2014 by MNRAS. MNRAS, 201
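    As an illustration of the second step, the sketch below applies a generic linear optimal (Wiener-type) filter to irregularly sampled data and evaluates the estimated waveform at epochs with no data; a simple exponential covariance stands in for the fitted power-law red-noise model, and all parameters and the kernel choice are illustrative assumptions rather than the paper's pipeline.

```python
# A minimal sketch (assumed covariance model, not the paper's code): a linear
# optimal filter that extracts a smooth red-noise waveform from irregularly
# sampled, noisy measurements and interpolates/extrapolates it to other epochs.
import numpy as np

def red_cov(t1, t2, amp, lam):
    """Illustrative stationary covariance (exponential kernel) for the DM variations."""
    return amp ** 2 * np.exp(-np.abs(t1[:, None] - t2[None, :]) / lam)

def optimal_filter(t_obs, y, sigma_white, t_out, amp, lam):
    """Return the estimated waveform at epochs t_out given noisy data y at t_obs."""
    C_obs = red_cov(t_obs, t_obs, amp, lam) + sigma_white ** 2 * np.eye(len(t_obs))
    C_cross = red_cov(t_out, t_obs, amp, lam)
    return C_cross @ np.linalg.solve(C_obs, y)

# Irregularly sampled synthetic data: smooth red signal plus white measurement noise.
rng = np.random.default_rng(2)
t_obs = np.sort(rng.uniform(0.0, 10.0, 150))          # irregular observation epochs
amp, lam, sigma_white = 1.0, 2.0, 0.5
L = np.linalg.cholesky(red_cov(t_obs, t_obs, amp, lam) + 1e-10 * np.eye(len(t_obs)))
signal = L @ rng.standard_normal(len(t_obs))
y = signal + sigma_white * rng.standard_normal(len(t_obs))

t_out = np.linspace(0.0, 11.0, 200)                   # includes extrapolated epochs
waveform_hat = optimal_filter(t_obs, y, sigma_white, t_out, amp, lam)
print("estimated waveform at the last (extrapolated) epoch:", waveform_hat[-1])
```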

    Monte Carlo algorithms are very effective in finding the largest independent set in sparse random graphs

    The effectiveness of stochastic algorithms based on Monte Carlo dynamics in solving hard optimization problems is mostly unknown. Beyond the basic statement that at a dynamical phase transition ergodicity breaks and a Monte Carlo dynamics cannot correctly sample the probability distribution in times linear in the system size, there are almost no predictions or intuitions about the behavior of this class of stochastic dynamics. The situation is particularly intricate because, when a Monte Carlo-based algorithm is used for optimization, one is usually interested in its out-of-equilibrium behavior, which is very hard to analyse. Here we focus on the use of Parallel Tempering in the search for the largest independent set in a sparse random graph, showing that it can find solutions well beyond the dynamical threshold. Comparison with state-of-the-art message passing algorithms reveals that Parallel Tempering is definitely the best-performing algorithm, although a theory explaining its behavior is still lacking. Comment: 14 pages, 12 figures
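    A minimal sketch of Parallel Tempering on this problem is given below: replicas at several inverse temperatures explore hard-constraint independent-set configurations of a sparse Erdős–Rényi graph and periodically attempt replica exchanges. Graph size, temperatures, and sweep counts are illustrative, and this is not the paper's implementation.

```python
# A minimal sketch (illustrative parameters): Parallel Tempering for the
# maximum independent set on a sparse random graph, with energy E = -(set size).
import numpy as np

rng = np.random.default_rng(3)
N, mean_degree = 300, 3.0
p_edge = mean_degree / (N - 1)
adj = [[] for _ in range(N)]
for i in range(N):
    for j in range(i + 1, N):
        if rng.random() < p_edge:
            adj[i].append(j)
            adj[j].append(i)

betas = np.linspace(0.5, 5.0, 8)                      # inverse temperatures of the replicas
configs = [np.zeros(N, dtype=bool) for _ in betas]    # empty sets are independent

def sweep(conf, beta):
    """One Metropolis sweep; flips that would break independence are rejected."""
    for _ in range(N):
        i = rng.integers(N)
        if conf[i]:
            # Removing a node raises E by 1: accept with probability exp(-beta).
            if rng.random() < np.exp(-beta):
                conf[i] = False
        else:
            # Adding is allowed only if no neighbour is occupied; it lowers E, so accept.
            if not any(conf[j] for j in adj[i]):
                conf[i] = True

for step in range(500):
    for conf, beta in zip(configs, betas):
        sweep(conf, beta)
    # Replica exchange: attempt swaps between neighbouring temperatures.
    for k in range(len(betas) - 1):
        e_k, e_k1 = -configs[k].sum(), -configs[k + 1].sum()
        if rng.random() < min(1.0, np.exp((betas[k] - betas[k + 1]) * (e_k - e_k1))):
            configs[k], configs[k + 1] = configs[k + 1], configs[k]

best = max(int(c.sum()) for c in configs)
print(f"largest independent set found: {best} nodes ({best / N:.3f} N)")
```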

    Biased landscapes for random Constraint Satisfaction Problems

    The typical complexity of Constraint Satisfaction Problems (CSPs) can be investigated by means of random ensembles of instances. The latter exhibit many threshold phenomena besides their satisfiability phase transition, in particular a clustering or dynamic phase transition (related to the tree reconstruction problem) at which their typical solutions shatter into disconnected components. In this paper we study the evolution of this phenomenon under a bias that breaks the uniformity among solutions of one CSP instance, concentrating on the bicoloring of k-uniform random hypergraphs. We show that for small k the clustering transition can be delayed in this way to a higher density of constraints, and that this strategy has a positive impact on the performance of Simulated Annealing algorithms. We characterize the modest gain that can be expected in the large-k limit from the simple implementation of the biasing idea studied here. This paper also contains a contribution of a more methodological nature, consisting of a review and extension of the methods to determine numerically the discontinuous dynamic transition threshold. Comment: 32 pages, 16 figures
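    The sketch below shows a plain Simulated Annealing run for the bicoloring of a random k-uniform hypergraph, with a simple external-field term standing in for the solution-space bias studied in the paper; the form of the bias and all parameters are illustrative assumptions.

```python
# A minimal sketch (illustrative, not the paper's implementation): Simulated
# Annealing for hypergraph bicoloring, minimizing the number of monochromatic
# hyperedges, optionally with a simple bias field on the spins.
import numpy as np

rng = np.random.default_rng(4)
N, k, alpha = 300, 4, 6.0                     # variables, hyperedge size, constraint density
M = int(alpha * N)
edges = np.array([rng.choice(N, size=k, replace=False) for _ in range(M)])

def energy(spins, bias=0.0):
    """Number of monochromatic hyperedges, plus an illustrative bias field term."""
    vals = spins[edges]                               # M x k matrix of +/-1 colors
    mono = np.sum(np.abs(vals.sum(axis=1)) == k)      # hyperedges with all-equal colors
    return mono - bias * spins.sum()

def simulated_annealing(bias=0.0, sweeps=100, beta0=0.2, beta1=4.0):
    spins = rng.choice([-1, 1], size=N)
    E = energy(spins, bias)
    for beta in np.linspace(beta0, beta1, sweeps):    # linear annealing schedule
        for _ in range(N):
            i = rng.integers(N)
            spins[i] *= -1                            # propose a single spin flip
            E_new = energy(spins, bias)
            if E_new <= E or rng.random() < np.exp(-beta * (E_new - E)):
                E = E_new
            else:
                spins[i] *= -1                        # reject: undo the flip
    return spins, energy(spins)                       # report the unbiased cost

for bias in (0.0, 0.05):
    _, final_E = simulated_annealing(bias=bias)
    print(f"bias={bias}: {final_E} monochromatic hyperedges remain")
```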

    Multilevel Richardson-Romberg extrapolation

    We propose and analyze a Multilevel Richardson-Romberg (MLRR) estimator which combines the higher-order bias cancellation of the Multistep Richardson-Romberg method introduced in [Pa07] and the variance control resulting from the stratification introduced in the Multilevel Monte Carlo (MLMC) method (see [Hei01, Gi08]). Thus, in standard frameworks like discretization schemes of diffusion processes, a root mean squared error (RMSE) $\varepsilon > 0$ can be achieved with our MLRR estimator with a global complexity of $\varepsilon^{-2} \log(1/\varepsilon)$ instead of $\varepsilon^{-2} (\log(1/\varepsilon))^2$ with the standard MLMC method, at least when the weak error $\mathbf{E}[Y_h]-\mathbf{E}[Y_0]$ of the biased implemented estimator $Y_h$ can be expanded at any order in $h$ and $\|Y_h - Y_0\|_2 = O(h^{1/2})$. The MLRR estimator is then halfway between a regular MLMC and a virtual unbiased Monte Carlo. When the strong error is only $\|Y_h - Y_0\|_2 = O(h^{\beta/2})$ with $\beta < 1$, the gain of MLRR over MLMC becomes even more striking. We carry out numerical simulations to compare these estimators in two settings: vanilla and path-dependent option pricing by Monte Carlo simulation, and the less classical nested Monte Carlo simulation. Comment: 38 pages
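    For orientation, the sketch below implements the standard MLMC telescoping estimator for an Euler scheme of a geometric Brownian motion, i.e. the baseline whose complexity the MLRR estimator improves; the MLRR weights themselves are not reproduced, and all parameters (including the fixed number of paths per level) are illustrative.

```python
# A minimal sketch (illustrative parameters): standard MLMC estimator of
# E[f(X_T)] for an Euler-discretised geometric Brownian motion.
import numpy as np

rng = np.random.default_rng(5)
x0, mu, sigma, T = 1.0, 0.05, 0.2, 1.0
payoff = lambda x: np.maximum(x - 1.0, 0.0)           # vanilla call payoff

def euler_level_diff(n_paths, level, M=2):
    """Fine (h = T/M^level) and coarse (h*M) Euler schemes driven by the same
    Brownian increments; returns the per-path payoff difference P_l - P_{l-1}."""
    n_fine = M ** level
    dt = T / n_fine
    x_f = np.full(n_paths, x0)
    x_c = np.full(n_paths, x0)
    dw_coarse = np.zeros(n_paths)
    for step in range(n_fine):
        dw = np.sqrt(dt) * rng.standard_normal(n_paths)
        x_f += mu * x_f * dt + sigma * x_f * dw
        dw_coarse += dw
        if (step + 1) % M == 0 and level > 0:         # coarse step uses summed increments
            x_c += mu * x_c * (M * dt) + sigma * x_c * dw_coarse
            dw_coarse[:] = 0.0
    coarse = payoff(x_c) if level > 0 else np.zeros(n_paths)
    return payoff(x_f) - coarse

# Telescoping sum E[P_0] + sum_l E[P_l - P_{l-1}]; a real MLMC run would
# allocate the number of paths per level from the estimated variances.
levels, n_per_level = 5, 20_000
estimate = sum(euler_level_diff(n_per_level, level).mean() for level in range(levels + 1))
print("MLMC estimate of E[f(X_T)]:", estimate)
```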
