4 research outputs found

    Estimation from quantized Gaussian measurements: when and how to use dither

    Subtractive dither is a powerful method for removing the signal dependence of quantization noise for coarsely quantized signals. However, estimation from dithered measurements often naively applies the sample mean or midrange, even when the total noise is not well described by a Gaussian or uniform distribution. We show that the generalized Gaussian distribution approximately describes subtractively dithered, quantized samples of a Gaussian signal. Furthermore, a generalized Gaussian fit leads to simple estimators based on order statistics that match the performance of more complicated maximum likelihood estimators requiring iterative solvers. The order statistics-based estimators outperform both the sample mean and midrange for nontrivial sums of Gaussian and uniform noise. Additional analysis of the generalized Gaussian approximation yields rules of thumb for determining when and how to apply dither to quantized measurements. Specifically, we find subtractive dither to be beneficial when the ratio between the Gaussian standard deviation and the quantization interval length is roughly less than one-third. When that ratio is also greater than 0.822/K^0.930 for the number of measurements K > 20, the estimators we present are more efficient than the midrange.
    Accepted manuscript: https://arxiv.org/abs/1811.06856
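    As a rough illustration of the setup above, the sketch below simulates subtractive dithering of Gaussian measurements of a constant and compares the sample mean and midrange estimators. All parameter values are illustrative assumptions, and the paper's order statistics-based estimators are not reproduced; only the rule-of-thumb quantities quoted in the abstract appear in the final print.

        import numpy as np

        rng = np.random.default_rng(0)

        def quantize(x, delta):
            # Uniform mid-tread quantizer with step size delta.
            return delta * np.round(x / delta)

        def subtractive_dither(x, delta, rng):
            # Quantize x + u, then subtract the same dither u ~ Unif(-delta/2, delta/2),
            # leaving a total error of x's noise plus signal-independent uniform noise.
            u = rng.uniform(-delta / 2, delta / 2, size=x.shape)
            return quantize(x + u, delta) - u

        # Estimate a constant theta from K coarsely quantized, noisy measurements.
        theta, sigma, delta, K, trials = 0.37, 0.1, 1.0, 100, 10_000
        err_mean, err_mid = [], []
        for _ in range(trials):
            x = theta + sigma * rng.standard_normal(K)   # Gaussian measurement noise
            y = subtractive_dither(x, delta, rng)        # total noise: Gaussian + uniform
            err_mean.append(np.mean(y) - theta)
            err_mid.append(0.5 * (y.min() + y.max()) - theta)  # midrange estimator

        print("sample-mean RMSE:", np.sqrt(np.mean(np.square(err_mean))))
        print("midrange RMSE:   ", np.sqrt(np.mean(np.square(err_mid))))
        # Rules of thumb from the abstract: dither helps when sigma/delta < ~1/3, and
        # for K > 20 the paper's order-statistics estimators beat the midrange when
        # sigma/delta > 0.822 / K**0.930.
        print("sigma/delta =", sigma / delta, " threshold =", 0.822 / K**0.930)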

    Robust detail-preserving signal extraction

    We discuss robust filtering procedures for signal extraction from noisy time series. Particular attention is paid to the preservation of relevant signal details such as abrupt shifts. Moving averages and running medians are widely used but have shortcomings when large spikes (outliers) or trends occur. Modifications such as modified trimmed means and linear median hybrid filters combine advantages of both approaches, but they do not completely overcome the difficulties. Better solutions can be based on robust regression techniques, which now even work in real time because of increased computational power and faster algorithms. Reviewing previous work, we present filters for robust signal extraction and discuss their merits for preserving trends, abrupt shifts, and local extremes, as well as for the removal of outliers.
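    To make the trade-off concrete, the following sketch contrasts a moving average and a running median on a series containing both an isolated spike and an abrupt level shift. The synthetic data and window length are illustrative; neither filter is taken from the paper itself.

        import numpy as np

        rng = np.random.default_rng(1)

        # Synthetic series: constant level, an abrupt shift at t=100, one large spike.
        n = 200
        signal = np.where(np.arange(n) < 100, 0.0, 5.0)  # abrupt level shift
        x = signal + 0.3 * rng.standard_normal(n)
        x[60] += 15.0                                    # isolated outlier (spike)

        def moving_average(x, w):
            # Centered moving average with odd window length w, edge-padded.
            pad = w // 2
            xp = np.pad(x, pad, mode="edge")
            return np.convolve(xp, np.ones(w) / w, mode="valid")

        def running_median(x, w):
            # Centered running median with odd window length w, edge-padded.
            pad = w // 2
            xp = np.pad(x, pad, mode="edge")
            return np.array([np.median(xp[i:i + w]) for i in range(len(x))])

        w = 11
        ma, rm = moving_average(x, w), running_median(x, w)
        # The spike leaks into the moving average but barely moves the median,
        # and the median tracks the shift at t=100 with less smearing.
        print("error at spike: MA %.2f  RM %.2f" % (abs(ma[60] - signal[60]), abs(rm[60] - signal[60])))
        print("error at shift: MA %.2f  RM %.2f" % (abs(ma[100] - signal[100]), abs(rm[100] - signal[100])))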

    Probabilistic modeling for single-photon lidar

    Lidar is an increasingly prevalent technology for depth sensing, with applications including scientific measurement and autonomous navigation systems. While conventional systems require hundreds or thousands of photon detections per pixel to form accurate depth and reflectivity images, recent results for single-photon lidar (SPL) systems using single-photon avalanche diode (SPAD) detectors have shown accurate images formed from as few as one photon detection per pixel, even when half of those detections are due to uninformative ambient light. The keys to such photon-efficient image formation are twofold: (i) a precise model of the probability distribution of photon detection times, and (ii) prior beliefs about the structure of natural scenes. Reducing the number of photons needed for accurate image formation enables faster, farther, and safer acquisition. Still, such photon-efficient systems are often limited to laboratory conditions more favorable than the real-world settings in which they would be deployed. This thesis focuses on expanding the photon detection time models to address challenging imaging scenarios and the effects of non-ideal acquisition equipment. The processing derived from these enhanced models, sometimes modified jointly with the acquisition hardware, surpasses the performance of state-of-the-art photon counting systems.

    We first address the problem of high levels of ambient light, which causes traditional depth and reflectivity estimators to fail. We achieve robustness to strong ambient light through a rigorously derived window-based censoring method that separates signal and background light detections. Spatial correlations both within and between depth and reflectivity images are encoded in superpixel constructions, which fill in holes caused by the censoring. Accurate depth and reflectivity images can then be formed with an average of 2 signal photons and 50 background photons per pixel, outperforming methods previously demonstrated at a signal-to-background ratio of 1.

    We next approach the problem of coarse temporal resolution for photon detection time measurements, which limits the precision of depth estimates. To achieve sub-bin depth precision, we propose a subtractively dithered lidar implementation, which uses changing synchronization delays to shift the time-quantization bin edges. We examine the generic noise model resulting from dithering Gaussian-distributed signals and introduce a generalized Gaussian approximation to the noise distribution, along with simple order statistics-based depth estimators that take advantage of this model. Additional analysis of the generalized Gaussian approximation yields rules of thumb for determining when and how to apply dither to quantized measurements. We implement a dithered SPL system and propose a modification for non-Gaussian pulse shapes that outperforms the Gaussian assumption in practical experiments. The resulting dithered-lidar architecture could be used to design SPAD array detectors that form precise depth estimates despite relaxed temporal quantization constraints.

    Finally, SPAD dead time effects have been considered a major limitation for fast data acquisition in SPL, since a commonly adopted approach to dead time mitigation is to operate in the low-flux regime, where dead time effects can be ignored. We show that the empirical distribution of detection times converges to the stationary distribution of a Markov chain and demonstrate improvements in depth estimation and histogram correction using our Markov chain model. An example simulation shows that correctly compensating for dead times in a high-flux measurement can yield a 20-fold speedup of data acquisition. The resulting accuracy at high photon flux could enable real-time applications such as autonomous navigation.
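    To illustrate the dead time distortion the thesis addresses, the sketch below simulates a free-running SPAD observing a Gaussian return pulse plus uniform ambient background at high flux. All parameter values are illustrative assumptions, and the thesis's Markov chain correction is not reproduced; the simulation only shows how dead time biases the empirical detection-time histogram away from the true arrival-time density.

        import numpy as np

        rng = np.random.default_rng(2)

        # Illustrative parameters: repetition period, pulse, background, dead time.
        T = 100.0                      # laser repetition period (ns)
        t_pulse, sigma = 60.0, 1.0     # return pulse centered at 60 ns
        bg_mean, sig_mean = 1.0, 1.0   # mean background / signal photons per period
        dead_time = 50.0               # detector dead time (ns)

        def arrivals(rng):
            # All photon arrival times in one period: uniform background + Gaussian pulse.
            bg = rng.uniform(0, T, size=rng.poisson(bg_mean))
            sig = rng.normal(t_pulse, sigma, size=rng.poisson(sig_mean))
            return np.sort(np.concatenate([bg, sig]))

        all_arrivals, detections = [], []
        next_ready = 0.0               # absolute time at which the detector re-arms
        for cycle in range(100_000):
            t0 = cycle * T
            for t in arrivals(rng):
                all_arrivals.append(t)
                if t0 + t >= next_ready:          # detector is live: record, go dead
                    detections.append(t)
                    next_ready = t0 + t + dead_time

        early = lambda v: np.mean(np.asarray(v) < T / 2)
        print("fraction of arrivals   in first half-period: %.3f" % early(all_arrivals))
        print("fraction of detections in first half-period: %.3f" % early(detections))
        # At high flux, dead time couples successive periods and favors earlier
        # photons, so the detection-time histogram is a skewed version of the true
        # arrival density; the thesis models this coupling as a Markov chain.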

    Modelling of propagation path loss using adaptive hybrid artificial neural network approach for outdoor environments.

    Doctor of Philosophy in Electronic Engineering, University of KwaZulu-Natal, Durban, 2018.

    Prediction of signal power loss between transmitter and receiver with minimal error is an important issue in telecommunication network planning and optimization. Basic conventional models available in the literature for signal power loss prediction include the free-space, Lee, COST 231-Hata, Hata, Walfisch-Bertoni, Walfisch-Ikegami, dominant path, and ITU models. However, due to the poor prediction accuracy and computational inefficiency of these traditional models on propagated signal data in different cellular network environments, many researchers have shifted their focus to Artificial Neural Network (ANN) models. Different neural network architectures and models exist in the literature, but the most popular among them is the Multi-Layer Perceptron (MLP) ANN, which can be attributed to its flexible architecture and comparatively clear algorithm. Though standard MLP networks have been employed to model and predict different signal data, they suffer from two fundamental drawbacks: conventional MLP networks perform poorly in handling noisy data, and they lack the capability to deal with incoherent datasets that conflict with smoothness assumptions.

    In this work, an adaptive neural network predictor that combines an MLP with an Adaptive Linear Element (ADALINE) is first developed for enhanced signal power prediction. This is followed by a predictive model built on an MLP network with a vector order-statistic filter as a pre-processing stage, for improved prediction of measured signal power loss in different micro-cellular urban environments. The prediction accuracy of the proposed hybrid adaptive neural network predictor has been tested and evaluated using experimental field strength data acquired from a Long Term Evolution (LTE) radio network environment with mixed residential, commercial, and cluttered building structures. In terms of first-order statistical performance metrics (correlation coefficient, root mean squared error, standard deviation, and mean absolute error), the proposed adaptive hybrid approach provides better prediction accuracy than the conventional MLP ANN approach. The superior performance of the hybrid neural predictor can be attributed to its capability to learn, adaptively respond to, and predict the fluctuating patterns of the reference propagation loss data during training.
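    As a baseline point of comparison, the sketch below trains a plain MLP regressor on synthetic log-distance path loss data with lognormal shadowing and reports the same kinds of metrics the thesis uses. The data model, features, and hyperparameters are illustrative assumptions; the thesis's hybrid MLP-ADALINE predictor and its measured LTE data are not reproduced here.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import mean_squared_error, mean_absolute_error

        rng = np.random.default_rng(3)

        # Synthetic stand-in for drive-test data: path loss (dB) versus log-distance
        # with lognormal shadow fading (illustrative, not the thesis measurements).
        n = 2000
        d = rng.uniform(0.05, 5.0, n)            # Tx-Rx distance (km)
        pl = 128.1 + 37.6 * np.log10(d)          # urban macro-style distance law
        pl += 8.0 * rng.standard_normal(n)       # shadow fading (dB)

        X = np.log10(d).reshape(-1, 1)
        X_tr, X_te, y_tr, y_te = train_test_split(X, pl, test_size=0.25, random_state=0)

        mlp = MLPRegressor(hidden_layer_sizes=(20, 20), activation="relu",
                           max_iter=2000, random_state=0)
        mlp.fit(X_tr, y_tr)
        pred = mlp.predict(X_te)

        # First-order performance metrics of the kind used in the thesis.
        print("RMSE: %.2f dB" % np.sqrt(mean_squared_error(y_te, pred)))
        print("MAE:  %.2f dB" % mean_absolute_error(y_te, pred))
        print("Corr: %.3f" % np.corrcoef(y_te, pred)[0, 1])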