
    Robust State Space Filtering under Incremental Model Perturbations Subject to a Relative Entropy Tolerance

    This paper considers robust filtering for a nominal Gaussian state-space model when a relative entropy tolerance is applied to each time increment of the dynamical model. The problem is formulated as a dynamic minimax game in which the maximizer adopts a myopic strategy. This game is shown to admit a saddle point whose structure is characterized by applying and extending results presented earlier in [1] for static least-squares estimation. The resulting minimax filter takes the form of a risk-sensitive filter with a time-varying risk-sensitivity parameter, which depends on the tolerance bound applied to the model dynamics and observations at the corresponding time index. The least-favorable model is constructed and used to evaluate the performance of alternative filters. Simulations comparing the proposed risk-sensitive filter to a standard Kalman filter show a significant performance advantage when applied to the least-favorable model, and only a small performance loss for the nominal model.
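    As a hedged illustration of the kind of recursion the abstract describes, here is a minimal Python sketch of a risk-sensitive Kalman-type update with a per-step risk-sensitivity parameter theta. The function name, the linear model matrices (A, C, Q, R), and the specific covariance-inflation form are assumptions for illustration, not the paper's exact construction.

```python
import numpy as np

def risk_sensitive_step(x, P, y, A, C, Q, R, theta):
    """One time step of a risk-sensitive Kalman-type filter.

    theta is the time-varying risk-sensitivity parameter; theta = 0
    recovers the standard Kalman filter.  The inflation
    P -> (P^-1 - theta*I)^-1 is the classical exponential-cost form
    and requires P^-1 - theta*I to be positive definite.
    """
    n = x.shape[0]
    # Risk modification: inflate the prior covariance.
    P_mod = np.linalg.inv(np.linalg.inv(P) - theta * np.eye(n))
    # Measurement update using the inflated covariance.
    S = C @ P_mod @ C.T + R
    K = P_mod @ C.T @ np.linalg.inv(S)
    x_filt = x + K @ (y - C @ x)
    P_filt = P_mod - K @ C @ P_mod
    # Time update (prediction for the next step).
    return A @ x_filt, A @ P_filt @ A.T + Q
```

    In the paper's setting, theta would vary with the relative-entropy tolerance assigned to each time increment; a constant theta reduces to the usual risk-sensitive filter.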

    Image formation in synthetic aperture radio telescopes

    Next-generation radio telescopes will be much larger and more sensitive, will have much wider observation bandwidths, and will be capable of pointing multiple beams simultaneously. Obtaining the sensitivity, resolution and dynamic range supported by the receivers requires the development of new signal processing techniques for array and atmospheric calibration, as well as new imaging techniques that are both more accurate and computationally efficient, since data volumes will be much larger. This paper provides a tutorial overview of existing image formation techniques and outlines some of the future directions needed for information extraction from future radio telescopes. We describe the imaging process from the measurement equation through deconvolution, both as a Fourier inversion problem and as an array processing estimation problem. The latter formulation enables the development of more advanced techniques based on state-of-the-art array processing. We demonstrate the techniques on simulated and measured radio telescope data. Comment: 12 pages.
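    To make the Fourier-inversion view concrete, here is a minimal Python sketch of direct (ungridded) dirty-image formation from visibilities; the variable names are assumptions. Production imagers instead grid the visibilities and use an FFT, with weighting and w-term corrections.

```python
import numpy as np

def dirty_image(vis, u, v, l_grid, m_grid):
    """Direct Fourier inversion of visibilities onto an (l, m) grid.

    vis            : complex visibility samples V(u_k, v_k)
    u, v           : baseline coordinates in wavelengths
    l_grid, m_grid : 2-D arrays of direction cosines for each pixel
    """
    img = np.zeros(l_grid.shape)
    for V_k, u_k, v_k in zip(vis, u, v):
        # Each visibility contributes one Fourier component to the image.
        img += np.real(V_k * np.exp(2j * np.pi * (u_k * l_grid + v_k * m_grid)))
    return img / len(vis)
```

    The cost is O(N_vis * N_pix) per image, which is one reason the paper emphasizes computationally efficient techniques for the much larger data volumes of future instruments.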

    Robust Hypothesis Testing with a Relative Entropy Tolerance

    This paper considers the design of a minimax test for two hypotheses where the actual probability densities of the observations are located in neighborhoods obtained by placing a bound on the relative entropy between the actual and nominal densities. The minimax problem admits a saddle point, which is characterized. The robust test applies a nonlinear transformation that flattens the nominal likelihood ratio in the vicinity of one. Results are illustrated by considering the transmission of binary data in the presence of additive noise. Comment: 14 pages, 5 figures; submitted to the IEEE Transactions on Information Theory, July 2007; revised April 200
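    The following Python sketch shows a stylized version of such a test: likelihood ratios in a window around one are flattened to one before the log-likelihoods are summed. The dead-zone width c and threshold tau are illustrative assumptions; the paper derives the exact nonlinear transformation from the relative-entropy tolerance.

```python
import numpy as np

def flattened_lr_test(x, p0, p1, c=1.5, tau=1.0):
    """Stylized robust likelihood-ratio test with a flattened LR.

    p0, p1 : callables returning nominal density values at the samples
    c      : width (in ratio terms) of the flattening region around 1
    tau    : decision threshold
    Returns 1 if H1 is decided, otherwise 0.
    """
    lr = p1(x) / p0(x)
    # Dead zone: ratios close to one are treated as uninformative,
    # which limits the influence of small model mismatches.
    lr_robust = np.where((lr > 1.0 / c) & (lr < c), 1.0, lr)
    stat = np.sum(np.log(lr_robust))
    return int(stat > np.log(tau))
```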

    GME versus OLS - Which is the best to estimate utility functions?

    This paper estimates von Neumann and Morgenstern utility functions, comparing the generalized maximum entropy (GME) estimator with ordinary least squares (OLS), using data obtained by utility elicitation methods. It thus provides a comparison of the performance of the two estimators in a real-data, small-sample setting. The results confirm those obtained for small samples through Monte Carlo simulations. The difference between the two estimators is small, and it decreases as the width of the parameter support vector increases. Moreover, the GME estimator is more precise than the OLS estimator. Overall, the results suggest that GME is an interesting alternative to OLS in the estimation of utility functions when the data are generated by utility elicitation methods.
    Keywords: Generalized maximum entropy; maximum entropy principle; von Neumann and Morgenstern utility; utility elicitation.
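    For readers unfamiliar with GME, the Python sketch below shows the standard reparameterization: each coefficient is a convex combination over a user-chosen support vector, each error over another, and the joint entropy of the weights is maximized subject to the data constraints. The support vectors, the SLSQP solver choice, and all names are illustrative assumptions, not the paper's code.

```python
import numpy as np
from scipy.optimize import minimize

def gme_regression(X, y, z_support, v_support):
    """Generalized maximum entropy estimate of beta in y = X beta + e.

    Each beta_j is written as z_support @ p_j and each error e_i as
    v_support @ w_i, where p_j and w_i are probability vectors; the
    joint entropy of all weights is maximized subject to the data.
    """
    n, k = X.shape
    M, J = len(z_support), len(v_support)

    def unpack(theta):
        p = theta[:k * M].reshape(k, M)
        w = theta[k * M:].reshape(n, J)
        return p, w

    def neg_entropy(theta):
        # Minimizing sum(theta * log(theta)) maximizes the entropy.
        return np.sum(theta * np.log(theta + 1e-12))

    def data_residuals(theta):
        p, w = unpack(theta)
        return y - X @ (p @ z_support) - w @ v_support

    cons = [
        {"type": "eq", "fun": data_residuals},                          # fit the data
        {"type": "eq", "fun": lambda t: unpack(t)[0].sum(axis=1) - 1},  # each p_j sums to 1
        {"type": "eq", "fun": lambda t: unpack(t)[1].sum(axis=1) - 1},  # each w_i sums to 1
    ]
    theta0 = np.concatenate([np.full(k * M, 1.0 / M), np.full(n * J, 1.0 / J)])
    res = minimize(neg_entropy, theta0, method="SLSQP",
                   bounds=[(0.0, 1.0)] * theta0.size, constraints=cons)
    p, _ = unpack(res.x)
    return p @ z_support  # GME coefficient estimates
```

    An OLS baseline for the same data is np.linalg.lstsq(X, y, rcond=None)[0], which makes the width-of-support comparison described in the abstract easy to reproduce.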

    Parametric high resolution techniques for radio astronomical imaging

    The increased sensitivity of future radio telescopes will result in requirements for higher dynamic range within the image, as well as better resolution and immunity to interference. In this paper we propose a new matrix formulation of the imaging equation for the cases of non-coplanar arrays and polarimetric measurements. We then improve our parametric imaging techniques in terms of resolution and estimation accuracy, both by enhancing the MVDR parametric imaging with alternative dirty images and by introducing better power estimates based on least squares with positive semidefinite constraints. We also discuss the use of robust Capon beamforming and semidefinite programming for solving the self-calibration problem. Additionally, we provide a statistical analysis of the bias of the MVDR beamformer for the case of a moving array, which serves as a first step in analyzing iterative approaches such as CLEAN and the techniques proposed in this paper. Finally, we demonstrate a full deconvolution process based on the parametric imaging techniques and show its improved resolution and sensitivity compared to the CLEAN method. Comment: To appear in the IEEE Journal of Selected Topics in Signal Processing, special issue on Signal Processing for Astronomy and Space Research. 30 pages.
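    As an assumed sketch of one ingredient, the MVDR "dirty image" replaces the classical pixel power a^H R a with the Capon estimate 1 / (a^H R^-1 a), which suppresses sidelobe leakage from strong sources. The names and the plain matrix inverse below are illustrative; the paper's least-squares power estimates with positive semidefinite constraints go further.

```python
import numpy as np

def mvdr_image(R, steering_vectors):
    """MVDR (Capon) power estimates over candidate source directions.

    R                : p x p array covariance matrix from the correlator
    steering_vectors : iterable of length-p array responses a(s),
                       one per image pixel
    """
    R_inv = np.linalg.inv(R)
    # Capon power per pixel: 1 / (a^H R^-1 a).
    return np.array([1.0 / np.real(a.conj() @ R_inv @ a)
                     for a in steering_vectors])
```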

    Robust scaling in fusion science: case study for the L-H power threshold

    In regression analysis for deriving scaling laws in the context of fusion studies, standard regression methods are usually applied, of which ordinary least squares (OLS) is the most popular. However, concerns have been raised about several assumptions underlying OLS in its application to fusion data. More sophisticated statistical techniques are available, but they are not widely used in the fusion community, and the predictions made by scaling laws may vary significantly depending on the particular regression technique. We have therefore developed a new regression method, which we call geodesic least squares regression (GLS), that is robust in the presence of significant uncertainty in both the data and the regression model. The method is based on probabilistic modeling of all variables involved in the scaling expression, using suitable probability distributions and a natural similarity measure between them (the geodesic distance). In this work we revisit the scaling law for the power threshold of the L-to-H transition in tokamaks, using data from the multi-machine ITPA databases. Depending on the model assumptions, OLS can yield substantially different predictions of the power threshold for ITER; in contrast, GLS regression delivers consistent results. Given the ubiquity and importance of scaling laws and parametric dependence studies in fusion research, GLS regression is therefore proposed as a robust and easily implemented alternative to classic regression techniques.
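    A minimal Python sketch of the idea follows, assuming a power-law scaling form and Gaussian observed and modeled distributions; for univariate Gaussians the Fisher-Rao geodesic distance has the closed form used below. The function names and the fixed model uncertainty are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def rao_distance(mu1, s1, mu2, s2):
    """Fisher-Rao geodesic distance between N(mu1, s1^2) and N(mu2, s2^2)."""
    arg = 1.0 + ((mu1 - mu2) ** 2 / 2.0 + (s1 - s2) ** 2) / (2.0 * s1 * s2)
    return np.sqrt(2.0) * np.arccosh(arg)

def gls_fit(x, y, sigma_obs, sigma_mod, beta0):
    """Geodesic least squares for a power-law scaling y = b0 * x**b1.

    Minimizes the summed squared Rao distance between the observed
    Gaussians N(y_i, sigma_obs_i^2) and the modeled Gaussians
    N(b0 * x_i**b1, sigma_mod^2).  x must be positive.
    """
    def cost(beta):
        pred = beta[0] * x ** beta[1]
        return np.sum(rao_distance(y, sigma_obs, pred, sigma_mod) ** 2)
    return minimize(cost, beta0, method="Nelder-Mead").x
```

    Unlike OLS, the mismatch between the observed and modeled uncertainties enters the objective directly, which is what makes the fit robust when both the data and the regression model are uncertain.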