
    Approximate Confidence Regions for Minimax-Linear Estimators

    Minimax estimation is based on the idea that the quadratic risk function for the estimator β is not minimized over the entire parameter space R^K, but only over a region B(β) that is restricted by a priori knowledge. If all restrictions define a convex region, this region can often be enclosed in an ellipsoid of the form B(β) = { β : β' T β ≤ r }. The ellipsoid has a larger volume than the cuboid, so the transition to an ellipsoid as a priori information represents a weakening of the restrictions, but it allows easier mathematical handling. Deriving the linear minimax estimator, we see that it is biased and not operational. Using an approximation of the non-central χ^2-distribution and prior information on the variance, we obtain an operational solution, which is compared with the OLS estimator with respect to the size of the corresponding confidence intervals.
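
    The linear minimax estimator under an ellipsoidal restriction has a closed ridge-type form. Purely as an illustration (not taken from the paper), the sketch below computes an estimator of the form b* = (X'X + (σ^2/r) T)^-1 X'y for an assumed ellipsoid (T, r) and known σ, and compares approximate standard errors with OLS while neglecting the bias of b*; all data and parameter values are invented.

```python
# Sketch: ridge-type linear minimax estimator b* = (X'X + (sigma^2/r) T)^{-1} X'y
# under the ellipsoid restriction {beta : beta' T beta <= r}, compared with OLS.
# Interval widths below use the dispersion only and neglect the bias of b*.
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 3
X = rng.normal(size=(n, k))
beta_true = np.array([1.0, -0.5, 0.25])
sigma = 1.0                                    # assumed known
y = X @ beta_true + rng.normal(scale=sigma, size=n)

T = np.eye(k)                                  # shape matrix of the a priori ellipsoid
r = 2.0                                        # its "radius"

XtX = X.T @ X
b_ols = np.linalg.solve(XtX, X.T @ y)          # ordinary least squares
D = np.linalg.inv(XtX + (sigma**2 / r) * T)
b_mm = D @ X.T @ y                             # linear minimax estimator

se_ols = sigma * np.sqrt(np.diag(np.linalg.inv(XtX)))
se_mm = sigma * np.sqrt(np.diag(D @ XtX @ D))  # Cov(b_mm) = sigma^2 D X'X D
print("estimates (OLS, minimax):", b_ols, b_mm)
print("OLS     standard errors:", se_ols)
print("minimax standard errors:", se_mm)       # shrinkage yields narrower intervals
```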

    ANALYTICAL QUALITY ASSESSMENT OF ITERATIVELY REWEIGHTED LEAST-SQUARES (IRLS) METHOD

    The iteratively reweighted least-squares (IRLS) technique has been widely employed in the geodetic and geophysical literature. The reliability measures are important diagnostic tools for inferring the strength of the model validation. An exact analytical method is adopted to obtain insights into how much iterative reweighting can affect the quality indicators. Theoretical analyses and numerical results show that, when the downweighting procedure is performed, (1) the precision, all kinds of dilution of precision (DOP) metrics and the minimal detectable bias (MDB) will become larger; (2) the variations of the bias-to-noise ratio (BNR) are involved; and (3) all these results coincide with those obtained by the first-order approximation method.
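
    As a hedged illustration of the downweighting loop discussed above (a generic Huber-weighted IRLS, not necessarily the exact scheme analysed in the paper), the sketch below shows how the weights produced by the reweighting iterations enter the cofactor matrix from which precision and reliability indicators are derived; the design matrix, data and tuning constant are invented.

```python
# Sketch: generic iteratively reweighted least squares with Huber-type weights
# for y = A x + e. The final weight matrix W enters the cofactor matrix
# (A' W A)^{-1}, from which precision/DOP-type quality measures follow.
import numpy as np

def irls(A, y, k=1.5, tol=1e-8, max_iter=50):
    m, n = A.shape
    w = np.ones(m)                                   # start from unit weights
    x = np.zeros(n)
    for _ in range(max_iter):
        W = np.diag(w)
        x_new = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
        v = y - A @ x_new                            # residuals
        s = 1.4826 * np.median(np.abs(v)) + 1e-12    # robust scale (MAD)
        u = np.abs(v) / s
        w = np.where(u <= k, 1.0, k / np.maximum(u, 1e-12))  # Huber weights
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    return x, w

rng = np.random.default_rng(1)
A = np.column_stack([np.ones(30), rng.normal(size=30)])
y = A @ np.array([2.0, 0.5]) + rng.normal(scale=0.1, size=30)
y[3] += 2.0                                          # inject one gross error
x_hat, w_hat = irls(A, y)
Qx = np.linalg.inv(A.T @ np.diag(w_hat) @ A)         # cofactor after reweighting
print(x_hat, np.sqrt(np.diag(Qx)))
```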

    DESIGN OF GEODETIC NETWORKS BASED ON OUTLIER IDENTIFICATION CRITERIA: AN EXAMPLE APPLIED TO THE LEVELING NETWORK

    We present a numerical simulation method for designing geodetic networks. The quality criterion considered is based on the power of the test of the data snooping procedure, i.e. the probability that data snooping correctly identifies an outlier. In general, the power of the test is defined theoretically; however, with the advent of fast computers and large data storage systems, it can be estimated by numerical simulation. Here, the number of experiments in which the data snooping procedure correctly identifies the outlier is counted using Monte Carlo simulation. If the network configuration does not meet the reliability criterion in some part, it can be improved by adding the required observations to the surveying plan. The method does not use real observations; it depends only on the geometrical configuration of the network, the uncertainty of the observations and the size of the outlier. The proposed method is demonstrated by a practical application to a simulated leveling network. The results showed the need for five additional observations between adjacent stations, and adding these new observations improved the internal reliability by approximately 18%. Therefore, the final designed network must be able to identify outliers and resist those that remain undetected, according to the adopted probability levels.
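
    The Monte Carlo estimate of the power of data snooping can be sketched as follows; the design matrix, standard deviation, outlier size and significance level below are illustrative assumptions, not the paper's leveling network.

```python
# Sketch: estimate the power of data snooping by Monte Carlo. An outlier of
# size 'grad' is injected into observation 'idx'; a run counts as a success
# when that observation has the largest |w| statistic and exceeds the critical
# value (uncorrelated, equally weighted observations assumed).
import numpy as np
from scipy.stats import norm

def snooping_power(A, sigma, idx, grad, alpha=0.001, trials=5000, seed=0):
    m, n = A.shape
    P = np.eye(m) / sigma**2
    Qx = np.linalg.inv(A.T @ P @ A)
    Qv = sigma**2 * np.eye(m) - A @ Qx @ A.T     # covariance of the residuals
    k = norm.ppf(1 - alpha / 2)                  # critical value of the w-test
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        e = rng.normal(scale=sigma, size=m)
        e[idx] += grad                           # contaminate one observation
        v = (A @ Qx @ A.T @ P - np.eye(m)) @ e   # residuals (true signal cancels)
        w = np.abs(v) / np.sqrt(np.diag(Qv))     # normalized residuals
        if w.argmax() == idx and w[idx] > k:
            hits += 1
    return hits / trials

# toy leveling-type design: 4 height differences, 3 unknown heights
A = np.array([[1, 0, 0], [-1, 1, 0], [0, -1, 1], [0, 0, 1]], float)
print(snooping_power(A, sigma=0.003, idx=1, grad=0.02))
```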

    Theory of carrier phase ambiguity resolution

    Carrier phase ambiguity resolution is the key to high precision Global Navigation Satellite System (GNSS) positioning and navigation. It applies to a great variety of current and future models of GPS, modernized GPS and Galileo. A proper handling of carrier phase ambiguity resolution requires a proper understanding of the underlying theory of integer inference. In this contribution a brief review is given of the probabilistic theory of integer ambiguity estimation. We describe the concept of ambiguity pull-in regions, introduce the class of admissible integer estimators, determine their probability mass functions and show how their variability affects the uncertainty in the so-called ‘fixed’ baseline solution. The theory is worked out in more detail for integer least-squares and integer bootstrapping. It is shown that the integer least-squares principle maximizes the probability of correct integer estimation. Sharp and easy-to-compute bounds are given for both the ambiguity success rate and the baseline’s probability of concentration. Finally, the probability density function of the ambiguity residuals is determined. This allows one, for the first time, to formulate rigorous tests for the integerness of the parameters.
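
    For the bootstrapping part of the theory, a minimal sketch (with an invented float solution and covariance matrix) is given below: sequential conditional rounding of the float ambiguities and the closed-form bootstrapping success rate P = prod_i (2Φ(1/(2σ_{i|I})) - 1), which also lower-bounds the integer least-squares success rate.

```python
# Sketch: integer bootstrapping by sequential conditional rounding, starting
# from the last ambiguity, together with its exact success rate
#   P = prod_i ( 2*Phi(1/(2*sigma_{i|I})) - 1 ).
import numpy as np
from scipy.stats import norm

def integer_bootstrap(a_float, Q):
    a = np.array(a_float, float)
    Qc = np.array(Q, float)                 # running conditional covariance
    n = a.size
    z = np.zeros(n, dtype=int)
    cond_std = np.zeros(n)
    for i in reversed(range(n)):
        cond_std[i] = np.sqrt(Qc[i, i])     # sigma_{i|I}: conditional std. dev.
        z[i] = int(np.round(a[i]))
        if i > 0:                           # condition the rest on the fixed value
            gain = Qc[:i, i] / Qc[i, i]
            a[:i] -= gain * (a[i] - z[i])
            Qc[:i, :i] -= np.outer(gain, Qc[:i, i])
    p_success = np.prod(2 * norm.cdf(1 / (2 * cond_std)) - 1)
    return z, p_success

Q = np.array([[0.090, 0.045, 0.030],        # invented ambiguity covariance
              [0.045, 0.070, 0.020],
              [0.030, 0.020, 0.060]])
a_float = np.array([2.31, -1.08, 5.96])     # invented float ambiguities
z_fixed, p = integer_bootstrap(a_float, Q)
print(z_fixed, p)
```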

    Generalized reliability theory for multiple outliers: presentation, discussion and comparison with the conventional theory

    After an adjustment of observations by the least-squares method has been carried out, non-random errors in the observations can be detected and identified by means of statistical tests. Reliability theory provides suitable measures to quantify the smallest detectable error in an observation and its influence on the adjusted parameters when it remains undetected. Conventional reliability theory was developed for conventional testing procedures, such as data snooping, which assume that only one observation at a time is contaminated by gross errors. Recently, generalized reliability measures have been developed for statistical tests that assume the simultaneous presence of multiple contaminated observations (outliers). The aim of this work is to present, apply and discuss the generalized reliability theory for multiple outliers. Besides the theoretical formulation, this paper also presents experiments carried out on a GPS (Global Positioning System) network, in which intentional errors were inserted into some observations and reliability measures and statistical tests were computed using the multiple-outlier approach. Comparisons with the conventional reliability theory are also made. Finally, the discussions and conclusions drawn from these experiments are presented.
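
    A minimal numerical sketch of these reliability measures is given below, with an invented design and weight matrix: single-outlier minimal detectable biases (MDBs) from redundancy numbers, and one common generalized two-outlier measure based on the matrix C'PQvPC (worst case taken as its smallest eigenvalue). The non-centrality parameter used is the one-dimensional value; a q-dimensional test would strictly require the non-central chi-square analogue.

```python
# Sketch: conventional (single-outlier) MDBs via redundancy numbers, and a
# generalized worst-case MDB for two simultaneous outliers. Diagonal weight
# matrix P assumed for the single-outlier formula; lambda0 is the 1-D value.
import numpy as np
from scipy.stats import norm

def reliability(A, P, sigma0=1.0, alpha=0.001, power=0.80):
    lam0 = (norm.ppf(1 - alpha / 2) + norm.ppf(power)) ** 2
    Q = np.linalg.inv(P)
    Qv = Q - A @ np.linalg.inv(A.T @ P @ A) @ A.T        # residual cofactors
    r = np.diag(Qv @ P)                                  # redundancy numbers
    mdb_single = sigma0 * np.sqrt(lam0 / (np.diag(P) * r))
    return lam0, Qv, r, mdb_single

def mdb_pair(Qv, P, i, j, lam0, sigma0=1.0):
    C = np.zeros((P.shape[0], 2))
    C[i, 0] = 1.0
    C[j, 1] = 1.0
    M = C.T @ P @ Qv @ P @ C                             # 2x2 reliability matrix
    return sigma0 * np.sqrt(lam0 / np.linalg.eigvalsh(M).min())  # worst direction

A = np.array([[1, 0], [-1, 1], [0, -1], [1, 1]], float)  # invented design
P = np.diag([4.0, 4.0, 1.0, 1.0])                        # invented weights
lam0, Qv, r, mdb1 = reliability(A, P)
print("redundancy numbers:", r)
print("single-outlier MDBs:", mdb1)
print("worst-case MDB for obs (0, 1) jointly:", mdb_pair(Qv, P, 0, 1, lam0))
```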

    Non-stationary covariance function modelling in 2D least-squares collocation

    Standard least-squares collocation (LSC) assumes 2D stationarity and 3D isotropy, and relies on a covariance function to account for spatial dependence in the observed data. However, the assumption that the spatial dependence is constant throughout the region of interest may sometimes be violated. Assuming a stationary covariance structure can result in over-smoothing of, e.g., the gravity field in mountains and under-smoothing in great plains. We introduce the kernel convolution method from spatial statistics for non-stationary covariance structures, and demonstrate its advantage for dealing with non-stationarity in geodetic data. We then compare stationary and non-stationary covariance functions in 2D LSC on the empirical example of gravity anomaly interpolation near the Darling Fault, Western Australia, where the field is anisotropic and non-stationary. The results with non-stationary covariance functions are better than standard LSC in terms of formal errors and cross-validation against data not used in the interpolation, demonstrating that the use of non-stationary covariance functions can improve upon standard (stationary) LSC.
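
    A sketch of the kernel convolution idea, with an invented spatially varying correlation length rather than the Darling Fault data, is given below: a closed-form Paciorek/Schervish-type non-stationary Gaussian covariance is plugged into the usual LSC predictor s_hat = C_ps (C_ss + D)^-1 l.

```python
# Sketch: 2D least-squares collocation with a non-stationary Gaussian covariance
# from the kernel convolution construction; ell(x) is a hypothetical, spatially
# varying correlation length (longer towards larger easting).
import numpy as np

def ell(x):
    return 5.0 + 0.1 * x[..., 0]

def ns_cov(xa, xb, variance=100.0):
    la = ell(xa)[:, None]
    lb = ell(xb)[None, :]
    d2 = ((xa[:, None, :] - xb[None, :, :]) ** 2).sum(-1)
    pref = 2.0 * la * lb / (la**2 + lb**2)               # non-stationarity factor
    return variance * pref * np.exp(-2.0 * d2 / (la**2 + lb**2))

def lsc_predict(x_obs, l_obs, x_new, noise_var=1.0):
    Css = ns_cov(x_obs, x_obs) + noise_var * np.eye(len(x_obs))
    Cps = ns_cov(x_new, x_obs)
    return Cps @ np.linalg.solve(Css, l_obs)             # LSC prediction

rng = np.random.default_rng(2)
x_obs = rng.uniform(0, 100, size=(200, 2))               # scattered data points
l_obs = 20 * np.sin(x_obs[:, 0] / 15.0) + rng.normal(scale=1.0, size=200)
x_new = np.array([[50.0, 50.0], [80.0, 20.0]])
print(lsc_predict(x_obs, l_obs, x_new))
```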

    Recomposing consumption: defining necessities for sustainable and equitable well-being

    This paper focuses on consumption in the affluent world and the resulting level, composition and distribution of consumption-based emissions. It argues that public policy should foster the recomposition of consumption, while not disadvantaging poorer groups in the population. Combining these two imperatives entails making a distinction between goods and services that are necessary for a basic level of well-being and those that are surplus to this requirement. The argument proceeds in six stages. First, the paper outlines a theory of universal need, as an alternative conception of well-being to consumer preference satisfaction. Second, it proposes a dual strategy methodology for identifying need satisfiers or necessities in a given social context. Then, it applies this methodology to identify a minimum bundle of necessary consumption items in the UK and speculates how it might be used to identify a maximum bundle for sustainable consumption. The next part looks at corporate barriers and structural obstacles in the path of sustainable consumption. The following part reveals a further problem: mitigation policies can result in perverse distributional outcomes when operating in contexts of great inequality. The final section suggests four ecosocial public policies that would simultaneously advance sustainable and equitable consumption in rich nations.

    Long-range chemical sensitivity in the sulfur K-edge X-ray absorption spectra of substituted thiophenes

    Thiophenes are the simplest aromatic sulfur-containing compounds and are stable and widespread in fossil fuels. Regulation of sulfur levels in fuels and emissions has become, and continues to be, ever more stringent as part of governments' efforts to address the negative environmental impacts of sulfur dioxide. In turn, more effective removal methods are continually being sought. In a chemical sense, thiophenes are somewhat obdurate, and hence their removal from fossil fuels poses problems for the industrial chemist. Sulfur K-edge X-ray absorption spectroscopy provides key information on thiophenic components in fuels. Here we present a systematic study of the spectroscopic sensitivity to chemical modifications of the thiophene system. We conclude that while the utility of sulfur K-edge X-ray absorption spectra in understanding the chemical composition of sulfur-containing fossil fuels has already been demonstrated, care must be exercised in interpreting these spectra, because the assumption of an invariant spectrum for thiophenic forms may not always be valid.

    Robustness analysis of geodetic networks in the case of correlated observations

    GPS (or GNSS) networks are invaluable tools for monitoring natural hazards such as earthquakes. However, blunders in GPS observations may be mistakenly interpreted as deformation. Therefore, robust networks are needed for deformation monitoring with GPS. Robustness analysis is a natural merger of reliability and strain, and is defined as the ability to resist deformations caused by the maximum undetectable errors as determined from internal reliability analysis. However, to obtain rigorously correct results, the correlations among the observations must be considered when computing the maximum undetectable errors. We therefore propose to use normalized reliability numbers instead of redundancy numbers (Baarda's approach) in the robustness analysis of a GPS network. A simple mathematical relation giving the ratio between the uncorrelated and correlated cases for the maximum undetectable error is derived; the same ratio is also valid for the displacements. Numerical results show that if correlations among the observations are ignored, dramatically different displacements can be obtained depending on the size of the multiple correlation coefficients. Furthermore, when the normalized reliability numbers are small, the displacements become large, i.e., observations with low reliability numbers cause larger displacements than observations with high reliability numbers.
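
    A small sketch of this effect, using an invented design and an assumed correlation between two observations rather than the paper's GPS network, is given below: the maximum undetectable error is evaluated from the general expression MDB_i = σ0 sqrt(λ0 / (c_i' P Qv P c_i)) with the full weight matrix and again with the correlations ignored, and the ratio of the two is reported.

```python
# Sketch: maximum undetectable errors (MDBs) computed with the full, correlated
# covariance matrix of the observations and with the correlations ignored.
import numpy as np
from scipy.stats import norm

def mdb(A, Cl, lam0, sigma0=1.0):
    P = sigma0**2 * np.linalg.inv(Cl)                    # weight matrix
    Qv = Cl / sigma0**2 - A @ np.linalg.inv(A.T @ P @ A) @ A.T
    M = P @ Qv @ P                                       # diag gives c_i' P Qv P c_i
    return sigma0 * np.sqrt(lam0 / np.diag(M))

lam0 = (norm.ppf(1 - 0.001 / 2) + norm.ppf(0.80)) ** 2   # 1-D non-centrality
A = np.array([[1, 0], [-1, 1], [0, -1], [1, 1]], float)  # invented design

rho = 0.6                                                # assumed correlation
Cl_corr = 0.004**2 * np.array([[1.0, rho, 0.0, 0.0],
                               [rho, 1.0, 0.0, 0.0],
                               [0.0, 0.0, 1.0, 0.0],
                               [0.0, 0.0, 0.0, 1.0]])
Cl_diag = np.diag(np.diag(Cl_corr))                      # correlations ignored

print("MDB with correlations:", mdb(A, Cl_corr, lam0))
print("MDB ignoring them:    ", mdb(A, Cl_diag, lam0))
print("ratio:                ", mdb(A, Cl_corr, lam0) / mdb(A, Cl_diag, lam0))
```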