17 research outputs found

    The worldwide physical height datum project

    Get PDF
    The definition of a common global vertical coordinate system is nowadays one of the key topics in geodesy. With the advent of GNSS, a coherent global height has become available to users: the ellipsoidal height, which can be obtained with respect to a given geocentric ellipsoid in a fast and precise way using GNSS techniques. The traditional orthometric height, on the other hand, is not coherent at the global scale. Spirit levelling allows the estimation of height increments, so that orthometric heights of surveyed points can be obtained starting from a benchmark of known orthometric height. As is well known, this vertical coordinate refers to the geoid, which is assumed to coincide with the mean sea level. By means of a tide gauge, the mean sea level is estimated and thus a point of known orthometric height is defined. This assumption, acceptable in the past, has become obsolete at the level of precision now required. Based on altimetry observations, one can precisely quantify the existing discrepancy between the geoid and the mean sea level, which can amount to 1 to 2 m at the global scale. Given this discrepancy between the equipotential surface and the mean sea level, different tide gauges therefore provide biased estimates of the geoid. In recent years another vertical coordinate has also been used, the normal height, introduced in the context of Molodensky theory. In this paper, a review of the existing height systems is given and the relationships among them are revised. Furthermore, an approach for unifying normal heights referring to different tide gauges is presented and applied to the Italian test case. Finally, a method for defining a globally coherent physical height system is discussed in the context of the definition of the International Height Reference System/Frame, a project supported by the Global Geodetic Observing System of the International Association of Geodesy (IAG). The project was established in 2015 during the XXVI General Assembly of the IUGG in Prague, as stated in IAG Resolution no. 1 presented and adopted there.
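    As a rough illustration of the height relations reviewed in the paper, the sketch below encodes the standard links between ellipsoidal, orthometric and normal heights (h = H + N and h = H* + zeta) and the effect of a tide-gauge datum bias. All function names and numeric values are hypothetical, not the paper's software.

```python
# Minimal sketch of the basic height relations, with hypothetical values.
# h    : ellipsoidal height from GNSS
# N    : geoid undulation (geoid minus ellipsoid)
# zeta : height anomaly (Molodensky theory)

def orthometric_height(h: float, N: float) -> float:
    """Orthometric height from the relation h = H + N."""
    return h - N

def normal_height(h: float, zeta: float) -> float:
    """Normal height from the relation h = H* + zeta."""
    return h - zeta

def unified_height(H_local: float, datum_bias: float) -> float:
    """A tide-gauge datum that misses the global geoid by a bias
    propagates that bias to every levelled height in the network;
    unification shifts the local heights to a common datum."""
    return H_local + datum_bias

if __name__ == "__main__":
    h, N = 152.347, 48.112            # metres, hypothetical values
    print(orthometric_height(h, N))   # 104.235
```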

    GNSS-based dam monitoring: The application of a statistical approach for time series analysis to a case study

    Get PDF
    Dams are among the most important engineering works of modern society, and it is crucial to monitor them and obtain analytical data to log their condition, predict their behavior and, eventually, receive early warnings for planning interventions and maintenance activities. In this context, GNSS-based point displacement monitoring is nowadays a consolidated technique able to provide daily millimeter-level accuracy, even with less sophisticated and less expensive single-frequency equipment. If properly designed, such monitoring systems produce daily records whose time series, when long enough, allow for an accurate reconstruction of the geometrical deformation of the structure, thus guiding semi-automatic early warning systems. This paper focuses on a procedure for GNSS time series processing with a statistical approach. In particular, real-world time series collected from a dam monitoring test case are processed as an example of data filtering. A remove-restore technique based on a collocation approach is applied: an initial deterministic modeling by polynomials and periodic components, through least squares adjustment and the Fourier transform respectively, is followed by a stochastic modeling based on empirical covariance estimation and collocation. Filtered time series are then interpreted by autoregressive models based on environmental factors such as air or water temperature and reservoir water level. Spatial analysis is finally performed by computing correlations between displacements of the monitored points, as well as by visualizing the overall structure deformation in time. The results positively validate the proposed data processing workflow, providing useful hints for the implementation of automatic early warning systems in the framework of structural monitoring based on continuous displacement measurements.
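    A minimal sketch of the remove-restore workflow described above follows, assuming an evenly sampled series: a polynomial trend and the dominant FFT harmonics are removed, the residual is filtered by collocation using its empirical covariance, and the deterministic part is restored. Function names, parameter defaults and the simple covariance handling are illustrative, not the paper's implementation.

```python
import numpy as np

def remove_restore_filter(t, y, poly_deg=2, n_harmonics=2, noise_var=1.0):
    """Illustrative remove-restore filtering of an evenly sampled series."""
    # Remove: polynomial trend estimated by least squares.
    trend = np.polyval(np.polyfit(t, y, poly_deg), t)
    r = y - trend

    # Remove: dominant periodic components identified via FFT.
    tt = t - t[0]
    freqs = np.fft.rfftfreq(len(t), d=t[1] - t[0])
    spec = np.fft.rfft(r)
    top = np.argsort(np.abs(spec[1:]))[::-1][:n_harmonics] + 1
    periodic = np.zeros_like(r)
    for k in top:
        a, b = spec[k].real, spec[k].imag
        w = 2.0 * np.pi * freqs[k]
        periodic += (2.0 / len(t)) * (a * np.cos(w * tt) - b * np.sin(w * tt))
    r = r - periodic

    # Stochastic model: empirical covariance of the residuals by lag.
    n = len(r)
    emp_cov = np.array([np.mean(r[:n - L] * r[L:]) for L in range(n)])
    lags = np.arange(n)
    C = emp_cov[np.abs(np.subtract.outer(lags, lags))]

    # Collocation (Wiener-type) filtering: s_hat = C (C + D)^-1 r.
    s_hat = C @ np.linalg.solve(C + noise_var * np.eye(n), r)

    # Restore the deterministic part.
    return trend + periodic + s_hat
```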

    Estimating and comparing dam deformation using classical and GNSS techniques

    Get PDF
    Global Navigation Satellite System (GNSS) receivers are nowadays commonly used in monitoring applications, e.g., in estimating crustal and infrastructure displacements. This is mainly due to recent improvements in GNSS instruments and methodologies that allow high-precision positioning, 24 h availability and semi-automatic data processing. In this paper, GNSS-estimated displacements on a dam structure are analyzed and compared with pendulum data. The study was carried out for the Eleonora D'Arborea (Cantoniera) dam in Sardinia. Pendulum and GNSS time series spanning 2.5 years were aligned so as to be comparable, and analytical models fitting these time series were estimated and compared. The models were able to properly fit both pendulum and GNSS data, with standard deviations of the residuals smaller than one millimeter. These encouraging results lead to the conclusion that the GNSS technique can be profitably applied to dam monitoring, allowing a denser description of dam displacements, both in space and time, than one based on pendulum observations alone.
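    The comparison described above can be pictured with a small sketch: the same analytical model, here an assumed linear trend plus an annual term, is fitted by least squares to each aligned series, and the standard deviations of the residuals are compared. The model form and all names are assumptions, not the paper's exact parameterization.

```python
import numpy as np

def fit_trend_plus_annual(t_days, d_mm):
    """Fit displacement d (mm) at epochs t (days) with a linear trend
    plus an annual harmonic; return coefficients and residual std."""
    w = 2.0 * np.pi / 365.25
    A = np.column_stack([np.ones_like(t_days), t_days,
                         np.cos(w * t_days), np.sin(w * t_days)])
    coef, *_ = np.linalg.lstsq(A, d_mm, rcond=None)
    residuals = d_mm - A @ coef
    return coef, residuals.std(ddof=A.shape[1])

# Applying the same model to the aligned pendulum and GNSS series and
# finding residual standard deviations below one millimetre for both
# would support using GNSS alongside the pendulum.
```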

    Covariance function modelling in local geodetic applications using the simplex method

    Get PDF
    Collocation has been widely applied in geodesy for estimating the gravity field of the Earth both locally and globally. In particular, it is the standard geodetic method used to combine all the available data to get an integrated estimate of any functional of the anomalous potential T. The key point of the method is the definition of proper covariance functions of the data. Covariance function models have been proposed by many authors, together with the related software. In this paper, a new method for finding suitable covariance models is devised: the covariance fitting problem is reduced to a Linear Programming optimization problem and solved using the Simplex Method. The procedure has been implemented in FORTRAN95 and tested on simulated and real data sets. These first tests proved that the proposed method is a reliable tool for estimating proper covariance function models to be used in the collocation procedure.
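    A minimal sketch of this formulation follows: the empirical covariance values are approximated by a non-negative combination of positive-definite basis models, with the L1 misfit minimized as a linear program. The paper's implementation is in FORTRAN95 and uses the Simplex Method; the Python version below relies on SciPy's HiGHS LP solver instead, and the exponential basis and correlation lengths are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def fit_covariance_lp(d, c, corr_lengths):
    """Fit empirical covariances c at distances d with a non-negative
    combination of basis covariance models, minimizing the L1 misfit
    via linear programming."""
    # Basis: exponential covariance models with assumed correlation lengths.
    B = np.column_stack([np.exp(-d / L) for L in corr_lengths])
    n, m = B.shape
    # Variables z = [x (m coefficients), e (n misfit bounds)];
    # constraints enforce e_i >= |(B x)_i - c_i|, x >= 0.
    cost = np.concatenate([np.zeros(m), np.ones(n)])
    A_ub = np.block([[ B, -np.eye(n)],
                     [-B, -np.eye(n)]])
    b_ub = np.concatenate([c, -c])
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (m + n), method="highs")
    return res.x[:m]   # coefficients of the basis models

# Synthetic example: recover a single exponential model.
d = np.linspace(0.0, 50.0, 26)            # distances (km), synthetic
c = 3.0 * np.exp(-d / 12.0)               # synthetic empirical values
coef = fit_covariance_lp(d, c, corr_lengths=[5.0, 12.0, 30.0])
```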

    The Gravity Effect of Topography: A Comparison among Three Different Methods

    Get PDF
    In this paper, three different methods for computing the terrain correction are compared. The terrain effect is accounted for using the standard right parallelepiped closed formula, the spherical tesseroid formula and the flat tesseroid formula. In particular, the flat tesseroid approximation is obtained by flattening the top and bottom sides of the spherical tesseroid; its gravitational effect can then be computed as that of a polyhedron, i.e., a three-dimensional body with flat polygonal faces, straight edges and sharp vertices. The three methods are applied in the context of a Bouguer reduction scheme. Two tests were devised in the Alpine area in order to quantify possible discrepancies. In the first test, the terrain correction was evaluated on a grid of points on the DTM. In the second test, Bouguer gravity anomalies were computed at sparse observed gravity data points. The results prove that the three methods are practically equivalent even in an area of rough topography, although in the second test the Bouguer anomalies obtained using the tesseroid and flat tesseroid formulas have slightly smaller RMS values than those obtained with the standard right parallelepiped formula.
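    For the first of the three methods, a closed-form prism (right parallelepiped) evaluation can be sketched as below, following the classical Nagy-type formula. The sign convention, constants and example geometry are assumptions, and the tesseroid formulas are not reproduced here.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def prism_gz(x1, x2, y1, y2, z1, z2, rho):
    """Vertical attraction of a homogeneous right rectangular prism,
    Nagy-type closed formula; edges (x1..z2) are given in metres
    relative to the computation point, z positive downward."""
    gz = 0.0
    for i, x in enumerate((x1, x2)):
        for j, y in enumerate((y1, y2)):
            for k, z in enumerate((z1, z2)):
                r = np.sqrt(x * x + y * y + z * z)
                sign = (-1.0) ** (i + j + k)
                gz += sign * (x * np.log(y + r)
                              + y * np.log(x + r)
                              - z * np.arctan2(x * y, z * r))
    return -G * rho * gz   # assumed convention: positive downward

# Example: 100 m rock cube (2670 kg/m^3), top face 50 m below the point.
print(prism_gz(-50, 50, -50, 50, 50, 150, 2670.0))
```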

    Covariance models for geodetic applications of collocation

    No full text
    The recent gravity mission GOCE aims at measuring the global gravity field of the Earth with unprecedented accuracy. An improved description of gravity means improved knowledge of, e.g., ocean circulation, climate and sea-level change, with implications in areas such as geodesy and surveying. Through GOCE products, the low-to-medium frequency spectrum of the gravity field is properly estimated. This is enough to detect the main gravimetric structures, but local applications remain questionable. GOCE data can be integrated with other kinds of observations having different features, frequency content, spatial coverage and resolution. Gravity anomalies (∆g) and geoid undulations (N) derived from radar-altimetry data, as well as GOCE T_rr, are all linear(ized) functionals of the anomalous gravity potential (T). For local modeling of the gravity field, this connection can be used to integrate different observations in order to obtain a better representation of the high frequencies, which are otherwise difficult to recover. The usual methodology is based on collocation theory, and its crucial point is the correct modeling of the empirical covariance of the observations. Proper covariance models have been proposed by many authors; however, there are problems in fitting the empirical values when different functionals of T are combined. Here the problem of modeling covariance functions is dealt with by an innovative methodology based on Linear Programming and the Simplex Algorithm. The results obtained during the test phase of this new covariance modeling methodology for local applications show improvements with respect to the software packages available until now.
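    The collocation step that combines such heterogeneous observations reduces, at its core, to the estimate s_hat = C_st (C_tt + D)^(-1) y. The fragment below is a minimal sketch of that step only; in practice C_tt and C_st must be propagated from one covariance model of T through the linear functionals, which is precisely the modeling problem addressed above, so here they are simply taken as inputs.

```python
import numpy as np

def collocation_predict(C_st, C_tt, D_noise, y):
    """Least-squares collocation prediction of signal s from
    observations y: s_hat = C_st (C_tt + D)^-1 y.
    C_tt : covariance of the (heterogeneous) observations,
    C_st : cross-covariance between target signal and observations,
    D_noise : noise covariance of the observations."""
    return C_st @ np.linalg.solve(C_tt + D_noise, y)
```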

    Joint Analysis of Cost and Energy Savings for Preliminary Design Alternative Assessment

    No full text
    The building sector plays a central role in the problem of global energy consumption; effective design measures are therefore needed to ensure the efficient use and management of new structures. The challenge for designers is to reduce energy demand while maintaining a high-quality indoor environment and low construction and operating costs. This study proposes a methodological framework that enables decision-makers to resolve the conflict between energy demand and life cycle costs. A case study is analyzed to validate the proposed method, adopting different solutions for walls, roofs, floors, windows, window-to-wall ratios and geographical locations. Models are created for all the possible combinations of these elements, enriched by their thermal properties and construction/management costs. After the alternative models are defined, energy analyses are carried out to estimate consumption. By computing the total cost of each model as the sum of construction, energy and maintenance costs, a joint analysis is carried out for variable life cycles. The results obtained with the proposed method confirm the importance of a preliminary assessment from both the energy and the cost points of view, and demonstrate the impact of considering different building life cycles on the choice of design alternatives.
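    The joint analysis can be illustrated by a toy computation: the total cost of each alternative is its construction cost plus the yearly energy and maintenance costs accumulated over an assumed life cycle, and the preferred alternative may change with the life-cycle length. All alternatives and figures below are hypothetical placeholders.

```python
# Illustrative life-cycle cost comparison of design alternatives.
alternatives = {
    #          construction  annual energy  annual maintenance (EUR)
    "wall_A": (250_000.0,     9_500.0,       1_200.0),
    "wall_B": (285_000.0,     6_800.0,       1_000.0),
}

def total_cost(construction, energy, maintenance, years):
    """Total cost over an assumed life cycle of `years`."""
    return construction + years * (energy + maintenance)

# The ranking can flip with the assumed life cycle, which is why
# the analysis is repeated for variable life cycles:
for years in (10, 30, 50):
    best = min(alternatives,
               key=lambda k: total_cost(*alternatives[k], years))
    print(years, "years ->", best)   # 10 -> wall_A, 30/50 -> wall_B
```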

    Advancements on the use of the non local means algorithm for seismic data processing

    No full text
    Among the techniques and algorithms usually applied to seismic data for random noise attenuation, Non Local Means (NLM) filtering is a promising option. It is based on a weighted mean in which the weights depend on a measure of similarity between the patches surrounding each sample. This methodology preserves features and structures while incoherent signal is filtered out. The method can be applied in n dimensions, but its high computational cost hampers a 3D implementation. In previous work we presented a revised version of the NLM algorithm, improved from both the computational and the signal-to-noise enhancement points of view. In the present paper we focus on the application of this revised NLM to real data time-slices and investigate in more detail the 3D implementation of the method.
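    A plain, unoptimized sketch of NLM on a 2D section (e.g., a time-slice) is given below to fix ideas; patch size, search window and the smoothing parameter h are illustrative, and the computational improvements of the revised algorithm are not shown.

```python
import numpy as np

def nlm_2d(img, patch=3, search=7, h=0.1):
    """Basic Non Local Means: each sample becomes a weighted mean of
    samples in a search window, weighted by patch similarity."""
    half_p, half_s = patch // 2, search // 2
    pad = half_p + half_s
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            ci, cj = i + pad, j + pad
            ref = padded[ci - half_p:ci + half_p + 1,
                         cj - half_p:cj + half_p + 1]
            weights, values = [], []
            for di in range(-half_s, half_s + 1):
                for dj in range(-half_s, half_s + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - half_p:ni + half_p + 1,
                                  nj - half_p:nj + half_p + 1]
                    d2 = np.mean((ref - cand) ** 2)   # patch similarity
                    weights.append(np.exp(-d2 / (h * h)))
                    values.append(padded[ni, nj])
            w = np.array(weights)
            out[i, j] = np.dot(w, values) / w.sum()
    return out
```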