GTE. A new software for gravitational terrain effect computation: theory and performances
The computation of the vertical attraction due to topographic masses, the so-called terrain correction, is a fundamental step in geodetic and geophysical applications: it is required in high-precision geoid estimation by means of the remove–restore technique, and it is used to isolate the gravitational effect of anomalous masses in geophysical exploration. The increasing resolution of recently developed digital terrain models, the growing number of observation points due to the extensive use of airborne gravimetry in geophysical exploration, and the increasing accuracy of gravity data are nowadays major issues for terrain correction computation. Classical methods such as prism or point-mass approximations are too slow, while Fourier-based techniques are usually too approximate for the required accuracy. In this work a new software package, called Gravity Terrain Effects (GTE), developed to guarantee high accuracy and fast computation of terrain corrections, is presented. GTE has been designed expressly for geophysical applications, allowing the computation not only of the effect of topographic and bathymetric masses but also of those due to sedimentary layers or to the Earth's crust–mantle discontinuity (the so-called Moho). In the present contribution, after recalling the main classical algorithms for the computation of the terrain correction, we summarize the basic theory of the software and its practical implementation. Tests proving its performance are also described, showing GTE's capability to compute highly accurate terrain corrections in a very short time: for a real airborne survey, GTE runtimes range from a few minutes to a few hours, according to the GTE profile used, with differences with respect to both planar and spherical computations (performed by prisms and tesseroids respectively) of the order of 0.02 mGal even when using the fastest profiles.
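The classical planar prism method mentioned in this abstract has a well-known closed-form solution (Nagy, 1966). The following minimal Python sketch evaluates the vertical attraction of a single homogeneous rectangular prism; it illustrates the classical approach GTE is compared against, not GTE's own implementation, and the density and geometry in the usage line are purely illustrative.

```python
import numpy as np

G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]

def prism_gz(x1, x2, y1, y2, z1, z2, rho):
    """Vertical attraction [m/s^2] at the origin of a homogeneous
    rectangular prism with corners (x1..x2, y1..y2, z1..z2), in a local
    Cartesian frame with z positive downward (Nagy closed form)."""
    gz = 0.0
    for i, x in enumerate((x1, x2)):
        for j, y in enumerate((y1, y2)):
            for k, z in enumerate((z1, z2)):
                r = np.sqrt(x * x + y * y + z * z)
                f = (x * np.log(y + r) + y * np.log(x + r)
                     - z * np.arctan2(x * y, z * r))
                # corner sign of the triple difference over the limits
                gz += (-1) ** (i + j + k + 1) * f
    return G * rho * gz

# Illustrative only: 100 m cube of density 2670 kg/m^3 whose top face
# lies 50 m below the observation point; result converted to mGal.
print(prism_gz(-50, 50, -50, 50, 50, 150, 2670.0) * 1e5)
```

Summing this kernel over every cell of a digital terrain model is what makes the prism approach accurate but slow, which is the bottleneck the abstract says GTE addresses.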
European sessions and datum definitions
In this work we present some preliminary studies to assess a method for investigating the effect of the selection of different datums on the adjustment of a geodetic network on a continental scale. In particular, we have considered European VLBI sessions. All the experiments from 1990 until the end of 2010 have been processed. The adjustments, session by session, have been performed twice, under the same analysis options but fixing two slightly different datums. An analysis of the estimated 3D coordinates, making use of the maximum eigenvalue calculated for each station and each experiment, was carried out. The stability of the results and the influence of different datum choices on the goodness of the estimated coordinates have been investigated.
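The abstract does not spell out how the maximum eigenvalue is obtained; a natural reading is the largest eigenvalue of each station's 3x3 coordinate covariance matrix per session. A minimal Python sketch under that assumption, with purely illustrative numbers:

```python
import numpy as np

def max_eigenvalue(cov3x3):
    """Largest eigenvalue of a station's 3x3 coordinate covariance;
    the matrix is symmetric, so eigvalsh applies (ascending order)."""
    return np.linalg.eigvalsh(cov3x3)[-1]

# Hypothetical per-session covariance of one station's XYZ coordinates
# under one datum choice; values in mm^2, illustrative only.
cov_a = np.array([[4.0, 0.5, 0.1],
                  [0.5, 3.0, 0.2],
                  [0.1, 0.2, 5.0]])
print(max_eigenvalue(cov_a))
```

Comparing this scalar per station and per session across the two datum choices gives a compact measure of how the datum selection affects the estimated coordinates.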
GIS Techniques For Digital Surface Models Outlier Detection
In the present work, strategies for outlier detection are defined both for datasets stored on 2D lattices and for 1D profiles.
Assuming, following Hawkins (1980), that "an outlier is an observation which deviates so much from other observations as to arouse suspicions that it was generated by a different mechanism", we studied the problem of outlier detection in digital surface models as a first preprocessing and validation step. The methods proposed and the tools implemented have been applied to digital terrain models (DTMs) and gridded geophysical data (gridded borehole depths, seismic velocities, amplitudes and phases, magnetic data, gravity data), but their use can be extended to data from different fields, as long as they represent surface models described by grid-stored information.
We decided to implement the blunder detection software by adding dedicated tools to GRASS (Geographical Resources Analysis Support System).
The validation techniques share a common localization procedure: we examine the entire dataset by considering only a small subset at a time. Our basic hypothesis is that the values in the moving window (the mask) are observations affected by normally distributed white noise. An interpolating surface (the a-priori model) is computed from the points surrounding the centre of the moving mask (the suspected blunder). The choice of the model determines the residual between the observation and the surface at the mask's central point P0, and therefore the capability to detect the possible outlier. The surface model can be obtained in the following ways: polynomial interpolation, robust estimation using the median, or collocation (kriging). For each of them we define an associated test to decide whether the point P0 is a blunder or not. The paper focuses on the first approach.
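A minimal Python sketch of the polynomial variant of this test, assuming a square moving mask, a bilinear a-priori surface fitted to the neighbours of the central point P0, and a rejection threshold k expressed in units of the noise standard deviation (the window size, threshold and noise level below are illustrative choices, not the paper's settings):

```python
import numpy as np

def detect_blunder(window, sigma, k=3.0):
    """Test the centre cell of an odd-sized square window against a
    bilinear least-squares surface fitted to its neighbours, assuming
    i.i.d. Gaussian noise of std `sigma`; `k` is the rejection
    threshold in sigma units."""
    n = window.shape[0]
    c = n // 2
    ii, jj = np.mgrid[0:n, 0:n]
    mask = np.ones_like(window, dtype=bool)
    mask[c, c] = False                      # exclude the suspected blunder
    # design matrix for a bilinear surface z = a + b*i + c*j + d*i*j
    A = np.column_stack([np.ones(mask.sum()),
                         ii[mask], jj[mask], (ii * jj)[mask]])
    coef, *_ = np.linalg.lstsq(A, window[mask], rcond=None)
    z0 = coef @ np.array([1.0, c, c, c * c])  # prediction at P0
    residual = window[c, c] - z0
    return abs(residual) > k * sigma, residual

win = np.full((5, 5), 100.0)
win[2, 2] = 112.0                           # inject a 12 m spike
print(detect_blunder(win, sigma=1.0))       # -> (True, ~12.0)
```

The same localization scheme applies unchanged when the a-priori surface is replaced by the median or by collocation; only the model fit and the associated test statistic change.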
Multimodal navigation within multilayer-modeled Gregorian Chant information
Information entities can often be described from many different points of view. In the digital domain, a multimedia environment is suited to describing, and a multimodal framework to enjoying, those information entities that are rich in heterogeneous facets. The proposed approach can be applied to all fields where symbolic, textual, graphical, audio and video contents are involved. This paper aims at deepening the concept of multimodal navigation for multilayer-modeled information. Addressing the specific case study of music, our work first presents a suitable format for encoding heterogeneous information within a single document. It then proposes a computer-based approach for analyzing recordings whose musical scores are known, identifying their semantic relationships automatically. Finally, the method is tested on a set of Gregorian chants in order to produce a multimodal Web application.
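The abstract does not describe the matching algorithm; a common baseline for automatically relating a recording to other renditions or to score-derived feature sequences is dynamic time warping. The following self-contained Python sketch implements plain DTW over generic feature sequences, purely as an illustration of this class of alignment techniques, not as the paper's actual method.

```python
import numpy as np

def dtw_path(X, Y):
    """Dynamic-time-warping alignment between feature sequences
    X (n x d) and Y (m x d); returns the accumulated cost and the
    optimal warping path as (i, j) index pairs."""
    n, m = len(X), len(Y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(X[i - 1] - Y[j - 1])
            cost[i, j] = d + min(cost[i - 1, j - 1],
                                 cost[i - 1, j], cost[i, j - 1])
    # backtrack from the end to recover the path
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return cost[n, m], path[::-1]
```

An alignment of this kind is what lets a multimodal application jump between layers, e.g. highlighting the score position corresponding to the instant being played back.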
Clustering Spatio-Temporal Data Streams
A spatio-temporal data stream is a sequence of time-stamped, geo-referenced data elements which arrive at consecutive time points. In addition to the spatial and temporal dimensions, which are information-bearing, the stream setting poses further challenges to data mining: avoiding multiple scans of the entire data set, optimizing memory usage, and mining only the most recent patterns. In this paper we address the challenge of mining spatio-temporal data streams for a new class of space-time patterns, called trend-clusters, which combine spatial clustering and trend discovery in stream environments. In particular, we propose a novel algorithm, called TRUST, which retrieves groups of spatially continuous geo-referenced data whose values follow a similar trend polyline over the recent window. Experiments demonstrate the effectiveness of the proposed algorithm.
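TRUST itself is not specified in the abstract; as a rough illustration of what a trend-cluster is, the following Python sketch groups geo-referenced streams that are spatial neighbours and whose recent-window polylines stay within a dissimilarity threshold. The function name, distance measure and thresholds are all illustrative assumptions, not the TRUST algorithm.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import connected_components
from scipy.spatial import cKDTree

def trend_clusters(coords, series, radius, delta):
    """Cluster labels for n geo-referenced streams: `coords` is (n, 2)
    positions, `series` is (n, w) values over the recent window of
    width w. Two streams are linked when they lie within `radius` of
    each other and their window polylines never diverge by more than
    `delta`; clusters are the connected components of that graph."""
    n = len(coords)
    tree = cKDTree(coords)
    adj = lil_matrix((n, n))
    for i, j in tree.query_pairs(radius):
        if np.max(np.abs(series[i] - series[j])) <= delta:
            adj[i, j] = 1
    _, labels = connected_components(adj, directed=False)
    return labels
```

A streaming implementation would maintain this structure incrementally as the window slides, rather than rebuilding it, which is precisely the kind of single-scan, memory-bounded constraint the abstract highlights.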
Interpolation of the HD-1 DTM
The DTM named HD-1, as defined in the previous chapter, must cover the whole project area and have a resolution comparable to that of the input data. It is produced by interpolating each of the three input DTMs (Lombardy, Piedmont and Switzerland) individually onto the nodes of a grid created specifically to satisfy the stated requirements. The interpolation uses, wherever possible, a local least-squares bicubic polynomial model; for the nodes where this approach does not provide a reliable estimate, the polynomial surface is reduced to a bilinear one. In areas covered by more than one input DTM, the individual interpolation results are averaged. This chapter describes in detail the procedure adopted to produce the unified DTM and to correct the heights of the nodes located in lake areas. The results obtained are shown and discussed.
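A minimal Python sketch of the node-wise estimator described above: a least-squares bicubic polynomial fitted to the DTM points around a grid node, reduced to a bilinear surface when the fit is not reliable. Here reliability is crudely proxied by the point count and the rank of the design matrix; the actual HD-1 reliability criterion and neighbourhood size are not reproduced.

```python
import numpy as np

def local_poly_height(pts, node, min_pts_bicubic=20):
    """Height estimate at a grid node from scattered DTM points in its
    neighbourhood. `pts` is an (n, 3) array of (x, y, z); coordinates
    are centred on the node so the constant term of the fitted
    polynomial is the surface value at the node itself."""
    dx = pts[:, 0] - node[0]
    dy = pts[:, 1] - node[1]
    z = pts[:, 2]

    def design(deg):
        # full tensor-product polynomial basis x^i * y^j, i, j <= deg
        return np.column_stack([dx ** i * dy ** j
                                for i in range(deg + 1)
                                for j in range(deg + 1)])

    # bicubic (16 coefficients) only when enough points support it
    deg = 3 if len(pts) >= min_pts_bicubic else 1
    A = design(deg)
    coef, _, rank, _ = np.linalg.lstsq(A, z, rcond=None)
    if deg == 3 and rank < A.shape[1]:      # degenerate bicubic fit:
        coef, *_ = np.linalg.lstsq(design(1), z, rcond=None)  # bilinear
    return coef[0]                          # value at dx = dy = 0
```

Running this estimator per node and per input DTM, then averaging the estimates where the Lombardy, Piedmont and Swiss models overlap, reproduces the overall structure of the procedure described in this chapter.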