
    Computing NodeTrix Representations of Clustered Graphs

    NodeTrix representations are a popular way to visualize clustered graphs: they represent clusters as adjacency matrices and inter-cluster edges as curves connecting the matrix boundaries. We study the complexity of constructing NodeTrix representations, focusing on planarity testing problems, and show several NP-completeness results as well as some polynomial-time algorithms. Building on these algorithms, we develop a JavaScript library for NodeTrix representations aimed at reducing the crossings between edges incident to the same matrix. Comment: Appears in the Proceedings of the 24th International Symposium on Graph Drawing and Network Visualization (GD 2016).

    Performance of various homogenization tools on a synthetic benchmark dataset of GPS and ERA-interim IWV differences

    Presentation given at the IAG-IASPEI 39th Joint Scientific Assembly, held in Kobe, Japan, from 30 July to 4 August 2017.

    Study on homogenization of synthetic GNSS-Retrieved IWV time series and its impact on trend estimates with autoregressive noise

    Poster presented at the EGU General Assembly, held from 23 to 28 April 2017 in Vienna, Austria. A synthetic benchmark dataset of Integrated Water Vapour (IWV) was created within the "Data homogenisation" activity of sub-working group WG3 of the COST ES1206 Action. The benchmark dataset was created based on the analysis of IWV differences between retrievals from Global Positioning System (GPS) International GNSS Service (IGS) stations and European Centre for Medium-Range Weather Forecasts (ECMWF) reanalysis data (ERA-Interim). Having analysed a set of 120 series of IWV differences (ERAI-GPS) derived for IGS stations, we characterized the gaps and breaks for each individual station. Moreover, we estimated the values of trends, the significant seasonal signals, and the character of the residuals once the deterministic model was removed. We tested five different noise models and found that a combination of white noise and a first-order autoregressive process describes the stochastic part with good accuracy. Based on this analysis, we performed Monte Carlo simulations of 25-year-long series with two different types of noise: white noise alone, and a combination of white and first-order autoregressive noise. We also added a few strictly defined offsets, creating three variants of the synthetic dataset: easy, less complicated, and fully complicated. The synthetic dataset we present was used as a benchmark to test various statistical tools on the homogenisation task. In this research, we assess the impact of the noise model, trend, and gaps on the ability of statistical methods to detect the simulated change points.
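The simulation recipe described above (a first-order autoregressive process plus white noise, with strictly defined offsets inserted as change points) can be sketched as follows. This is a minimal illustration; the function name, series length, and all parameter values are assumptions, not the study's actual settings.

```python
import numpy as np

def synthetic_iwv(n=300, phi=0.6, sigma_ar=0.5, sigma_wh=0.3,
                  offsets=None, seed=0):
    """Simulate an IWV-difference series: a first-order autoregressive
    (AR(1)) process plus white noise, with optional step offsets.
    All parameter values here are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    shocks = rng.normal(0.0, sigma_ar, n)
    ar = np.zeros(n)
    for t in range(1, n):
        ar[t] = phi * ar[t - 1] + shocks[t]     # AR(1) recursion
    series = ar + rng.normal(0.0, sigma_wh, n)  # add white noise
    for t0, size in (offsets or []):            # inject change points
        series[t0:] += size
    return series

# An "easy" variant: two clearly defined offsets at known epochs.
series = synthetic_iwv(offsets=[(100, 1.0), (200, -0.8)])
print(len(series))  # 300
```

A homogenisation algorithm would then be run on such series to check whether it recovers the inserted change-point epochs under each noise model.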

    Homogenization of tropospheric data: evaluating the algorithms under the presence of autoregressive process

    Presentation given at the IX Hotine-Marussi Symposium, held in Rome from 18 to 22 June 2018. This research was supported by the Polish National Science Centre, grant No. UMO-2016/21/B/ST10/02353.

    Development of a long-term dataset of solid/liquid precipitation

    Solid precipitation (mainly snow, but also snow and ice pellets or hail) is an important parameter for climate studies. However, since this parameter was usually not available operationally before the second half of the 20th century and is nowadays not reported by automatic stations, information usable for long-term climate studies is rare. Therefore, a proxy for the fraction of solid precipitation, based on a nonlinear relationship between the percentage of solid precipitation and monthly mean temperature, was developed for the Greater Alpine Region of Europe and applied to the existing long-term high-resolution temperature and precipitation grids (5 arcmin). In this paper the method is introduced, and some examples of the resulting datasets, available at monthly resolution for 1800–2003, are given.
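The kind of proxy described above, a nonlinear mapping from monthly mean temperature to the solid fraction of precipitation, can be illustrated with a simple sigmoid. This is a hypothetical sketch: the functional form, the 50% crossover temperature `t0`, and the steepness `k` are assumptions for illustration, not the fitted relationship from the paper.

```python
import math

def solid_fraction(t_mean_c, t0=1.0, k=1.4):
    """Illustrative nonlinear proxy: share of solid precipitation as a
    decreasing logistic function of monthly mean temperature (deg C).
    t0 (assumed 50% point) and k (assumed steepness) are placeholders,
    not the parameters actually fitted for the Greater Alpine Region."""
    return 1.0 / (1.0 + math.exp(k * (t_mean_c - t0)))

# Cold months are almost entirely solid, warm months almost entirely liquid:
print(solid_fraction(-10.0) > 0.99)  # True
print(solid_fraction(15.0) < 0.01)   # True
```

Applied grid-cell by grid-cell to a monthly temperature field, such a function splits a total-precipitation grid into solid and liquid parts, which is the general idea behind the long-term dataset.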