
    Searchable Sky Coverage of Astronomical Observations: Footprints and Exposures

    Sky coverage is one of the most important pieces of information about astronomical observations. We discuss possible representations and present algorithms to create and manipulate shapes consisting of generalized spherical polygons of arbitrary complexity and size on the celestial sphere. This shape specification integrates well with our Hierarchical Triangular Mesh indexing toolbox, whose performance and capabilities are enhanced by the advanced features presented here. Our portable implementation of the relevant spherical geometry routines comes with wrapper functions for database queries, which are currently in use within several scientific catalog archives, including the Sloan Digital Sky Survey, the Galaxy Evolution Explorer and the Hubble Legacy Archive projects, as well as the Footprint Service of the Virtual Observatory.
    Comment: 11 pages, 7 figures, submitted to PAS
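
    To illustrate the indexing side, here is a minimal sketch of Hierarchical Triangular Mesh subdivision: the sphere starts as the eight faces of an octahedron, and each trixel splits into four children at the great-circle midpoints of its edges. The face labels and child ordering follow the published HTM convention, but the function names are illustrative and not the toolbox's API.

        import numpy as np

        # Octahedron vertices used as the level-0 mesh.
        V = [np.array(x, float) for x in
             [(0, 0, 1), (1, 0, 0), (0, 1, 0), (-1, 0, 0), (0, -1, 0), (0, 0, -1)]]

        # The eight level-0 trixels (counter-clockwise seen from outside).
        FACES = {'S0': (1, 5, 2), 'S1': (2, 5, 3), 'S2': (3, 5, 4), 'S3': (4, 5, 1),
                 'N0': (1, 0, 4), 'N1': (4, 0, 3), 'N2': (3, 0, 2), 'N3': (2, 0, 1)}

        def mid(a, b):
            # Normalized midpoint: project the chord midpoint back onto the sphere.
            m = a + b
            return m / np.linalg.norm(m)

        def inside(p, a, b, c, eps=-1e-12):
            # p lies in the spherical triangle iff it is on the inner side
            # of all three great-circle edges.
            return (np.dot(np.cross(a, b), p) >= eps and
                    np.dot(np.cross(b, c), p) >= eps and
                    np.dot(np.cross(c, a), p) >= eps)

        def htm_name(p, depth):
            # Trixel name of direction vector p, e.g. 'N3012', by recursion.
            p = np.asarray(p, float)
            p = p / np.linalg.norm(p)
            for name, (i, j, k) in FACES.items():
                a, b, c = V[i], V[j], V[k]
                if inside(p, a, b, c):
                    break
            for _ in range(depth):
                w0, w1, w2 = mid(b, c), mid(c, a), mid(a, b)
                for digit, (a2, b2, c2) in enumerate(
                        [(a, w2, w1), (b, w0, w2), (c, w1, w0), (w0, w1, w2)]):
                    if inside(p, a2, b2, c2):
                        name += str(digit)
                        a, b, c = a2, b2, c2
                        break
            return name

        print(htm_name((1, 1, 1), 5))  # e.g. 'N3...'

    A spherical polygon can then be indexed by the set of trixels that intersect it, which is what makes coarse database pre-filtering fast.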

    An analysis of the use of graphics for information retrieval

    Several research groups have addressed the problem of retrieving vector graphics. This work, however, has either focused on domain-dependent areas or been based on very simple graphics languages. Here we take a fresh look at graphics retrieval in general, and in particular at the tasks that retrieval systems must support. The paper presents a series of case studies exploring the needs of professionals, in the hope that these needs can help direct future graphics IR research. Suggested modelling techniques for some of the graphics collections are also presented.

    Annotating Object Instances with a Polygon-RNN

    We propose an approach for semi-automatic annotation of object instances. While most current methods treat object segmentation as a pixel-labeling problem, we cast it as a polygon prediction task, mimicking how most current datasets have been annotated. In particular, our approach takes an image crop as input and sequentially produces the vertices of the polygon outlining the object. This allows a human annotator to intervene at any time and correct a vertex if needed, producing a segmentation as accurate as the annotator desires. We show that our approach speeds up the annotation process by a factor of 4.7 across all classes in Cityscapes, while achieving 78.4% IoU agreement with the original ground truth, matching the typical agreement between human annotators. For cars, our speed-up factor is 7.3 at an agreement of 82.2%. We further show that our approach generalizes to unseen datasets.
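
    A minimal PyTorch sketch of the sequential vertex-prediction idea, assuming a coarse prediction grid and toy layer sizes (none of these choices are the paper's architecture): a small CNN summarizes the crop and an LSTM emits one vertex per step as a cell index on the grid, with one extra index reserved for an end-of-polygon token.

        import torch
        import torch.nn as nn

        GRID = 28                      # vertices live on a GRID x GRID lattice
        N_OUT = GRID * GRID + 1        # one class per cell plus an end token

        class PolygonDecoder(nn.Module):
            def __init__(self, hidden=256):
                super().__init__()
                self.hidden = hidden
                self.encoder = nn.Sequential(          # toy CNN crop encoder
                    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten())
                self.embed = nn.Embedding(N_OUT, 64)   # embeds previous vertex
                self.rnn = nn.LSTMCell(64 + 64, hidden)
                self.head = nn.Linear(hidden, N_OUT)   # logits: cells + end

            def forward(self, crop, max_steps=40):
                feat = self.encoder(crop)              # (B, 64) image summary
                B = crop.size(0)
                h = torch.zeros(B, self.hidden)
                c = torch.zeros(B, self.hidden)
                # The end token doubles as a "start" symbol at step 0.
                prev = torch.full((B,), N_OUT - 1, dtype=torch.long)
                vertices = []
                for _ in range(max_steps):
                    x = torch.cat([feat, self.embed(prev)], dim=1)
                    h, c = self.rnn(x, (h, c))
                    prev = self.head(h).argmax(dim=1)  # greedy decoding
                    vertices.append(prev)
                return torch.stack(vertices, dim=1)    # (B, max_steps) cells

        model = PolygonDecoder()
        cells = model(torch.randn(2, 3, 224, 224))     # two random crops

    In the interactive setting, the annotator's correction would simply replace the argmax choice fed back into the next step, which is what makes per-vertex human intervention cheap.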

    Poly-GAN: Regularizing Polygons with Generative Adversarial Networks

    Regularizing polygons involves simplifying the irregular and noisy shapes of built-environment objects (e.g. buildings) so that they are accurately represented using a minimum number of vertices. It is a vital processing step when creating or transmitting online digital maps, so that they occupy minimal storage space and bandwidth. This paper presents a data-driven, Deep Learning (DL) based approach for regularizing the edges of OpenStreetMap building polygons. The study introduces a building footprint regularization technique (Poly-GAN) that uses a Generative Adversarial Network model trained on irregular building footprints and OSM vector data. The proposed method is particularly relevant for map features predicted by Machine Learning (ML) algorithms in the GIScience domain, where information overload remains a significant problem in many cartographic and LBS applications. It addresses the limitations of traditional cartographic regularization/generalization algorithms, which can struggle to produce representations of multisided built-environment objects that are both accurate and minimal. Future work will test the method on even more complex object shapes to address this limitation.
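
    Poly-GAN itself does not condense to a few lines, but the traditional regularization it is compared against does. Here is a sketch of a classical Douglas-Peucker-style simplification using Shapely; the coordinates and tolerance are made up for illustration.

        from shapely.geometry import Polygon

        # A noisy, many-vertex building footprint (coordinates are invented).
        noisy = Polygon([(0, 0), (5.1, 0.05), (5.05, 0.4), (5.0, 5.02),
                         (2.5, 5.08), (0.02, 4.97), (-0.03, 2.5)])

        # Douglas-Peucker simplification: drop vertices that deviate from the
        # outline by less than `tolerance` map units.
        regular = noisy.simplify(tolerance=0.2, preserve_topology=True)

        print(len(noisy.exterior.coords), '->', len(regular.exterior.coords))

    A vertex-dropping baseline like this minimizes vertex count but knows nothing about building-specific regularities such as right angles and parallel walls, which is the gap learned approaches such as Poly-GAN aim to close.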

    Ground Truth for Layout Analysis Performance Evaluation

    Over the past two decades, a significant number of layout analysis (page segmentation and region classification) approaches have been proposed in the literature. Each approach has been devised for and/or evaluated using (usually small) application-specific datasets. While the need for objective performance evaluation of layout analysis algorithms is evident, there is no suitable dataset with ground truth that reflects the realities of everyday documents (widely varying layouts, complex entities, colour, noise, etc.). The most significant impediment is the creation of accurate and flexible (in representation) ground truth, a task that is costly and must be carefully designed. This paper discusses the issues related to the design, representation and creation of ground truth in the context of a realistic dataset developed by the authors. The effectiveness of this ground truth has been demonstrated by its use in two international page segmentation competitions (ICDAR2003 and ICDAR2005).
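
    As a concrete example of "flexible in representation", a hypothetical minimal schema for polygon-based region ground truth might look as follows; the field names are illustrative and are not the authors' format.

        import json
        from dataclasses import dataclass, asdict

        @dataclass
        class Region:
            id: str
            type: str              # e.g. 'text', 'image', 'table', 'noise'
            outline: list          # polygon vertices, not just a bounding box,
                                   # so complex region shapes can be captured
            reading_order: int = -1  # -1 when reading order does not apply

        page = {
            'image': 'page_0001.png',
            'regions': [
                asdict(Region('r1', 'text',
                              [(40, 50), (560, 50), (560, 300), (40, 300)], 0)),
                asdict(Region('r2', 'image',
                              [(40, 320), (300, 320), (300, 560), (40, 560)])),
            ],
        }
        print(json.dumps(page, indent=2))

    Polygonal outlines rather than rectangles are what allow such ground truth to represent the widely varying layouts the paper targets.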

    3D oceanographic data compression using 3D-ODETLAP

    This paper describes a 3D environmental data compression technique for oceanographic datasets. With proper point selection, our method approximates uncompressed marine data using an over-determined system of linear equations based on, but essentially different from, the Laplacian partial differential equation. The approximation is then refined via an error metric, and the two steps alternate until an approximation meeting a predefined error criterion is found. Using several different datasets and metrics, we demonstrate that our method achieves an excellent compression ratio. To further evaluate it, we compare our method with 3D-SPIHT: 3D-ODETLAP averages 20% better compression than 3D-SPIHT on our eight test datasets from the World Ocean Atlas 2005, and up to approximately six times better compression on datasets with relatively small variance. At the same approximate mean error, our method also yields a significantly smaller maximum error than 3D-SPIHT and can keep the maximum error under a user-defined limit.
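
    A 2D sketch of the over-determined Laplacian idea (the paper works on 3D volumes): one smoothness equation per grid cell plus one weighted value equation per selected point, solved in the least-squares sense. The weighting scheme and parameter values here are simplified stand-ins, not the paper's.

        import numpy as np
        from scipy.sparse import lil_matrix
        from scipy.sparse.linalg import lsqr

        def odetlap_2d(shape, points, values, R=0.01):
            # Row type 1: k*u[i,j] - sum(neighbours) = 0 for every cell,
            #             where k is the number of available neighbours.
            # Row type 2: u[i,j] = value at each known point, weighted 1/R.
            # More equations than unknowns, so lsqr finds the best fit.
            ny, nx = shape
            n = ny * nx
            idx = lambda i, j: i * nx + j
            A = lil_matrix((n + len(points), n))
            b = np.zeros(n + len(points))
            for i in range(ny):
                for j in range(nx):
                    r = idx(i, j)
                    nbrs = [(i2, j2)
                            for i2, j2 in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                            if 0 <= i2 < ny and 0 <= j2 < nx]
                    A[r, idx(i, j)] = len(nbrs)
                    for i2, j2 in nbrs:
                        A[r, idx(i2, j2)] = -1.0
            for k, ((i, j), v) in enumerate(zip(points, values)):
                A[n + k, idx(i, j)] = 1.0 / R  # small R => honour data closely
                b[n + k] = v / R
            u = lsqr(A.tocsr(), b)[0]
            return u.reshape(shape)

        grid = odetlap_2d((20, 20), [(0, 0), (19, 19), (10, 5)], [1.0, 3.0, 2.0])

    The refinement loop described in the abstract would then repeatedly add the worst-error grid point to the known set and re-solve until the error metric is satisfied.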

    Context-Based classification of objects in topographic data

    Large-scale topographic databases model real-world features as vector data objects, which can be point, line or area features. Each map object is assigned to a descriptive class; for example, an area feature might be classed as a building, a garden or a road. Topographic data is subject to continual updates from cartographic surveys and ongoing quality improvement, and one of the most important aspects of this is the assignment and verification of a class description for each area feature. These attributes can be added manually but, given the vast volume of data involved, automated techniques for classifying these polygons are desirable. Analogy is a key thought process that underpins learning and has been the subject of much research in artificial intelligence (AI). An analogy identifies structural similarity between a well-known source domain and a less familiar target domain; in many cases, information present in the source can then be mapped to the target, yielding a better understanding of the latter. The solution of geometric analogy problems has been a fruitful area of AI research, and we observe a correlation between objects in geometric analogy problem domains and map features in topographic data. We describe two topographic area-feature classification tools that use descriptions of neighbouring features to identify analogies between polygons: content vector matching (CVM) and context structure matching (CSM). Both classify an area feature by matching its neighbourhood context against those of analogous polygons whose class is known. Both classifiers were implemented and tested on high-quality topographic polygon data supplied by Ordnance Survey (Great Britain). Area features were found to exhibit a high degree of variation in their neighbourhoods. CVM correctly classified 85.38% of the 79.03% of features it attempted to classify; the accuracy of CSM was 85.96% on the 62.96% of features it attempted. Thus, CVM classifies 25.53% more features than CSM but is slightly less accurate. Both techniques excelled at identifying the feature classes that predominate in suburban data. Our structure-based classification approach may also benefit other types of spatial data, such as topographic line data, small-scale topographic data, raster data, architectural plans and circuit diagrams.
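
    A toy sketch of the content-vector-matching idea, with made-up classes and an illustrative similarity threshold: each polygon's neighbourhood is summarized as a vector of class counts, and an unknown polygon takes the class of the most similar known neighbourhood, abstaining when no match is close enough (which is why CVM does not attempt every feature).

        import numpy as np

        CLASSES = ['building', 'garden', 'road']

        def context_vector(neighbour_classes):
            # Counts of each class among a polygon's neighbours.
            v = np.zeros(len(CLASSES))
            for c in neighbour_classes:
                v[CLASSES.index(c)] += 1
            return v

        def classify(unknown_ctx, known, threshold=0.9):
            # Nearest known neighbourhood by cosine similarity;
            # abstain (return None) if nothing exceeds the threshold.
            best_label, best_sim = None, threshold
            for label, ctx in known:
                sim = (ctx @ unknown_ctx /
                       (np.linalg.norm(ctx) * np.linalg.norm(unknown_ctx)))
                if sim > best_sim:
                    best_label, best_sim = label, sim
            return best_label

        known = [('building', context_vector(['garden', 'road', 'building'])),
                 ('garden',   context_vector(['building', 'building']))]
        print(classify(context_vector(['garden', 'building', 'road']), known))

    CSM differs in that it matches the structure of the neighbourhood rather than a flat count vector, which is stricter and explains its lower attempt rate in the reported results.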