
    Ontology of core concept data types for answering geo-analytical questions

    In geographic information systems (GIS), analysts answer questions by designing workflows that transform data of certain types into results that serve a certain type of goal. Semantic data types help constrain the application of computational methods to those that are meaningful for such a goal. This prevents pointless computations and helps analysts design effective workflows. Yet it remains unclear which types would be needed to ease geo-analytical tasks: the data types and formats used in GIS still allow huge numbers of syntactically possible but nonsensical method applications. Core concepts of spatial information and related geo-semantic distinctions have been proposed as abstractions that help analysts formulate analytic questions and compute appropriate answers over geodata of different formats. In essence, core concepts reflect particular interpretations of data which imply that certain transformations are possible. However, core concepts usually remain implicit when operating on geodata, since a concept can be represented in a variety of forms. A central question therefore is: which semantic types would be needed to capture this variety and its implications for geospatial analysis? In this article, we propose an ontology design pattern of core concept data types that helps answer geo-analytical questions. Based on a scenario for computing a liveability atlas for Amsterdam, we show that this pattern can answer diverse kinds of geo-analytical questions in terms of valid, automatically constructible GIS workflows over standard data sources.
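
    As a rough illustration of the idea (not the authors' actual ontology pattern), the Python sketch below shows how declared core-concept types could gate which operations a workflow composer is allowed to apply. All type and operation names here are hypothetical.

```python
# Minimal sketch: semantic data types that restrict which GIS operations
# may be applied, so workflow construction only chains meaningful steps.
# All concept, format, and operation names are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class CoreConceptType:
    concept: str   # e.g. "Field", "Object", "Network", "Event"
    format: str    # e.g. "Raster", "Vector"

# Each operation declares the semantic type it consumes and produces.
OPERATIONS = {
    "interpolate": (CoreConceptType("Field", "Vector"),
                    CoreConceptType("Field", "Raster")),
    "buffer":      (CoreConceptType("Object", "Vector"),
                    CoreConceptType("Object", "Vector")),
}

def applicable(op: str, data: CoreConceptType) -> bool:
    """True if the operation is semantically meaningful for the data."""
    required, _produced = OPERATIONS[op]
    return required == data

noise_samples = CoreConceptType("Field", "Vector")  # point measurements of a field
print(applicable("interpolate", noise_samples))     # True: fields may be interpolated
print(applicable("buffer", noise_samples))          # False: syntactically possible, semantically nonsensical
```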

    Uncertainties in land use data

    This paper deals with the description and assessment of uncertainties in land use data derived from remote sensing observations, in the context of hydrological studies. Land use is a categorical regionalised variable reporting the main socio-economic role of each location, where the role is inferred from the pattern of occupation of the land. The properties of this pattern that are relevant to hydrological processes must be known with some accuracy to obtain reliable results; hence, uncertainty in land use data may lead to uncertainty in model predictions. Land use data carry two main kinds of uncertainty: positional and categorical. The first is addressed briefly; the second is explored in more depth, including the factors that influence it. We (1) argue that the conventional method used to assess categorical uncertainty, the confusion matrix, is insufficient to propagate uncertainty through distributed hydrologic models; (2) report some alternative methods to tackle this and other insufficiencies; (3) stress the role of metadata as a more reliable means of assessing the degree of distrust with which these data should be used; and (4) suggest some practical recommendations.
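
    To illustrate the point about the confusion matrix, the sketch below computes the usual aggregate accuracies and then uses column-normalised class probabilities to sample an alternative map realisation, the kind of Monte Carlo input a distributed model would need. The matrix values and map are invented, and a spatially constant error model is, as the paper argues, a crude approximation.

```python
# Sketch with a hypothetical 3-class land use map. A confusion matrix
# yields only aggregate accuracies; propagating categorical uncertainty
# through a distributed model needs per-location class probabilities,
# sampled here in a (crude, spatially constant) Monte Carlo fashion.
import numpy as np

# rows = reference class, columns = mapped class (invented counts)
cm = np.array([[50,  5,  2],
               [ 4, 60,  6],
               [ 1,  3, 70]], dtype=float)

overall_accuracy = np.trace(cm) / cm.sum()
users_accuracy = np.diag(cm) / cm.sum(axis=0)      # per mapped class
producers_accuracy = np.diag(cm) / cm.sum(axis=1)  # per reference class

# P(true class | mapped class): column-normalising the matrix gives one
# simple way to sample alternative map realisations.
p_true_given_mapped = cm / cm.sum(axis=0, keepdims=True)

rng = np.random.default_rng(0)
mapped = rng.integers(0, 3, size=(100, 100))       # hypothetical land use map
realisation = np.array(
    [rng.choice(3, p=p_true_given_mapped[:, c]) for c in mapped.ravel()]
).reshape(mapped.shape)                            # one equally plausible map
```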

    Building Complex and Site Categorization Using Similarity to a Prototypical Site

    This project presents an assessment tool for classifying building complexes using site-based relationships calculated in ArcGIS 9.2 with ModelBuilder and Python scripting. Anthropogenic features extracted from imagery often form the foundation of spatial databases. These data are in turn used to inform situational awareness for relief, law enforcement, and military agencies, among many others. Buildings and the complexes they form are critical features within the landscape, and categorizing a complex requires an understanding of the relationships among the buildings within the site. In this study, building complexes in California were assessed for similarity to a prototypical California high school defined with a training set of known high schools, and compared against a set of uncategorized sites. Eighty-eight percent of the high schools were correctly classified as highly similar to the control data set.
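
    A minimal sketch of the general similarity-to-prototype approach follows; the feature names and threshold are hypothetical stand-ins for the site-based relationships the study derived in ArcGIS.

```python
# Sketch of similarity-to-prototype classification. Features and the
# 0.5 threshold are hypothetical; the study used site-based relationships.
import numpy as np

# Training set: feature vectors of known high schools
# (e.g. building count, mean footprint area, athletic-field presence).
known_high_schools = np.array([
    [12, 1500.0, 1],
    [10, 1400.0, 1],
    [14, 1600.0, 1],
], dtype=float)

prototype = known_high_schools.mean(axis=0)
scale = known_high_schools.std(axis=0) + 1e-9      # avoid division by zero

def similarity(site: np.ndarray) -> float:
    """Inverse standardised Euclidean distance to the prototype."""
    d = np.linalg.norm((site - prototype) / scale)
    return 1.0 / (1.0 + d)

candidate = np.array([11, 1450.0, 1], dtype=float)
print("high school" if similarity(candidate) > 0.5 else "other site")
```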

    Investigation of techniques for inventorying forested regions. Volume 2: Forestry information system requirements and joint use of remotely sensed and ancillary data

    The author has identified the following significant results. Effects of terrain topography in mountainous forested regions on LANDSAT signals and classifier training were found to be significant. The aspect of sloping terrain relative to the sun's azimuth was the major cause of variability. A relative insolation factor could be defined which, in a single variable, represents the joint effects of slope, aspect, and solar geometry on irradiance. Forest canopy reflectances were found, both through simulation and empirically, to have nondiffuse reflectance characteristics. Training procedures could be improved by stratifying in the space of ancillary variables and training in each stratum. Application of the Tasselled-Cap transformation to LANDSAT data acquired over forested terrain could provide a viable technique for data compression and convenient physical interpretation.
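
    The abstract does not give the factor's formula, but one standard formulation of such a relative insolation factor, not necessarily the report's exact definition, is the cosine of the local solar incidence angle:

```python
# One standard formulation of relative insolation (an assumption, not
# necessarily the report's definition): the cosine of the local solar
# incidence angle, combining slope, aspect, and solar position.
from math import cos, sin, radians

def relative_insolation(slope_deg, aspect_deg, sun_zenith_deg, sun_azimuth_deg):
    """cos(i) = cos(Z)cos(s) + sin(Z)sin(s)cos(A_sun - A_aspect)."""
    s, a = radians(slope_deg), radians(aspect_deg)
    z, az = radians(sun_zenith_deg), radians(sun_azimuth_deg)
    return cos(z) * cos(s) + sin(z) * sin(s) * cos(az - a)

# A sun-facing slope receives more irradiance than one facing away:
print(relative_insolation(30, 180, 45, 180))  # ~0.966, slope toward the sun
print(relative_insolation(30,   0, 45, 180))  # ~0.259, slope away from the sun
```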

    Basement fault trends in the Southern North Sea Basin


    Spherical harmonics descriptor for 2D-image retrieval

    In this paper, spherical harmonics are proposed as shape descriptors for 2D images. We introduce the concept of connectivity: 2D images are decomposed using connectivity, followed by 3D model construction. Spherical harmonics are obtained for the 3D models and used as descriptors for the underlying 2D shapes. The difference between two images is computed as the Euclidean distance between their spherical harmonics descriptors. Experiments were performed to test the effectiveness of spherical harmonics for the retrieval of 2D images, using Item S8 of the MPEG-7 still images content set, a dataset of 3621 still images. Experimental results show that the proposed descriptors for 2D images are effective.
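
    A sketch of the retrieval step only: descriptor computation (connectivity decomposition, 3D model construction, spherical harmonics transform) is the paper's contribution and is omitted, and the 64-dimensional descriptors below are an assumption.

```python
# Rank database images by Euclidean distance between precomputed
# spherical-harmonics descriptor vectors (dimensionality assumed).
import numpy as np

def retrieve(query_desc: np.ndarray, db_descs: np.ndarray, k: int = 10):
    """Return indices of the k database images nearest to the query."""
    dists = np.linalg.norm(db_descs - query_desc, axis=1)
    return np.argsort(dists)[:k]

rng = np.random.default_rng(0)
db = rng.random((3621, 64))        # 3621 images, as in the MPEG-7 Item S8 set
query = db[42] + 0.01 * rng.random(64)
print(retrieve(query, db, k=5))    # image 42 should rank first
```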

    HIGH-RESOLUTION MAPPING OF HIERARCHICAL GREATER SAGE-GROUSE NESTING HABITAT: A GRAIN-SPECTRUM APPROACH IN NORTHWESTERN WYOMING

    Our overall objective was to create a probabilistic nesting-habitat map for the Jackson Hole sage-grouse population that would have utility as a tool for future research, conservation, and management. The models we developed for this purpose were specified to evaluate whether sage-grouse may select nesting-habitat characteristics simultaneously at various spatial scales. Our spatially explicit landscape-scale research was implemented primarily with readily available National Agriculture Imagery Program (NAIP) data. All nesting data were collected from 2007 to 2010. We tested how a broad range of grain sizes (spatial resolutions) of covariate values affected the fit of logistic regression models used to estimate parameters for resource selection functions (RSFs). We analyzed habitat response signatures at three scales (extents) of analysis: (1) the nesting-patch scale, (2) the nesting-region scale, and (3) the nest-site scale. Akaike's information criterion corrected for small sample sizes (AICc) and 5-fold cross-validation were used to identify the best-supported and most predictive models at each scale. The RSF models were examined separately and then combined into a weighted, scale-integrated conditional RSF (SRSF) integrating habitat selection signatures across all three scales. At the nesting-patch scale, sage-grouse nesting occurrence was positively associated with patch size and average cover for the patch. At the nesting-region scale, shrub cover at a 769-m-radius grain size was positively associated with nesting-region selection; distance to tall objects and terrain ruggedness also appeared to influence selection at this scale. At the nest-site scale, shrub cover and landscape greenness were positively associated with nest-site selection, and there was noteworthy AICc support for terrain ruggedness. The SRSF provided a single high-resolution probabilistic GIS surface mapping areas that represent attractive sage-grouse nesting habitat.
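
    As a sketch of the model-selection machinery, the snippet below fits a logistic-regression RSF on invented covariates and computes AICc; it is not the study's actual model set or data.

```python
# Sketch: AICc for a logistic-regression resource selection function,
# with invented covariates standing in for shrub cover and ruggedness.
# AICc = AIC + 2k(k+1)/(n - k - 1).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 120
X = sm.add_constant(rng.random((n, 2)))            # intercept + 2 covariates
logits = X @ np.array([-1.0, 2.5, 1.5])
y = rng.random(n) < 1 / (1 + np.exp(-logits))      # used (1) vs. available (0)

fit = sm.Logit(y.astype(float), X).fit(disp=0)
k = X.shape[1]                                     # number of parameters
aic = -2 * fit.llf + 2 * k
aicc = aic + 2 * k * (k + 1) / (n - k - 1)
print(f"AICc = {aicc:.2f}")                        # compare across candidate models
```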

    Exploiting frame coherence in real-time rendering for energy-efficient GPUs

    The computation capabilities of mobile GPUs have greatly evolved over the last generations, allowing real-time rendering of realistic scenes. However, the desire to process complex environments clashes with the battery-operated nature of smartphones, whose users expect long operating times per charge and a temperature low enough to hold the device comfortably. Consequently, improving the energy efficiency of mobile GPUs is paramount to fulfilling both performance and low-power goals. The processors within the GPU and their accesses to off-chip memory are the main sources of energy consumption in graphics workloads. Yet most of this energy is spent on redundant computations, since the frame rates required to produce animation result in sequences of extremely similar images. The goal of this thesis is to improve the energy efficiency of mobile GPUs by designing micro-architectural mechanisms that leverage frame coherence to reduce the redundant computations and memory accesses inherent in graphics applications.

    First, we focus on reducing redundant color computations. Mobile GPUs typically employ an architecture called Tile-Based Rendering, in which the screen is divided into tiles that are rendered independently in on-chip buffers. Commonly, more than 80% of the tiles produce exactly the same output in consecutive frames. We propose Rendering Elimination (RE), a mechanism that accurately detects such occurrences by computing and storing signatures of the inputs of all the tiles in a frame. If the signatures of a tile across consecutive frames are the same, the colors computed in the preceding frame are reused, saving all computations and memory accesses associated with rendering the tile. We show that RE vastly outperforms related schemes in the literature, reducing energy consumption by 37% and execution time by 33% with minimal overheads.

    Next, we focus on reducing redundant computations on fragments that will eventually not be visible. In real-time rendering, objects are processed in the order they are submitted to the GPU, which often causes the results of previously computed objects to be overwritten by new objects that turn out to occlude them. Consequently, whether a particular object will be occluded is not known until the entire scene has been processed. Based on the observation that visibility tends to remain constant across consecutive frames, we propose Early Visibility Resolution (EVR), a mechanism that predicts visibility from information obtained in the preceding frame. EVR computes and stores the depth of the farthest visible point after rendering each tile. Whenever a tile is rendered in the following frame, primitives farther from the observer than the stored depth are predicted to be occluded and are processed after the ones predicted to be visible. Additionally, this visibility prediction scheme improves Rendering Elimination's equal-tile detection by excluding primitives predicted to be occluded from the signature. With minor hardware costs, EVR is shown to reduce energy consumption by 43% and execution time by 39%.

    Finally, we focus on reducing computations in tiles with low spatial frequencies. GPUs produce pixel colors by sampling triangles once per pixel and performing computations on each sampling location. However, most screen regions do not contain enough detail to require such high sampling rates, so a significant amount of energy is wasted computing the same color for neighboring pixels. Given that spatial frequencies are maintained across frames, we propose Dynamic Sampling Rate (DSR), a mechanism that analyzes the spatial frequencies of tiles and determines the best sampling rate for them, which is applied in the following frame. Results show that Dynamic Sampling Rate significantly reduces processor activity, yielding energy savings of 40% and execution time reductions of 35%.
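
    As a software analogy for the signature idea behind Rendering Elimination (the thesis implements it in GPU hardware), the sketch below hashes each tile's inputs and reuses the previous frame's colors on a match; all structures are simplified stand-ins.

```python
# Sketch of Rendering Elimination's signature check. A tile whose input
# signature matches the previous frame's keeps last frame's colors
# instead of being re-rendered; everything here is a software stand-in.
import hashlib

prev_signatures: dict[int, bytes] = {}   # tile id -> last frame's signature
framebuffer: dict[int, bytes] = {}       # tile id -> rendered tile colors

def render_tile(tile_id: int, inputs: bytes) -> bytes:
    # Stand-in for the expensive rasterisation and shading work.
    return hashlib.sha1(inputs).digest()

def render_frame(tile_inputs: dict[int, bytes]) -> None:
    for tile_id, inputs in tile_inputs.items():
        sig = hashlib.blake2b(inputs, digest_size=8).digest()
        if prev_signatures.get(tile_id) == sig:
            continue                     # identical inputs: reuse stored colors
        framebuffer[tile_id] = render_tile(tile_id, inputs)
        prev_signatures[tile_id] = sig
```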