75 research outputs found

    GeoComputational Intelligence and High-Performance Geospatial Computing

    Get PDF
    Assistant Professor, School of Natural Resources, Center for Advanced Land Management Information Technologies, University of Nebraska – Lincoln

    Geocomputation methods for spatial economic analysis

    Full text link
    Doctoral thesis defended at the Universidad Autónoma de Madrid, Facultad de Ciencias Económicas y Empresariales, Departamento de Economía Aplicada, on 18-02-2019. Geocomputation is a new scientific paradigm that uses computational techniques to analyze spatial phenomena. Spatial economics and regional science quickly adopted geocomputation techniques to study the complex structures of urban and regional systems. This thesis contributes to the use of geocomputation in spatial economic analysis through the construction and application of a new set of algorithms and functions in the R programming language for handling spatial economic data. First, we created the 'DataSpa' package, which collects data at low geographical levels to generate socio-economic information for Spanish municipalities using URL parsing, PDF extraction and web scraping. Second, based on a search-and-replace algorithm, we built the 'msp' package to harmonize data with accuracy problems such as spelling errors, abbreviated acronyms and names listed differently. This methodology enabled the study of patenting activity and research collaboration in Chile between 1989 and 2013. We also adapted classical spatial autocorrelation methods to visualize and explore the existence of productivity spillovers among the network's members. Finally, we created 'estdaR' to improve knowledge of Chile's urban system by evaluating the influence of spatial proximity among human settlements on the evolution of cities. The package contains new tools for exploratory spatio-temporal data analysis that are very useful for detecting spatial differences in time trends. All R code used in the computations, and the packages themselves, are considered research results and are freely available to other researchers in a GitHub repository.
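    As an illustration of the kind of string harmonization the 'msp' package is described as performing, the sketch below matches misspelled or abbreviated names to a canonical list using approximate string distance in base R. The function name, example names and matching rule are illustrative assumptions, not the actual 'msp' interface.

        # Minimal sketch of name harmonization by approximate matching
        # (illustrative only; not the actual 'msp' package interface).
        harmonize <- function(raw_names, canonical) {
          # Levenshtein distance between every raw name and every canonical name
          d <- adist(tolower(raw_names), tolower(canonical))
          # Replace each raw name with its closest canonical spelling
          canonical[apply(d, 1, which.min)]
        }

        raw       <- c("Univ. de Chile", "Universidad de  Chile", "UNIV CHILE")
        canonical <- c("Universidad de Chile",
                       "Pontificia Universidad Catolica de Chile")
        harmonize(raw, canonical)  # each variant maps to "Universidad de Chile"

    In practice a distance threshold would be added so that names with no sufficiently close canonical match are flagged for manual review rather than silently replaced.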

    Designing visual analytics methods for massive collections of movement data

    Get PDF
    Exploration and analysis of large data sets cannot be carried out using purely visual means but require the involvement of database technologies, computerized data processing, and computational analysis methods. An appropriate combination of these technologies and methods with visualization may facilitate synergetic work of computer and human, whereby the unique capabilities of each “partner” can be utilized. We suggest a systematic approach to defining which methods and techniques, and which ways of linking them, can appropriately support such work. The main idea is that software tools prepare and visualize the data so that the human analyst can detect various types of patterns by looking at the visual displays. To facilitate the detection of patterns, we must understand what types of patterns may exist in the data (or, more exactly, in the underlying phenomenon). This study focuses on data describing movements of multiple discrete entities that change their positions in space while preserving their integrity and identity. We define the possible types of patterns in such movement data on the basis of an abstract model of the data as a mathematical function that maps entities and times onto spatial positions. Then, we look for data transformations, computations, and visualization techniques that can facilitate the detection of these types of patterns and are suitable for very large data sets, possibly too large to fit in a computer's memory.
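    The sketch below illustrates the kind of aggregation the abstract argues should precede visualization: raw movement records are grouped into space-time bins so that only the aggregates reach the visual displays. The data frame and column names (entity_id, timestamp, x, y) and the bin sizes are assumptions for illustration, not the authors' toolset.

        # Minimal sketch: aggregate movement records into space-time cells
        # before visualization, so memory use no longer scales with raw data size.
        library(dplyr)

        aggregate_moves <- function(moves, cell_m = 1000, interval = "1 hour") {
          moves %>%
            mutate(
              cell_x = floor(x / cell_m),                 # spatial grid cell (x)
              cell_y = floor(y / cell_m),                 # spatial grid cell (y)
              t_bin  = cut(timestamp, breaks = interval)  # temporal bin (POSIXct)
            ) %>%
            group_by(cell_x, cell_y, t_bin) %>%
            summarise(n_entities = n_distinct(entity_id),
                      n_records  = n(),
                      .groups = "drop")
        }

    For truly massive collections the same grouping would be pushed into the database rather than done in memory, which is the division of labour the paper advocates.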

    Parameterization Of Turbulence Models Using 3DVAR Data Assimilation

    Full text link
    In this research the 3DVAR data assimilation scheme is implemented in the numerical model DIVAST in order to optimize the performance of the numerical model by selecting an appropriate turbulence scheme and tuning its parameters. Two turbulence closure schemes, the Prandtl mixing length model and the two-equation k-ε model, were incorporated into DIVAST and examined with respect to their universality of application, complexity of solution, computational efficiency and numerical stability. A square harbour with one symmetrical entrance subject to tide-induced flows was selected to investigate the structure of turbulent flows. The experimental part of the research was conducted in a tidal basin. A significant advantage of such a laboratory experiment is a fully controlled environment in which the domain setup and forcing are user-defined. The research shows that the Prandtl mixing length model and the two-equation k-ε model, with default parameterization predefined according to literature recommendations, overestimate eddy viscosity, which in turn results in a significant underestimation of velocity magnitudes in the harbour. Assimilating the model-predicted velocities with the laboratory observations significantly improves the predictions of both turbulence models by adjusting the modelled flows in the harbour to match the de-errored observations. 3DVAR also allows shortcomings of the numerical model to be identified and quantified. Such comprehensive analysis gives an optimal solution from which numerical model parameters can be estimated. The process of turbulence model optimization by reparameterization and tuning towards an optimal state led to new constants that may potentially be applied to complex turbulent flows, such as rapidly developing or recirculating flows.
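    For reference, 3DVAR produces the analysis state by minimizing the standard variational cost function below; the abstract does not specify the background and observation error covariances used with DIVAST, so the notation is generic.

        J(\mathbf{x}) = \tfrac{1}{2}(\mathbf{x}-\mathbf{x}_b)^{\mathrm{T}} \mathbf{B}^{-1} (\mathbf{x}-\mathbf{x}_b)
                      + \tfrac{1}{2}\bigl(\mathbf{y}-H(\mathbf{x})\bigr)^{\mathrm{T}} \mathbf{R}^{-1} \bigl(\mathbf{y}-H(\mathbf{x})\bigr)

    where \mathbf{x}_b is the background (model) state, \mathbf{y} the vector of observations (here the laboratory velocity measurements), H the observation operator, and \mathbf{B} and \mathbf{R} the background and observation error covariance matrices.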

    Socioeconomic inequality of cancer mortality in the United States: a spatial data mining approach

    Get PDF
    BACKGROUND: The objective of this study was to demonstrate the use of an association rule mining approach to discover associations between selected socioeconomic variables and the four leading causes of cancer mortality in the United States. An association rule mining algorithm was applied to extract associations between the 1988–1992 cancer mortality rates for colorectal, lung, breast, and prostate cancers, defined at the Health Service Area level, and selected socioeconomic variables from the 1990 United States census. Geographic information system technology was used to integrate these data, which were defined at different spatial resolutions, and to visualize and analyze the results of the association rule mining process. RESULTS: Health Service Areas with high rates of low education, high unemployment, and low-paying jobs were found to be associated with higher rates of cancer mortality. CONCLUSION: Association rule mining combined with geographic information technology helps reveal the spatial patterns of socioeconomic inequality in cancer mortality in the United States and identify regions that need further attention.
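    As a sketch of the rule-extraction step, the snippet below runs the Apriori implementation from the R 'arules' package on a hypothetical data frame hsa_cat in which the census variables and mortality rates have already been discretized into categories per Health Service Area; the thresholds and item labels are illustrative, not those of the study.

        # Minimal sketch of association rule mining on discretized HSA-level data
        library(arules)

        # hsa_cat: hypothetical data.frame of factors, one row per Health Service Area,
        # e.g. education = low/medium/high, lung_mortality = low/medium/high
        trans <- as(hsa_cat, "transactions")

        rules <- apriori(
          trans,
          parameter  = list(support = 0.05, confidence = 0.6),
          appearance = list(rhs = "lung_mortality=high", default = "lhs")
        )
        inspect(head(sort(rules, by = "lift"), 10))  # strongest rules first

    Restricting the rule consequents to a single mortality item, as above, keeps the output focused on rules of the form "socioeconomic conditions imply high mortality".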

    Unraveling urban form and collision risk: The spatial distribution of traffic accidents in Zanjan, Iran

    Get PDF
    Official statistics demonstrate the role of traffic accidents in the increasing number of fatalities, especially in emerging countries. In recent decades, the rate of deaths and injuries caused by traffic accidents in Iran, a rapidly growing economy in the Middle East, has risen significantly with respect to that of neighboring countries. The present study illustrates an exploratory spatial analysis framework aimed at identifying and ranking hazardous locations for traffic accidents in Zanjan, one of the most populous and dense cities in Iran. This framework quantifies the spatiotemporal association among collisions by comparing the results of different approaches, including Kernel Density Estimation (KDE), Natural Breaks Classification (NBC), and the Knox test. Based on descriptive statistics, five distance classes (2–26, 27–57, 58–105, 106–192, and 193–364 meters) were tested when predicting the location of the nearest collision within the same temporal unit. The empirical results of our work demonstrate that the largest roads and intersections in Zanjan had a significantly higher frequency of traffic accidents than other locations. A comparative analysis of distance bandwidths indicates that the first class (2–26 m) concentrated the most intense level of spatiotemporal association among traffic accidents. Prevention (or reduction) of traffic accidents may benefit from automatic identification and classification of the riskiest locations in urban areas. Thanks to the growing availability of open-access datasets reporting the location and characteristics of car accidents in both advanced countries and emerging economies, our study demonstrates the potential of an integrated analysis of the level of spatiotemporal association in traffic collisions over metropolitan regions.
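    A minimal sketch of a Knox-style space-time association count is given below, assuming a data frame acc with coordinates in metres (x, y) and a numeric day of occurrence; the 26 m / same-day thresholds mirror the first distance class reported above, and the permutation reference is the usual Monte Carlo scheme, not necessarily the exact procedure of the study.

        # Knox-type test: count accident pairs that are close in both space and time,
        # then compare against a permutation reference. Suitable for modest n,
        # since the pairwise distance matrices are O(n^2) in memory.
        knox_count <- function(x, y, t, d_max = 26, t_max = 1) {
          ds <- as.matrix(dist(cbind(x, y)))   # pairwise spatial distances (m)
          dt <- as.matrix(dist(t))             # pairwise time differences (days)
          sum(ds <= d_max & dt <= t_max & upper.tri(ds))  # each pair counted once
        }

        observed <- knox_count(acc$x, acc$y, acc$day)
        # Shuffle the times to break any genuine space-time link
        perm    <- replicate(999, knox_count(acc$x, acc$y, sample(acc$day)))
        p_value <- (sum(perm >= observed) + 1) / (length(perm) + 1)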