
    Multi Agent-Based Environmental Landscape (MABEL) - An Artificial Intelligence Simulation Model: Some Early Assessments

    The Multi Agent-Based Environmental Landscape model (MABEL) introduces a Distributed Artificial Intelligence (DAI) systemic methodology to simulate land use and transformation changes over time and space. Computational agents represent abstract relations among geographic, environmental, human, and socio-economic variables with respect to changes in land transformation patterns. A multi-agent environment is developed that provides task-nonspecific problem-solving abilities, flexibility in achieving goals, representation of relations observed in real-world scenarios, and goal-based efficiency. Intelligent MABEL agents acquire spatial expressions and perform specific tasks, demonstrating autonomy, environmental interaction, communication and cooperation, reactivity and proactivity, and reasoning and learning capabilities. Their decisions maximize both the task-specific marginal utility of their actions and the joint, weighted marginal utility of their time-stepping. Agent behavior is achieved by personalizing a dynamic utility-based knowledge base through sequential GIS filtering, probability-distributed weighting, joint-probability Bayesian correlational weighting, and goal-based distributional properties, applied to socio-economic and behavioral criteria. The first-order logics, heuristics, and appropriation of time-step sequences employed provide a simulatable environment capable of regenerating the space-time evolution of the agents.
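
    To make the decision rule concrete, the following is a minimal sketch of a utility-maximizing agent step, assuming hypothetical action names and equal weights; it illustrates the weighted marginal-utility idea only and is not the MABEL implementation.

```python
# Minimal sketch of a utility-maximizing land-use agent (hypothetical names;
# not the original MABEL code).
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    task_utility: float   # task-specific marginal utility of the action
    joint_utility: float  # joint, weighted marginal utility over the time step

def choose_action(actions, task_weight=0.5, joint_weight=0.5):
    """Pick the action that maximizes the weighted sum of the two utility terms."""
    return max(actions, key=lambda a: task_weight * a.task_utility
                                      + joint_weight * a.joint_utility)

candidates = [Action("convert_to_residential", 0.7, 0.4),
              Action("keep_agricultural", 0.5, 0.8)]
print(choose_action(candidates).name)  # -> keep_agricultural (0.65 vs 0.55)
```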

    A Review of Machine Learning Approaches for Real Estate Valuation

    Real estate managers must identify the value of properties in their current market. Traditionally, this involved simple data analysis with adjustments made based on the manager’s experience. Given the amount of money involved in these decisions, and the complexity and speed at which valuation decisions must be made, machine learning technologies provide a newer alternative for property valuation that could improve upon traditional methods. This study uses a systematic literature review methodology to identify published studies from the past two decades in which specific machine learning technologies have been applied to the property valuation task. We develop a data, reasoning, usefulness (DRU) framework that provides a set of theoretical and practice-based criteria for a multi-faceted performance assessment of each system. This assessment provides the basis for identifying the current state of research in this domain, as well as theoretical and practical implications and directions for future research.
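
    As a rough illustration of the kind of system the review covers, the sketch below fits a gradient-boosted regression to toy property features; the feature names and data are invented for illustration and are not drawn from any reviewed study.

```python
# Illustrative property-valuation model on synthetic data (not from any reviewed study).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(40, 300, n),   # floor area (m^2)
    rng.integers(1, 6, n),     # number of bedrooms
    rng.uniform(0, 30, n),     # distance to city centre (km)
])
# Synthetic price: larger and more central properties are worth more, plus noise.
y = 2000 * X[:, 0] + 15000 * X[:, 1] - 3000 * X[:, 2] + rng.normal(0, 20000, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```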

    G-CSC Report 2010

    The present report gives a short summary of the research of the Goethe Center for Scientific Computing (G-CSC) of the Goethe University Frankfurt. The G-CSC aims at developing and applying methods and tools for modelling and numerical simulation of problems from empirical science and technology. In particular, fast solvers for partial differential equations (PDEs), such as robust, parallel, and adaptive multigrid methods, and numerical methods for stochastic differential equations are developed. These methods are highly advanced and allow complex problems to be solved. The G-CSC is organised in departments and interdisciplinary research groups. Departments are located directly at the G-CSC, while the task of the interdisciplinary research groups is to bridge disciplines and to bring scientists from different departments together. Currently, the G-CSC consists of the department Simulation and Modelling and the interdisciplinary research group Computational Finance.

    IIMA 2018 Proceedings


    Resource Allocation through Auction-based Incentive Scheme for Federated Learning in Mobile Edge Computing

    Mobile Edge Computing (MEC) combined with Federated Learning is considered one of the most capable solutions for AI-driven services. Most studies focus on the security and performance aspects of Federated Learning, but research is lacking on incentive mechanisms for the devices that are connected to a server to perform different tasks. In MEC, edge nodes will not participate voluntarily in the learning process, and nodes differ in their acquisition of multi-dimensional resources, which also affects the performance of federated learning. In a competitive market scenario, auction game theory has been widely popular for designing efficient resource allocation mechanisms, as it particularly focuses on regulating the strategic interactions among self-interested players. In this thesis, I investigate an auction-based approach built on an incentive mechanism that encourages nodes to share their resources and take part in the training process, as well as to maximize the auction revenue. To achieve this research goal, I developed an auction mechanism that considers the network dynamics while neglecting the devices’ computation, and designed a novel generalized first-price auction mechanism to encourage participation of connected devices. Furthermore, I studied the K top best-response bidding strategies that maximize the profits of the resource sellers and guarantee the stability and effectiveness of the auction by satisfying the desired economic properties. Finally, I validate the performance of the proposed auction mechanisms and bidding strategies through numerical result analysis.
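
    For orientation, the sketch below shows a plain first-price allocation of a fixed number of resource slots among bidding edge devices. The bid values and structures are hypothetical, and the sketch does not reproduce the thesis’s generalized mechanism or its K-top best-response strategies.

```python
# Plain first-price auction sketch for allocating k resource slots among edge devices
# (hypothetical data; not the thesis's generalized mechanism).
def first_price_auction(bids, k):
    """bids: dict mapping device -> bid. The k highest bidders win and pay their own bid."""
    winners = sorted(bids, key=bids.get, reverse=True)[:k]
    return {device: bids[device] for device in winners}

bids = {"node_a": 3.0, "node_b": 5.5, "node_c": 4.2, "node_d": 1.8}
print(first_price_auction(bids, k=2))  # node_b and node_c win and pay their bids
```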

    Combining evolutionary algorithms and agent-based simulation for the development of urbanisation policies

    Urban-planning authorities continually face the problem of optimising the allocation of green space over time in developing urban environments. To help in these decision-making processes, this thesis provides an empirical study of using evolutionary approaches to solve sequential decision-making problems under uncertainty in stochastic environments. To achieve this goal, the work is underpinned by a theoretical framework based on the economic model of Alonso and an associated methodology for modelling spatial and temporal urban growth, in order to better understand the complexity inherent in this kind of system and to generate and improve relevant knowledge for the urban planning community. The model was hybridised with cellular automata and an agent-based model and extended to encompass green-space planning based on urban cost and satisfaction. Monte Carlo sampling techniques and the use of the urban model as a surrogate tool were the two main elements investigated and applied to overcome the noise and uncertainty that arise from dealing with future trends and expectations. Once the evolutionary algorithms were equipped with these mechanisms, the problem under consideration was defined and characterised as a type of adaptive submodular problem. Afterwards, the performance of a non-adaptive evolutionary approach was compared with that of a random search and a very smart greedy algorithm, and the way in which the complexity linked to the configuration of the problem modifies the performance of these algorithms was analysed. Later on, two very distinct frameworks incorporating evolutionary algorithm approaches were explored for this problem: (i) an ‘offline’ approach, in which a candidate solution encodes a complete set of decisions and is then evaluated by full simulation, and (ii) an ‘online’ approach, which involves a sequential series of optimisations, each making only a single decision and starting its simulation from the endpoint of the previous run.
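
    To illustrate the offline/online distinction described above, the sketch below evaluates candidate decision sequences with a stand-in simulator; the simulator dynamics, fitness, and decision encoding are placeholders, not the thesis’s Alonso-based urban model.

```python
# Offline vs. online evolutionary evaluation (placeholder simulator and fitness;
# not the thesis's Alonso-based urban model).
import random

HORIZON = 5  # number of sequential green-space decisions

def simulate(decisions, start_state=0.0):
    """Stand-in urban simulator: returns a final 'satisfaction' score after the decisions."""
    state = start_state
    for d in decisions:
        state += d - 0.1 * state  # toy dynamics
    return state

def offline_search(pop_size=20, gens=30):
    """Offline: each candidate encodes the full decision sequence; score by full simulation."""
    pop = [[random.random() for _ in range(HORIZON)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=simulate, reverse=True)
        parents = pop[:pop_size // 2]
        pop = parents + [[g + random.gauss(0, 0.1) for g in random.choice(parents)]
                         for _ in range(pop_size - len(parents))]
    return max(pop, key=simulate)

def online_search(samples=50):
    """Online: optimise one decision at a time, resuming from the previous endpoint."""
    state, decisions = 0.0, []
    for _ in range(HORIZON):
        best = max((random.random() for _ in range(samples)),
                   key=lambda d: simulate([d], start_state=state))
        decisions.append(best)
        state = simulate([best], start_state=state)
    return decisions

print(simulate(offline_search()), simulate(online_search()))
```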

    Creating Sustainable Neighborhood Design for Legacy Cities: A New Framework for Sustainability Assessment

    Highly vacant neighborhoods present challenges for balancing social, environmental, and economic considerations for land reuse. Since the 1960s, many post-industrial cities such as Detroit have seen extreme population decline, creating severe economic loss and disinvestment in their communities. Strategies and opportunities for stabilization and revitalization, especially those that can be created and implemented by community groups, have become particularly important in these legacy (shrinking) cities. This report uses a case study site on the Lower East Side of Detroit to examine how the Community Development Advocates of Detroit (CDAD) Strategic Framework, a new land use and development framework for highly vacant cities, can be used to influence the Leadership in Energy and Environmental Design for Neighborhood Development (LEED-ND) criteria so that they better consider the social, economic, and environmental context of a legacy city. The land use typology described in CDAD’s Strategic Framework informs the criteria in the LEED-ND valuation tool measuring the sustainability of a neighborhood, in order to create a new framework: Sustainable Neighborhood Development for Legacy Cities (SND-LC). SND-LC provides recommendations to further integrate social capital, social equity, and ecological considerations into the two frameworks through various planning and design techniques. Joan Nassauer’s concept of “cues to care” is instrumental for examining social capital in vacant neighborhoods and for identifying opportunities to grow social networks. Recommendations for land use reconsideration call for the integration of social variables such as neighborhood cohesion and access to resources, as well as ecological variables such as stormwater and green-space connectivity. Recommendations in SND-LC encourage retrofitting and illustrate how sustainability can be achieved through a more strategic use of vacant land rather than through compactness or new development. The new credit rating system applies the economic and social conditions of a legacy city to a new valuation system that can allow highly vacant neighborhoods across the country to achieve sustainable neighborhood status.
    Master of Landscape Architecture; Master of Science; Natural Resources and Environment; University of Michigan. http://deepblue.lib.umich.edu/bitstream/2027.42/90939/1/Final OAP Completed Project 4.26.pd

    Remote Sensing for Land Administration 2.0

    The reprint “Land Administration 2.0” is an extension of the previous reprint “Remote Sensing for Land Administration”, another Special Issue in Remote Sensing. This reprint unpacks the responsible use and integration of emerging remote sensing techniques into the domain of land administration, including land registration, cadastre, land use planning, land valuation, land taxation, and land development. The title “Land Administration 2.0” was chosen in reference both to this Special Issue being the second volume on the topic of land administration and to the next-generation requirements of land administration, including demands for 3D, indoor, underground, real-time, high-accuracy, lower-cost, and interoperable land data and information.

    Training of Crisis Mappers and Map Production from Multi-sensor Data: Vernazza Case Study (Cinque Terre National Park, Italy)

    The aim of this paper is to present the development of a multidisciplinary project carried out in cooperation between Politecnico di Torino and ITHACA (Information Technology for Humanitarian Assistance, Cooperation and Action). The goal of the project was to train students attending Architecture and Engineering courses in geospatial data acquisition and processing, in order to start up a team of "volunteer mappers". The project aims to document the environmental and built heritage subject to disaster; the purpose is to improve the capabilities of the actors involved in the activities connected with geospatial data collection, integration, and sharing. The proposed area for testing the training activities is the Cinque Terre National Park, registered in the World Heritage List since 1997. The area was affected by a flood on the 25th of October 2011. In line with other international experiences, the group is expected to be active after emergencies in order to update maps, using data acquired by typical geomatic methods and techniques, such as terrestrial and aerial LiDAR, close-range and aerial photogrammetry, and topographic and GNSS instruments, or by non-conventional systems and instruments such as UAVs and mobile mapping. The ultimate goal is to implement a WebGIS platform to share all the collected data with local authorities and the Civil Protection.

    An uncertainty prediction approach for active learning - application to earth observation

    Mapping land cover and land use dynamics is crucial in remote sensing, since farmers are encouraged to either intensify or extend crop use due to the ongoing rise in the world’s population. A major issue in this area is interpreting and classifying a scene captured in high-resolution satellite imagery. Several methods have been put forth, including neural networks, which generate data-dependent models (i.e. the model is biased toward the data), and static rule-based approaches with thresholds, which are limited in terms of diversity (i.e. the model lacks diversity in terms of rules). However, the problem of having a machine learning model that, given a large amount of training data, can classify multiple classes over different geographic Sentinel-2 imagery and outperform existing approaches remains open. On the other hand, supervised machine learning has become an essential part of many areas due to the increasing number of labeled datasets. Examples include classifiers for applications that recognize images and voices, anticipate traffic, propose products, act as virtual personal assistants, and detect online fraud, among many more. Since these classifiers are highly dependent on the training datasets, without human interaction or accurate labels the performance of the generated classifiers on unseen observations is uncertain. Thus, researchers have attempted to evaluate a number of independent models using a statistical distance. However, the problem of, given a train-test split and classifiers modeled over the train set, identifying a prediction error using the relation between the train and test sets remains open. Moreover, while some training data is essential for supervised machine learning, what happens if there is insufficient labeled data? After all, assigning labels to unlabeled datasets is a time-consuming process that may need significant expert human involvement. When there are not enough expert manual labels available for the vast amount of openly available data, active learning becomes crucial. However, given a large amount of training and unlabeled data, having an active learning model that can reduce the training cost of the classifier and at the same time assist in labeling new data points remains an open problem. From the experimental approaches and findings, the main research contributions, which concentrate on the issue of optical satellite image scene classification, include: building labeled Sentinel-2 datasets with surface reflectance values; the proposal of machine learning models for pixel-based image scene classification; the proposal of a statistical-distance-based Evidence Function Model (EFM) to detect ML model misclassification; and the proposal of a generalised sampling approach for active learning that, together with the EFM, enables a way of determining the most informative examples. Firstly, using a manually annotated Sentinel-2 dataset, Machine Learning (ML) models for scene classification were developed and their performance was compared to Sen2Cor, the reference package from the European Space Agency: a micro-F1 value of 84% was attained by the ML model, a significant improvement over the corresponding Sen2Cor performance of 59%. Secondly, to quantify the misclassification of the ML models, the Mahalanobis-distance-based EFM was devised. This model achieved, for the labeled Sentinel-2 dataset, a micro-F1 of 67.89% for misclassification detection. Lastly, the EFM was engineered as a sampling strategy for active learning, leading to an approach that attains the same level of accuracy with only 0.02% of the total training samples when compared to a classifier trained with the full training set. With the help of the above-mentioned research contributions, we were able to provide an open-source Sentinel-2 image scene classification package consisting of ready-to-use Python scripts and an ML model that classifies Sentinel-2 L1C images, generating a 20m-resolution RGB image with the six studied classes (Cloud, Cirrus, Shadow, Snow, Water, and Other), giving academics a straightforward method for rapidly and effectively classifying Sentinel-2 scene images. Additionally, an active learning approach that uses, as its sampling strategy, the observed prediction uncertainty given by the EFM will allow labeling only the most informative points to be used as input to build classifiers.
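
    As a rough illustration of the kind of distance-based sampling described above, the sketch below ranks unlabeled points by their Mahalanobis distance to the labeled data and selects the farthest (most uncertain) ones for labeling. It is a generic sketch with synthetic data, not the thesis’s Evidence Function Model.

```python
# Generic Mahalanobis-distance sampling for active learning (illustrative only;
# not the thesis's Evidence Function Model).
import numpy as np

def mahalanobis_scores(X_unlabeled, X_labeled):
    """Mahalanobis distance of each unlabeled point to the labeled-data distribution."""
    mu = X_labeled.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(X_labeled, rowvar=False))
    diff = X_unlabeled - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))

def select_most_informative(X_unlabeled, X_labeled, budget=10):
    """Pick the `budget` unlabeled points farthest from the labeled distribution."""
    scores = mahalanobis_scores(X_unlabeled, X_labeled)
    return np.argsort(scores)[::-1][:budget]

rng = np.random.default_rng(0)
X_lab = rng.normal(0.0, 1.0, size=(100, 4))   # e.g. band reflectances of labeled pixels
X_unl = rng.normal(0.5, 1.5, size=(1000, 4))  # unlabeled pixels
print(select_most_informative(X_unl, X_lab, budget=5))  # indices to send for labeling
```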