
    The urban sprawl dynamics: does a neural network understand the spatial logic better than a cellular automata?

    Cellular Automata (CA) are usually considered the most efficient technology for understanding the spatial logic of urban dynamics: they are inherently spatial, simple and computationally efficient, and able to represent a wide range of patterns and situations. Nevertheless, the implementation of a CA requires the formulation of explicit spatial rules, which is the greatest limitation of this approach. However rich and complex the rules are, they cannot satisfactorily capture the variety of the real processes. Recent developments in natural algorithms, and particularly in Artificial Neural Networks (ANNs), allow the approach to be reversed: the rules and behaviours of urban land use dynamics are learned directly from the database, following a bottom-up process. The basic problem is to discover how, and to what extent, the land use change of each cell i at time t+1 is determined by the neighbouring conditions (the CA assumption) or by other social, environmental and territorial features (e.g. political maps, planning rules) that held at the previous time t. Once the ANN has learned the rules, it can predict the changes at time t+2 and beyond. In this paper we show and discuss the prediction capability of different architectures of supervised and unsupervised ANNs. The case study and database concern the land use dynamics, between two temporal thresholds, in the southern metropolitan area of Milan. The records were randomly split into two sets, which were used alternately for the training and testing phases of each ANN. The performance of the different ANNs was evaluated with statistical functions. Finally, for the prediction, we used the average of the predictions of the 10 ANNs and tested the results with the same statistical functions.
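    A minimal sketch of the kind of pipeline the abstract describes, not the authors' actual implementation: a supervised network learns per-cell land-use transitions at time t+1 from neighbourhood and territorial features at time t, the records are randomly split into two sets, and the final prediction averages an ensemble of 10 networks. All feature names, shapes, and data here are hypothetical placeholders.

```python
# Sketch only: learn land-use transition rules from data with an
# ensemble of supervised neural networks. Data is random placeholder
# material; real inputs would be per-cell neighbourhood states and
# territorial features (e.g. planning-rule indicators) at time t.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.random((5000, 12))          # hypothetical features of cell i at time t
y = rng.integers(0, 3, size=5000)   # hypothetical land-use class at time t+1

# Random split into two sets, used here for training and testing.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

# Train 10 networks and average their predicted probabilities,
# in the spirit of the ensemble averaging the abstract mentions.
nets = [MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                      random_state=k).fit(X_train, y_train)
        for k in range(10)]
avg_proba = np.mean([net.predict_proba(X_test) for net in nets], axis=0)
y_pred = avg_proba.argmax(axis=1)
print("ensemble accuracy:", accuracy_score(y_test, y_pred))
```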

    Discriminatively Trained Latent Ordinal Model for Video Classification

    We study the problem of video classification for facial analysis and human action recognition. We propose a novel weakly supervised learning method that models a video as a sequence of automatically mined, discriminative sub-events (e.g. the onset and offset phases of a "smile", or the running and jumping phases of a "highjump"). The proposed model is inspired by recent work on Multiple Instance Learning and latent SVM/HCRF: it extends such frameworks to approximately model the ordinal aspect of the videos. We obtain consistent improvements over relevant competitive baselines on four challenging and publicly available video-based facial analysis datasets for prediction of expression, clinical pain and intent in dyadic conversations, and on three challenging human action datasets. We also validate the method with qualitative results and show that they largely support the intuitions behind the method.
    Comment: Paper accepted in IEEE TPAMI. arXiv admin note: substantial text overlap with arXiv:1604.0150
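    An illustrative sketch of the core idea, not the paper's model: score a video as an ordered sequence of K latent sub-events, MIL-style, where each sub-event has a linear template and the best temporally ordered placement of templates over frames is found by dynamic programming. The function name, feature shapes, and templates are all assumptions for illustration.

```python
# Sketch only: latent ordinal scoring of a video. Each of K sub-events
# has a linear template; we pick K frames in strictly increasing
# temporal order that maximise the summed template responses.
import numpy as np

def ordinal_latent_score(frames, templates):
    """frames: (T, D) per-frame features; templates: (K, D) sub-event weights.
    Returns the best score over temporally ordered sub-event placements."""
    responses = frames @ templates.T        # (T, K) template response per frame
    T, K = responses.shape
    # dp[t, k] = best score placing sub-events 0..k with the k-th at frame t
    dp = np.full((T, K), -np.inf)
    dp[:, 0] = responses[:, 0]
    for k in range(1, K):
        # best placement of sub-event k-1 at or before each frame
        best_prev = np.maximum.accumulate(dp[:, k - 1])
        # sub-event k must occur strictly later than sub-event k-1
        dp[1:, k] = best_prev[:-1] + responses[1:, k]
    return dp[:, -1].max()

rng = np.random.default_rng(0)
score = ordinal_latent_score(rng.random((50, 16)), rng.random((3, 16)))
print("ordinal latent score:", score)
```

    In a discriminative training loop, this max over latent placements would play the role of the inference step inside a latent-SVM-style objective; here it is shown only as a standalone scoring function.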