
    Neural Network Modelling of Constrained Spatial Interaction Flows

    Fundamental to regional science is the subject of spatial interaction. GeoComputation - a new research paradigm that represents the convergence of computer science, geographic information science, mathematics and statistics - has brought many scholars back to spatial interaction modelling. Neural spatial interaction modelling represents a clear break with the traditional methods used to explicate spatial interaction. Neural spatial interaction models are termed neural because they are based on neurocomputing. They are closely related to conventional unconstrained spatial interaction models of the gravity type, and under commonly met conditions they can be understood as a special class of general feedforward neural network models with a single hidden layer and sigmoidal transfer functions (Fischer 1998). Such models have been used to model journey-to-work flows and telecommunications traffic (Fischer and Gopal 1994, Openshaw 1993), and they appear to provide superior levels of performance compared with unconstrained conventional models. In many practical situations, however, we have - in addition to the spatial interaction data itself - some information about various accounting constraints on the predicted flows. In principle, there are two ways to incorporate accounting constraints into neural spatial interaction modelling: the required constraint properties can be built into a post-processing stage, or they can be built directly into the model structure. The first way is relatively straightforward, but it is inefficient and results in a model that does not inherently respect the constraints. We therefore follow the second way. In this paper we present a novel class of neural spatial interaction models that incorporate origin-specific constraints into the model structure, using product units rather than summation units at the hidden layer and softmax units at the output layer. Product unit neural networks are powerful because of their ability to handle higher-order combinations of inputs, but parameter estimation by standard techniques such as gradient descent may be difficult. The performance of this novel class of spatial interaction models is demonstrated using Austrian interregional traffic data, with the conventional singly constrained spatial interaction model of the gravity type as benchmark.

    References
    Fischer M M (1998) Computational neural networks: A new paradigm for spatial analysis. Environment and Planning A 30(10): 1873-1891
    Fischer M M, Gopal S (1994) Artificial neural networks: A new approach to modelling interregional telecommunication flows. Journal of Regional Science 34(4): 503-527
    Openshaw S (1993) Modelling spatial interaction using a neural net. In: Fischer M M, Nijkamp P (eds) Geographical information systems, spatial modelling, and policy evaluation, pp 147-164. Springer, Berlin
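    The architecture described in this abstract can be illustrated with a small sketch. The Python/NumPy code below is a minimal, hedged illustration and not the authors' implementation: it assumes two inputs per origin-destination pair (a destination attractiveness measure and a separation measure), uses randomly initialised weights in place of estimated parameters, and only shows how product units in the hidden layer combined with a softmax over destinations enforce the origin (outflow) constraints by construction.

```python
# Minimal sketch (not the authors' implementation) of an origin-constrained
# neural spatial interaction model with product units in the hidden layer
# and a softmax over destinations. Inputs, sizes and weights are assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_origins, n_dest, n_hidden = 5, 8, 3

# Hypothetical data: destination attractiveness, separation matrix, observed outflows.
attract = rng.uniform(1.0, 10.0, size=n_dest)             # a_j
dist = rng.uniform(1.0, 20.0, size=(n_origins, n_dest))   # d_ij
outflow = rng.uniform(100.0, 500.0, size=n_origins)       # O_i (origin constraint)

# Product-unit weights: hidden unit h computes prod_k x_k ** w[h, k].
w = rng.normal(0.0, 0.5, size=(n_hidden, 2))   # exponents for (a_j, d_ij)
v = rng.normal(0.0, 0.5, size=n_hidden)        # hidden-to-output weights

def predict_flows(attract, dist, outflow, w, v):
    """Return a flow matrix T[i, j] that satisfies sum_j T[i, j] = O_i by construction."""
    # Stack inputs so x[i, j] = (a_j, d_ij); work in log space for product units.
    log_x = np.stack([np.log(np.broadcast_to(attract, dist.shape)),
                      np.log(dist)], axis=-1)              # shape (I, J, 2)
    # Product units: exp(sum_k w[h, k] * log x_k) = prod_k x_k ** w[h, k]
    hidden = np.exp(log_x @ w.T)                            # shape (I, J, H)
    scores = hidden @ v                                     # shape (I, J)
    # Softmax over destinations allocates each origin's known outflow.
    p = np.exp(scores - scores.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    return outflow[:, None] * p

T = predict_flows(attract, dist, outflow, w, v)
assert np.allclose(T.sum(axis=1), outflow)   # origin totals are reproduced exactly
```

    In practice the weights would of course be estimated from observed flows, e.g. by gradient-based training, which the abstract notes can be difficult for product units.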

    Determination of the CMSSM Parameters using Neural Networks

    In most (weakly interacting) extensions of the Standard Model the relation mapping the parameter values onto experimentally measurable quantities can be computed (with some uncertainties), but the inverse relation is usually not known. In this paper we demonstrate the ability of artificial neural networks to find this unknown relation, by determining the unknown parameters of the constrained minimal supersymmetric extension of the Standard Model (CMSSM) from quantities that can be measured at the LHC. We expect that the method also works for many other new physics models. We compare its performance with the results of a straightforward \chi^2 minimization. We simulate LHC signals at a center of mass energy of 14 TeV at the hadron level. In this proof-of-concept study we do not explicitly simulate Standard Model backgrounds, but apply cuts that have been shown to enhance the signal-to-background ratio. We analyze four different benchmark points that lie just beyond current lower limits on superparticle masses, each of which leads to around 1000 events after cuts for an integrated luminosity of 10 fb^{-1}. We use up to 84 observables, most of which are counting observables; we do not attempt to directly reconstruct (differences of) masses from kinematic edges or kinks of distributions. We nevertheless find that m_0 and m_{1/2} can be determined reliably, with errors as small as 1% in some cases. With 500 fb^{-1} of data, tan\beta as well as A_0 can also be determined quite accurately. For comparable computational effort, the \chi^2 minimization yielded much worse results.

    Comment: 46 pages, 10 figures, 4 tables; added short paragraph in Section 5 about the goodness of the fit; version to appear in Phys. Rev.
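    The basic strategy of learning the inverse map can be sketched as follows. This Python sketch is purely illustrative and is not the paper's setup: the toy_forward function is a made-up surrogate for the actual CMSSM spectrum calculation and event simulation, the parameter ranges, network size and noise model are assumptions chosen only to make the example self-contained, and scikit-learn's MLPRegressor stands in for whatever network architecture the authors used.

```python
# Hedged sketch of the core idea only: train a feedforward neural network to
# learn the inverse map from (counting) observables back to model parameters.
# Everything below is illustrative; only the parameter names and the number of
# observables are taken from the abstract.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
N_OBS = 84                                    # number of observables, as in the abstract
MIX = rng.normal(0.0, 0.3, size=(7, N_OBS))   # fixed, arbitrary "response" matrix

def toy_forward(params):
    """Toy stand-in for 'parameters -> expected event counts', with Poisson fluctuations."""
    m0, m12, tanb, A0 = (params[:, 0] / 1000, params[:, 1] / 1000,
                         params[:, 2] / 50, params[:, 3] / 2000)
    basis = np.stack([m0, m12, tanb, A0, m0 * m12, m12**2, np.abs(A0)], axis=1)
    rates = np.exp(np.clip(basis @ MIX, -4.0, 4.0)) * 20.0   # positive expected counts
    return rng.poisson(rates).astype(float)

# Sample training points in a hypothetical parameter box.
n_train = 5000
params = np.column_stack([
    rng.uniform(200, 2000, n_train),     # m_0      [GeV]
    rng.uniform(300, 1500, n_train),     # m_{1/2}  [GeV]
    rng.uniform(3, 50, n_train),         # tan(beta)
    rng.uniform(-2000, 2000, n_train),   # A_0      [GeV]
])
obs = toy_forward(params)

# Standardize inputs and outputs, then fit a small feedforward regressor.
x_scaler, y_scaler = StandardScaler().fit(obs), StandardScaler().fit(params)
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
net.fit(x_scaler.transform(obs), y_scaler.transform(params))

# "Measure" one benchmark point and invert it with the trained network.
truth = np.array([[800.0, 600.0, 10.0, -500.0]])
estimate = y_scaler.inverse_transform(net.predict(x_scaler.transform(toy_forward(truth))))
print("true parameters:", truth[0])
print("NN estimate:   ", estimate[0])
```

    The same trained network can be evaluated on many noise realisations of a benchmark point to estimate parameter uncertainties, which is the kind of comparison the abstract draws against the \chi^2 minimization.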