8 research outputs found

    What are the errors we should be seeking to minimise in the estimation process?

    Three methods of fitting a gravity model - a triproportional model - to an observed trip matrix are compared. The first is the familiar practical method of choosing row, column and cost factors so that the model has the same row, column and cost sums as the grossed-up data. The second method, a true maximum likelihood estimation, chooses the factors so that the sums of the observed counts (not grossed-up) are matched. This differs from the first method only when the sampling probabilities vary from cell to cell. The third method applies the more modern approach of selecting a loss function which represents the practical effect of differences between the model values and the true values, and then choosing the model factors so that the expected loss, as far as it can be determined from the sample data and any prior information, is minimised. Squared error in flow times travel time is proposed as the loss function. It is noted that there is a loss function whose use is equivalent to maximum likelihood. When the sample counts are large and the model fits well, each of the methods reduces to minimising a weighted squared difference between the model and the saturated value. The variations in these weights show the differences between the three methods.
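
    As an illustration of the first, practical method, the sketch below fits row, column and cost-bin factors by iterative proportional scaling until the model reproduces the observed row, column and cost sums. The synthetic data, bin structure and iteration count are illustrative assumptions, not values from the paper.

        import numpy as np

        # Minimal sketch of the "practical" fitting method: scale row, column
        # and cost-bin factors in turn until the model matrix matches the
        # row, column and cost-bin sums of the observed trip matrix.
        rng = np.random.default_rng(0)
        n, n_bins = 4, 3
        observed = rng.poisson(20.0, size=(n, n)).astype(float)  # observed trips
        cost_bin = rng.integers(0, n_bins, size=(n, n))          # cost stratum per cell

        a = np.ones(n)        # row factors
        b = np.ones(n)        # column factors
        f = np.ones(n_bins)   # cost factors

        def model(a, b, f):
            return np.outer(a, b) * f[cost_bin]

        for _ in range(200):
            a *= observed.sum(axis=1) / model(a, b, f).sum(axis=1)
            b *= observed.sum(axis=0) / model(a, b, f).sum(axis=0)
            for k in range(n_bins):
                mask = cost_bin == k
                f[k] *= observed[mask].sum() / model(a, b, f)[mask].sum()

        m = model(a, b, f)
        print(np.allclose(m.sum(axis=1), observed.sum(axis=1)))  # row sums matched
        print(np.allclose(m.sum(axis=0), observed.sum(axis=0)))  # column sums matched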

    The Internal Validation of a National Model of Long Distance Traffic.

    During 1980/81, the Department of Transport developed a model for describing the distribution of private vehicle trips between 642 districts in Great Britain, using data from household and roadside interviews conducted in 1976 for the Regional Highways Traffic Model, and a new formulation of the gravity model, called a composite approach, in which shorter length movements were described at a finer level of zonal detail than longer movements. This report describes the results of an independent validation exercise conducted for the Department, in which the theoretical basis of the model and the quality of its fit to base year data were examined. The report discusses model specification; input data; calibration issues; and accuracy assessment. The main problems addressed included the treatment of intrazonal and terminal costs, which was thought to be deficient; the trip-end estimates to which the model was constrained, which were shown to have substantial variability and to be biased (though the cause of the latter could be readily removed), with some evidence of geographical under-specification; and the differences between roadside and household interview estimates. The report includes a detailed examination of the composite model specification and contains suggestions for improving the way in which such models are fitted. The main technical developments, for both theory and practice, are the methods developed for assessing the accuracy of the fitted model and for examining the quality of its fit with respect to the observed data, taking account of the variances and covariances of modelled and data values. Overall, the broad conclusion was that, whilst modelled and observed values appeared generally compatible in observed cells, there was clear evidence of inadequacy in certain respects, such as the underestimation of intradistrict trips. This work was done in co-operation with Howard Humphreys and Partners and Transportation Planning Associates, who validated the model against independent external data; their work is reported separately.
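
    The variance-aware fit assessment the abstract describes lends itself to a small illustration. The sketch below compares modelled and observed flows with a Mahalanobis-type statistic; the data and the simple diagonal covariance are illustrative assumptions, not values or methods taken from the report.

        import numpy as np

        # Hedged sketch of a fit check that allows for the variances and
        # covariances of modelled and observed values, in the spirit of the
        # accuracy assessment described above.
        rng = np.random.default_rng(1)
        k = 5
        observed = rng.poisson(50.0, size=k).astype(float)   # observed flows (illustrative)
        modelled = observed + rng.normal(0.0, 4.0, size=k)   # modelled flows (illustrative)

        # Assumed covariance of (modelled - observed); a diagonal Poisson-style
        # approximation stands in for a full covariance matrix here.
        cov = np.diag(observed + 16.0)

        d = modelled - observed
        # Under a well-fitting model this is roughly chi-squared with k degrees
        # of freedom, so large values flag a poor fit.
        stat = d @ np.linalg.solve(cov, d)
        print(f"fit statistic = {stat:.2f} on {k} degrees of freedom")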

    Braess's paradox of traffic flow

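    Although no full text is available for this output, the paradox itself admits a one-screen numeric check. The sketch below works through the standard four-node example, which is textbook material rather than anything recovered from the missing text: adding a free link raises every traveller's equilibrium cost from 1.5 to 2.

        # Classic Braess network: one unit of flow from s to t; latency x on
        # edges s->v and w->t (x = flow on the edge), constant 1 on s->w and v->t.

        def cost_without_shortcut():
            # Equilibrium splits the flow evenly, so each route carries x = 0.5.
            x = 0.5
            return x + 1.0          # 1.5 per traveller

        def cost_with_shortcut():
            # A zero-latency link v->w makes s->v->w->t dominant for everyone,
            # pushing x to 1 on both flow-dependent edges.
            x = 1.0
            return x + 0.0 + x      # 2.0 per traveller: worse for all

        print(cost_without_shortcut())  # 1.5
        print(cost_with_shortcut())     # 2.0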

    The hardness of network design for unsplittable flow with selfish users

    In this paper we consider the network design for selfish users problem, where we assume the more realistic unsplittable model in which the users can have general demands and each user must choose a single path between its source and its destination. This model is also called an atomic (weighted) network congestion game. The problem can be stated as follows: given a network, which edges should be removed to minimize the cost of the worst Nash equilibrium? We consider both computational issues and existential issues (i.e. the power of network design). We give inapproximability results and approximation algorithms for this network design problem. For networks with linear edge latency functions we prove that there is no approximation algorithm for this problem with approximation ratio less than (3 + √5)/2 ≈ 2.618 unless P = NP. We also show that for networks with polynomial edge latency functions of degree d there is no approximation algorithm for this problem with approximation ratio less than d^Θ(d) unless P = NP.
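
    To make the design question concrete, the following brute-force sketch builds a tiny atomic congestion game on the Braess network with two unit-weight users, finds all pure Nash equilibria by enumeration, and shows that deleting the shortcut edge lowers the worst equilibrium cost. The instance is an illustrative assumption, not one taken from the paper.

        from itertools import product

        # Edge latency functions, as functions of total load on the edge.
        LAT = {
            "sv": lambda x: x,    # flow-dependent
            "vt": lambda x: 2.0,  # constant
            "sw": lambda x: 2.0,  # constant
            "wt": lambda x: x,    # flow-dependent
            "vw": lambda x: 0.0,  # the zero-latency shortcut
        }

        PATHS = {
            "P1": ("sv", "vt"),
            "P2": ("sw", "wt"),
            "P3": ("sv", "vw", "wt"),  # only available if the shortcut is kept
        }

        def loads_of(profile):
            # Total load per edge when each user (unit weight) picks one path.
            loads = {e: 0.0 for e in LAT}
            for p in profile:
                for e in PATHS[p]:
                    loads[e] += 1.0
            return loads

        def path_cost(path, loads):
            return sum(LAT[e](loads[e]) for e in PATHS[path])

        def is_nash(profile, allowed):
            # Pure Nash: no user can strictly improve by switching paths alone.
            for i, p in enumerate(profile):
                here = path_cost(p, loads_of(profile))
                for q in allowed:
                    alt = list(profile)
                    alt[i] = q
                    if path_cost(q, loads_of(alt)) < here:
                        return False
            return True

        def worst_equilibrium(allowed):
            return max(sum(path_cost(p, loads_of(prof)) for p in prof)
                       for prof in product(allowed, repeat=2)
                       if is_nash(prof, allowed))

        print(worst_equilibrium(["P1", "P2", "P3"]))  # with the shortcut: 8.0
        print(worst_equilibrium(["P1", "P2"]))        # shortcut removed: 6.0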

    Improved bounds and new trade-offs for dynamic all pairs shortest paths

    Let G be a directed graph with n vertices, subject to dynamic updates, and such that each edge weight can assume at most S different arbitrary real values throughout the sequence of updates. We present a new algorithm for maintaining all pairs shortest paths in G in O(S^0.5 · n^2.5 · log^1.5 n) amortized time per update and in O(1) worst-case time per distance query. This improves over previous bounds. We also show how to obtain query/update trade-offs for this problem, by introducing two new families of algorithms. Algorithms in the first family achieve an update bound of Õ(S · κ · n^2) and a query bound of Õ(n/κ), and improve over the best known update bounds for κ in the range (n/S)^(1/3) ≤ κ < (n/S)^(1/2). Algorithms in the second family achieve an update bound of Õ(S · κ · n^2) and a query bound of Õ(n^2/κ^2), and are competitive with the best known update bounds (first family included) for κ in the range (n/S)^(1/6) ≤ κ < (n/S)^(1/3).
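
    For contrast with the paper's bounds, here is a hedged baseline sketch of the problem being solved, not the paper's algorithm: recompute all pairs shortest paths from scratch with Floyd-Warshall after every update, giving O(n^3) per update and O(1) per query. The class name and toy graph are illustrative assumptions.

        import math

        class NaiveDynamicAPSP:
            """Naive dynamic APSP: full recomputation on each update."""

            def __init__(self, n):
                self.n = n
                self.w = [[math.inf] * n for _ in range(n)]
                for i in range(n):
                    self.w[i][i] = 0.0
                self._recompute()

            def _recompute(self):
                # Floyd-Warshall over the current weight matrix: O(n^3).
                d = [row[:] for row in self.w]
                for k in range(self.n):
                    for i in range(self.n):
                        for j in range(self.n):
                            if d[i][k] + d[k][j] < d[i][j]:
                                d[i][j] = d[i][k] + d[k][j]
                self.d = d

            def update(self, u, v, weight):
                # O(n^3) here; the paper's algorithms achieve
                # O(S^0.5 · n^2.5 · log^1.5 n) amortized per update.
                self.w[u][v] = weight
                self._recompute()

            def query(self, u, v):
                # O(1) worst case, matching the paper's query bound.
                return self.d[u][v]

        g = NaiveDynamicAPSP(4)
        g.update(0, 1, 3.0); g.update(1, 2, 1.0); g.update(0, 2, 10.0)
        print(g.query(0, 2))  # 4.0, via the two-hop path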