
    Using the general link transmission model in a dynamic traffic assignment to simulate congestion on urban networks

    This article presents two new models of Dynamic User Equilibrium that are particularly suited for ITS applications, where the evolution of vehicle flows and travel times must be simulated on large road networks, possibly in real time. The key feature of the proposed models is the detailed representation of the main congestion phenomena occurring at nodes of urban networks, such as vehicle queues and their spillback, as well as flow conflicts at merges and diversions. Compared to the simple world of static assignment, where congestion is typically reproduced only along the arc through a separable relation between vehicle flow and travel time, this type of DTA model is considerably more complex, as the above relation becomes non-separable, both in time and in space. Traffic simulation is here attained through a macroscopic flow model that extends the theory of kinematic waves to urban networks and non-linear fundamental diagrams: the General Link Transmission Model (GLTM). The sub-models of the GLTM, namely the Node Intersection Model, the Forward Propagation Model of vehicles and the Backward Propagation Model of spaces, can be combined in two different ways to produce arc travel times starting from turn flows. The first approach considers short time intervals of a few seconds and processes all nodes for each temporal layer in chronological order. The second approach allows long time intervals of a few minutes and, for each sub-model, requires processing the whole temporal profile of the involved variables. The two resulting DTA models are here analyzed and compared with the aim of identifying their possible use cases. A rigorous mathematical formulation is beyond the scope of this paper, as is a detailed explanation of the solution algorithm. The dynamic equilibrium is nonetheless sought through a new method based on Gradient Projection, which is capable of solving both proposed models to any desired precision in a reasonable number of iterations. Its fast convergence is essential to show that, despite their diametrically opposed treatments of the dynamic nature of the problem, the two proposed models of network congestion actually converge at equilibrium to nearly identical solutions in terms of arc flows and travel times, as shown in the numerical tests presented here.
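    To make the first, "short time interval" approach more concrete, the sketch below steps through a toy merge in chronological order, combining a forward propagation of vehicles, a backward propagation of spaces, and a simple node model. Every class and parameter name is hypothetical and the fundamental diagram is triangular; this is a minimal illustration of the scheme described in the abstract, not the authors' implementation.

```python
# Toy chronological-processing loop: short time steps, all nodes handled
# per temporal layer. Triangular fundamental diagram; names are hypothetical.

DT = 1.0  # time step [s]

class Link:
    def __init__(self, length, vf, w, kjam, qmax):
        self.length, self.vf, self.w = length, vf, w   # [m], [m/s], [m/s]
        self.kjam, self.qmax = kjam, qmax              # [veh/m], [veh/s]
        self.inflow_cum = [0.0]    # cumulative vehicles entered, per step
        self.outflow_cum = [0.0]   # cumulative vehicles exited, per step

    def sending(self, t):
        # Forward propagation of vehicles: demand at the downstream end.
        lag = int(self.length / (self.vf * DT))
        arrived = self.inflow_cum[max(t - lag, 0)]
        return max(0.0, min(arrived - self.outflow_cum[t], self.qmax * DT))

    def receiving(self, t):
        # Backward propagation of spaces: supply at the upstream end.
        lag = int(self.length / (self.w * DT))
        freed = self.outflow_cum[max(t - lag, 0)] + self.kjam * self.length
        return max(0.0, min(freed - self.inflow_cum[t], self.qmax * DT))

def node_model(demands, supply):
    # Toy merge model: scale upstream demands to fit the downstream supply.
    total = sum(demands)
    if total <= supply or total == 0.0:
        return list(demands)
    return [d * supply / total for d in demands]

# Two upstream links merging into one downstream link.
up = [Link(500, 15, 5, 0.15, 0.5), Link(500, 15, 5, 0.15, 0.5)]
down = Link(500, 15, 5, 0.15, 0.5)

for t in range(600):
    for link in up + [down]:                      # open the next temporal layer
        link.inflow_cum.append(link.inflow_cum[-1])
        link.outflow_cum.append(link.outflow_cum[-1])
    flows = node_model([l.sending(t) for l in up], down.receiving(t))
    for l, q in zip(up, flows):                   # move vehicles across the node
        l.outflow_cum[t + 1] += q
    down.inflow_cum[t + 1] += sum(flows)
    for l in up:                                  # constant entry demand
        l.inflow_cum[t + 1] += 0.3 * DT
    down.outflow_cum[t + 1] += down.sending(t)    # unconstrained exit

print(f"vehicles through the merge: {down.outflow_cum[-1]:.1f}")
```

    Keeping cumulative inflow and outflow curves per link is what makes both the sending (demand) and receiving (supply) computations a simple lookup per time step; queues and spillback emerge whenever the merge demand exceeds the downstream supply.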

    Control Aware Radio Resource Allocation in Low Latency Wireless Control Systems

    We consider the problem of allocating radio resources over wireless communication links to control a series of independent wireless control systems. Low-latency transmissions are necessary to enable time-sensitive control systems to operate over wireless links with high reliability. Achieving fast data rates over wireless links thus comes at the cost of reliability, in the form of packet error rates that are high compared to wired links due to channel noise and interference. However, the effect of communication link errors on control system performance depends dynamically on the control system state. We propose a novel control-communication co-design approach to the low-latency resource allocation problem. We incorporate control and channel state information to make scheduling decisions over time on frequency, bandwidth and data rates across next-generation Wi-Fi based wireless communication links that close the control loops. Control systems that are closer to instability or further from a desired range in a given control cycle are assigned higher packet delivery rate targets to meet. Rather than a simple priority ranking, we derive precise packet error rate targets for each system needed to satisfy stability objectives and make scheduling decisions to meet such targets while reducing total transmission time. The resulting Control-Aware Low Latency Scheduling (CALLS) method is tested in numerous simulation experiments that demonstrate its effectiveness in meeting control-based goals under tight latency constraints, relative to control-agnostic scheduling.
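    A hedged sketch of the control-aware idea described above: each system receives a packet error rate target derived from a hypothetical scalar stability condition, and the scheduler then pays whatever airtime that target requires under an assumed PER-vs-airtime link model. The functions, plant parameters, and exponential link model are illustrative assumptions, not the paper's derivation.

```python
import math

def per_target(a_open, a_closed, rho=0.95):
    """Toy per-system packet error rate target.

    Assumes a scalar plant with x+ = a_closed*x on delivery and
    x+ = a_open*x on a drop, and requires E[x+^2] <= rho * x^2.
    """
    need = (a_open**2 - rho) / (a_open**2 - a_closed**2)
    p_deliver = min(max(need, 0.0), 1.0)
    return 1.0 - p_deliver

def airtime_for_per(per, snr_db):
    """Hypothetical link model: PER decays exponentially with airtime."""
    snr = 10 ** (snr_db / 10)
    return -math.log(max(per, 1e-6)) / snr

def schedule(systems):
    """Greedy cycle plan (no frequency reuse): give each system the airtime
    needed for its PER target and report the total transmission time."""
    plan = []
    for name, a_open, a_closed, snr_db in systems:
        per = per_target(a_open, a_closed)
        plan.append((name, per, airtime_for_per(per, snr_db)))
    return plan, sum(t for _, _, t in plan)

plan, total = schedule([("plant-1", 1.05, 0.7, 12.0),
                        ("plant-2", 1.30, 0.7, 9.0)])
for name, per, t in plan:
    print(f"{name}: PER target {per:.3f}, airtime {t:.4f}")
print(f"total cycle airtime: {total:.4f}")
```

    Note how the plant that is closer to instability ("plant-2") is allowed a smaller packet error rate, i.e., a higher delivery rate target, which is the qualitative behavior the abstract describes.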

    Restricted Strip Covering and the Sensor Cover Problem

    Given a set of objects with durations (jobs) that cover a base region, can we schedule the jobs to maximize the duration for which the original region remains covered? We call this the sensor cover problem. It arises in the context of covering a region with sensors. For example, suppose you wish to monitor activity along a fence with sensors placed at various fixed locations. Each sensor has a range and a limited battery life. The problem is to schedule when to turn on the sensors so that the fence is fully monitored for as long as possible. This one-dimensional problem involves intervals on the real line. Associating a duration with each yields a set of rectangles in space and time, each specified by a pair of fixed horizontal endpoints and a height. The objective is to assign a vertical position to each rectangle so as to maximize the height up to which the spanning interval is fully covered. We call this one-dimensional problem restricted strip covering. If we replace the covering constraint by a packing constraint, the problem is identical to dynamic storage allocation, a scheduling problem that is a restricted case of the strip packing problem. We show that restricted strip covering is NP-hard and present an O(log log n)-approximation algorithm. We present better approximations or exact algorithms for some special cases. For the uniform-duration case of restricted strip covering we give a polynomial-time exact algorithm, but prove that the uniform-duration case for higher-dimensional regions is NP-hard. Finally, we consider regions that are arbitrary sets and present an O(log n)-approximation algorithm. Comment: 14 pages, 6 figures.
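    As a concrete illustration of the sensor cover objective on a fence, the toy sketch below brute-forces start times on a small discretized instance and reports how long the whole fence stays covered. It only illustrates the problem statement; it has nothing to do with the paper's O(log log n)-approximation algorithm, and all names are hypothetical.

```python
from itertools import product

def covered_duration(fence, jobs, starts, horizon):
    """Given start times, count the time steps during which the whole
    fence [0, fence) is covered (unit-length discretization)."""
    duration = 0
    for t in range(horizon):
        covered = [False] * fence
        for (lo, hi, dur), s in zip(jobs, starts):
            if s <= t < s + dur:
                for x in range(lo, hi):
                    covered[x] = True
        if all(covered):
            duration += 1
    return duration

def best_schedule(fence, jobs, horizon):
    """Brute force over discretized start times (tiny instances only)."""
    best, best_starts = -1, None
    for starts in product(range(horizon), repeat=len(jobs)):
        d = covered_duration(fence, jobs, starts, horizon)
        if d > best:
            best, best_starts = d, starts
    return best, best_starts

# Fence of length 4; each sensor is (left end, right end, battery duration).
jobs = [(0, 2, 2), (2, 4, 2), (0, 4, 1), (1, 3, 3)]
print(best_schedule(4, jobs, horizon=4))
```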

    Changing a semantics: opportunism or courage?

    The generalized models for higher-order logics introduced by Leon Henkin, and their multiple offspring over the years, have become a standard tool in many areas of logic. Even so, discussion has persisted about their technical status, and perhaps even their conceptual legitimacy. This paper gives a systematic view of generalized-model techniques, discusses what they mean in mathematical and philosophical terms, and presents a few technical themes and results about their role in algebraic representation, calibrating provability, lowering complexity, understanding fixed-point logics, and achieving set-theoretic absoluteness. We also show how thinking about Henkin's approach to the semantics of logical systems in this generality can yield new results, dispelling the impression of ad-hocness. This paper is dedicated to Leon Henkin, a deep logician who has changed the way we all work, while also being an always open, modest, and encouraging colleague and friend. Comment: 27 pages. To appear in: The life and work of Leon Henkin: Essays on his contributions (Studies in Universal Logic), eds: Manzano, M., Sain, I. and Alonso, E., 201
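    For readers unfamiliar with the terminology, a generalized (Henkin) model restricts the range of higher-order quantifiers to a designated family of subsets. The snippet below is the standard textbook formulation for monadic second-order logic, included only for orientation; it is not taken from the paper.

```latex
% A general (Henkin) model: a domain D, a designated family of subsets
% \mathcal{D}, and an interpretation I; second-order quantifiers range
% only over \mathcal{D}.
\[
\mathfrak{M} \;=\; \bigl(D,\; \mathcal{D} \subseteq \mathcal{P}(D),\; I\bigr),
\qquad
\mathfrak{M} \models \forall X\,\varphi
\;\iff\;
\mathfrak{M} \models \varphi[X := A] \ \text{for every } A \in \mathcal{D}.
\]
% Full (standard) semantics is the special case \mathcal{D} = \mathcal{P}(D);
% Henkin completeness holds once \mathcal{D} is closed under definability
% (the comprehension axioms).
```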