Control of Robotic Mobility-On-Demand Systems: a Queueing-Theoretical Perspective
In this paper we present and analyze a queueing-theoretical model for
autonomous mobility-on-demand (MOD) systems where robotic, self-driving
vehicles transport customers within an urban environment and rebalance
themselves to ensure acceptable quality of service throughout the entire
network. We cast an autonomous MOD system within a closed Jackson network model
with passenger loss. It is shown that an optimal rebalancing algorithm
minimizing the number of (autonomously) rebalancing vehicles and keeping
vehicle availabilities balanced throughout the network can be found by solving
a linear program. The theoretical insights are used to design a robust,
real-time rebalancing algorithm, which is applied to a case study of New York
City. The case study shows that the current taxi demand in Manhattan can be met
with about 8,000 robotic vehicles (roughly 60% of the size of the current taxi
fleet). Finally, we extend our queueing-theoretical setup to include congestion
effects, and we study the impact of autonomously rebalancing vehicles on
overall congestion. Collectively, this paper provides a rigorous approach to
the problem of system-wide coordination of autonomously driving vehicles, and
provides one of the first characterizations of the sustainability benefits of
robotic transportation networks.
Comment: 10 pages, to appear at RSS 2014.
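The optimal-rebalancing result above reduces to a transportation-style linear program: move spare vehicles from surplus stations to deficit stations at minimum total rebalancing cost. The following is a minimal sketch on an invented three-station instance (station names, surpluses, and travel times are all hypothetical, and the tiny LP is solved by exhaustive search rather than a real LP solver):

```python
from itertools import product

# Hypothetical toy instance (not from the paper): station A has spare
# vehicles, stations B and C are short. We minimize total rebalancing
# travel time subject to meeting every deficit.
sources = {"A": 2}                      # spare vehicles per surplus station
sinks = {"B": 1, "C": 1}                # vehicles needed per deficit station
cost = {("A", "B"): 5, ("A", "C"): 8}   # assumed travel times (minutes)

pairs = [(s, t) for s in sources for t in sinks]
best = None
# brute-force every integer flow on every origin-destination pair
for flows in product(range(max(sources.values()) + 1), repeat=len(pairs)):
    x = dict(zip(pairs, flows))
    if not all(sum(x[s, t] for t in sinks) <= sources[s] for s in sources):
        continue                        # cannot send more than the surplus
    if not all(sum(x[s, t] for s in sources) == sinks[t] for t in sinks):
        continue                        # every deficit must be met exactly
    c = sum(x[p] * cost[p] for p in pairs)
    if best is None or c < best[0]:
        best = (c, x)

print(best)  # → (13, {('A', 'B'): 1, ('A', 'C'): 1})
```

On a city-scale network the same supply/demand constraints would be handed to an actual LP solver; exhaustive search is viable only because this instance is tiny.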
Real Time in Plan 9
We describe our experience with the implementation and use of a hard-real-time scheduler for use in Plan 9 as an embedded operating system.
The Lock-free k-LSM Relaxed Priority Queue
Priority queues are data structures which store keys in an ordered fashion to
allow efficient access to the minimal (maximal) key. Priority queues are
essential for many applications, e.g., Dijkstra's single-source shortest path
algorithm, branch-and-bound algorithms, and prioritized schedulers.
Efficient multiprocessor computing requires implementations of basic data
structures that can be used concurrently and scale to large numbers of threads
and cores. Lock-free data structures promise superior scalability by avoiding
blocking synchronization primitives, but the delete-min operation is an
inherent scalability bottleneck in concurrent priority queues. Recent work has
focused on alleviating this obstacle either by batching operations or by
relaxing the requirements on the delete-min operation.
We present a new, lock-free priority queue that relaxes the delete-min
operation so that it is allowed to delete any of the ρ+1 smallest
keys, where ρ is a runtime configurable parameter. Additionally, the
behavior is identical to a non-relaxed priority queue for items added and
removed by the same thread. The priority queue is built from a logarithmic
number of sorted arrays in a way similar to log-structured merge-trees. We
experimentally compare our priority queue to recent state-of-the-art lock-free
priority queues, both with relaxed and non-relaxed semantics, showing high
performance and good scalability of our approach.
Comment: Short version appeared as an ACM PPoPP'15 poster.
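The relaxed semantics can be stated independently of the lock-free machinery: delete-min may legally return any of the ρ+1 smallest keys. A sequential toy model makes the contract concrete (this is only the contract, not the k-LSM itself; the class and method names are invented):

```python
import heapq
import random

class RelaxedPQ:
    """Sequential toy model of relaxed delete-min semantics: delete_min
    may return any of the rho+1 smallest keys. This models only the
    *contract* the lock-free k-LSM exploits, not its implementation."""
    def __init__(self, rho):
        self.rho = rho
        self.keys = []

    def insert(self, key):
        heapq.heappush(self.keys, key)

    def delete_min(self):
        # any of the rho+1 smallest keys currently stored is legal
        candidates = heapq.nsmallest(self.rho + 1, self.keys)
        choice = random.choice(candidates)
        self.keys.remove(choice)
        heapq.heapify(self.keys)
        return choice

pq = RelaxedPQ(rho=2)
for k in [5, 1, 9, 3, 7]:
    pq.insert(k)
print(pq.delete_min())  # any of 1, 3, 5 is a valid result
```

The relaxation matters for scalability because threads no longer contend on the single exact minimum; with ρ = 0 the structure degenerates to an ordinary strict priority queue.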
Single-Producer/Single-Consumer Queues on Shared Cache Multi-Core Systems
Using efficient point-to-point communication channels is critical for
implementing fine-grained parallel programs on modern shared-cache multi-core
architectures.
This report discusses in detail several implementations of wait-free
Single-Producer/Single-Consumer (SPSC) queues, and presents a novel and
efficient algorithm for the implementation of an unbounded wait-free SPSC queue
(uSPSC). The correctness proof of the new algorithm, and several performance
measurements based on simple synthetic benchmarks and microbenchmarks, are also
discussed.
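The core of a bounded wait-free SPSC queue is a Lamport-style ring buffer in which each index has exactly one writer. A Python rendering is illustrative only (Python's GIL merely approximates the memory-ordering guarantees a real implementation relies on, and the class name is ours, not the report's):

```python
import threading

class SPSCQueue:
    """Bounded single-producer/single-consumer ring buffer in the style of
    Lamport's circular queue: `tail` is written only by the producer and
    `head` only by the consumer, so no lock is needed between them."""
    def __init__(self, capacity):
        self.buf = [None] * (capacity + 1)  # one slot stays empty
        self.head = 0                       # consumer-owned index
        self.tail = 0                       # producer-owned index

    def push(self, item):                   # producer side only
        nxt = (self.tail + 1) % len(self.buf)
        if nxt == self.head:
            return False                    # full: caller may retry
        self.buf[self.tail] = item
        self.tail = nxt
        return True

    def pop(self):                          # consumer side only
        if self.head == self.tail:
            return None                     # empty
        item = self.buf[self.head]
        self.head = (self.head + 1) % len(self.buf)
        return item

q = SPSCQueue(8)
out = []

def produce():
    for i in range(100):
        while not q.push(i):    # spin while the ring is full
            pass

def consume():
    while len(out) < 100:
        item = q.pop()
        if item is not None:
            out.append(item)

p = threading.Thread(target=produce)
c = threading.Thread(target=consume)
p.start(); c.start(); p.join(); c.join()
print(out == list(range(100)))  # → True: FIFO order is preserved
```

The report's unbounded uSPSC extends such bounded buffers; that construction is not reproduced here.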
The Longest Queue Drop Policy for Shared-Memory Switches is 1.5-competitive
We consider the Longest Queue Drop memory management policy in shared-memory
switches consisting of N output ports. The shared memory of size M
may have an arbitrary number of input ports. Each packet may be admitted by any
incoming port, but must be destined to a specific output port and each output
port may be used by only one queue. The Longest Queue Drop policy is a natural
online strategy used in directing the packet flow in buffering problems.
According to this policy and assuming unit packet values and cost of
transmission, every incoming packet is accepted, whereas if the shared memory
becomes full, one or more packets belonging to the longest queue are preempted,
in order to make space for the newly arrived packets. It was proved in 2001
[Hahne et al., SPAA '01] that the Longest Queue Drop policy is 2-competitive
and at least √2-competitive. It remained an open question whether a
(2−ε) upper bound for the competitive ratio of this policy could be
shown, for any positive constant ε. We show that the Longest Queue Drop
online policy is 1.5-competitive.
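The admission rule itself is simple to state. A minimal sketch on an invented two-port instance (the function and variable names are ours, ties are broken arbitrarily, and preemption drops from the tail of the longest queue):

```python
def lqd_admit(queues, memory, port, packet):
    """Longest Queue Drop: every arriving packet is accepted, but if the
    shared memory is full, a packet is preempted from the longest queue
    to make room (unit packet values assumed, as in the abstract)."""
    if sum(len(q) for q in queues.values()) >= memory:
        longest = max(queues, key=lambda p: len(queues[p]))
        queues[longest].pop()   # if the arrival's own queue is longest,
                                # this effectively rejects the arrival
    queues[port].append(packet)

queues = {0: [], 1: []}                        # two output ports
for port, pkt in [(0, "a"), (0, "b"), (0, "c"), (1, "d")]:
    lqd_admit(queues, memory=3, port=port, packet=pkt)
print({p: len(q) for p, q in queues.items()})  # → {0: 2, 1: 1}
```

With a shared memory of size 3, the fourth arrival (destined to port 1) finds the memory full and forces a preemption from port 0's queue, the longest one.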