Nonuniform Coverage Control on the Line
This paper investigates control laws allowing mobile, autonomous agents to
optimally position themselves on the line for distributed sensing in a
nonuniform field. We show that a simple static control law, based only on local
measurements of the field by each agent, drives the agents close to the optimal
positions after the agents execute in parallel a number of
sensing/movement/computation rounds that is essentially quadratic in the number
of agents. Further, we exhibit a dynamic control law which, under slightly
stronger assumptions on the capabilities and knowledge of each agent, drives
the agents close to the optimal positions after the agents execute in parallel
a number of sensing/communication/computation/movement rounds that is
essentially linear in the number of agents. Crucially, both algorithms are
fully distributed and robust to unpredictable loss and addition of agents.
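The abstract describes the static law only at a high level. A minimal illustrative round, assuming a Lloyd-style rule in which each agent moves toward the field-weighted centroid of its own cell on the line (the specific field, gain, and centroid rule are assumptions for illustration, not the paper's actual control law):

```python
import numpy as np

def field(x):
    # hypothetical nonuniform sensing field (an assumption, not from the paper)
    return 1.0 + 0.8 * np.sin(2 * np.pi * x)

def coverage_step(pos, gain=0.25, samples=50):
    """One parallel sensing/movement round: each agent moves a fraction
    `gain` of the way toward the field-weighted centroid of its own
    Voronoi cell on [0, 1], using only local field measurements."""
    pos = np.sort(pos)
    # cell boundaries: midpoints between neighboring agents, plus the domain ends
    bounds = np.concatenate(([0.0], (pos[:-1] + pos[1:]) / 2, [1.0]))
    new_pos = pos.copy()
    for i in range(len(pos)):
        xs = np.linspace(bounds[i], bounds[i + 1], samples)
        w = field(xs)
        centroid = np.sum(xs * w) / np.sum(w)  # local measurement of the field
        new_pos[i] = pos[i] + gain * (centroid - pos[i])
    return new_pos

agents = np.random.default_rng(0).uniform(0.0, 1.0, 8)
for _ in range(200):  # the abstract suggests roughly quadratic-in-n rounds suffice
    agents = coverage_step(agents)
```

Each agent needs only its neighbors' positions and local field samples, matching the distributed flavor of the abstract; the rule shown here is a standard centroidal sketch, not the paper's specific law.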
A conservative approach to parallelizing the Sharks World simulation
Parallelizing a benchmark problem for parallel simulation, the Sharks World, is described. The solution described is conservative, in the sense that no state information is saved and no 'rollbacks' occur. The approach illustrates both the principal advantage and the principal disadvantage of conservative parallel simulation. The advantage is that, by exploiting lookahead, an approach was found that dramatically improves on the serial execution time and also achieves excellent speedups. The disadvantage is that if the model rules are changed in a way that destroys the lookahead, it is difficult to modify the solution to accommodate the changes.
Conservative parallel simulation of priority class queueing networks
A conservative synchronization protocol is described for the parallel simulation of queueing networks having C job priority classes, where a job's class is fixed. This problem has long vexed designers of conservative synchronization protocols because of its seemingly poor ability to compute lookahead: the time of the next departure. The difficulty is that a job in service having low priority can be preempted at any time by an arrival having higher priority and an arbitrarily small service time. The solution is to skew the event generation activity so that the events for higher priority jobs are generated farther ahead in simulated time than those for lower priority jobs. Thus, when a lower priority job enters service for the first time, all the higher priority jobs that may preempt it are already known, and the job's departure time can be exactly predicted. Finally, the protocol was analyzed, and it was demonstrated that good performance can be expected in the simulation of large queueing networks.
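The exact-departure prediction that the skewing enables can be sketched as follows, assuming a preempt-resume discipline and that all preempting higher-priority arrivals are already known; the function name and interface are illustrative, not from the paper:

```python
def departure_time(start, service, hp_arrivals):
    """Exact departure time of a low-priority job that enters service at
    `start` needing `service` time units, under preempt-resume, given the
    already-generated higher-priority arrivals [(arrival_time, service_time), ...]
    with arrival_time >= start. Sketch of the skewed-lookahead idea only."""
    t = start               # current simulated time
    remaining = service     # low-priority work still to be done
    for a, s in sorted(hp_arrivals):
        if a >= t + remaining:
            break           # the job finishes before this arrival preempts it
        # the low-priority job accumulates service until the HP arrival
        # (zero if the server is still busy with earlier HP work, i.e. a < t)
        remaining -= max(0.0, a - t)
        # the HP job then occupies the server until it completes
        t = max(a, t) + s
    return t + remaining

d = departure_time(0.0, 5.0, [(2.0, 3.0), (3.0, 1.0)])
# job runs on [0, 2), HP work occupies [2, 6), job resumes and departs at 9.0
```

Because every preempting arrival is known in advance, the departure time is computed exactly rather than bounded, which is precisely the lookahead the abstract says the skewing recovers.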
Parallelizing Timed Petri Net simulations
The possibility of using parallel processing to accelerate the simulation of Timed Petri Nets (TPNs) was studied. It was recognized that complex system development tools often transform system descriptions into TPNs or TPN-like models, which are then simulated to obtain information about system behavior. Viewed this way, it was important that the parallelization of TPNs be as automatic as possible, to admit the possibility of the parallelization being embedded in the system design tool. Later years of the grant were devoted to examining the problem of joint performance and reliability analysis, to explore whether both types of analysis could be accomplished within a single framework. In this final report, the results of our studies are summarized. We believe that the problem of automatically parallelizing TPNs for MIMD architectures has been almost completely solved for a large and important class of problems. Our initial investigations into joint performance/reliability analysis are two-fold: it was shown that Monte Carlo simulation, with importance sampling, offers promise of joint analysis in the context of a single tool, and methods were developed for the parallel simulation of general Continuous Time Markov Chains, a model framework within which joint performance/reliability models can be cast. However, much more work is needed to determine the scope and generality of these approaches. The results obtained in our two studies, future directions for this type of work, and a list of publications are included.
Research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, numerical analysis, and computer science
Research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, numerical analysis, and computer science is summarized.
Measurement-Adaptive Cellular Random Access Protocols
This work considers a single-cell random access channel (RACH) in cellular
wireless networks. Communications over RACH take place when users try to
connect to a base station during a handover or when establishing a new
connection. Within the framework of Self-Organizing Networks (SONs), the system
should self-adapt to dynamically changing environments (channel fading,
mobility, etc.) without human intervention. For the performance improvement of
the RACH procedure, we aim here at maximizing throughput or alternatively
minimizing the user dropping rate. In the context of SON, we propose protocols
which exploit information from measurements and user reports in order to
estimate current values of the system unknowns and broadcast global
action-related values to all users. The protocols suggest an optimal pair of
user actions (transmission power and back-off probability) found by minimizing
the drift of a certain function. Numerical results illustrate a considerable
reduction in the dropping rate, at very low or even zero cost in power
expenditure and delay, as well as the fast adaptability of the protocols to
environment changes. Although the proposed protocol is designed primarily to
minimize the number of discarded users per cell, our framework allows for
other variations (power or delay minimization) as well. Comment: 31 pages, 13 figures, 3 tables. Springer Wireless Networks 201
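The drift-minimizing choice of the action pair could be sketched as a brute-force search over candidate actions; the drift expression, the capture model, and the power-cost term below are assumptions for illustration, not the paper's actual formulation:

```python
import math

def best_action(n_est, arrival_rate, powers, power_cost=0.01):
    """Pick the (transmit power, back-off probability) pair minimizing an
    assumed one-slot drift of the backlog, lambda - P(success) plus a small
    power penalty. `n_est` is the estimated number of backlogged users,
    obtained from measurements/user reports as in the abstract."""
    capture = lambda tx: 1.0 - math.exp(-tx)  # assumed capture probability
    best, best_obj = None, float("inf")
    for tx in powers:
        for p in (k / 100 for k in range(1, 100)):
            # success: exactly one of the n_est backlogged users transmits,
            # and its received power suffices for capture
            p_succ = n_est * p * (1.0 - p) ** (n_est - 1) * capture(tx)
            obj = arrival_rate - p_succ + power_cost * tx
            if obj < best_obj:
                best, best_obj = (tx, p), obj
    return best

tx, p = best_action(n_est=10, arrival_rate=0.3, powers=[0.5, 1.0, 2.0, 4.0])
# with 10 backlogged users the drift-minimizing back-off probability is 1/10
```

Under this assumed drift, the chosen back-off probability recovers the classic slotted-ALOHA optimum p = 1/N, illustrating why broadcasting a single global action pair to all users can stabilize the backlog.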