Approximating the Buffer Allocation Problem Using Epochs
ABSTRACT. The correctness of applications that perform asynchronous message passing typically relies on the underlying hardware having a sufficient amount of memory (message buffers) to hold all undelivered messages; such applications may deadlock when executed on a system with an insufficient number of message buffers. Thus, determining the minimum number of buffers that an application needs to prevent deadlock is an important task when writing or porting parallel applications. Unfortunately, both this problem (called the Buffer Allocation Problem) and the simpler problem of determining whether an application may deadlock for a given number of available message buffers are intractable. We present a new epoch-based polynomial-time approach for approximating the Buffer Allocation Problem. Our approach partitions application executions into epochs and intersperses barrier synchronizations between them, thus limiting the number of message buffers necessary to ensure deadlock-freedom. This approach produces near-optimal solutions for many common cases and can be adapted to guide application modifications that ensure deadlock-freedom when the application is ported. Lastly, we describe a space-time trade-off between the number of available message buffers and the number of barrier synchronizations, and show how this trade-off can be used to fine-tune application performance.
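The epoch-partitioning idea can be illustrated with a minimal sketch (function and variable names are hypothetical; this is not the authors' implementation): given a trace of asynchronous send events, greedily close the current epoch, i.e. insert a barrier, whenever the number of messages in flight would exceed the available buffer count. The barrier drains all undelivered messages, so each epoch is deadlock-free on its own.

```python
def partition_into_epochs(sends, num_buffers):
    """Greedily split a trace of send events into epochs so that no
    epoch requires more than `num_buffers` message buffers.

    `sends` is a list of per-event buffer demands (1 per async send).
    Returns a list of epochs, each a list of event indices; a barrier
    synchronization is assumed between consecutive epochs, draining
    all undelivered messages.
    """
    epochs, current, in_flight = [], [], 0
    for i, demand in enumerate(sends):
        if in_flight + demand > num_buffers and current:
            epochs.append(current)      # close epoch: a barrier goes here
            current, in_flight = [], 0  # the barrier empties all buffers
        current.append(i)
        in_flight += demand
    if current:
        epochs.append(current)
    return epochs

# Eight one-buffer sends with four buffers available: one barrier
# (two epochs) suffices.
print(partition_into_epochs([1] * 8, 4))  # → [[0, 1, 2, 3], [4, 5, 6, 7]]
```

The space-time trade-off from the abstract is visible directly in this sketch: increasing `num_buffers` yields fewer epochs, and hence fewer barrier synchronizations.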
The Computational Power of Optimization in Online Learning
We consider the fundamental problem of prediction with expert advice where the experts are "optimizable": there is a black-box optimization oracle that can be used to compute, in constant time, the leading expert in retrospect at any point in time. In this setting, we give a novel online algorithm that attains vanishing regret with respect to $N$ experts in total $\tilde{O}(\sqrt{N})$ computation time. We also give a lower bound showing that this running time cannot be improved (up to log factors) in the oracle model, thereby exhibiting a quadratic speedup as compared to the standard, oracle-free setting where the required time for vanishing regret is $\tilde{\Theta}(N)$. These results demonstrate an exponential gap between the power of optimization in online learning and its power in statistical learning: in the latter, an optimization oracle (i.e., an efficient empirical risk minimizer) makes it possible to learn a finite hypothesis class of size $N$ in time $O(\log N)$. We also study the implications of our results for learning in repeated zero-sum games, in a setting where the players have access to oracles that compute, in constant time, their best response to any mixed strategy of their opponent. We show that the runtime required for approximating the minimax value of the game in this setting is $\tilde{\Theta}(\sqrt{N})$, yielding again a quadratic improvement upon the oracle-free setting, where $\tilde{\Theta}(N)$ is known to be tight.
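To make the oracle model concrete, here is a minimal follow-the-leader baseline built on such an oracle (all names are illustrative assumptions; the paper's actual algorithm is more sophisticated, adding randomization to reach vanishing regret in sublinear total time, whereas plain follow-the-leader queries the oracle once per round):

```python
def leading_expert_oracle(cum_losses):
    """Black-box optimization oracle: returns the index of the expert
    with the smallest cumulative loss in retrospect (ties -> lowest index).
    The oracle model treats this call as constant time."""
    return min(range(len(cum_losses)), key=cum_losses.__getitem__)

def follow_the_leader(loss_rounds, num_experts):
    """Each round, play the oracle's leading expert; return total loss
    incurred. A baseline sketch only, not the paper's algorithm."""
    cum = [0.0] * num_experts
    total = 0.0
    for losses in loss_rounds:  # losses[j]: loss of expert j this round
        pick = leading_expert_oracle(cum)
        total += losses[pick]
        for j in range(num_experts):
            cum[j] += losses[j]
    return total
```

On a sequence where expert 0 always incurs zero loss, this baseline locks onto expert 0 and incurs zero total loss; the interesting regimes are adversarial sequences, where unperturbed follow-the-leader is known to fail.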
Buffer allocation in message passing systems: An implementation for MPI
Message passing applications that perform asynchronous communication need sufficient buffer space to hold all undelivered messages, or else the applications may deadlock. Determining the minimum amount of buffer space an application needs is called the Buffer Allocation Problem, and has been shown to be intractable [BPW]. However, an epoch-based polynomial-time algorithm that approximates the Buffer Allocation Problem has been proposed by Pedersen et al. [PBS]. The algorithm partitions application executions into epochs and intersperses barrier synchronizations between them, thus limiting the number of message buffers necessary to ensure deadlock-freedom. In this thesis, we describe an implementation of the epoch-based algorithm. Our implementation analyzes and performs barrier synchronizations for MPI (Message Passing Interface) applications. We use a modified version of MPI to gather information about the messages sent during the execution, and then use a standalone Java program to analyze the protocol (communication structure) and build a graph which serves as the foundation for the computation of barrier synchronizations. We then pass this information to MPI, making it available for automatic barrier synchronization. Finally, we present the results of an empirical study of various applications implemented to test our approximation algorithm.
The Stochastic Dynamic Post-Disaster Inventory Allocation Problem with Trucks and UAVs
Humanitarian logistics operations face increasing difficulties due to rising
demands for aid in disaster areas. This paper investigates the dynamic
allocation of scarce relief supplies across multiple affected districts over
time. It introduces a novel stochastic dynamic post-disaster inventory
allocation problem with trucks and unmanned aerial vehicles delivering relief
goods under uncertain supply and demand. The relevance of this humanitarian
logistics problem lies in the importance of considering the inter-temporal
social impact of deliveries. We achieve this by incorporating deprivation costs
when allocating scarce supplies. Furthermore, we consider the inherent
uncertainties of disaster areas and the potential use of cargo UAVs to enhance
operational efficiency. This study proposes two anticipatory solution methods
based on approximate dynamic programming, specifically decomposed linear value
function approximation (DL-VFA) and neural network value function approximation
(NN-VFA), to effectively manage uncertainties in the dynamic allocation
process. We compare DL-VFA and NN-VFA with various state-of-the-art methods
(exact re-optimization, proximal policy optimization (PPO)), and the results
show a 6-8% improvement compared to the best benchmarks.
NN-VFA provides the best performance and captures nonlinearities in the
problem, whereas DL-VFA shows excellent scalability against a minor performance
loss. The experiments reveal that consideration of deprivation costs results in
improved allocation of scarce supplies both across affected districts and over
time. Finally, results show that deploying UAVs can play a crucial role in the
allocation of relief goods, especially in the first stages after a disaster.
The use of UAVs reduces transportation and deprivation costs together by
16-20% and reduces maximum deprivation times by 19-40%, while maintaining
similar levels of demand coverage, showcasing efficient and effective
operations.
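The linear value function approximation used in approaches like DL-VFA can be sketched as a TD(0)-style update on state features (a generic illustration under assumed names and features, not the paper's actual model or its decomposition across districts):

```python
def lvfa_update(weights, features, reward, next_features,
                alpha=0.1, gamma=0.95):
    """One temporal-difference update of a linear value function
    V(s) = w . phi(s), the core learning step in approximate dynamic
    programming with linear VFA.

    weights       -- current weight vector w
    features      -- feature vector phi(s) of the current state
    reward        -- one-step reward (e.g. negative deprivation cost)
    next_features -- feature vector phi(s') of the successor state
    """
    v = sum(w * f for w, f in zip(weights, features))
    v_next = sum(w * f for w, f in zip(weights, next_features))
    delta = reward + gamma * v_next - v            # TD error
    return [w + alpha * delta * f for w, f in zip(weights, features)]
```

Repeating this update over simulated allocation trajectories drives the weights toward a value estimate that anticipates future supply and demand, which is what makes the resulting allocation policy "anticipatory" rather than myopic.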
- …