Hollow Heaps
We introduce the hollow heap, a very simple data structure with the same
amortized efficiency as the classical Fibonacci heap. All heap operations
except delete and delete-min take $O(1)$ time, worst case as well as amortized;
delete and delete-min take $O(\log n)$ amortized time on a heap of $n$ items.
Hollow heaps are by far the simplest structure to achieve this. Hollow heaps
combine two novel ideas: the use of lazy deletion and re-insertion to do
decrease-key operations, and the use of a dag (directed acyclic graph) instead
of a tree or set of trees to represent a heap. Lazy deletion produces hollow
nodes (nodes without items), giving the data structure its name.
Comment: 27 pages, 7 figures; a preliminary version appeared in ICALP 2015
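The lazy-deletion idea from the abstract can be illustrated in miniature: instead of moving an item whose key decreases, mark its old node hollow and re-insert the item with the new key, letting delete-min discard hollow nodes when they surface. The sketch below shows only this mechanism, not the actual hollow-heap structure; the class and method names are ours, and a binary heap stands in for the paper's dag of nodes.

```python
import heapq
import itertools

class LazyHeap:
    """Toy min-heap illustrating lazy deletion: decrease-key hollows out
    the old entry and re-inserts the item under its new key. Hollow
    entries are skipped when they reach the top."""

    def __init__(self):
        self._heap = []              # entries: [key, tie-breaker, item]
        self._entry = {}             # item -> its current (non-hollow) entry
        self._counter = itertools.count()

    def insert(self, item, key):
        entry = [key, next(self._counter), item]
        self._entry[item] = entry
        heapq.heappush(self._heap, entry)

    def decrease_key(self, item, new_key):
        # Make the old entry hollow instead of moving it inside the heap.
        old = self._entry[item]
        old[2] = None                # hollow: entry no longer holds the item
        self.insert(item, new_key)

    def delete_min(self):
        while self._heap:
            key, _, item = heapq.heappop(self._heap)
            if item is not None:     # discard hollow entries lazily
                del self._entry[item]
                return item, key
        raise IndexError("delete_min from empty heap")
```

Here decrease-key does no restructuring at all; the cost of discarding hollow entries is charged, amortized, to the delete-min that removes them.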
Long-term impacts of disturbance on nitrogen-cycling bacteria in a New England salt marsh
Recent studies on the impacts of disturbance on microbial communities indicate that communities show differential responses to disturbance, yet our understanding of how different microbial communities may respond to and recover from disturbance is still rudimentary. We investigated the impacts of tidal restriction followed by tidal restoration on the abundance and diversity of denitrifying bacteria, ammonia-oxidizing bacteria (AOB), and ammonia-oxidizing archaea (AOA) in New England salt marshes by analyzing nirS genes and bacterial and archaeal amoA genes, respectively. TRFLP analysis of nirS and betaproteobacterial amoA genes revealed significant differences between restored and undisturbed marshes, with the greatest differences detected in deeper sediments. Additionally, community patterns indicated a potential recovery trajectory for denitrifiers. Analysis of archaeal amoA genes, however, revealed no differences in community composition between restored and undisturbed marshes, but we detected significantly higher gene abundance in deeper sediment at restored sites. Abundances of nirS and betaproteobacterial amoA genes were also significantly greater in deeper sediments at restored sites. Porewater ammonium was significantly higher at depth in restored sediments compared to undisturbed sediments, suggesting a possible mechanism driving some of the community differences. Our results suggest that impacts of disturbance on denitrifying and ammonia-oxidizing communities remain nearly 30 years after restoration, potentially impacting nitrogen-cycling processes in the marsh. We also present data suggesting that sampling deeper in sediments may be critical for detecting disturbance effects in coastal sediments.
Optimal resizable arrays
A \emph{resizable array} is an array that can \emph{grow} and \emph{shrink}
by the addition or removal of items from its end, or both its ends, while still
supporting constant-time \emph{access} to each item stored in the array given
its \emph{index}. Since the size of an array, i.e., the number of items in it,
varies over time, space-efficient maintenance of a resizable array requires
dynamic memory management. A standard doubling technique allows the maintenance
of an array of size~$N$ using only $O(N)$ space, with $O(1)$ amortized time, or
even $O(1)$ worst-case time, per operation. Sitarski and Brodnik et al.\
describe much better solutions that maintain a resizable array of size~$N$
using only $N+O(\sqrt{N})$ space, still with $O(1)$ time per operation. Brodnik
et al.\ give a simple proof that this is best possible.
We distinguish between the space needed for \emph{storing} a resizable array,
and accessing its items, and the \emph{temporary} space that may be needed
while growing or shrinking the array. For every integer $r\ge 2$, we show that
$N+O(N^{1/r})$ space is sufficient for storing and accessing an array of
size~$N$, if $O(N^{1-1/r})$ space can be used briefly during grow and shrink
operations. Accessing an item by index takes $O(1)$ worst-case time while grow
and shrink operations take $O(r)$ amortized time. Using an exact analysis of a
\emph{growth game}, we show that for any data structure from a wide class of
data structures that uses only $N+O(N^{1/r})$ space to store the array, the
amortized cost of grow is $\Omega(r)$, even if only grow and access operations
are allowed. The time for grow and shrink operations cannot be made worst-case,
unless $r=1$.
Comment: To appear in SOSA 2023
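The standard doubling technique that this abstract improves on can be sketched as follows; the class and method names are ours. Grow doubles the allocated block when it is full, and shrink halves it when the array falls to a quarter of the block size, which gives $O(1)$ amortized grow and shrink, $O(1)$ worst-case access, and $O(N)$ total space.

```python
class ResizableArray:
    """Sketch of the classical doubling/halving resizable array:
    O(1) amortized grow and shrink, O(1) worst-case access, O(N) space."""

    def __init__(self):
        self._buf = [None]       # allocated block
        self._n = 0              # number of items currently stored

    def access(self, i):
        if not 0 <= i < self._n:
            raise IndexError(i)
        return self._buf[i]

    def grow(self, item):
        if self._n == len(self._buf):      # block full: double its size
            new = [None] * (2 * len(self._buf))
            new[:self._n] = self._buf[:self._n]
            self._buf = new
        self._buf[self._n] = item
        self._n += 1

    def shrink(self):
        if self._n == 0:
            raise IndexError("shrink of empty array")
        self._n -= 1
        item = self._buf[self._n]
        self._buf[self._n] = None
        # Halve only at one-quarter occupancy, so that alternating grow and
        # shrink operations near a block boundary do not thrash.
        if 0 < self._n <= len(self._buf) // 4:
            self._buf = self._buf[:len(self._buf) // 2]
        return item
```

Note the $O(N)$ extra space: just after a doubling, half the block is unused, which is exactly the waste the $N+O(\sqrt{N})$ and $N+O(N^{1/r})$ schemes discussed in the abstract avoid.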
Optimization of suppression for two-element treatment liners for turbomachinery exhaust ducts
Sound wave propagation in a soft-walled rectangular duct with steady uniform flow was investigated at exhaust conditions, incorporating the solution equations for sound wave propagation in a rectangular duct with multiple longitudinal wall treatment segments. Modal analysis was employed to find the solution equations and to study the effectiveness of a uniform liner and of a two-sectional liner in attenuating sound power in a treated rectangular duct without flow (M = 0) and with uniform flow of Mach 0.3. Two-segment liners were shown to increase the attenuation of sound as compared to a uniform liner. The predicted sound attenuation was compared with measured laboratory results for an optimized two-segment suppressor. Good correlation was obtained between the measured and predicted suppressions when practical variations in the modal content and impedance were taken into account. Two parametric studies were also completed.
Optimal energetic paths for electric cars
A weighted directed graph $G=(V,A,c)$, where $A\subseteq V\times V$ and
$c:A\to\mathbb{R}$, describes a road network in which an electric car can
roam. An arc $uv\in A$ models a road segment connecting the two vertices $u$
and $v$. The cost $c(uv)$ of an arc $uv$ is the amount of energy the car needs
to traverse the arc. This amount may be positive, zero or negative. To make the
problem realistic, we assume there are no negative cycles.
The car has a battery that can store up to $B$ units of energy. It can
traverse an arc $uv$ only if it is at $u$ and the charge $b$ in its
battery satisfies $b\ge c(uv)$. If it traverses the arc, it reaches $v$ with a
charge of $\min\{b-c(uv),B\}$. Arcs with positive costs deplete the battery,
arcs with negative costs charge the battery, but not above its capacity of $B$.
Given $s,t\in V$, can the car travel from $s$ to $t$, starting at $s$ with an
initial charge $b$, where $0\le b\le B$? If so, what is the maximum charge with
which the car can reach $t$? Equivalently, what is the smallest
$\delta_{B,b}(s,t)$ such that the car can reach $t$ with a charge of
$b-\delta_{B,b}(s,t)$, and which path should the car follow to achieve this? We
refer to $\delta_{B,b}(s,t)$ as the energetic cost of traveling from $s$ to
$t$. We let $\delta_{B,b}(s,t)=\infty$ if the car cannot travel from $s$ to $t$
starting with an initial charge of $b$. The problem of computing energetic
costs is a strict generalization of the standard shortest paths problem.
We show that the single-source minimum energetic paths problem can be solved
using simple, but subtle, adaptations of the Bellman-Ford and Dijkstra
algorithms. To make Dijkstra's algorithm work in the presence of negative arcs,
but no negative cycles, we use a variant of the $A^*$ search heuristic. These
results are explicit or implicit in some previous papers. We provide a simpler
and unified description of these algorithms.
Comment: 11 pages
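A minimal sketch of a Bellman-Ford-style adaptation for this setting, under our own naming (not the paper's exact algorithm): relax each arc by propagating the maximum attainable charge, clamped to the battery capacity, and only along arcs the current charge can afford. Because the relaxation is monotone in the charge and, absent negative cycles, traversing a cycle cannot increase the charge, simple paths suffice and $|V|-1$ rounds are enough.

```python
def max_final_charges(n, arcs, s, B, b):
    """Single-source energetic-paths sketch (names are ours).

    n    -- number of vertices, labeled 0..n-1
    arcs -- list of (u, v, cost) triples; cost may be negative (recharging)
    s    -- source vertex; B -- battery capacity; b -- initial charge

    Returns charge[v]: the maximum charge with which the car can reach v
    from s starting with charge b, or -inf if v is unreachable. Assumes
    0 <= b <= B and no negative cycles.
    """
    NEG_INF = float("-inf")
    charge = [NEG_INF] * n
    charge[s] = b
    for _ in range(n - 1):                    # |V| - 1 relaxation rounds
        updated = False
        for u, v, c in arcs:
            # The arc is traversable only with charge at least its cost.
            if charge[u] != NEG_INF and charge[u] >= c:
                cand = min(charge[u] - c, B)  # clamp to capacity B
                if cand > charge[v]:
                    charge[v] = cand
                    updated = True
        if not updated:
            break
    return charge
```

With `charge[t]` in hand, the energetic cost of the abstract is recovered as `b - charge[t]` (or infinity when `charge[t]` is `-inf`).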
Synthetic Radar Dataset Generator for Macro-Gesture Recognition
Recent developments in mmWave technology allow the detection and classification of dynamic arm gestures. However, achieving high accuracy and generalization requires many samples for training a machine learning model. Furthermore, in order to capture the variability within a gesture class, many subjects must participate and perform many gestures at different arm speeds. In the case of macro-gestures, the position of the subject must also vary inside the field of view of the device. This requires a significant amount of time and effort, which must be repeated whenever the sensor hardware or the modulation parameters are modified. In order to reduce the required manual effort, we developed a synthetic data generator that is capable of simulating seven arm gestures by utilizing Blender, an open-source 3D creation suite. We used it to generate 600 artificial samples with varying speed of execution and relative position of the simulated subject, and used them to train a machine learning model. We tested the model using a real dataset recorded from ten subjects using an experimental sensor. The test set yielded 84.2% accuracy, indicating that synthetic data generation can contribute significantly to the pre-training of a model.
The Impatient May Use Limited Optimism to Minimize Regret
Discounted-sum games provide a formal model for the study of reinforcement
learning, where the agent is enticed to get rewards early since later rewards
are discounted. When the agent interacts with the environment, she may regret
her actions, realizing that a previous choice was suboptimal given the behavior
of the environment. The main contribution of this paper is a PSPACE algorithm
for computing the minimum possible regret of a given game. To this end, several
results of independent interest are shown. (1) We identify a class of
regret-minimizing and admissible strategies that first assume that the
environment is collaborating, then assume it is adversarial---the precise
timing of the switch is key here. (2) Disregarding the computational cost of
numerical analysis, we provide an NP algorithm that checks that the regret
entailed by a given time-switching strategy exceeds a given value. (3) We show
that determining whether a strategy minimizes regret is decidable in PSPACE
- …