
    Hollow Heaps

    We introduce the hollow heap, a very simple data structure with the same amortized efficiency as the classical Fibonacci heap. All heap operations except delete and delete-min take $O(1)$ time, worst case as well as amortized; delete and delete-min take $O(\log n)$ amortized time on a heap of $n$ items. Hollow heaps are by far the simplest structure to achieve this. Hollow heaps combine two novel ideas: the use of lazy deletion and re-insertion to do decrease-key operations, and the use of a dag (directed acyclic graph) instead of a tree or set of trees to represent a heap. Lazy deletion produces hollow nodes (nodes without items), giving the data structure its name.
    Comment: 27 pages, 7 figures, preliminary version appeared in ICALP 201
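    The lazy-deletion idea behind decrease-key can be sketched as follows. This is an illustrative fragment, not the paper's full structure: the class and function names are ours, and a real hollow heap also maintains ranks and a dag of nodes, while this sketch only shows how decrease-key hollows out a node and re-inserts its item at the smaller key.

    ```python
    class Node:
        """A heap node; 'item' is None when the node is hollow."""
        def __init__(self, item, key):
            self.item = item
            self.key = key
            self.children = []  # in the full structure nodes form a dag, not a tree

    def link(a, b):
        """Make the node with the larger key a child of the other; return the winner."""
        if a.key <= b.key:
            a.children.append(b)
            return a
        b.children.append(a)
        return b

    def decrease_key(root, node, new_key):
        """Lazy decrease-key: hollow out 'node' and re-insert its item.

        The old node keeps its children but loses its item (it becomes hollow);
        the item moves to a fresh node with the decreased key, which is linked
        with the root and may become the new minimum.
        """
        item = node.item
        node.item = None             # node becomes hollow
        fresh = Node(item, new_key)  # item re-inserted at the smaller key
        return link(root, fresh)     # returns the new root
    ```

    For example, after inserting keys 5 and 9, decreasing the 9 to 3 leaves the old node hollow and makes the fresh node the root.
    
    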

    Long-term impacts of disturbance on nitrogen-cycling bacteria in a New England salt marsh

    Recent studies on the impacts of disturbance on microbial communities indicate that communities show differential responses to disturbance, yet our understanding of how different microbial communities may respond to and recover from disturbance is still rudimentary. We investigated impacts of tidal restriction followed by tidal restoration on abundance and diversity of denitrifying bacteria, ammonia-oxidizing bacteria (AOB), and ammonia-oxidizing archaea (AOA) in New England salt marshes by analyzing nirS, bacterial amoA, and archaeal amoA genes, respectively. TRFLP analysis of nirS and betaproteobacterial amoA genes revealed significant differences between restored and undisturbed marshes, with the greatest differences detected in deeper sediments. Additionally, community patterns indicated a potential recovery trajectory for denitrifiers. Analysis of archaeal amoA genes, however, revealed no differences in community composition between restored and undisturbed marshes, but we detected significantly higher gene abundance in deeper sediment at restored sites. Abundances of nirS and betaproteobacterial amoA genes were also significantly greater in deeper sediments at restored sites. Porewater ammonium was significantly higher at depth in restored sediments compared to undisturbed sediments, suggesting a possible mechanism driving some of the community differences. Our results suggest that impacts of disturbance on denitrifying and ammonia-oxidizing communities remain nearly 30 years after restoration, potentially impacting nitrogen-cycling processes in the marsh. We also present data suggesting that sampling deeper in sediments may be critical for detecting disturbance effects in coastal sediments.

    Optimal resizable arrays

    A \emph{resizable array} is an array that can \emph{grow} and \emph{shrink} by the addition or removal of items from its end, or both its ends, while still supporting constant-time \emph{access} to each item stored in the array given its \emph{index}. Since the size of an array, i.e., the number of items in it, varies over time, space-efficient maintenance of a resizable array requires dynamic memory management. A standard doubling technique allows the maintenance of an array of size $N$ using only $O(N)$ space, with $O(1)$ amortized time, or even $O(1)$ worst-case time, per operation. Sitarski and Brodnik et al.\ describe much better solutions that maintain a resizable array of size $N$ using only $N+O(\sqrt{N})$ space, still with $O(1)$ time per operation. Brodnik et al.\ give a simple proof that this is best possible. We distinguish between the space needed for \emph{storing} a resizable array, and accessing its items, and the \emph{temporary} space that may be needed while growing or shrinking the array. For every integer $r\ge 2$, we show that $N+O(N^{1/r})$ space is sufficient for storing and accessing an array of size $N$, if $N+O(N^{1-1/r})$ space can be used briefly during grow and shrink operations. Accessing an item by index takes $O(1)$ worst-case time while grow and shrink operations take $O(r)$ amortized time. Using an exact analysis of a \emph{growth game}, we show that for any data structure from a wide class of data structures that uses only $N+O(N^{1/r})$ space to store the array, the amortized cost of grow is $\Omega(r)$, even if only grow and access operations are allowed. The time for grow and shrink operations cannot be made worst-case, unless $r=2$.
    Comment: To appear in SOSA 202
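    The standard doubling technique mentioned in the abstract can be sketched as follows. This is a simplified single-buffer illustration; the $N+O(\sqrt{N})$ and $N+O(N^{1/r})$ schemes of Sitarski and Brodnik et al. use a more refined block layout instead of one contiguous buffer.

    ```python
    class ResizableArray:
        """Grow-at-the-end array with the classic doubling/halving scheme.

        Capacity doubles when the buffer is full and halves when it falls to a
        quarter full, giving O(1) amortized time per grow/shrink and O(N) space.
        """
        def __init__(self):
            self.cap = 1
            self.n = 0
            self.buf = [None] * self.cap

        def access(self, i):
            """Constant-time access by index."""
            if not 0 <= i < self.n:
                raise IndexError(i)
            return self.buf[i]

        def _resize(self, new_cap):
            new_buf = [None] * new_cap
            new_buf[:self.n] = self.buf[:self.n]
            self.buf, self.cap = new_buf, new_cap

        def grow(self, item):
            """Append an item at the end, doubling the buffer if it is full."""
            if self.n == self.cap:
                self._resize(2 * self.cap)
            self.buf[self.n] = item
            self.n += 1

        def shrink(self):
            """Remove and return the last item, halving the buffer if sparse."""
            if self.n == 0:
                raise IndexError("shrink from empty array")
            self.n -= 1
            item = self.buf[self.n]
            self.buf[self.n] = None
            if self.cap > 1 and self.n <= self.cap // 4:
                self._resize(self.cap // 2)
            return item
    ```

    Halving only at one-quarter occupancy (rather than one-half) is what keeps a grow/shrink sequence that oscillates around a capacity boundary from resizing on every operation.
    
    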

    Optimization of suppression for two-element treatment liners for turbomachinery exhaust ducts

    Sound wave propagation in a soft-walled rectangular duct with steady uniform flow was investigated at exhaust conditions, incorporating the solution equations for sound wave propagation in a rectangular duct with multiple longitudinal wall treatment segments. Modal analysis was employed to find the solution equations and to study the effectiveness of a uniform and of a two-sectional liner in attenuating sound power in a treated rectangular duct without flow (M = 0) and with uniform flow of Mach 0.3. Two-segment liners were shown to increase the attenuation of sound as compared to a uniform liner. The predicted sound attenuation was compared with measured laboratory results for an optimized two-segment suppressor. Good correlation was obtained between the measured and predicted suppressions when practical variations in the modal content and impedance were taken into account. Two parametric studies were also completed.

    Optimal energetic paths for electric cars

    A weighted directed graph $G=(V,A,c)$, where $A\subseteq V\times V$ and $c:A\to R$, describes a road network in which an electric car can roam. An arc $uv$ models a road segment connecting the two vertices $u$ and $v$. The cost $c(uv)$ of an arc $uv$ is the amount of energy the car needs to traverse the arc. This amount may be positive, zero or negative. To make the problem realistic, we assume there are no negative cycles. The car has a battery that can store up to $B$ units of energy. It can traverse an arc $uv\in A$ only if it is at $u$ and the charge $b$ in its battery satisfies $b\ge c(uv)$. If it traverses the arc, it reaches $v$ with a charge of $\min(b-c(uv),B)$. Arcs with positive costs deplete the battery, arcs with negative costs charge the battery, but not above its capacity of $B$. Given $s,t\in V$, can the car travel from $s$ to $t$, starting at $s$ with an initial charge $b$, where $0\le b\le B$? If so, what is the maximum charge with which the car can reach $t$? Equivalently, what is the smallest $\delta_{B,b}(s,t)$ such that the car can reach $t$ with a charge of $b-\delta_{B,b}(s,t)$, and which path should the car follow to achieve this? We refer to $\delta_{B,b}(s,t)$ as the energetic cost of traveling from $s$ to $t$. We let $\delta_{B,b}(s,t)=\infty$ if the car cannot travel from $s$ to $t$ starting with an initial charge of $b$. The problem of computing energetic costs is a strict generalization of the standard shortest paths problem. We show that the single-source minimum energetic paths problem can be solved using simple, but subtle, adaptations of the Bellman-Ford and Dijkstra algorithms. To make Dijkstra's algorithm work in the presence of negative arcs, but no negative cycles, we use a variant of the $A^*$ search heuristic. These results are explicit or implicit in some previous papers. We provide a simpler and unified description of these algorithms.
    Comment: 11 page
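    The Bellman-Ford-style relaxation described in the abstract can be sketched as follows. This is our own illustrative version, not the paper's algorithm: the function name is ours, and it simply iterates relaxations to a fixed point (which exists when there are no negative cycles), whereas the paper gives more careful adaptations with better guarantees.

    ```python
    def max_final_charges(n, arcs, s, b, B):
        """Maximum charge with which the car can reach each of n vertices
        from s, starting with charge b (0 <= b <= B); -inf means unreachable.

        arcs: list of (u, v, cost) triples; cost may be negative (a charging
        road segment). Assumes no negative cycles, as in the abstract.
        """
        NEG = float('-inf')
        charge = [NEG] * n
        charge[s] = b
        changed = True
        while changed:
            changed = False
            for u, v, c in arcs:
                if charge[u] < c:
                    continue  # cannot traverse uv with the current charge
                cand = min(charge[u] - c, B)  # battery capped at capacity B
                if cand > charge[v]:
                    charge[v] = cand
                    changed = True
        return charge
    ```

    With the maximum final charges in hand, the energetic cost of the abstract is recovered as $\delta_{B,b}(s,t) = b - \mathrm{charge}[t]$ (and $\infty$ when $t$ is unreachable).
    
    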

    Synthetic Radar Dataset Generator for Macro-Gesture Recognition

    Recent developments in mmWave technology allow the detection and classification of dynamic arm gestures. However, achieving high accuracy and generalization requires many samples for the training of a machine learning model. Furthermore, in order to capture the variability within a gesture class, many subjects must participate and perform many gestures at different arm speeds. In the case of macro-gestures, the position of the subject must also vary inside the field of view of the device. This would require a significant amount of time and effort, which would need to be repeated whenever the sensor hardware or the modulation parameters are modified. In order to reduce the required manual effort, we developed a synthetic data generator that is capable of simulating seven arm gestures by utilizing Blender, an open-source 3D creation suite. We used it to generate 600 artificial samples with varying speed of execution and relative position of the simulated subject, and used them to train a machine learning model. We tested the model using a real dataset recorded from ten subjects with an experimental sensor. The test set yielded 84.2% accuracy, indicating that synthetic data generation can contribute significantly to the pre-training of a model.
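    The parameter randomization described above (execution speed and subject position within the field of view) can be sketched as follows. All names, value ranges, and gesture labels here are hypothetical placeholders; the actual generator drives Blender to render full 3D arm motions rather than just drawing parameter dictionaries.

    ```python
    import random

    # Placeholder labels: the abstract says seven gestures but does not name them.
    GESTURES = ["swipe_left", "swipe_right", "push", "pull",
                "raise", "lower", "circle"]

    def sample_parameters(rng, field_of_view=(4.0, 4.0)):
        """Draw one randomized configuration for a simulated gesture sample:
        gesture class, execution-speed factor, and subject position inside the
        device's field of view (all ranges are illustrative assumptions)."""
        return {
            "gesture": rng.choice(GESTURES),
            "speed_factor": rng.uniform(0.5, 1.5),  # slower/faster execution
            "position": (rng.uniform(0.0, field_of_view[0]),
                         rng.uniform(0.0, field_of_view[1])),
        }

    def generate_dataset(n_samples=600, seed=0):
        """Generate the parameter sets for a synthetic dataset; each one would
        then be rendered and converted to a radar sample."""
        rng = random.Random(seed)
        return [sample_parameters(rng) for _ in range(n_samples)]
    ```

    Seeding the generator makes a synthetic dataset reproducible when the sensor hardware or modulation parameters change and the samples must be regenerated.
    
    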

    The Impatient May Use Limited Optimism to Minimize Regret

    Discounted-sum games provide a formal model for the study of reinforcement learning, where the agent is enticed to get rewards early, since later rewards are discounted. When the agent interacts with the environment, she may regret her actions, realizing that a previous choice was suboptimal given the behavior of the environment. The main contribution of this paper is a PSPACE algorithm for computing the minimum possible regret of a given game. To this end, several results of independent interest are shown. (1) We identify a class of regret-minimizing and admissible strategies that first assume that the environment is collaborating, then assume it is adversarial; the precise timing of the switch is key here. (2) Disregarding the computational cost of numerical analysis, we provide an NP algorithm that checks whether the regret entailed by a given time-switching strategy exceeds a given value. (3) We show that determining whether a strategy minimizes regret is decidable in PSPACE.