A Static Optimality Transformation with Applications to Planar Point Location
Over the last decade, there have been several data structures that, given a
planar subdivision and a probability distribution over the plane, provide a way
for answering point location queries that is fine-tuned for the distribution.
All these methods suffer from the requirement that the query distribution must
be known in advance.
We present a new data structure for point location queries in planar
triangulations. Our structure is asymptotically as fast as the optimal
structures, but it requires no prior information about the queries. This is a
2D analogue of the jump from Knuth's optimum binary search trees (discovered in
1971) to the splay trees of Sleator and Tarjan in 1985. While the former need
to know the query distribution, the latter are statically optimal. This means
that we can adapt to the query sequence and achieve the same asymptotic
performance as an optimum static structure, without needing any additional
information.
Comment: 13 pages, 1 figure; a preliminary version appeared at SoCG 201
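The self-adjusting idea behind splay trees can be illustrated with the simpler move-to-root heuristic: every access rotates the found key step by step up to the root, so frequently queried keys stay shallow without any knowledge of the query distribution. A minimal sketch (move-to-root only; real splay trees use paired zig-zig/zig-zag rotations, which are essential for the static-optimality guarantee):

```python
class Node:
    """Plain binary search tree node."""
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(t, key):
    if t is None:
        return Node(key)
    if key < t.key:
        t.left = insert(t.left, key)
    elif key > t.key:
        t.right = insert(t.right, key)
    return t

def rotate_right(t):
    l = t.left
    t.left, l.right = l.right, t
    return l

def rotate_left(t):
    r = t.right
    t.right, r.left = r.left, t
    return r

def access(t, key):
    """Search for key; if found, rotate it up to the root of the subtree."""
    if t is None or t.key == key:
        return t
    if key < t.key:
        t.left = access(t.left, key)
        if t.left is not None and t.left.key == key:
            t = rotate_right(t)
    else:
        t.right = access(t.right, key)
        if t.right is not None and t.right.key == key:
            t = rotate_left(t)
    return t
```

After `t = access(t, k)`, the key `k` (if present) sits at the root, so repeating a skewed query sequence touches only shallow nodes.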
Ratio-Balanced Maximum Flows
When a loan is approved for a person or company, the bank is subject to \emph{credit risk}: the risk that the borrower defaults. To mitigate this risk, a bank will require some form of \emph{security}, which is collected if the borrower defaults. Accounts can be secured by several securities, and a security can be used for several accounts. The goal is to fractionally assign the securities to the accounts so as to balance the risk. This situation can be modelled by a bipartite graph. We have a set of securities and a set of accounts. Each security i has a \emph{value} v_i and each account j has an \emph{exposure} e_j. If a security i can be used to secure an account j, we have an edge from i to j. Let f_{ij} be the part of security i's value used to secure account j. We are searching for a maximum flow that sends at most v_i units out of node i and at most e_j units into node j. Then e_j - \sum_i f_{ij} is the unsecured part of account j. We are searching for the maximum flow that minimizes
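Such an assignment can be computed by any standard maximum-flow algorithm on an auxiliary network: a source with an arc of capacity v_i to each security i, the given security-account edges, and an arc of capacity e_j from each account j to a sink. A minimal Edmonds-Karp sketch on a small hypothetical instance (all names and numbers are illustrative; the paper's balancing objective among maximum flows is a further step not shown here):

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly augment along shortest residual paths.
    cap is a dict of dicts of residual capacities, modified in place."""
    flow = 0
    while True:
        # BFS for an augmenting path from s to t
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # collect the path and its bottleneck capacity
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= aug
            cap.setdefault(v, {}).setdefault(u, 0)
            cap[v][u] += aug
        flow += aug

# Hypothetical instance: securities with values, accounts with exposures.
values = {"s1": 10, "s2": 5}        # capacity out of each security
exposures = {"a1": 8, "a2": 12}     # capacity into each account
edges = [("s1", "a1"), ("s1", "a2"), ("s2", "a2")]

cap = {"src": dict(values)}
for i, j in edges:
    cap.setdefault(i, {})[j] = values[i]
for j, e in exposures.items():
    cap.setdefault(j, {})["snk"] = e

total = max_flow(cap, "src", "snk")
# After the run, the residual capacity on the account->sink arc is
# exactly the unsecured part of that account.
unsecured = {j: cap[j]["snk"] for j in exposures}
```

On this instance the maximum flow fully secures account a1 and leaves 5 units of a2's exposure unsecured.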
The estimation of economic benefits of urban trees using contingent valuation method in Tasik Perdana, Kuala Lumpur
Urban trees provide a multitude of tangible and intangible services, including provisioning and regulatory services as well as cultural and support services to the community. Unfortunately, setting a monetary value on these services is challenging to say the least. Ignorance of this monetary value is unintentional, and is mainly due to a lack of awareness and the absence of an established monetary value for the services themselves. Hence, the quality of these urban trees degrades over time, as no one appreciates their monetary value. In light of this situation, a study was initiated to determine the economic benefits of the urban trees planted around the Tasik Perdana (TP) area. For this purpose, a total of 313 respondents were interviewed in the TP area using the contingent valuation method (CVM). The objective of this study was to elicit willingness to pay (WTP) for these urban trees. WTP represents the willingness of a person to pay, in monetary terms, to secure and sustain these urban trees. Seven bid prices were used and distributed among the respondents: RM1.00, RM5.00, RM10.00, RM15.00, RM20.00, RM25.00 and RM30.00. Logit and linear regression models were applied to predict the maximum, mean, and median WTP. The study concludes that the estimated mean WTP is RM10.32 per visit and the estimated median WTP is RM10.08 per visit.
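For a single-bounded logit model P(yes | bid) = 1/(1 + e^{-(a + b*bid)}) with b < 0, the median WTP is the bid at which P = 1/2, i.e. -a/b, and the mean WTP (truncated at zero) is the area under the acceptance curve, ln(1 + e^a)/|b|. A small sketch with hypothetical coefficients (a and b are illustrative, not the study's estimates), cross-checked by numerical integration:

```python
import math

def p_yes(bid, a, b):
    """Logit probability of accepting a given bid price."""
    return 1.0 / (1.0 + math.exp(-(a + b * bid)))

def median_wtp(a, b):
    # the bid at which the acceptance probability is exactly 1/2
    return -a / b

def mean_wtp(a, b):
    # closed form for the integral of p_yes over [0, infinity), b < 0
    return math.log(1.0 + math.exp(a)) / (-b)

# Hypothetical fitted coefficients, for illustration only.
a, b = 2.0, -0.2

analytic = mean_wtp(a, b)
# Numerical cross-check: trapezoidal integration of the acceptance curve.
step, upper = 0.01, 200.0
grid = [i * step for i in range(int(upper / step))]
numeric = sum(step * 0.5 * (p_yes(x, a, b) + p_yes(x + step, a, b)) for x in grid)
```

With these coefficients the median WTP is 10.0 and the mean is slightly higher, mirroring the mean/median gap reported in the study.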
Parallel String Sample Sort
We discuss how string sorting algorithms can be parallelized on modern
multi-core shared memory machines. As a synthesis of the best sequential string
sorting algorithms and successful parallel sorting algorithms for atomic
objects, we propose string sample sort. The algorithm makes effective use of
the memory hierarchy, uses additional word level parallelism, and largely
avoids branch mispredictions. Additionally, we parallelize variants of multikey
quicksort and radix sort that are also useful in certain situations.
Comment: 34 pages, 7 figures, and 12 tables
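The multikey quicksort mentioned above (Bentley-Sedgewick three-way radix quicksort) partitions on a single character position at a time, so shared prefixes are never re-compared. A sequential sketch of the idea (the paper's contribution is parallelizing such variants, which this sketch does not attempt):

```python
def multikey_qsort(strings, d=0):
    """Three-way partition on the character at depth d; "" marks end of string."""
    if len(strings) <= 1:
        return list(strings)
    ch = lambda s: s[d] if d < len(s) else ""
    pivot = ch(strings[len(strings) // 2])
    lt = [s for s in strings if ch(s) < pivot]
    eq = [s for s in strings if ch(s) == pivot]
    gt = [s for s in strings if ch(s) > pivot]
    if pivot == "":
        # strings in eq are fully consumed here and hence all equal
        return multikey_qsort(lt, d) + eq + multikey_qsort(gt, d)
    # only the middle bucket advances to the next character position
    return multikey_qsort(lt, d) + multikey_qsort(eq, d + 1) + multikey_qsort(gt, d)
```

The `lt` and `gt` buckets stay at depth `d` while only the `eq` bucket moves to `d + 1`; this character-at-a-time refinement is what string sample sort generalizes with a whole sample of splitters at once.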
The Landscape of Bounds for Binary Search Trees
Binary search trees (BSTs) with rotations can adapt to various kinds of structure in search sequences, achieving amortized access times substantially better than the Theta(log n) worst-case guarantee. Classical examples of structural properties include static optimality, sequential access, working set, key-independent optimality, and dynamic finger, all of which are now known to be achieved by the two famous online BST algorithms (Splay and Greedy). (...)
In this paper, we introduce novel properties that explain the efficiency of sequences not captured by any of the previously known properties, and which provide new barriers to the dynamic optimality conjecture. We also establish connections between various properties, old and new. For instance, we show the following.
(i) A tight bound of O(n log d) on the cost of Greedy for d-decomposable sequences. The result builds on the recent lazy finger result of Iacono and Langerman (SODA 2016). On the other hand, we show that lazy finger alone cannot explain the efficiency of pattern-avoiding sequences even in some of the simplest cases.
(ii) A hierarchy of bounds using multiple lazy fingers, addressing a recent question of Iacono and Langerman.
(iii) The optimality of the Move-to-root heuristic in the key-independent setting introduced by Iacono (Algorithmica 2005).
(iv) A new tool that allows combining any finite number of sound structural properties. As an application, we show an upper bound on the cost of a class of sequences that all known properties fail to capture.
(v) The equivalence between two families of BST properties. The observation on which this connection is based was known before - we make it explicit, and apply it to classical BST properties. (...)
Dynamic deferred data structuring
Let S be a set of n reals. We show how to process on-line r membership queries, insertions, and deletions in time O(r log(n + r) + (n + r) log r). This is optimal in the binary comparison model.
One-variable word equations in linear time
In this paper we consider word equations with one variable (and arbitrarily many occurrences of it). A recent technique, recompression, which is applicable to general word equations, is shown to be suitable also in this case. While in the general case it is non-deterministic, it determinises in the case of one variable, and the obtained running time is O(n + #_X log n), where #_X is the number of occurrences of the variable in the equation. This matches the previously best algorithm, due to Dąbrowski and Plandowski. Then, using a couple of heuristics as well as a more detailed time analysis, the running time is lowered to O(n) in the RAM model. Unfortunately, no new properties of solutions are shown.
Comment: submitted to a journal; a general overhaul of the previous version
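A one-variable word equation can be represented as two token sequences over literal strings and the variable X; substituting a candidate value and comparing the two sides gives a trivial verifier. The brute-force search below is exponential in the candidate length and is only a toy baseline for tiny examples; the point of the recompression-based algorithm is to avoid exactly this blow-up:

```python
from itertools import product

def substitute(side, x):
    """Replace every occurrence of the variable token 'X' by the string x."""
    return "".join(x if tok == "X" else tok for tok in side)

def is_solution(lhs, rhs, x):
    return substitute(lhs, x) == substitute(rhs, x)

def solve_naive(lhs, rhs, alphabet, max_len):
    """Enumerate all strings up to max_len -- exponential, tiny cases only."""
    sols = []
    for n in range(max_len + 1):
        for tup in product(alphabet, repeat=n):
            x = "".join(tup)
            if is_solution(lhs, rhs, x):
                sols.append(x)
    return sols

# Example equation X ab = ab X: its solutions are exactly the powers of "ab".
lhs, rhs = ["X", "ab"], ["ab", "X"]
sols = solve_naive(lhs, rhs, "ab", 4)
```

Even this toy case shows the structure of solution sets (powers of a common word) that the efficient algorithms exploit.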
Azimuthal clumping instabilities in a Z-pinch wire array
A simple model is constructed to evaluate the temporal evolution of azimuthal clumping instabilities in a cylindrical array of current-carrying wires. An analytic scaling law is derived, which shows that randomly seeded perturbations evolve at the rate of the fastest unstable mode, almost from the start. This instability is entirely analogous to the Jeans instability in a self-gravitating disk, where the mutual attraction of gravity is replaced by the mutual attraction among the current-carrying wires.
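The claim that randomly seeded perturbations grow at the rate of the fastest unstable mode almost from the start can be seen in a toy linear model: with mode amplitudes a_k(t) = a_k(0) e^{gamma_k t}, the mode with the largest gamma_k dominates the total perturbation after a short transient, whatever the random seeds. A sketch with hypothetical growth rates (not the paper's dispersion relation):

```python
import math
import random

random.seed(1)
# Hypothetical growth rates for three azimuthal modes; the last is fastest.
gammas = [0.5, 1.0, 2.0]
# Random O(1) seed amplitudes, as for noise-seeded perturbations.
seeds = [random.uniform(0.5, 1.5) for _ in gammas]

def amplitude(t):
    """Total perturbation amplitude in the linear regime."""
    return sum(a * math.exp(g * t) for a, g in zip(seeds, gammas))

# Effective growth rate measured between t1 and t2: already essentially
# equal to the fastest mode's rate, regardless of the seeds.
t1, t2 = 5.0, 10.0
rate = (math.log(amplitude(t2)) - math.log(amplitude(t1))) / (t2 - t1)
```

The measured `rate` is within a few percent of max(gammas) even at modest times, which is the qualitative content of the scaling law.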
Hot dense capsule implosion cores produced by z-pinch dynamic hohlraum radiation
Hot dense capsule implosions driven by z-pinch x-rays have been measured for
the first time. A ~220 eV dynamic hohlraum imploded 1.7-2.1 mm diameter
gas-filled CH capsules which absorbed up to ~20 kJ of x-rays. Argon tracer atom
spectra were used to measure the Te ~ 1 keV electron temperature and the ne ~ 1-4 x 10^23 cm^-3 electron density. Spectra from multiple directions provide core symmetry estimates. Computer simulations agree well with the peak compression values of Te, ne, and symmetry, indicating reasonable understanding of the hohlraum and implosion physics.
Comment: submitted to Phys. Rev. Lett.
Improving the Price of Anarchy for Selfish Routing via Coordination Mechanisms
We reconsider the well-studied Selfish Routing game with affine latency
functions. The Price of Anarchy for this class of games takes maximum value
4/3; this maximum is attained already for a simple network of two parallel
links, known as Pigou's network. We improve upon the value 4/3 by means of
Coordination Mechanisms.
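The 4/3 bound on Pigou's network is easy to verify numerically: with latencies l1(x) = x and l2(x) = 1, the Nash flow puts everything on the first link until it saturates at latency 1, while the optimum splits the traffic. A sketch of the worst-case ratio over demand rates (this normalization of the latencies is assumed for illustration):

```python
def nash_cost(r):
    # At equilibrium all used links have equal latency: link 1 carries min(r, 1).
    x = min(r, 1.0)
    return x * x + (r - x) * 1.0

def opt_cost(r):
    # Minimize x^2 + (r - x) over 0 <= x <= r; unconstrained optimum at x = 1/2.
    x = min(r, 0.5)
    return x * x + (r - x)

rates = [i / 100 for i in range(1, 301)]
poa = max(nash_cost(r) / opt_cost(r) for r in rates)
# The maximum, attained at r = 1, is 4/3.
```

A coordination mechanism in the sense above would now raise the latency functions so that this worst-case ratio, measured against the unmodified optimum, drops below 4/3.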
We increase the latency functions of the edges in the network, i.e., if \ell_e(x) is the latency function of an edge e, we replace it by \hat{\ell}_e(x) with \hat{\ell}_e(x) \ge \ell_e(x) for all x. Then an adversary fixes a demand rate r as input. The engineered Price of Anarchy of the
mechanism is defined as the worst-case ratio of the Nash social cost in the
modified network over the optimal social cost in the original network.
Formally, if C^M(r) denotes the cost of the worst Nash flow in the modified network for rate r and C^opt(r) denotes the cost of the optimal flow in the original network for the same rate, then
\[ \mathrm{ePoA} = \max_{r \ge 0} \frac{C^M(r)}{C^{\mathrm{opt}}(r)}. \]
We first exhibit a simple coordination mechanism that achieves for any
network of parallel links an engineered Price of Anarchy strictly less than
4/3. For the case of two parallel links our basic mechanism gives 5/4 = 1.25.
Then, for the case of two parallel links, we describe an optimal mechanism; its
engineered Price of Anarchy lies between 1.191 and 1.192.
Comment: 17 pages, 2 figures; a preliminary version appeared at ESA 201