Time Versus Cost Tradeoffs for Deterministic Rendezvous in Networks
Two mobile agents, starting from different nodes of a network at possibly
different times, have to meet at the same node. This problem is known as
rendezvous. Agents move in synchronous rounds. Each agent has a
distinct integer label from the set {1,...,L}. Two main efficiency
measures of rendezvous are its time (the number of rounds until the
meeting) and its cost (the total number of edge traversals). We
investigate tradeoffs between these two measures. A natural benchmark for both
time and cost of rendezvous in a network is the number of edge traversals
needed for visiting all nodes of the network, called the exploration time.
Hence we express the time and cost of rendezvous as functions of an upper bound
E on the time of exploration (where E and a corresponding exploration
procedure are known to both agents) and of the size L of the label space. We
present two natural rendezvous algorithms. Algorithm Cheap has cost O(E)
(and, in fact, a version of this algorithm for the model where the
agents start simultaneously has cost exactly 2E) and time O(EL). Algorithm
Fast has both time and cost O(E log L). Our main contributions are
lower bounds showing that, perhaps surprisingly, these two algorithms capture
the tradeoffs between time and cost of rendezvous almost tightly. We show that
any deterministic rendezvous algorithm of cost asymptotically E (i.e., of
cost E + o(E)) must have time Omega(EL). On the other hand, we show that any
deterministic rendezvous algorithm with time complexity O(E log L) must have
cost Omega(E log L).
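To make the cheap end of the tradeoff concrete, here is a minimal Python
sketch of one schedule built only from the abstract's premises (a known
exploration procedure using at most E edge traversals, and distinct labels
from {1,...,L}) that achieves cost O(E) and time at most LE under
simultaneous start. It is an illustration, not necessarily the paper's
Algorithm Cheap, and explore_moves is an assumed encoding of the known
exploration procedure.

def rendezvous_schedule(label, E, explore_moves):
    # Yield this agent's action for each round: "wait" or an edge traversal.
    #   label         -- the agent's distinct integer label, 1 <= label <= L
    #   E             -- known upper bound on exploration time
    #   explore_moves -- the (at most E) moves of the known exploration procedure
    for _ in range((label - 1) * E):
        yield "wait"    # park until this label's private window opens
    for move in explore_moves:
        yield move      # sweep the network; visits any still-parked agent
    while True:
        yield "wait"    # stay put once exploration is done

Because the L waiting windows are disjoint, the agent with the smaller label
sweeps the whole network while the other is still parked at its start node,
forcing a meeting at cost O(E); the same windows are why the time grows to
LE, and the abstract's lower bounds show that pushing the time down to
O(E log L) necessarily pushes the cost up to Omega(E log L).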
Omniscopes: Large Area Telescope Arrays with only N log N Computational Cost
We show that the class of antenna layouts for telescope arrays allowing cheap
analysis hardware (with correlator cost scaling as N log N rather than N^2 with
the number of antennas N) is encouragingly large, including not only previously
discussed rectangular grids but also arbitrary hierarchies of such grids, with
arbitrary rotations and shears at each level. We show that all correlations for
such a 2D array with an n-level hierarchy can be efficiently computed via a
Fast Fourier Transform in not 2 but 2n dimensions. This can allow major
correlator cost reductions for science applications requiring exquisite
sensitivity at widely separated angular scales, for example 21cm tomography
(where short baselines are needed to probe the cosmological signal and long
baselines are needed for point source removal), helping enable future 21cm
experiments with thousands or millions of cheap dipole-like antennas. Such
hierarchical grids combine the angular resolution advantage of traditional
array layouts with the cost advantage of a rectangular Fast Fourier Transform
Telescope. We also describe an algorithm for how a subclass of hierarchical
arrays can efficiently use rotation synthesis to produce global sky maps with
minimal noise and a well-characterized synthesized beam.
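The simplest case of the abstract's N log N principle, a single rectangular
grid, fits in a short sketch: on a regular grid, every redundant-baseline
correlation sum is a value of the autocorrelation of the gridded signals,
which FFTs compute in O(N log N) instead of forming the N^2 pairwise
products. The following NumPy illustration is a hedged sketch under assumed
inputs (one complex sample per antenna on an nx-by-ny grid); the paper's
hierarchical layouts generalize this idea to a transform in 2n dimensions.

import numpy as np

def baseline_correlations(signals):
    # signals: complex array of shape (nx, ny), one sample per antenna.
    # Returns c with c[dx % (2*nx), dy % (2*ny)] equal to the sum, over all
    # antenna pairs separated by baseline (dx, dy), of
    # signals[x + dx, y + dy] * conj(signals[x, y]).
    nx, ny = signals.shape
    # Zero-pad to double size so the FFT's circular autocorrelation
    # coincides with the linear (non-wrapping) one.
    padded = np.zeros((2 * nx, 2 * ny), dtype=complex)
    padded[:nx, :ny] = signals
    spectrum = np.fft.fft2(padded)
    # Correlation theorem: the inverse FFT of the power spectrum is the
    # autocorrelation, i.e. all redundant-baseline sums at once.
    return np.fft.ifft2(spectrum * np.conj(spectrum))

# Example: an 8x8 grid (N = 64 antennas); all correlations from two FFTs.
rng = np.random.default_rng(0)
signals = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
correlations = baseline_correlations(signals)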
Ideas for the Arcturus personal workstation
In order to achieve more effective use of interpersonal collaboration, management lifecycle activities, and dynamic retrieval, as well as for ordinary programming chores, it may be useful to take a fresh look at the devices we use to help us conduct our transactions with computers. This paper presents some new ideas for using large flat-screen displays and dedicated computers as personal workstations.
Fingerprint databases for theorems
We discuss the advantages of searchable, collaborative, language-independent
databases of mathematical results, indexed by "fingerprints" of small and
canonical data. Our motivating example is Neil Sloane's massively influential
On-Line Encyclopedia of Integer Sequences. We hope to encourage the greater
mathematical community to search for the appropriate fingerprints within each
discipline, and to compile fingerprint databases of results wherever possible.
The benefits of these databases are broad: advancing the state of knowledge,
enhancing experimental mathematics, enabling researchers to discover unexpected
connections between areas, and even improving the refereeing process for
journal publication.
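The indexing mechanism the abstract advocates, keying a result on a small
canonical piece of data, is easy to sketch. In this hedged Python
illustration, the normalization rule (truncate to the first eight terms) and
the two OEIS-style entries are assumptions made for the example, not any
existing database's schema.

def fingerprint(values, length=8):
    # Canonicalize a result's characteristic data into a hashable key.
    return tuple(values[:length])

database = {
    fingerprint([1, 1, 2, 3, 5, 8, 13, 21]): "Fibonacci numbers (cf. OEIS A000045)",
    fingerprint([1, 1, 2, 5, 14, 42, 132, 429]): "Catalan numbers (cf. OEIS A000108)",
}

def lookup(values):
    # Recover candidate results from independently recomputed data,
    # with no dependence on names, notation, or natural language.
    return database.get(fingerprint(values), "no match")

print(lookup([1, 1, 2, 5, 14, 42, 132, 429]))  # -> Catalan numbers (cf. OEIS A000108)

The point of the small, canonical key is exactly the OEIS lesson: two
researchers who compute the same first few terms will find each other's
results regardless of how they name or state them.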
In Defence of No Best World
Recent work in the philosophy of religion has resurrected Leibniz’s idea that there is a best possible world, perhaps ours. In particular, Klaas Kraay’s [2010] construction of a theistic multiverse and Nevin Climenhaga’s [2018] argument from infinite value theory are novel defenses of a best possible world. I do not think that there is a best world, and show how both Kraay and Climenhaga may be resisted. First, I argue that Kraay’s construction of a theistic multiverse can be resisted from plausible assumptions about set theory. Next, I argue against the value-theoretic assumptions that underlie Climenhaga’s argument and show how to give an infinite value theory where there is no best world.
Policy Implications of Stochastic Learning Using a Modified PAGE2002 Model
We consider the importance of Endogenous Technical Change (ETC) for the risk profiles of different abatement strategies, using the PAGE2002 model with ETC. Three outcomes from this modelling research have significant impacts on the way we ‘optimise’ the greenhouse gas abatement path. Firstly, it was found that for most standard abatement paths an initial "learning investment" would be required that would substantially reduce the unit costs of CO2 abatement compared to a business-as-usual scenario. Secondly, optimising an abatement program in which ETC has been included can lead to an increased risk profile during the period of widespread CO2 abatement, due to the costs associated with learning. Finally, the inclusion of ETC leads to a slightly deferred optimised abatement path followed by a drastic abatement program that would itself seem highly impractical. Together, the results draw attention to the possibility of uncovering uncertainty through proactive abatement.
Beyond asking: exploring the use of automatic price evaluations to implicitly estimate consumers’ willingness-to-pay
Explicit consumer responses are often detrimental to the validity of procedures used to estimate consumers' willingness-to-pay (WTP). This paper investigates whether price evaluations occur automatically and to what extent these automatic processes can be used to implicitly estimate consumers' WTP. An adapted version of the task-rule congruency (TRC) paradigm was used in two studies. Results of the first study provided evidence for the notion that prices are automatically evaluated. However, the procedure used had limitations that restricted its utility as an implicit WTP estimate. The procedure was adjusted, and an additional study was conducted. The results of the second study also indicated that prices were evaluated automatically. Additionally, the procedure used in the second study made it possible to explore to what extent the observed TRC effects could be used to implicitly estimate consumers' WTP. Taken together, these studies provided evidence for the notion that prices are evaluated automatically. Furthermore, the procedure has the potential to be developed further into an implicit estimate of consumers' WTP.