The Efficiency Consequences of Local Revenue Equalization: Tax Competition and Tax Distortions
This paper shows how a popular system of federal revenue equalization grants can limit tax competition among subnational governments, correct fiscal externalities, and increase government spending. Remarkably, an equalization grant can implement efficient policy choices by regional governments despite wide differences in regional tax capacity, tastes for public spending, and population. Thus, compared to other corrective devices, equalization achieves “robust” implementation. If aggregate tax bases are elastic, however, equalization leads to excessive taxation. Efficiency can be achieved by a modified formula that equalizes a fraction of local revenue deficiencies equal to the fraction of taxes that are shifted backward to factor suppliers.
Keywords: tax competition, intergovernmental grants
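As a minimal numeric sketch of the grant mechanics described above (the function name and all numbers are illustrative, not from the paper): a representative-tax-system grant tops up regions whose tax base falls short of the average, scaled by a fraction alpha; the paper's modified formula corresponds to setting alpha equal to the share of taxes shifted backward to factor suppliers.

```python
# Hypothetical illustration of a revenue-equalization grant formula.
# All names and figures are invented for exposition.

def equalization_grant(own_base, avg_base, tax_rate, alpha=1.0):
    """Per-capita grant = alpha * tax_rate * (average base - own base)."""
    return alpha * tax_rate * (avg_base - own_base)

# A low-capacity region (per-capita base 80 vs. average 100) taxed at 20%:
full = equalization_grant(80.0, 100.0, 0.20)          # full equalization
partial = equalization_grant(80.0, 100.0, 0.20, 0.5)  # half the deficiency
print(full, partial)  # 4.0 2.0
```

With alpha = 1 the full revenue deficiency is equalized; the modified formula replaces 1 with the backward-shifting fraction.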
Numerical calculation of three-point branched covers of the projective line
We exhibit a numerical method to compute three-point branched covers of the
complex projective line. We develop algorithms for working explicitly with
Fuchsian triangle groups and their finite index subgroups, and we use these
algorithms to compute power series expansions of modular forms on these groups.
Comment: 58 pages, 24 figures; referee's comments incorporated
Quantum Computation by Adiabatic Evolution
We give a quantum algorithm for solving instances of the satisfiability
problem, based on adiabatic evolution. The evolution of the quantum state is
governed by a time-dependent Hamiltonian that interpolates between an initial
Hamiltonian, whose ground state is easy to construct, and a final Hamiltonian,
whose ground state encodes the satisfying assignment. To ensure that the system
evolves to the desired final ground state, the evolution time must be big
enough. The time required depends on the minimum energy difference between the
two lowest states of the interpolating Hamiltonian. We are unable to estimate
this gap in general. We give some special symmetric cases of the satisfiability
problem where the symmetry allows us to estimate the gap and we show that, in
these cases, our algorithm runs in polynomial time.
Comment: 24 pages, 12 figures, LaTeX, amssymb, amsmath, BoxedEPS packages; email to [email protected]
Error estimates for density-functional theory predictions of surface energy and work function
Density-functional theory (DFT) predictions of materials properties are becoming ever more widespread. With increased use comes the demand for estimates of the accuracy of DFT results. In view of the importance of reliable surface properties, this work calculates surface energies and work functions for a large and diverse test set of crystalline solids. They are compared to experimental values by performing a linear regression, which results in a measure of the predictable and material-specific error of the theoretical result. Two of the most prevalent functionals, the local density approximation (LDA) and the Perdew-Burke-Ernzerhof parametrization of the generalized gradient approximation (PBE-GGA), are evaluated and compared. Both LDA and PBE-GGA are found to yield accurate work functions with error bars below 0.3 eV, rivaling the experimental precision. LDA also provides satisfactory estimates for the surface energy with error bars smaller than 10%, but PBE-GGA significantly underestimates the surface energy for materials with a large correlation energy.
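The error-estimation procedure described above can be sketched as follows, on invented numbers (the real work uses a large test set of crystalline solids): regress experimental values on DFT predictions by least squares, then take the residual standard deviation as the predictable error bar.

```python
import numpy as np

# Hypothetical work functions (eV); these values are illustrative only.
dft = np.array([4.2, 4.6, 5.1, 3.8, 4.9])   # DFT predictions
expt = np.array([4.3, 4.5, 5.2, 3.9, 5.0])  # matching experimental values

# Linear regression expt ~ slope * dft + intercept.
slope, intercept = np.polyfit(dft, expt, 1)
residuals = expt - (slope * dft + intercept)

# Residual standard deviation = material-independent error bar
# (ddof=2 because two parameters were fitted).
error_bar = residuals.std(ddof=2)
print(round(error_bar, 3))
```

The fitted slope and intercept capture the systematic (predictable) part of the error; the residual spread is what remains as the quoted error bar.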
A Rare, Late Complication after Automated Implantable Cardioverter-Defibrillator Placement
This article describes a case of automated implantable cardioverter-defibrillator (AICD) extrusion fifteen months after implantation. The case report is followed by a discussion of the causes and treatment of skin erosion following pacemaker/AICD insertion.
Dimensionality Reduction for k-Means Clustering and Low Rank Approximation
We show how to approximate a data matrix with a much smaller
sketch that can be used to solve a general class of
constrained k-rank approximation problems to within error.
Importantly, this class of problems includes -means clustering and
unconstrained low rank approximation (i.e. principal component analysis). By
reducing data points to just dimensions, our methods generically
accelerate any exact, approximate, or heuristic algorithm for these ubiquitous
problems.
For -means dimensionality reduction, we provide relative
error results for many common sketching techniques, including random row
projection, column selection, and approximate SVD. For approximate principal
component analysis, we give a simple alternative to known algorithms that has
applications in the streaming setting. Additionally, we extend recent work on
column-based matrix reconstruction, giving column subsets that not only `cover'
a good subspace for \bv{A}, but can be used directly to compute this
subspace.
Finally, for -means clustering, we show how to achieve a
approximation by Johnson-Lindenstrauss projecting data points to just dimensions. This gives the first result that leverages the
specific structure of -means to achieve dimension independent of input size
and sublinear in
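The sketching idea above can be illustrated with a plain Gaussian random projection (dimensions chosen for illustration only, and this is a generic Johnson-Lindenstrauss sketch, not the paper's tighter constructions): distances between points are approximately preserved, so algorithms run on the small sketch stand in for running them on A.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 200, 1000, 50          # n points, original dim d, sketch dim m
A = rng.normal(size=(n, d))

S = rng.normal(size=(d, m)) / np.sqrt(m)   # JL sketching matrix
B = A @ S                                   # the much smaller sketch

# Compare a few pairwise distances before and after sketching.
ratios = []
for i, j in [(0, 1), (2, 3), (4, 5)]:
    ratios.append(np.linalg.norm(B[i] - B[j]) / np.linalg.norm(A[i] - A[j]))
print([round(r, 2) for r in ratios])  # each ratio close to 1
```

Because pairwise distances (and hence clustering costs) are roughly preserved, any k-means algorithm applied to B yields an assignment whose cost on A is close to optimal.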
Astrometry.net: Blind astrometric calibration of arbitrary astronomical images
We have built a reliable and robust system that takes as input an
astronomical image, and returns as output the pointing, scale, and orientation
of that image (the astrometric calibration or WCS information). The system
requires no first guess, and works with the information in the image pixels
alone; that is, the problem is a generalization of the "lost in space" problem
in which nothing--not even the image scale--is known. After robust source
detection is performed in the input image, asterisms (sets of four or five
stars) are geometrically hashed and compared to pre-indexed hashes to generate
hypotheses about the astrometric calibration. A hypothesis is only accepted as
true if it passes a Bayesian decision theory test against a background
hypothesis. With indices built from the USNO-B Catalog and designed for
uniformity of coverage and redundancy, the success rate is 99.9% for
contemporary near-ultraviolet and visual imaging survey data, with no false
positives. The failure rate is consistent with the incompleteness of the USNO-B
Catalog; augmentation with indices built from the 2MASS Catalog brings the
completeness to 100% with no false positives. We are using this system to
generate consistent and standards-compliant meta-data for digital and digitized
imaging from plate repositories, automated observatories, individual scientific
investigators, and hobbyists. This is the first step in a program of making it
possible to trust calibration meta-data for astronomical data of arbitrary
provenance.
Comment: submitted to A
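The geometric hashing step described above can be sketched as follows (a simplified illustration, not the system's actual code): take the two most widely separated stars of a quad as A and B, map A to (0, 0) and B to (1, 1) by a similarity transform, and use the transformed positions of the remaining two stars as a hash code that is invariant to translation, rotation, and scale of the image.

```python
import numpy as np

def quad_hash(stars):
    """Scale/rotation/translation-invariant code for a quad of 4 (x, y) stars."""
    stars = np.asarray(stars, dtype=float)
    # Find the most widely separated pair (A, B).
    pairs = [(i, j) for i in range(4) for j in range(i + 1, 4)]
    a, b = max(pairs, key=lambda p: np.linalg.norm(stars[p[0]] - stars[p[1]]))
    A, B = stars[a], stars[b]
    # Similarity transform sending A -> (0,0) and B -> (1,1): treat points
    # as complex numbers and divide out the vector A -> B (times 1+i).
    zA = A[0] + 1j * A[1]
    zB = B[0] + 1j * B[1]
    z = (stars[:, 0] + 1j * stars[:, 1] - zA) * (1 + 1j) / (zB - zA)
    rest = [k for k in range(4) if k not in (a, b)]
    return np.concatenate([[z[k].real, z[k].imag] for k in rest])

# Example quad; the code depends only on the quad's shape:
quad = [(0.0, 0.0), (10.0, 0.0), (3.0, 2.0), (7.0, 1.0)]
print(np.round(quad_hash(quad), 3))  # [0.1 0.5 0.6 0.8]
```

Matching then reduces to looking up such codes in a pre-built index and testing each candidate alignment, as in the Bayesian decision step the abstract describes.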
