Conscripting Private Resources to Meet Urban Needs: The Statutory and Constitutional Validity of Affordable Housing Impact Fees in New York
In the closing decade of the 20th century, American cities face difficult financial predicaments. Urban tax bases have atrophied, and the confidence rating of municipal bonds has been downgraded. At the same time, city expenditures have increased as century-old infrastructure begins to crumble and urban demographics demand an ever-increasing array of public services. To meet these challenges, New York City would do well to adopt impact fee and linkage arrangements, which would require developers to contribute to State coffers in proportion to the expected environmental, social, and economic impact of their development projects. To pass constitutional muster, however, any impact fee arrangement would have to be carefully worded to require impact fee payments towards the development of affordable housing only from those projects that can be shown to have an adverse expected impact on the availability of affordable housing in the city.
Bayes and empirical-Bayes multiplicity adjustment in the variable-selection problem
This paper studies the multiplicity-correction effect of standard Bayesian
variable-selection priors in linear regression. Our first goal is to clarify
when, and how, multiplicity correction happens automatically in Bayesian
analysis, and to distinguish this correction from the Bayesian Ockham's-razor
effect. Our second goal is to contrast empirical-Bayes and fully Bayesian
approaches to variable selection through examples, theoretical results and
simulations. Considerable differences between the two approaches are found. In
particular, we prove a theorem that characterizes a surprising asymptotic
discrepancy between fully Bayes and empirical Bayes. This discrepancy arises
from a different source than the failure to account for hyperparameter
uncertainty in the empirical-Bayes estimate. Indeed, even at the extreme, when
the empirical-Bayes estimate converges asymptotically to the true
variable-inclusion probability, the potential for a serious difference remains.
Comment: Published at http://dx.doi.org/10.1214/10-AOS792 in the Annals of
Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical
Statistics (http://www.imstat.org).
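A minimal sketch of the contrast described above, under assumed settings (a small simulated design, a Zellner g-prior with g = n, and a uniform hyperprior on the inclusion probability); none of the data, variable names, or tuning choices come from the paper. The fully Bayesian treatment averages over the prior inclusion probability, which induces the beta-binomial multiplicity penalty on model size, while the empirical-Bayes variant plugs in the inclusion probability that maximizes the marginal likelihood:

# Hypothetical sketch: fully Bayes vs empirical-Bayes multiplicity adjustment in
# variable selection, using Zellner g-prior Bayes factors.  Data, g, and the two
# "true" signals below are illustrative assumptions, not the paper's settings.
import itertools
from math import comb

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
n, p = 50, 6
X = rng.standard_normal((n, p))
y = 1.5 * X[:, 0] - 1.0 * X[:, 1] + rng.standard_normal(n)  # two real signals (assumed)
g = float(n)                                                # unit-information g-prior (assumed)

def bayes_factor(subset):
    """BF of the model with columns 'subset' against the intercept-only model (g-prior form)."""
    k = len(subset)
    if k == 0:
        return 1.0
    design = np.column_stack([np.ones(n), X[:, list(subset)]])
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    resid = y - design @ beta
    r2 = 1.0 - resid @ resid / np.sum((y - y.mean()) ** 2)
    return (1 + g) ** ((n - 1 - k) / 2) * (1 + g * (1 - r2)) ** (-(n - 1) / 2)

models = [m for r in range(p + 1) for m in itertools.combinations(range(p), r)]
bf = np.array([bayes_factor(m) for m in models])
ks = np.array([len(m) for m in models])

# Fully Bayes: a uniform prior on the inclusion probability w gives a beta-binomial
# prior on model size, P(M) = 1 / ((p + 1) * C(p, k)) -- the automatic multiplicity penalty.
prior_fb = np.array([1.0 / ((p + 1) * comb(p, k)) for k in ks])
post_fb = prior_fb * bf
post_fb /= post_fb.sum()

# Empirical Bayes: plug in the w maximizing the marginal likelihood sum_M w^k (1-w)^(p-k) BF(M).
neg_marginal = lambda w: -np.sum(w ** ks * (1 - w) ** (p - ks) * bf)
w_hat = minimize_scalar(neg_marginal, bounds=(1e-6, 1 - 1e-6), method="bounded").x
post_eb = w_hat ** ks * (1 - w_hat) ** (p - ks) * bf
post_eb /= post_eb.sum()

incl_fb = [round(sum(q for m, q in zip(models, post_fb) if j in m), 3) for j in range(p)]
incl_eb = [round(sum(q for m, q in zip(models, post_eb) if j in m), 3) for j in range(p)]
print("empirical-Bayes w_hat:", round(w_hat, 3))
print("fully Bayes inclusion probabilities:    ", incl_fb)
print("empirical Bayes inclusion probabilities:", incl_eb)

With few signals among many candidates, the fully Bayesian posterior penalizes larger models automatically, whereas the empirical-Bayes plug-in can behave differently, particularly near the boundary where the estimated inclusion probability collapses toward 0 or 1.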
The magnetar model for Type I superluminous supernovae I: Bayesian analysis of the full multicolour light curve sample with MOSFiT
We use the new Modular Open Source Fitter for Transients (MOSFiT) to model 38
hydrogen-poor superluminous supernovae (SLSNe). We fit their multicolour light
curves with a magnetar spin-down model and present the posterior distributions
of magnetar and ejecta parameters. The colour evolution can be well matched
with a simple absorbed blackbody. We find the following medians (1-sigma
ranges): spin period 2.4 ms (1.2-4 ms); magnetic field ~10^14 G
(0.2-1.8 x 10^14 G); ejecta mass 4.8 Msun (2.2-12.9 Msun); kinetic
energy of a few x 10^51 erg (1.9-9.8 x 10^51 erg). This
significantly narrows the parameter space compared to our priors, showing that
although the model is flexible, the parameter space relevant to SLSNe is well
constrained by existing data. The requirement that the instantaneous engine
power at the light curve peak match the observed peak luminosity necessitates
either a large
rotational energy (P<2 ms), or more commonly that the spin-down and diffusion
timescales be well-matched. We find no evidence for separate populations of
fast- and slow-declining SLSNe, which instead form a continuum both in light
curve widths and inferred parameters. Variations in the spectra are well
explained through differences in spin-down power and photospheric radii at
maximum light. We find no correlations between any model parameters and the
properties of SLSN host galaxies. Comparing our posteriors to stellar evolution
models, we show that SLSNe require rapidly rotating (fastest 10%) massive stars
(> 20 Msun), and that this is consistent with the observed SLSN rate. High
mass, low metallicity, and likely binary interaction all serve to maintain
rapid rotation essential for magnetar formation. By reproducing the full set of
SLSN light curves, our posteriors can be used to inform photometric searches
for SLSNe in future survey data.
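For orientation, here is a hedged sketch of the kind of magnetar-powered light curve such fits are built on (not the MOSFiT implementation itself): dipole spin-down input L_in(t) = (E_p/t_p)/(1 + t/t_p)^2, diffused through the ejecta with the standard Arnett-style kernel. The neutron-star radius, moment of inertia, opacity, ejecta velocity, and the example parameter values are assumptions for illustration only.

# Minimal sketch of a magnetar-powered light curve (not the MOSFiT implementation):
# dipole spin-down input diffused through the ejecta with an Arnett-style kernel.
# NS radius, moment of inertia, opacity, velocity, and the example P, B, M_ej
# values are illustrative assumptions.
import numpy as np

MSUN, C = 1.989e33, 2.998e10            # cgs
I_NS, R_NS, KAPPA = 1.0e45, 1.0e6, 0.1  # g cm^2, cm, cm^2/g (assumed)

def magnetar_lightcurve(P_ms, B14, M_ej_msun, v_ej=1.0e9, t_days=np.linspace(0.1, 300, 600)):
    P = P_ms * 1e-3
    B = B14 * 1e14
    omega = 2 * np.pi / P
    E_p = 0.5 * I_NS * omega**2                           # initial rotational energy, erg
    t_p = 3 * C**3 * I_NS / (B**2 * R_NS**6 * omega**2)   # dipole spin-down time, s
    t_d = np.sqrt(2 * KAPPA * M_ej_msun * MSUN / (13.8 * C * v_ej))  # diffusion time, s

    t = t_days * 86400.0
    L_in = (E_p / t_p) / (1 + t / t_p) ** 2
    # Arnett convolution: L_out(t) = (2/t_d^2) e^{-t^2/t_d^2} * int_0^t L_in(t') e^{t'^2/t_d^2} t' dt'
    integrand = L_in * np.exp((t / t_d) ** 2) * t
    integral = np.concatenate([[0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))])
    L_out = (2.0 / t_d**2) * np.exp(-(t / t_d) ** 2) * integral
    return t_days, L_out

days, L = magnetar_lightcurve(P_ms=2.4, B14=0.8, M_ej_msun=4.8)
print(f"peak luminosity ~ {L.max():.2e} erg/s at {days[L.argmax()]:.0f} d")

With parameters near the medians quoted in the abstract, this toy model peaks at a few times 10^44 erg/s on a timescale of weeks, which is the regime in which the spin-down and diffusion timescales are well matched.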
Training samples in objective Bayesian model selection
Central to several objective approaches to Bayesian model selection is the
use of training samples (subsets of the data), so as to allow utilization of
improper objective priors. The most common prescription for choosing training
samples is to choose them to be as small as possible, subject to yielding
proper posteriors; these are called minimal training samples.
When data can vary widely in terms of either information content or impact on
the improper priors, use of minimal training samples can be inadequate.
Important examples include certain cases of discrete data, the presence of
censored observations, and certain situations involving linear models and
explanatory variables. Such situations require more sophisticated methods of
choosing training samples. A variety of such methods are developed in this
paper, and successfully applied in challenging situations.
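As a concrete toy illustration of the idea (not the paper's procedure): in intrinsic-Bayes-factor style approaches, a minimal training sample converts the improper prior into a proper posterior, and the resulting correction factor is averaged over all minimal training samples. The sketch below works this out by hand for N(0,1) versus N(theta,1) with an improper flat prior on theta, where a single observation is a minimal training sample; the data and model choice are assumptions.

# Toy sketch of an arithmetic intrinsic Bayes factor with minimal training samples
# (one observation each), for M1: N(0,1) vs M2: N(theta,1) with an improper flat
# prior on theta.  A hand-worked illustration, not the paper's procedure.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=0.5, scale=1.0, size=30)   # data drawn with a small shift (assumed)
n = len(x)

# Bayes factor M2 vs M1 under the improper prior pi(theta) = 1 (arbitrary constant cancels):
#   m2(x) = (2*pi)^{-(n-1)/2} n^{-1/2} exp(-sum((x - xbar)^2)/2),   m1(x) = N(x | 0, I)
#   => log BF21 = 0.5*log(2*pi/n) + 0.5*n*xbar^2
log_bf21_full = 0.5 * np.log(2 * np.pi / n) + 0.5 * n * x.mean() ** 2

# Each minimal training sample x_l yields a proper posterior for theta and a
# correction factor BF12(x_l) = m1(x_l)/m2(x_l) = (2*pi)^{-1/2} exp(-x_l^2/2).
bf12_train = (2 * np.pi) ** -0.5 * np.exp(-0.5 * x ** 2)

# Arithmetic intrinsic Bayes factor: average the correction over all minimal training samples.
aibf21 = np.exp(log_bf21_full) * bf12_train.mean()
print(f"arithmetic intrinsic BF (M2 vs M1) = {aibf21:.3g}")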
Optimal predictive model selection
Often the goal of model selection is to choose a model for future prediction,
and it is natural to measure the accuracy of a future prediction by squared
error loss. Under the Bayesian approach, it is commonly perceived that the
optimal predictive model is the model with highest posterior probability, but
this is not necessarily the case. In this paper we show that, for selection
among normal linear models, the optimal predictive model is often the median
probability model, which is defined as the model consisting of those variables
which have overall posterior probability greater than or equal to 1/2 of being
in a model. The median probability model often differs from the highest
probability model.
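A minimal sketch of the definition, using a made-up toy model posterior (the numbers are illustrative only): compute each variable's overall posterior inclusion probability, keep exactly those with probability at least 1/2, and compare with the highest posterior probability model.

# Minimal sketch of the median probability model: keep exactly those variables whose
# overall posterior inclusion probability is >= 1/2.  The toy model posterior below
# is invented for illustration; it does not correspond to any real data set.
variables = ["x1", "x2", "x3"]
posterior = {                       # model (set of included variables) -> posterior probability
    frozenset(["x1"]): 0.30,
    frozenset(["x2", "x3"]): 0.28,
    frozenset(["x1", "x2"]): 0.22,
    frozenset(["x1", "x3"]): 0.20,
}

inclusion = {v: sum(p for m, p in posterior.items() if v in m) for v in variables}
median_model = {v for v, p in inclusion.items() if p >= 0.5}
map_model = max(posterior, key=posterior.get)

print("inclusion probabilities:", inclusion)           # x1: 0.72, x2: 0.50, x3: 0.48
print("median probability model:", median_model)       # {'x1', 'x2'}
print("highest probability model:", set(map_model))    # {'x1'}

In this toy posterior the median probability model is {x1, x2} while the highest probability model is {x1}, illustrating how the two selections can disagree.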
Posterior propriety and admissibility of hyperpriors in normal hierarchical models
Hierarchical modeling is wonderful and here to stay, but hyperparameter
priors are often chosen in a casual fashion. Unfortunately, as the number of
hyperparameters grows, the effects of casual choices can multiply, leading to
considerably inferior performance. As an extreme, but not uncommon, example, use
of the wrong hyperparameter priors can even lead to impropriety of the
posterior. For exchangeable hierarchical multivariate normal models, we first
determine when a standard class of hierarchical priors results in proper or
improper posteriors. We next determine which elements of this class lead to
admissible estimators of the mean under quadratic loss; such considerations
provide one useful guideline for choice among hierarchical priors. Finally,
computational issues with the resulting posterior distributions are addressed.
Comment: Published at http://dx.doi.org/10.1214/009053605000000075 in the
Annals of Statistics (http://www.imstat.org/aos/) by the Institute of
Mathematical Statistics (http://www.imstat.org).
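To make the setting concrete, here is a hedged sketch of the exchangeable hierarchical normal model the abstract refers to, with a simple Gibbs sampler run under one particular, assumed hyperprior, pi(mu, A) proportional to a constant. Which hyperpriors yield proper posteriors and admissible estimators of the mean is exactly the question the paper answers, and nothing below should be read as its recommendation.

# Hedged sketch of the exchangeable hierarchical normal setting,
#   x_i | theta_i ~ N(theta_i, 1),   theta_i | mu, A ~ N(mu, A),   i = 1..p,
# with a Gibbs sampler under the (assumed) hyperprior pi(mu, A) proportional to 1.
import numpy as np

rng = np.random.default_rng(2)
p = 8
x = rng.normal(loc=2.0, scale=np.sqrt(1.0 + 1.5), size=p)   # synthetic group means (assumed)

def gibbs(x, n_iter=5000, burn=1000):
    p = len(x)
    mu, A = x.mean(), x.var() + 1e-3
    draws = []
    for it in range(n_iter):
        # theta_i | mu, A, x_i  ~  N((A*x_i + mu)/(A + 1), A/(A + 1))
        theta = rng.normal((A * x + mu) / (A + 1), np.sqrt(A / (A + 1)))
        # mu | theta, A  ~  N(mean(theta), A/p) under the flat prior on mu
        mu = rng.normal(theta.mean(), np.sqrt(A / p))
        # A | theta, mu  ~  InvGamma(p/2 - 1, sum((theta - mu)^2)/2) under pi(A) = const;
        # this conditional is proper only for p > 2 -- a hint that hyperprior choice and
        # posterior propriety are delicate, which is the paper's actual subject.
        b = 0.5 * np.sum((theta - mu) ** 2)
        A = 1.0 / rng.gamma(shape=p / 2 - 1, scale=1.0 / b)
        if it >= burn:
            draws.append(theta)
    return np.array(draws)

theta_draws = gibbs(x)
print("posterior means of theta:", theta_draws.mean(axis=0).round(2))
print("raw observations:        ", x.round(2))   # note the shrinkage toward the grand mean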