Self-Selection Consistent Functions
This paper studies collective choice rules whose outcomes consist of a collection of simultaneous decisions, each of which is the only concern of some group of individuals in society. The need for such rules arises in different contexts, including the establishment of jurisdictions, the location of multiple public facilities, and the election of representative committees. We define a notion of allocation consistency requiring that each partial aspect of the global decision taken by society as a whole be ratified by the group of agents who are directly concerned with that particular aspect. We investigate the possibility of designing envy-free allocation-consistent rules, and we also explore whether such rules may respect the Condorcet criterion.
Keywords: consistency, Condorcet criterion
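The Condorcet criterion mentioned above asks that a chosen alternative beat every other alternative in pairwise majority comparisons. A minimal sketch of that pairwise-majority check is below; the function name and ballot representation (each ballot a best-to-worst ranking) are illustrative choices, not taken from the paper.

```python
def condorcet_winner(ballots):
    """Return the alternative that beats every other alternative in
    pairwise strict-majority comparisons, or None if no such
    alternative exists (a Condorcet cycle). Each ballot is a list
    ranking the alternatives from best to worst."""
    alternatives = set(ballots[0])
    for a in alternatives:
        # a wins against c if a strict majority of ballots rank a above c
        if all(
            sum(b.index(a) < b.index(c) for b in ballots) * 2 > len(ballots)
            for c in alternatives if c != a
        ):
            return a
    return None

# Three voters; "b" beats both "a" and "c" pairwise, so it is the winner.
ballots = [["a", "b", "c"], ["b", "c", "a"], ["c", "b", "a"]]
print(condorcet_winner(ballots))  # -> b
```

A cyclic profile such as `[["a","b","c"], ["b","c","a"], ["c","a","b"]]` returns `None`, which is exactly the case that makes designing Condorcet-respecting rules delicate.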
The Intermediate Scale MSSM, the Higgs Mass and F-theory Unification
Even if SUSY is not present at the Electro-Weak scale, string theory suggests
its presence at some scale M_{SS} below the string scale M_s to guarantee the
absence of tachyons. We explore the possible value of M_{SS} consistent with
gauge coupling unification and known sources of SUSY breaking in string theory.
Within F-theory SU(5) unification these two requirements fix M_{SS} ~ 5 x
10^{10} GeV at an intermediate scale and a unification scale M_c ~ 3 x 10^{14}
GeV. As a direct consequence one also predicts the vanishing of the quartic
Higgs SM self-coupling at M_{SS} ~10^{11} GeV. This is tantalizingly consistent
with recent LHC hints of a Higgs mass in the region 124-126 GeV. With such a
low unification scale M_c ~ 3 x 10^{14} GeV one may worry about too fast proton
decay via dimension 6 operators. However in the F-theory GUT context SU(5) is
broken to the SM via hypercharge flux. We show that this hypercharge flux
deforms the SM fermion wave functions, leading to a suppression that evades
the strong experimental proton decay constraints. In these
constructions there is generically an axion with a decay constant f_a ~
M_c/(4\pi)^2 ~ 10^{12} GeV, which could solve the strong CP problem and account
for the observed dark matter. The price to pay for these attractive features is
to assume that the hierarchy problem is solved by anthropic selection in a
string landscape.
Comment: 48 pages, 8 figures. v3: further minor corrections
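The worry about dimension-6 proton decay can be made concrete with the standard naive estimate tau ~ M_c^4 / (alpha_GUT^2 m_p^5). The sketch below is an order-of-magnitude illustration only; the value alpha_GUT ~ 1/25 and the omission of all hadronic matrix elements are assumptions of this sketch, not numbers from the paper.

```python
# Naive dimension-6 proton lifetime estimate, tau ~ M_c^4 / (alpha_GUT^2 m_p^5),
# illustrating why a low unification scale raises concern before the
# hypercharge-flux wave-function suppression described in the abstract.
M_c = 3e14                 # unification scale, GeV (from the abstract)
m_p = 0.938                # proton mass, GeV
alpha_gut = 1 / 25         # assumed representative unified coupling

GEV_INV_TO_SEC = 6.58e-25  # hbar in GeV * s
SEC_PER_YEAR = 3.15e7

tau_gev = M_c**4 / (alpha_gut**2 * m_p**5)        # lifetime in GeV^-1
tau_years = tau_gev * GEV_INV_TO_SEC / SEC_PER_YEAR
print(f"tau ~ {tau_years:.1e} yr")
```

The estimate comes out many orders of magnitude below the experimental bounds of order 10^{34} yr, which is precisely why the additional suppression from the deformed fermion wave functions is needed.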
Wavelet-based density estimation for noise reduction in plasma simulations using particles
For given computational resources, the accuracy of plasma simulations using
particles is mainly held back by the noise due to limited statistical sampling
in the reconstruction of the particle distribution function. A method based on
wavelet analysis is proposed and tested to reduce this noise. The method, known
as wavelet based density estimation (WBDE), was previously introduced in the
statistical literature to estimate probability densities given a finite number
of independent measurements. Its novel application to plasma simulations can be
viewed as a natural extension of the finite size particles (FSP) approach, with
the advantage of more accurately estimating distribution functions that have
localized sharp features. The proposed method preserves the moments of the
particle distribution function to a good level of accuracy, has no constraints
on the dimensionality of the system, does not require an a priori selection of
a global smoothing scale, and is able to adapt locally to the smoothness of
the density based on the given discrete particle data. Most importantly, the
computational cost of the denoising stage is of the same order as one time step
of a FSP simulation. The method is compared with a recently proposed proper
orthogonal decomposition based method, and it is tested with three particle
data sets that involve different levels of collisionality and interaction with
external and self-consistent fields.
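The abstract's WBDE method is a wavelet-thresholding density estimator; the sketch below is a much-simplified generic stand-in (a plain Haar transform with hard thresholding on a 1-D histogram), not the paper's algorithm. The function name and threshold rule are illustrative. One property worth noting: because each Haar detail coefficient contributes a zero-sum pair on reconstruction, thresholding details leaves the total particle count exactly unchanged, in the spirit of the moment preservation mentioned above.

```python
import numpy as np

def haar_denoise(counts, threshold):
    """Haar-transform a histogram, hard-threshold the detail (noise)
    coefficients, and invert. len(counts) must be a power of two."""
    data = counts.astype(float)
    n = len(data)
    details = []
    while n > 1:  # forward transform, finest scale first
        avg = (data[0:n:2] + data[1:n:2]) / np.sqrt(2)
        det = (data[0:n:2] - data[1:n:2]) / np.sqrt(2)
        det[np.abs(det) < threshold] = 0.0  # kill small (noisy) coefficients
        details.append(det)
        data = avg
        n //= 2
    for det in reversed(details):  # inverse transform, coarsest scale first
        out = np.empty(2 * len(det))
        out[0::2] = (data + det) / np.sqrt(2)
        out[1::2] = (data - det) / np.sqrt(2)
        data = out
    return data

rng = np.random.default_rng(0)
counts, _ = np.histogram(rng.normal(size=4096), bins=64)
denoised = haar_denoise(counts, threshold=3.0)
print(counts.sum(), denoised.sum())  # total mass is preserved
```

WBDE itself additionally chooses thresholds scale by scale from the sample size rather than using one global value, which is what gives it its local adaptivity.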
The Demographics of Broad Line Quasars in the Mass-Luminosity Plane II. Black Hole Mass and Eddington Ratio Functions
We employ a flexible Bayesian technique to estimate the black hole mass and
Eddington ratio functions for Type 1 (i.e., broad line) quasars from a
uniformly-selected data set of ~58,000 quasars from the SDSS DR7. We find that
the SDSS becomes significantly incomplete at M_{BH} < 3 x 10^8 M_{Sun} or L /
L_{Edd} < 0.07, and that the number densities of Type 1 quasars continue to
increase down to these limits. Both the mass and Eddington ratio functions show
evidence of downsizing, with the most massive and highest Eddington ratio black
holes experiencing Type 1 quasar phases first, although the Eddington ratio
number densities are flat at z < 2. We estimate the maximum Eddington ratio of
Type 1 quasars in the observable Universe to be L / L_{Edd} ~ 3. Consistent
with our results in Paper I, we do not find statistical evidence for a
so-called "sub-Eddington boundary" in the mass-luminosity plane of broad line
quasars, and demonstrate that such an apparent boundary in the observed
distribution can be caused by selection effects and errors in virial BH mass
estimates. Based on the typical Eddington ratio in a given mass bin, we
estimate typical growth times for the black holes in Type 1 quasars and find
that they are comparable to or longer than the age of the universe,
implying an earlier phase of accelerated (i.e., with higher Eddington ratios)
and possibly obscured growth. The large masses probed by our sample imply that
most of our black holes reside in what are locally early type galaxies, and we
interpret our results within the context of models of self-regulated black hole
growth.
Comment: Submitted to ApJ, 25 pages (emulateapj), 15 figures; revised to match accepted version with primary changes to the introduction and discussion; replaced Fig 1
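The growth-time argument above can be sketched with the standard e-folding (Salpeter) timescale for Eddington-limited accretion. The seed mass and radiative efficiency below are illustrative assumptions of this sketch, not values from the paper.

```python
import math

# Eddington time sigma_T * c / (4 pi G m_p), in Gyr
T_EDD_GYR = 0.45

def growth_time_gyr(m_final, m_seed, edd_ratio, efficiency=0.1):
    """Time (Gyr) for a black hole to grow from m_seed to m_final
    (solar masses) at a fixed Eddington ratio, via the standard
    e-folding argument: dM/dt = (1 - eps) L / (eps c^2)."""
    t_efold = T_EDD_GYR * efficiency / ((1.0 - efficiency) * edd_ratio)
    return t_efold * math.log(m_final / m_seed)

# A 10^9 M_sun hole accreting at L/L_Edd = 0.05 from an assumed
# 10^5 M_sun seed takes roughly 9 Gyr to assemble:
print(f"{growth_time_gyr(1e9, 1e5, 0.05):.1f} Gyr")
```

At the low typical Eddington ratios measured for massive Type 1 quasars, such times approach or exceed the age of the universe, which is what drives the abstract's inference of an earlier, faster (and possibly obscured) growth phase.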
Model Selection with the Loss Rank Principle
A key issue in statistics and machine learning is to automatically select the
"right" model complexity, e.g., the number of neighbors to be averaged over in
k nearest neighbor (kNN) regression or the polynomial degree in regression with
polynomials. We suggest a novel principle - the Loss Rank Principle (LoRP) -
for model selection in regression and classification. It is based on the loss
rank, which counts how many other (fictitious) data would be fitted better.
LoRP selects the model that has minimal loss rank. Unlike most penalized
maximum likelihood variants (AIC, BIC, MDL), LoRP depends only on the
regression functions and the loss function. It works without a stochastic noise
model, and is directly applicable to any non-parametric regressor, like kNN.
Comment: 31 LaTeX pages, 1 figure
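The loss-rank idea can be illustrated by brute force on a toy problem: for each candidate k, count how many fictitious target vectors (here, over a small discrete grid, so the count is finite) would be fitted at least as well as the observed data. This discretized enumeration is a sketch of the principle, not the paper's continuous formulation; the function names are illustrative.

```python
from itertools import product

def knn_fit_loss(x, y, k):
    """Empirical squared loss of k-nearest-neighbour regression
    evaluated on its own 1-D training inputs."""
    loss = 0.0
    for i, xi in enumerate(x):
        neigh = sorted(range(len(x)), key=lambda j: abs(x[j] - xi))[:k]
        pred = sum(y[j] for j in neigh) / k
        loss += (y[i] - pred) ** 2
    return loss

def loss_rank(x, y, k, values):
    """Count fictitious targets y' (over the discrete grid `values`)
    whose kNN training loss is <= that of the observed y."""
    observed = knn_fit_loss(x, y, k)
    return sum(
        knn_fit_loss(x, yp, k) <= observed + 1e-12
        for yp in product(values, repeat=len(x))
    )

x = [0.0, 1.0, 2.0, 3.0]
y = (0, 0, 1, 1)
for k in (1, 2, 3):
    # k=1 interpolates every fictitious data set, so it gets the
    # maximal rank; LoRP therefore rejects it as too flexible.
    print(k, loss_rank(x, y, k, (0, 1)))
```

LoRP then selects the k with minimal loss rank, trading flexibility (which inflates the rank) against fit quality, without any noise model.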
Bose-Einstein distribution, condensation transition and multiple stationary states in multiloci evolution of diploid population
The mapping between genotype and phenotype is encoded in the complex web of
epistatic interactions between genetic loci. In this rugged fitness landscape,
recombination processes, which tend to increase variation in the population,
compete with selection processes that tend to reduce genetic variation. Here we
show that the Bose-Einstein distribution describes the multiple stationary
states of a diploid population under this multi-loci evolutionary dynamics.
Moreover, the evolutionary process might undergo an interesting condensation
phase transition in the universality class of a Bose-Einstein condensation when
a finite fraction of pairs of linked loci is fixed into given allelic states.
Below this phase transition the genetic variation within a species is
significantly reduced and only maintained by the remaining polymorphic loci.
Comment: 12 pages, 7 figures
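For reference, the Bose-Einstein distribution invoked above has the standard textbook form; the mapping of genetic quantities onto the effective "energy" $\epsilon$ and "temperature" $T$ is the paper's construction, and the expression below is only the generic distribution:

$$ n(\epsilon) = \frac{1}{e^{(\epsilon - \mu)/T} - 1}, $$

with condensation setting in when the chemical potential $\mu$ approaches the lowest level and a finite fraction of the total occupation accumulates in a single state, here a fixed allelic configuration of linked loci.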