A Novel Antenna Selection Scheme for Spatially Correlated Massive MIMO Uplinks with Imperfect Channel Estimation
We propose a new antenna selection scheme for a massive MIMO system with a
single user terminal and a base station with a large number of antennas. We
consider a practical scenario where there is a realistic correlation among the
antennas and imperfect channel estimation at the receiver side. The proposed
scheme exploits the sparsity of the channel matrix for the effective selection
of a limited number of antennas. To this end, we compute a sparse channel
matrix by minimising the mean squared error. This optimisation problem is then
solved by the well-known orthogonal matching pursuit algorithm. Widely used
models for spatial correlation among the antennas and channel estimation errors
are considered in this work. Simulation results demonstrate that when the
impacts of spatial correlation and imperfect channel estimation are introduced,
the proposed scheme can significantly reduce the complexity of the receiver
without degrading system performance compared to maximum ratio combining.
Comment: in Proc. IEEE 81st Vehicular Technology Conference (VTC), May 2015, 6
pages, 5 figures
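The abstract describes the selection step only at a high level. As a hedged sketch (not the paper's exact minimum-MSE formulation), a generic orthogonal matching pursuit routine of the kind it invokes could look like the following, where each dictionary column is assumed to represent one candidate receive antenna and the recovered support gives the antennas to keep; the function name and the NumPy-based setup are illustrative.

```python
import numpy as np


def orthogonal_matching_pursuit(A, y, k):
    """Generic OMP: find a k-sparse coefficient vector x with A @ x ~ y.

    A : (m, n) dictionary; each column is assumed to stand for one
        candidate receive antenna (an illustrative simplification).
    y : (m,) measurement vector.
    k : number of atoms (antennas) to keep.
    Returns (support, x_hat).
    """
    n = A.shape[1]
    residual = y.astype(complex)
    support = []
    x_hat = np.zeros(n, dtype=complex)
    for _ in range(k):
        # Correlate every atom with the current residual and pick the best unused one.
        scores = np.abs(A.conj().T @ residual)
        scores[support] = -np.inf  # never reselect an atom
        support.append(int(np.argmax(scores)))
        # Least-squares refit on the selected support, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x_hat[:] = 0
        x_hat[support] = coef
        residual = y - A[:, support] @ coef
    return support, x_hat
```

The selected support would then feed a reduced-dimension combiner, which is where the claimed complexity saving at the receiver comes from; full maximum ratio combining over all antennas is the baseline the abstract compares against.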
Deformation of the Fermi surface in the extended Hubbard model
The deformation of the Fermi surface induced by Coulomb interactions is
investigated in the t-t'-Hubbard model. The interplay of the local U and
extended V interactions is analyzed. It is found that exchange interactions V
enhance small anisotropies, producing deformations of the Fermi surface that
break the point-group symmetry of the square lattice at the Van Hove filling.
This Pomeranchuk instability competes with ferromagnetism and is suppressed at
a critical value of U(V). The interaction V renormalizes the t' parameter to
smaller values, which favours nesting. It also induces changes in the topology
of the Fermi surface, which can go from hole-like to electron-like and may
explain recent ARPES experiments.
Comment: 5 pages, 4 ps figures
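For orientation, the t-t'-Hubbard model with on-site U and extended V interactions referred to above is conventionally written as follows (a textbook form in our sign conventions; the paper's own conventions may differ):

```latex
H = -t \sum_{\langle ij \rangle, \sigma} c^{\dagger}_{i\sigma} c_{j\sigma}
    - t' \sum_{\langle\langle ij \rangle\rangle, \sigma} c^{\dagger}_{i\sigma} c_{j\sigma}
    + U \sum_{i} n_{i\uparrow} n_{i\downarrow}
    + V \sum_{\langle ij \rangle} n_{i} n_{j}
```

with nearest- and next-nearest-neighbour hopping t and t', on-site repulsion U, and nearest-neighbour interaction V on the square lattice; on this reading, the "exchange interactions V" mentioned above are the exchange (Fock) contributions of the nearest-neighbour term.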
Cognitively-inspired Agent-based Service Composition for Mobile & Pervasive Computing
Automatic service composition in mobile and pervasive computing faces many
challenges due to the complex and highly dynamic nature of the environment.
Common approaches treat service composition as a decision problem whose
solution is usually addressed from an optimization perspective, which is not
feasible in practice due to the intractability of the problem, the limited
computational resources of smart devices, the mobility of service hosts, and
the time constraints on tailoring composition plans. Thus, our main contribution is the
development of a cognitively-inspired agent-based service composition model
focused on bounded rationality rather than optimality, which allows the system
to compensate for limited resources by selectively filtering out continuous
streams of data. Our approach exhibits features such as distributedness,
modularity, emergent global functionality, and robustness, which endow it with
capabilities to perform decentralized service composition by orchestrating
manifold service providers and handling conflicting goals from multiple users. The
evaluation of our approach shows promising results when compared against
state-of-the-art service composition models.
Comment: This paper will appear at AIMS'19 (International Conference on
Artificial Intelligence and Mobile Services) on June 2
Simulation study of the two-dimensional Burridge-Knopoff model of earthquakes
Spatiotemporal correlations of the two-dimensional spring-block
(Burridge-Knopoff) model of earthquakes are extensively studied by means of
numerical computer simulations. The model is found to exhibit either
"subcritical" or "supercritical" behavior, depending on the values of the model
parameters. The transition between these regimes is either continuous or
discontinuous. Seismic events in the "subcritical" regime and those in the
"supercritical" regime at larger magnitudes exhibit universal scaling
properties. In the "supercritical" regime, prominent spatiotemporal
correlations, e.g., a remarkable growth of seismic activity preceding the
mainshock, arise in earthquake occurrence, whereas such spatiotemporal
correlations are significantly suppressed in the "subcritical" regime. Seismic
activity is generically suppressed just before the mainshock in the close
vicinity of the epicenter of the upcoming event, while it remains active in the
surroundings (the Mogi doughnut). It is also observed that, before and after
the mainshock, the apparent b-value of the magnitude distribution decreases or
increases in the "supercritical" or "subcritical" regime, respectively. Such
distinct precursory phenomena may open a way to the prediction of the upcoming
large event.
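For context, the b-value referred to above is the slope of the Gutenberg-Richter frequency-magnitude relation (standard seismological notation, independent of this particular model):

```latex
\log_{10} N(>M) = a - b\,M
```

where N(>M) counts events with magnitude greater than M; a decrease of the apparent b-value therefore corresponds to a relative excess of large events.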
Doubly Robust Inference when Combining Probability and Non-probability Samples with High-dimensional Data
Non-probability samples are becoming increasingly popular in survey statistics but
may suffer from selection biases that limit the generalizability of results to
the target population. We consider integrating a non-probability sample with a
probability sample which provides high-dimensional representative covariate
information of the target population. We propose a two-step approach for
variable selection and finite population inference. In the first step, we use
penalized estimating equations with folded-concave penalties to select
important variables for the sampling score of selection into the
non-probability sample and the outcome model. We show that the penalized
estimating equation approach enjoys the selection consistency property for
general probability samples. The major technical hurdle is due to the possible
dependence of the sample under the finite population framework. To overcome
this challenge, we construct martingales, which enable us to apply the
Bernstein concentration inequality for martingales. In the second step, we focus on a
doubly robust estimator of the finite population mean and re-estimate the
nuisance model parameters by minimizing the asymptotic squared bias of the
doubly robust estimator. This estimating strategy mitigates the possible
first-step selection error and renders the doubly robust estimator root-n
consistent if either the sampling probability or the outcome model is correctly
specified.
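As a hedged illustration of the second step (our notation, not necessarily the paper's), a doubly robust estimator of the finite population mean in this setting typically combines an inverse-sampling-score correction over the non-probability sample B with a prediction term averaged over the design-weighted probability sample A:

```latex
\hat{\mu}_{\mathrm{dr}}
  = \frac{1}{N} \sum_{i \in B} \frac{Y_i - m(X_i; \hat{\beta})}{\pi(X_i; \hat{\alpha})}
  + \frac{1}{N} \sum_{i \in A} d_i \, m(X_i; \hat{\beta})
```

where \pi(\cdot;\alpha) is the sampling score of selection into B, m(\cdot;\beta) the outcome model, d_i the design weights of the probability sample, and N the population size (or its design-weighted estimate). The estimator stays consistent if either nuisance model is correctly specified, which is the double robustness exploited when the nuisance parameters are re-estimated to minimize the asymptotic squared bias.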
The Degeneracy of Galaxy Formation Models
We develop a new formalism for modeling the formation and evolution of
galaxies within a hierarchical universe. Similarly to standard semi-analytical
models we trace galaxies inside dark-matter merger-trees. The formalism
includes treatment of feedback, star-formation, cooling, smooth accretion, gas
stripping in satellite galaxies, and merger-induced star bursts. However,
unlike in other models, each process is assumed to have an efficiency which
depends only on the host halo mass and redshift. This allows us to describe the
various components of the model in a simple and transparent way. By allowing
the efficiencies to have any value for a given halo mass and redshift, we can
easily encompass a large range of scenarios. To demonstrate this point, we
examine several different galaxy formation models, which are all consistent
with the observational data. Each model is characterized by a different unique
feature: cold accretion in low mass haloes, zero feedback, stars formed only in
merger-induced bursts, and shutdown of star-formation after mergers. Using
these models we are able to examine the degeneracy inherent in galaxy formation
models, and look for observational data that will help to break this
degeneracy. We show that the full distribution of star-formation rates in a
given stellar mass bin is promising in constraining the models. We compare our
approach in detail to the semi-analytical model of De Lucia & Blaizot. It is
shown that our formalism is able to produce a very similar population of
galaxies once the same median efficiencies per halo mass and redshift are used.
We provide a public version of the model galaxies on our web page, along with a
tool for running models with user-defined parameters. Our model is able to
provide results for a 62.5 h^{-1} Mpc box within just a few seconds.
Comment: Accepted for publication in MNRAS. Figs 6 & 7 corrected. For the
project page which allows running your own model, see
http://www.mpa-garching.mpg.de/galform/sesam
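To make the idea of per-process efficiencies concrete, a schematic of the kind of parameterization described above (illustrative notation only; the paper defines its own efficiencies) is, for star formation,

```latex
\dot{M}_{\star} = \epsilon_{\mathrm{SF}}(M_{\mathrm{halo}}, z)\, M_{\mathrm{cold}}
```

with \epsilon_{SF} carrying units of inverse time, and analogous free functions of host halo mass and redshift alone controlling cooling, feedback, smooth accretion, gas stripping, and merger-induced bursts.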
Spatiotemporal correlations of earthquakes in the continuum limit of the one-dimensional Burridge-Knopoff model
Spatiotemporal correlations of the one-dimensional spring-block
(Burridge-Knopoff) model of earthquakes, either with or without the viscosity
term, are studied by means of numerical computer simulations. The continuum
limit of the model is examined by systematically investigating the model
properties while varying the block-size parameter a toward a → 0. The Kelvin
viscosity term is introduced so that the model dynamics possesses a sensible
continuum limit. In the presence of the viscosity term, many of the properties
of the original discrete BK model remain qualitatively unchanged even in the
continuum limit, although the size of the minimum earthquake decreases as a gets
smaller. One notable exception is the existence/non-existence of the
doughnut-like quiescence prior to the mainshock. Although large events of the
original discrete BK model are accompanied by seismic acceleration together
with a doughnut-like quiescence just before the mainshock, the spatial range of
the doughnut-like quiescence becomes narrower as a gets smaller, and in the
continuum limit the doughnut-like quiescence might vanish altogether. The
doughnut-like quiescence observed in the discrete BK model is thus a phenomenon
closely related to the short-length cut-off scale of the model.
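As a schematic of the model being simulated (a generic dimensionless form with our symbols k_c, k_p, \gamma, \Phi, v; the paper's parameterization in terms of the block-size parameter a differs in detail), block i of the one-dimensional spring-block chain obeys

```latex
\ddot{u}_i = k_c \,( u_{i+1} - 2u_i + u_{i-1} ) - k_p\, u_i
           + \gamma \,( \dot{u}_{i+1} - 2\dot{u}_i + \dot{u}_{i-1} )
           - \Phi( \dot{u}_i + v )
```

where the first term couples neighbouring blocks, the second is the loading spring to the slowly driven plate, the third is the Kelvin viscosity acting on relative block velocities, \Phi is a velocity-weakening friction force, and v is the loading velocity.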
Recent Developments in Maser Theory
This review covers selected developments in maser theory since the previous
meeting, "Cosmic Masers: From Proto-Stars to Black Holes" (Migenes & Reid
2002). Topics included are time variability of fundamental constants, pumping
of OH megamasers and indicators for differentiating disks from bi-directional
outflows.
Comment: Review presented at IAU Symposium 242, "Astrophysical Masers and
their Environments".
Fermion-Higgs model with strong Wilson-Yukawa coupling in two dimensions
The fermion mass spectrum is studied in the quenched approximation in the
strong coupling vortex phase (VXS) of a globally U(1)_L \otimes U(1)_R
symmetric scalar-fermion model in two dimensions. In this phase fermion
doublers can be completely removed from the physical spectrum by means of a
strong Wilson-Yukawa coupling. The lowest lying fermion spectrum in this phase
consists most probably only of a massive Dirac fermion which has charge zero
with respect to the U(1)_L group. We give evidence that the fermion which is
charged with respect to that subgroup is absent in the VXS phase. When the
gauge fields are turned on, the neutral fermion may couple chirally to
the massive vector boson state in the confinement phase. The outcome is very
similar to our findings in the strong coupling symmetric phase (PMS) of
fermion-Higgs models with Wilson-Yukawa coupling in four dimensions, with the
exception that in four dimensions the neutral fermion does most probably
decouple from the bosonic bound states.
Comment: 21 pages, 6 postscript figures (appended), Amsterdam ITFA 92-21, HLRZ
Jülich 92-5
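As a rough indication of what the Wilson-Yukawa coupling does (a schematic expression in our normalization; lattice conventions vary and this is not taken from the paper), the chiral-symmetry-breaking Wilson term is dressed with the scalar field so that the global U(1)_L \otimes U(1)_R symmetry is preserved while the doubler modes acquire large masses:

```latex
S_{\mathrm{WY}} \sim -\frac{w}{2} \sum_x
  \bar{\psi}_x \left( \phi_x P_R + \phi_x^{\dagger} P_L \right) \Box \psi_x
```

where \Box is the lattice Laplacian and P_{L,R} are the chiral projectors; with \phi frozen at its expectation value this reduces to an ordinary Wilson term of strength proportional to w.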
Peaks in the cosmological density field: parameter constraints from 2dF Galaxy Redshift Survey data
We use the number density of peaks in the smoothed cosmological density field
taken from the 2dF Galaxy Redshift Survey to constrain parameters related to
the power spectrum of mass fluctuations, n (the spectral index), dn/d(lnk)
(rolling in the spectral index), and the neutrino mass, m_nu. In a companion
paper we use N-body simulations to study how the peak density responds to
changes in the power spectrum, the presence of redshift distortions and the
relationship between galaxies and dark matter halos. In the present paper we
make measurements of the peak density from 2dF Galaxy Redshift Survey data, for
a range of smoothing filter scales from 4-33 h^-1 Mpc. We use these
measurements to constrain the cosmological parameters, finding n=1.36
(+0.75)(-0.64), m_nu < 1.76 eV, dn/d(lnk)=-0.012 (+0.192)(-0.208), at the 68 %
confidence level, where m_nu is the total mass of three massive neutrinos. At
95% confidence we find m_nu< 2.48 eV. These measurements represent an
alternative way to constrain cosmological parameters to the usual direct fits
to the galaxy power spectrum, and are expected to be relatively insensitive to
non-linear clustering evolution and galaxy biasing.
Comment: Accepted for publication in MNRAS on Sept 25, 2009. Abstract modified
to remove LaTeX markup.
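For reference, a spectral index n with rolling dn/d(ln k) corresponds to the standard power-spectrum parameterization about a pivot scale k_0 (our pivot notation; the paper's conventions may differ):

```latex
P(k) = P(k_0) \left( \frac{k}{k_0} \right)^{\, n + \frac{1}{2} \frac{dn}{d\ln k} \ln(k/k_0)}
```

so that the effective slope d\ln P/d\ln k = n + (dn/d\ln k)\,\ln(k/k_0) drifts logarithmically with scale, and the quoted constraints on n and dn/d(ln k) bound both the tilt and its scale dependence.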