An Empirical Exploration of Southeast Asian American Residential Patterns in the San Francisco Bay Area (2000–2019)
This paper explores three methods of reporting residential patterns: (1) concentration profiles, (2) density maps, and (3) proximity profiles. I analyze U.S. Census data to map and evaluate the residential patterns of Southeast Asian Americans in the nine-county San Francisco Bay Area. Drawing from the field of urban planning, I report two measures of segregation and concentration, (a) dissimilarity indices and (b) spatial proximity indices, and I discuss their limitations. Since mapping and spatial statistics are essential to understanding the histories, development, and advancement of Southeast Asian American communities, it is important to promote their broad usage. The paper's findings lend evidence to three arguments: (1) pioneering moments (the establishment of new immigrant communities) can in fact start path-dependent community growth, (2) clustering and dispersion can to some extent be predicted by classic theories of spatial assimilation, but new dynamics are playing out in today's communities of Asian and Latino origin, including Southeast Asian American communities, and (3) residential clustering is circumstantial, dependent on unique local conditions.
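The dissimilarity index in (a) has a standard two-group form, D = 0.5 * sum_i |a_i/A - b_i/B| over tracts i. A minimal sketch of that computation on made-up tract counts (the counts and names below are hypothetical, not the paper's Census data) could look like:

```python
import numpy as np

def dissimilarity_index(group_a, group_b):
    """Two-group dissimilarity index D = 0.5 * sum_i |a_i/A - b_i/B|,
    where a_i and b_i are tract-level counts of each group."""
    a = np.asarray(group_a, dtype=float)
    b = np.asarray(group_b, dtype=float)
    return 0.5 * np.abs(a / a.sum() - b / b.sum()).sum()

# Hypothetical tract counts for illustration only (not Census figures):
southeast_asian = [120, 30, 450, 80]
everyone_else = [5000, 7000, 3000, 6000]
print(dissimilarity_index(southeast_asian, everyone_else))  # 0 = even, 1 = fully segregated
```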
Real-time Optimal Resource Allocation for Embedded UAV Communication Systems
We consider device-to-device (D2D) wireless information and power transfer
systems using an unmanned aerial vehicle (UAV) as a relay-assisted node. Since
the energy capacity and flight time of UAVs are limited, a significant issue in
deploying UAVs is managing energy consumption in real-time applications, which
is proportional to the UAV transmit power. To tackle this issue, we develop a
real-time resource allocation algorithm that maximizes energy efficiency by
jointly optimizing the energy-harvesting time and power control for the
considered D2D communication system embedded with a UAV. We demonstrate the
effectiveness of the proposed algorithms, whose running time is on the order of
milliseconds.
Comment: 11 pages, 5 figures, 1 table. This paper is accepted for publication in IEEE Wireless Communications Letters
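As a rough illustration of the kind of joint optimization described, the toy sketch below grid-searches a harvesting-time fraction and a UAV transmit power to maximize a simple bits-per-Joule ratio; the channel model, constants, and variable names are assumptions made for illustration, not the paper's formulation or algorithm.

```python
import numpy as np

# Toy model (not the paper's): a device harvests energy from the UAV for a
# fraction tau of the slot, then transmits during the remaining (1 - tau).
# We grid-search tau and the UAV transmit power p to maximize a simple
# energy-efficiency ratio. All constants are made-up placeholders.
eta, g_eh, g_tx, noise, p_max = 0.7, 1e-3, 1e-3, 1e-9, 2.0

def energy_efficiency(tau, p):
    harvested = eta * g_eh * p * tau                                   # energy collected in EH phase
    rate = (1 - tau) * np.log2(1 + harvested * g_tx / ((1 - tau) * noise))
    return rate / p                                                    # throughput per unit UAV power

taus = np.linspace(0.01, 0.99, 99)
powers = np.linspace(0.05, p_max, 40)
best = max((energy_efficiency(t, p), t, p) for t in taus for p in powers)
print(f"EE={best[0]:.3f} at tau={best[1]:.2f}, p={best[2]:.2f} W")
```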
Future energy, fuel cells, and solid-oxide fuel-cell technology
According to the US Department of Energy's Energy Information Administration (EIA) (International Energy Outlook 2017), world energy consumption will increase 28% between 2015 and 2040, rising from 575 quadrillion Btu (∼606 quadrillion kJ) in 2015 to 736 quadrillion Btu (∼776 quadrillion kJ) in 2040. The EIA predicts increases in consumption for all energy sources (excluding coal, which is estimated to remain flat)—fossil (petroleum and other liquids, natural gas), renewables (solar, wind, hydropower), and nuclear. Although renewables are the world's fastest-growing form of energy, fossil fuels are expected to continue to supply more than three-quarters of the energy used worldwide. Among the various fossil fuels, natural gas is the fastest growing, with a projected increase of 43% from 2015 to 2040. As the use of fossil fuels increases, the EIA projects world energy-related carbon dioxide emissions to grow from ∼34 billion metric tons in 2015 to ∼40 billion metric tons in 2040 (an average 0.6% increase per year).
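As a sanity check on the quoted average growth rate, the compound annual rate implied by the cited endpoints can be computed directly (simple arithmetic on the EIA figures above, nothing more):

```python
# Implied average annual growth of energy-related CO2 emissions, 2015-2040,
# using the ~34 and ~40 billion metric ton figures quoted above.
start, end, years = 34e9, 40e9, 2040 - 2015
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.2%}")  # ~0.65% per year, consistent with the cited ~0.6% average
```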
Learning Invariant Representations with a Nonparametric Nadaraya-Watson Head
Machine learning models often fail when deployed in an environment with a data
distribution different from the training distribution. When
multiple environments are available during training, many methods exist that
learn representations which are invariant across the different distributions,
with the hope that these representations will be transportable to unseen
domains. In this work, we present a nonparametric strategy for learning
invariant representations based on the recently-proposed Nadaraya-Watson (NW)
head. The NW head makes a prediction by comparing the learned representations
of the query to the elements of a support set that consists of labeled data. We
demonstrate that by manipulating the support set, one can encode different
causal assumptions. In particular, restricting the support set to a single
environment encourages the model to learn invariant features that do not depend
on the environment. We present a causally-motivated setup for our modeling and
training strategy and validate on three challenging real-world domain
generalization tasks in computer vision.
Comment: Accepted to NeurIPS 202
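A minimal sketch of a Nadaraya-Watson-style head along the lines described (my reading of the idea, not the authors' implementation; all tensor shapes, constants, and names are illustrative) could be:

```python
import torch
import torch.nn.functional as F

def nw_head(query_feat, support_feats, support_labels, num_classes, temp=1.0):
    """Predict for one query by a similarity-weighted average of support labels.
    query_feat: (d,), support_feats: (n, d), support_labels: (n,) int class ids."""
    sims = -torch.cdist(query_feat[None, None], support_feats[None]).squeeze(0)  # (1, n) negative distances
    weights = F.softmax(sims / temp, dim=-1)                                      # kernel weights over support
    onehot = F.one_hot(support_labels, num_classes).float()                       # (n, C)
    return weights @ onehot                                                       # (1, C) class probabilities

# Restricting the support set to a single environment, as the abstract describes,
# simply means support_feats / support_labels are drawn from one environment only.
feats = torch.randn(8, 16)           # hypothetical support embeddings
labels = torch.randint(0, 3, (8,))   # hypothetical labels
q = torch.randn(16)
print(nw_head(q, feats, labels, num_classes=3))
```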
Increased success probability in Hardy's nonlocality: Theory and demonstration
Depending on how one measures, quantum nonlocality may manifest more visibly.
Using basis transformations and interactions on a particle pair, Hardy argued
logically that any local hidden variable theory leads to a paradox. Extending
the original work, we introduce a quantum nonlocal scheme for n-particle
systems using two distinct approaches. First, a theoretical model is derived
with analytical results for Hardy's nonlocality conditions and probability.
Second, a quantum simulation using quantum circuits is constructed that matches
the analytical theory very well. When demonstrated on real quantum computers
for n=3, we obtain reasonable results compared to theory. Even at macroscopic
scales as n grows, the success probability asymptotically approaches 15.6%,
which is stronger than previous results.
Comment: 4 pages, 4 figures
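For context, the original two-particle Hardy probability is often quoted in the parametrized form P(γ) = γ²(1−γ)/(1+γ), maximized at the golden-ratio conjugate with the well-known value of about 9%. The sketch below only checks that classic value numerically; it does not reproduce the paper's n-particle scheme or its 15.6% asymptote, and the parametrization itself is an assumption of this illustration.

```python
import numpy as np

# Numerical check for the ORIGINAL two-particle Hardy paradox (context only).
# Assumed standard parametrization: P(g) = g**2 * (1 - g) / (1 + g), 0 < g < 1.
g = np.linspace(1e-6, 1 - 1e-6, 1_000_000)
p = g**2 * (1 - g) / (1 + g)
i = p.argmax()
tau = (np.sqrt(5) - 1) / 2  # golden-ratio conjugate
print(f"max P ~ {p[i]:.5f} at g ~ {g[i]:.5f}")  # ~0.09017 at g ~ 0.61803
print(f"analytic tau**5 = {tau**5:.5f}")        # (5*sqrt(5) - 11) / 2
```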
Robust Learning via Conditional Prevalence Adjustment
Healthcare data often come from multiple sites in which the correlations
between confounding variables can vary widely. If deep learning models exploit
these unstable correlations, they might fail catastrophically in unseen sites.
Although many methods have been proposed to tackle unstable correlations, each
has its limitations. For example, adversarial training forces models to
completely ignore unstable correlations, but doing so may lead to poor
predictive performance. Other methods (e.g., invariant risk minimization [4])
try to learn domain-invariant representations that rely only on stable
associations by assuming a causal data-generating process (input X causes class
label Y). Thus, they may be ineffective for anti-causal tasks (Y causes X),
which are common in computer vision. We propose a method called CoPA
(Conditional Prevalence Adjustment) for anti-causal tasks. CoPA assumes that
(1) the generation mechanism is stable, i.e., label Y and confounding
variable(s) Z generate X, and (2) the unstable conditional prevalence in each
site E fully accounts for the unstable correlations between X and Y. Our
crucial observation is that confounding variables are routinely recorded in
healthcare settings and the prevalence can be readily estimated, for example,
from a set of (Y, Z) samples (no need for corresponding samples of X). CoPA can
work even if there is a single training site, a scenario that is often
overlooked by existing methods. Our experiments on synthetic and real data show
CoPA beating competitive baselines.
Comment: Accepted at WAC
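A simplified prior-shift sketch in the spirit of the abstract (my illustration of prevalence adjustment, not the authors' exact CoPA estimator; all numbers and names are hypothetical): a source-site classifier's posteriors are re-weighted by the ratio of target-site to source-site prevalences p(y | z), which the abstract notes can be estimated from (Y, Z) samples alone.

```python
import numpy as np

def prevalence_adjust(posteriors, prev_source, prev_target):
    """Re-weight source-site posteriors by target/source prevalence ratios.
    posteriors: (n, C) p_source(y | x, z) for each sample's own z
    prev_source, prev_target: (n, C) estimated p(y | z) at source / target site."""
    w = prev_target / np.clip(prev_source, 1e-12, None)
    adjusted = posteriors * w
    return adjusted / adjusted.sum(axis=1, keepdims=True)

# Hypothetical binary example: class 1 is much more prevalent at the deployment
# site than at the training site for this confounder value.
post = np.array([[0.7, 0.3]])
print(prevalence_adjust(post, np.array([[0.8, 0.2]]), np.array([[0.4, 0.6]])))
```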
On the solutions of universal differential equation by noncommutative Picard-Vessiot theory
Based on the Picard-Vessiot theory of noncommutative differential equations and
algebraic combinatorics on noncommutative formal series with holomorphic
coefficients, various recursive constructions of sequences of grouplike series
converging to solutions of the universal differential equation are proposed.
Based on monoidal factorizations, these constructions intensively use diagonal
series and various pairs of bases in duality, in the concatenation-shuffle
bialgebra and in a Loday's generalized bialgebra. As an application, the unique
solution, satisfying asymptotic conditions, of the Knizhnik-Zamolodchikov
equations is provided by dévissage.
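The concatenation-shuffle bialgebra referred to above is built on the shuffle product of words; a minimal recursive sketch of that product (a generic illustration, not one of the paper's constructions) might be:

```python
# Recursive shuffle product of two words, i.e. the multiplication of the
# shuffle algebra underlying the concatenation-shuffle bialgebra mentioned
# above. Each result word appears once per interleaving, so multiplicities
# give the coefficients. Illustrative sketch only.
def shuffle(u: str, v: str) -> list[str]:
    if not u:
        return [v]
    if not v:
        return [u]
    return [u[0] + w for w in shuffle(u[1:], v)] + \
           [v[0] + w for w in shuffle(u, v[1:])]

# Example with letters written as '0' and '1' (think x0, x1):
print(shuffle("0", "10"))  # ['010', '100', '100'] -> x0x1x0 + 2*x1x0x0
```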
- …