Density Matching for Bilingual Word Embedding
Recent approaches to cross-lingual word embedding have generally been based
on linear transformations between the sets of embedding vectors in the two
languages. In this paper, we propose an approach that instead expresses the two
monolingual embedding spaces as probability densities defined by a Gaussian
mixture model, and matches the two densities using a method called normalizing
flow. The method requires no explicit supervision, and can be learned with only
a seed dictionary of words that have identical strings. We argue that this
formulation has several intuitively attractive properties, particularly with
respect to improving robustness and generalization to mappings between
difficult language pairs or word pairs. On a benchmark data set of bilingual
lexicon induction and cross-lingual word similarity, our approach can achieve
competitive or superior performance compared to state-of-the-art published
results, with particularly strong results being found on etymologically distant
and/or morphologically rich languages. Comment: Accepted by NAACL-HLT 2019
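A minimal sketch of the density-matching idea under toy assumptions: the target-language embedding space is modelled as a Gaussian mixture and a plain linear map stands in for the paper's normalizing flow; all data, names, and hyperparameters below are illustrative, not the authors' implementation.

```python
# Simplified density matching for bilingual embeddings (illustrative only):
# the paper uses a normalizing flow; here a plain linear map W stands in for it,
# and the target-language embedding density is a Gaussian mixture model.
import numpy as np
from scipy.optimize import minimize
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
d, n_src, n_tgt = 8, 500, 500              # toy dimensionality / vocabulary sizes
X_src = rng.normal(size=(n_src, d))         # stand-ins for monolingual embeddings
X_tgt = rng.normal(size=(n_tgt, d)) + 1.0

# 1. Model the target embedding space as a probability density (GMM).
gmm_tgt = GaussianMixture(n_components=4, covariance_type="diag", random_state=0)
gmm_tgt.fit(X_tgt)

# 2. Learn a map that makes transformed source embeddings likely under that density.
def neg_log_likelihood(w_flat):
    W = w_flat.reshape(d, d)
    return -gmm_tgt.score_samples(X_src @ W).mean()

w0 = np.eye(d).ravel()                      # start from the identity map
res = minimize(neg_log_likelihood, w0, method="L-BFGS-B")
W = res.x.reshape(d, d)
print("mean target log-density of mapped source:", -res.fun)
```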
The Progenitors of Type Ia Supernovae: II. Are they Double-Degenerate Binaries? The Symbiotic Channel
In order for a white dwarf (WD) to achieve the Chandrasekhar mass, M_C, and
explode as a Type Ia supernova (SNIa), it must interact with another star,
either accreting matter from or merging with it. The failure to identify the
types of binaries which produce SNeIa is the "progenitor problem". Its solution
is required if we are to utilize the full potential of SNeIa to elucidate basic
cosmological and physical principles. In single-degenerate models, a WD
accretes and burns matter at high rates. Nuclear-burning WDs (NBWDs) with mass
close to M_C are hot and luminous, potentially detectable as supersoft x-ray
sources (SSSs). In previous work we showed that > 90-99% of the required number
of progenitors do not appear as SSSs during most of the crucial phase of mass
increase. The obvious implication is that double-degenerate (DD) binaries form
the main class of progenitors. We show in this paper, however, that many
binaries that later become DDs must pass through a long-lived NBWD phase during
which they are potentially detectable as SSSs. The paucity of SSSs is therefore
not a strong argument in favor of DD models. Those NBWDs that are the
progenitors of DD binaries are likely to appear as symbiotic binaries for
intervals > 10^6 years. In fact, symbiotic pre-DDs should be common, whether or
not the WDs eventually produce SNeIa. The key to solving the progenitor problem
lies in understanding the appearance of NBWDs. Most do not appear as SSSs most
of the time. We therefore consider the evolution of NBWDs to address the
question of what their appearance may be and how we can hope to detect them. Comment: 24 pages; 5 figures; submitted to ApJ
The Progenitors of Type Ia Supernovae: Are They Supersoft Sources?
In a canonical model, the progenitors of Type Ia supernovae (SNe Ia) are
accreting, nuclear-burning white dwarfs (NBWDs), which explode when the white
dwarf reaches the Chandrasekhar mass, M_C. Such massive NBWDs are hot (kT ~100
eV), luminous (L ~ 10^{38} erg/s), and are potentially observable as luminous
supersoft X-ray sources (SSSs). During the past several years, surveys for soft
X-ray sources in external galaxies have been conducted. This paper shows that
the results falsify the hypothesis that a large fraction of progenitors are
NBWDs which are presently observable as SSSs. The data also place limits on
sub-M_C models. While Type Ia supernova progenitors may pass through one or
more phases of SSS activity, these phases are far shorter than the time needed
to accrete most of the matter that brings them close to M_C. Comment: submitted to ApJ 18 November 2009; 17 pages, 2 figures
Optimizing egalitarian performance in the side-effects model of colocation for data center resource management
In data centers, up to dozens of tasks are colocated on a single physical
machine. Machines are used more efficiently, but tasks' performance
deteriorates, as colocated tasks compete for shared resources. As tasks are
heterogeneous, the resulting performance dependencies are complex. In our
previous work [18] we proposed a new combinatorial optimization model that uses
two parameters of a task - its size and its type - to characterize how a task
influences the performance of other tasks allocated to the same machine.
In this paper, we study the egalitarian optimization goal: maximizing the
worst-off performance. This problem generalizes the classic makespan
minimization on multiple processors (P||Cmax). We prove that variants which are
polynomially solvable in classic multiprocessor scheduling become NP-hard and
hard to approximate when the number of types is not constant. For a constant
number of types, we propose a PTAS, a fast approximation algorithm, and a
series of heuristics. We simulate the algorithms on instances derived from a
trace of one of Google's clusters. Algorithms aware of jobs' types lead to better
performance compared with algorithms solving P||Cmax.
The notion of type enables us to model the degeneration of performance caused by
colocation using standard combinatorial optimization methods. Types add a layer of
additional complexity. However, our results - approximation algorithms and good
average-case performance - show that types can be handled efficiently. Comment: Author's version of a paper published in Euro-Par 2017 Proceedings;
extends the published paper with additional results and proofs
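A rough sketch of a type-aware egalitarian heuristic in the spirit described above. The performance function below is an assumption made for illustration (the actual side-effects model of [18] is not reproduced here), and the greedy rule is only one of many possible heuristics.

```python
# Illustrative greedy heuristic for the egalitarian goal (maximize the worst-off
# task's performance). Assumed performance model: a task's performance is degraded
# by the total size of colocated tasks of its own type.
from dataclasses import dataclass
from typing import List

@dataclass
class Task:
    size: float
    type: str

def performance(task: Task, machine: List[Task]) -> float:
    load = sum(t.size for t in machine if t.type == task.type)
    return 1.0 / (1.0 + load - task.size)   # assumed degradation function

def greedy_egalitarian(tasks: List[Task], n_machines: int) -> List[List[Task]]:
    machines: List[List[Task]] = [[] for _ in range(n_machines)]
    # Place large tasks first (LPT-style); put each task on the machine where the
    # resulting worst-off performance on that machine stays highest.
    for task in sorted(tasks, key=lambda t: -t.size):
        def worst_after(m: List[Task]) -> float:
            trial = m + [task]
            return min(performance(t, trial) for t in trial)
        best = max(machines, key=worst_after)
        best.append(task)
    return machines

tasks = [Task(4, "cpu"), Task(3, "cpu"), Task(3, "io"), Task(2, "io"), Task(1, "cpu")]
alloc = greedy_egalitarian(tasks, n_machines=2)
print([[(t.size, t.type) for t in m] for m in alloc])
print("worst-off performance:",
      min(performance(t, m) for m in alloc for t in m))
```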
Accuracy and Stability of Virtual Source Method for Numerical Simulations of Nonlinear Water Waves
The virtual source method (VSM) developed by Langfeld et al. (2016) is based upon the integral equations derived by using Green’s identity with Laplace’s equation for the velocity potential. These authors presented preliminary results using the method to simulate standing waves. In this paper, we numerically model a non-linear standing wave using the VSM to illustrate energy and volume conservation. Analytical formulas are derived to compute the volume and potential energy, while the kinetic energy is computed by numerical integration. Results are compared with both theory and the boundary element method (BEM).
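As a small illustration of the kind of check described above, the sketch below evaluates the volume and potential energy of a standing-wave surface profile by numerical integration, using the standard linear-wave expressions rather than the VSM formulation itself; all parameter values are placeholders.

```python
# Volume and potential energy of a standing-wave profile at a few instants,
# using the standard expressions (not the VSM derivation): volume = integral of
# the surface elevation eta, PE per unit width = (rho*g/2) * integral of eta^2.
import numpy as np

def trapz(y, x):
    """Composite trapezoidal rule."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

rho, g = 1025.0, 9.81            # sea-water density [kg/m^3], gravity [m/s^2]
a, L = 0.1, 10.0                 # wave amplitude [m], wavelength [m]
k = 2 * np.pi / L
omega = np.sqrt(g * k)           # deep-water dispersion relation (assumed)
x = np.linspace(0.0, L, 2001)

for t in (0.0, 0.25, 0.5):       # a few instants
    eta = a * np.cos(k * x) * np.cos(omega * t)   # standing-wave elevation
    volume = trapz(eta, x)                        # should stay ~0 (conserved)
    pe = 0.5 * rho * g * trapz(eta**2, x)         # PE per unit width [J/m]
    print(f"t={t:4.2f}s  volume={volume: .3e} m^2  PE={pe:8.3f} J/m")
```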
Exploring the (Efficient) Frontiers of Portfolio Optimization
The cardinality-constrained portfolio optimization problem is NP-hard. Its Pareto front (or the Efficient Frontier - EF) is usually calculated by stochastic algorithms, including EAs. However, in certain cases the EF may be decomposed into a union of sub-EFs. In this work we propose a systematic process of excluding sub-EFs dominated by others, enabling us to calculate non-dominated sub-EFs. We then calculate whole EFs to a high degree of accuracy for small cardinalities, providing an alternative to EAs in those cases. We can also use this to provide insight into EAs on the problem.
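A toy, brute-force version of the sub-EF idea: enumerate all asset subsets of a given cardinality, trace each sub-frontier with a mean-variance sweep, and keep only the non-dominated points. The data, solver choice, and sweep below are illustrative; realistic instances need the pruning procedure described above or an EA, since the enumeration grows combinatorially.

```python
# Brute-force cardinality-constrained efficient frontier for a tiny universe.
import itertools
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_assets, k = 6, 3
mu = rng.uniform(0.02, 0.12, n_assets)                 # expected returns (toy)
A = rng.normal(size=(n_assets, n_assets))
cov = A @ A.T / n_assets + 0.01 * np.eye(n_assets)     # positive-definite covariance

def sub_frontier(idx, n_points=20):
    m, c = mu[list(idx)], cov[np.ix_(idx, idx)]
    points = []
    for lam in np.linspace(0.0, 1.0, n_points):         # risk/return trade-off sweep
        obj = lambda w: (1 - lam) * w @ c @ w - lam * (m @ w)
        cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
        res = minimize(obj, np.full(len(idx), 1.0 / len(idx)),
                       bounds=[(0.0, 1.0)] * len(idx), constraints=cons)
        w = res.x
        points.append((float(w @ c @ w), float(m @ w)))  # (risk, return)
    return points

all_points = [p for idx in itertools.combinations(range(n_assets), k)
              for p in sub_frontier(idx)]
# Keep points not dominated by any other (lower-or-equal risk and higher-or-equal return).
ef = [p for p in all_points
      if not any(q[0] <= p[0] and q[1] >= p[1] and q != p for q in all_points)]
print(f"{len(ef)} non-dominated points out of {len(all_points)}")
```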
The Millennium Galaxy Catalogue: The M_bh - L derived supermassive black hole mass function
Supermassive black hole mass estimates are derived for 1743 galaxies from the
Millennium Galaxy Catalogue using the recently revised empirical relation
between supermassive black hole mass and the luminosity of the host spheroid.
The MGC spheroid luminosities are based on Sérsic-bulge plus
exponential-disc decompositions. The majority of black hole masses reside
between … and an upper limit of …. Using previously determined space density
weights, we derive the SMBH mass function, which we fit with a Schechter-like
function. Integrating the black hole mass function gives a supermassive black
hole mass density of … M_sun Mpc^-3 for early-type galaxies and … M_sun Mpc^-3
for late-type galaxies. The errors are estimated from Monte Carlo simulations
which include the uncertainties in the M_bh - L relation, the luminosity of the
host spheroid and the intrinsic scatter of the M_bh - L relation. Assuming
supermassive black holes form via baryonic accretion, we find that … per cent
of the Universe's baryons are currently locked up in supermassive black holes.
This result is consistent with our previous estimate based on the M_bh - n
(Sérsic index) relation. Comment: 10 pages, 6 figures, accepted to MNRAS
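A brief numerical sketch of the integration step described above: given a Schechter-like black hole mass function, integrate M times phi(M) to obtain the SMBH mass density. The parameter values and integration limits are illustrative placeholders, not the fitted values from the paper.

```python
# Integrate a Schechter-like black hole mass function to get a mass density.
import numpy as np
from scipy.integrate import quad

def schechter(M, phi_star, M_star, alpha):
    """Number density per unit mass, dN/dM [Mpc^-3 Msun^-1]."""
    x = M / M_star
    return (phi_star / M_star) * x**alpha * np.exp(-x)

phi_star, M_star, alpha = 1e-3, 1e8, -1.1            # placeholder parameters
rho_bh, _ = quad(lambda M: M * schechter(M, phi_star, M_star, alpha),
                 1e6, 1e10)                          # placeholder mass range [Msun]
print(f"SMBH mass density ~ {rho_bh:.2e} Msun / Mpc^3")
```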
A Relationship between Supermassive Black Hole Mass and the Total Gravitational Mass of the Host Galaxy
We investigate the correlation between the mass of a central supermassive
black hole and the total gravitational mass of the host galaxy (M_tot). The
results are based on 43 galaxy-scale strong gravitational lenses from the Sloan
Lens ACS (SLACS) Survey whose black hole masses were estimated through two
scaling relations: the relation between black hole mass and Sersic index (M_bh
- n) and the relation between black hole mass and stellar velocity dispersion
(M_bh - sigma). We use the enclosed mass within R_200, the radius within which
the density profile of the early-type galaxy exceeds the critical density of
the Universe by a factor of 200, determined by gravitational lens models fitted
to HST imaging data, as a tracer of the total gravitational mass. The best-fit
correlation, where M_bh is determined from the M_bh - sigma relation, is log(M_bh)
= (8.18 +/- 0.11) + (1.55 +/- 0.31) (log(M_tot) - 13.0) over 2 orders of
magnitude in M_bh. From a variety of tests, we find that we cannot reliably
infer a connection between M_bh and M_tot from the M_bh - n relation. The M_bh
- M_tot relation provides some of the first, direct observational evidence to
test the prediction that supermassive black hole properties are determined by
the halo properties of the host galaxy. Comment: 29 pages, 10 figures, Accepted for publication in ApJ
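A small worked example of the quoted best-fit relation, using the central values only; the quoted uncertainties and the intrinsic scatter are ignored here.

```python
# log(M_bh) = (8.18 +/- 0.11) + (1.55 +/- 0.31) * (log(M_tot) - 13.0), central values only.
import numpy as np

def mbh_from_mtot(m_tot_msun):
    """Central-value black hole mass [Msun] predicted from total galaxy mass [Msun]."""
    log_mbh = 8.18 + 1.55 * (np.log10(m_tot_msun) - 13.0)
    return 10**log_mbh

for m_tot in (1e12, 1e13, 1e14):
    print(f"M_tot = {m_tot:.0e} Msun  ->  M_bh ~ {mbh_from_mtot(m_tot):.2e} Msun")
```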
