Density Matching for Bilingual Word Embedding
Recent approaches to cross-lingual word embedding have generally been based
on linear transformations between the sets of embedding vectors in the two
languages. In this paper, we propose an approach that instead expresses the two
monolingual embedding spaces as probability densities defined by a Gaussian
mixture model, and matches the two densities using a method called normalizing
flow. The method requires no explicit supervision, and can be learned with only
a seed dictionary of words that have identical strings. We argue that this
formulation has several intuitively attractive properties, particularly with
respect to improving robustness and generalization to mappings between
difficult language pairs or word pairs. On a benchmark data set of bilingual
lexicon induction and cross-lingual word similarity, our approach can achieve
competitive or superior performance compared to state-of-the-art published
results, with particularly strong results being found on etymologically distant
and/or morphologically rich languages.
Comment: Accepted by NAACL-HLT 2019
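To make the density-matching idea concrete, here is a minimal sketch — not the paper's implementation, which uses expressive normalizing flows over real word embeddings. It fits a Gaussian mixture to synthetic "target" vectors and learns a single invertible linear map by maximizing the change-of-variables likelihood; all data, dimensions, and names (gmm_y, neg_log_lik) are invented for illustration.

import numpy as np
from scipy.optimize import minimize
from sklearn.mixture import GaussianMixture

# Toy bimodal "source" embeddings and a rotated, noisy "target" copy.
rng = np.random.default_rng(0)
d = 2
X = np.concatenate([rng.normal(loc=(-2.0, 0.0), size=(500, d)),
                    rng.normal(loc=(+2.0, 0.0), size=(500, d))])
R = np.linalg.qr(rng.normal(size=(d, d)))[0]     # hidden ground-truth map
Y = X @ R.T + 0.05 * rng.normal(size=(1000, d))

# Express the target space as a probability density: a Gaussian mixture.
gmm_y = GaussianMixture(n_components=4, random_state=0).fit(Y)

def neg_log_lik(w_flat):
    """Mean NLL of mapped source points under the target density, with the
    change-of-variables correction for a linear flow:
    log p_X(x) = log p_Y(W x) + log |det W|."""
    W = w_flat.reshape(d, d)
    return -(gmm_y.score(X @ W.T) + np.log(abs(np.linalg.det(W))))

res = minimize(neg_log_lik, np.eye(d).ravel(), method="Nelder-Mead")
print("NLL with identity map:", neg_log_lik(np.eye(d).ravel()))
print("NLL with learned map: ", res.fun)

Minimizing this objective drives the mapped source density toward the target density; the paper's method replaces the single linear map with a normalizing flow and adds weak supervision from identically spelled words.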
Exploring the (Efficient) Frontiers of Portfolio Optimization
The cardinality-constrained portfolio optimization problem is NP-hard. Its Pareto front (or the Efficient Frontier - EF) is usually calculated by stochastic algorithms, including EAs. However, in certain cases the EF may be decomposed into a union of sub-EFs. In this work we propose a systematic process of excluding sub-EFs dominated by others, enabling us to calculate non-dominated sub-EFs. We then calculate whole EFs to a high degree of accuracy for small cardinalities, providing an alternative to EAs in those cases. We can also use this to provide insight into EAs on the problem.
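To make the sub-EF idea concrete, here is a brute-force sketch of our own construction, not the paper's algorithm: for a small cardinality k, every k-subset of assets contributes a sub-EF, and the overall EF is the non-dominated union of those sub-EFs. Data are synthetic and weights are not sign-constrained, so this is illustrative only.

import itertools
import numpy as np

# Toy universe (synthetic data): expected returns mu, covariance S.
rng = np.random.default_rng(1)
n, k = 8, 2                         # 8 assets, target cardinality k = 2
mu = rng.uniform(0.02, 0.12, n)
A = rng.normal(size=(n, n))
S = A @ A.T / n + 0.01 * np.eye(n)  # symmetric positive definite

def sub_ef(idx, n_pts=50):
    """Mean-variance frontier restricted to the assets in idx, via the
    closed-form solution of: min w'Cw  s.t.  w'mu = r, w'1 = 1."""
    m, C = mu[list(idx)], S[np.ix_(list(idx), list(idx))]
    Ci = np.linalg.inv(C)
    one = np.ones(len(idx))
    a, b, c = one @ Ci @ one, one @ Ci @ m, m @ Ci @ m
    det = a * c - b * b
    pts = []
    for r in np.linspace(m.min(), m.max(), n_pts):
        lam, gam = (c - b * r) / det, (a * r - b) / det
        w = Ci @ (lam * one + gam * m)
        pts.append((w @ C @ w, r))  # (variance, return)
    return pts

# Union of all sub-EFs, then keep only non-dominated points.
pts = [p for idx in itertools.combinations(range(n), k) for p in sub_ef(idx)]
pts.sort()                          # ascending risk
ef, best_r = [], -np.inf
for v, r in pts:
    if r > best_r:                  # dominated iff no higher return at <= risk
        ef.append((v, r))
        best_r = r
print(f"{len(ef)} non-dominated points on the cardinality-{k} frontier")

The combinatorial cost grows as C(n, k), which is why exhaustive enumeration is only viable for small cardinalities — exactly the regime where the paper offers an alternative to EAs.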
Optimizing egalitarian performance in the side-effects model of colocation for data center resource management
In data centers, up to dozens of tasks are colocated on a single physical
machine. Machines are used more efficiently, but tasks' performance
deteriorates, as colocated tasks compete for shared resources. As tasks are
heterogeneous, the resulting performance dependencies are complex. In our
previous work [18] we proposed a new combinatorial optimization model that uses
two parameters of a task - its size and its type - to characterize how a task
influences the performance of other tasks allocated to the same machine.
In this paper, we study the egalitarian optimization goal: maximizing the
worst-off performance. This problem generalizes the classic makespan
minimization on multiple processors (P||Cmax). We prove that
polynomially-solvable variants of multiprocessor scheduling are NP-hard and
hard to approximate when the number of types is not constant. For a constant
number of types, we propose a PTAS, a fast approximation algorithm, and a
series of heuristics. We simulate the algorithms on instances derived from a
trace of one of Google's clusters. Algorithms aware of jobs' types lead to better
performance compared with algorithms solving P||Cmax.
The notion of type enables us to model performance degradation caused by
colocation while still using standard combinatorial optimization methods. Types add a layer of
additional complexity. However, our results - approximation algorithms and good
average-case performance - show that types can be handled efficiently.
Comment: Author's version of a paper published in Euro-Par 2017 Proceedings;
extends the published paper with additional results and proofs
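For reference, the P||Cmax baseline that the type-aware algorithms are compared against can be approximated by the classic LPT (Longest Processing Time) rule. The sketch below is a textbook implementation, not the paper's code:

import heapq

def lpt_makespan(jobs, m):
    """Longest-Processing-Time list scheduling for P||Cmax: sort jobs by
    size descending, always assign the next job to the currently
    least-loaded machine. A classic 4/3 - 1/(3m) approximation."""
    loads = [0.0] * m
    heapq.heapify(loads)
    for size in sorted(jobs, reverse=True):
        least = heapq.heappop(loads)
        heapq.heappush(loads, least + size)
    return max(loads)

print(lpt_makespan([7, 5, 4, 3, 3, 2], m=2))  # -> 12.0

The paper's point is that a type-oblivious rule like this misprices interference between colocated tasks, which is what the size-and-type model captures.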
The Progenitors of Type Ia Supernovae: Are They Supersoft Sources?
In a canonical model, the progenitors of Type Ia supernovae (SNe Ia) are
accreting, nuclear-burning white dwarfs (NBWDs), which explode when the white
dwarf reaches the Chandrasekhar mass, M_C. Such massive NBWDs are hot (kT ~100
eV), luminous (L ~ 10^{38} erg/s), and are potentially observable as luminous
supersoft X-ray sources (SSSs). During the past several years, surveys for soft
X-ray sources in external galaxies have been conducted. This paper shows that
the results falsify the hypothesis that a large fraction of progenitors are
NBWDs which are presently observable as SSSs. The data also place limits on
sub-M_C models. While Type Ia supernova progenitors may pass through one or
more phases of SSS activity, these phases are far shorter than the time needed
to accrete most of the matter that brings them close to M_C.
Comment: Submitted to ApJ 18 November 2009; 17 pages, 2 figures
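The underlying argument is a duty-cycle estimate. With illustrative canonical numbers (not the paper's exact inputs), it runs as follows:

# Order-of-magnitude duty-cycle argument; all values are illustrative
# canonical figures from the literature, not this paper's inputs.
delta_m = 0.7              # M_sun a sub-Chandrasekhar WD must accrete
mdot = 3e-7                # M_sun/yr, steady nuclear-burning accretion rate
t_accrete = delta_m / mdot           # ~2e6 yr spent as a luminous NBWD
snia_rate = 3e-3                     # SNe Ia per year in a large galaxy
n_expected = snia_rate * t_accrete   # steady-state count of visible SSSs
print(f"t_accrete ~ {t_accrete:.1e} yr -> ~{n_expected:.0f} SSSs expected")

If every progenitor spent the full accretion time as a bright SSS, a large galaxy should host thousands of such sources; the observed counts are orders of magnitude smaller, which is the tension the survey data quantify.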
The Progenitors of Type Ia Supernovae: II. Are they Double-Degenerate Binaries? The Symbiotic Channel
In order for a white dwarf (WD) to achieve the Chandrasekhar mass, M_C, and
explode as a Type Ia supernova (SNIa), it must interact with another star,
either accreting matter from or merging with it. The failure to identify the
types of binaries which produce SNeIa is the "progenitor problem". Its solution
is required if we are to utilize the full potential of SNeIa to elucidate basic
cosmological and physical principles. In single-degenerate models, a WD
accretes and burns matter at high rates. Nuclear-burning WDs (NBWDs) with mass
close to M_C are hot and luminous, potentially detectable as supersoft x-ray
sources (SSSs). In previous work we showed that > 90-99% of the required number
of progenitors do not appear as SSSs during most of the crucial phase of mass
increase. The obvious implication is that double-degenerate (DD) binaries form
the main class of progenitors. We show in this paper, however, that many
binaries that later become DDs must pass through a long-lived NBWD phase during
which they are potentially detectable as SSSs. The paucity of SSSs is therefore
not a strong argument in favor of DD models. Those NBWDs that are the
progenitors of DD binaries are likely to appear as symbiotic binaries for
intervals > 10^6 years. In fact, symbiotic pre-DDs should be common, whether or
not the WDs eventually produce SNeIa. The key to solving the progenitor problem
lies in understanding the appearance of NBWDs. Most do not appear as SSSs most
of the time. We therefore consider the evolution of NBWDs to address the
question of what their appearance may be and how we can hope to detect them.
Comment: 24 pages; 5 figures; submitted to ApJ
The Millennium Galaxy Catalogue: The M_bh-L derived supermassive black hole mass function
Supermassive black hole mass estimates are derived for 1743 galaxies from the
Millennium Galaxy Catalogue using the recently revised empirical relation
between supermassive black hole mass and the luminosity of the host spheroid.
The MGC spheroid luminosities are based on Sérsic-bulge plus
exponential-disc decompositions. The majority of black hole masses reside
between and an upper limit of . Using previously determined space density
weights, we derive the SMBH mass function, which we fit with a Schechter-like
function. Integrating the black hole mass function over gives a supermassive
black hole mass density of M_sun Mpc^-3 for early-type galaxies and
M_sun Mpc^-3 for late-type galaxies. The errors are estimated from Monte Carlo
simulations which include the uncertainties in the M_bh-L relation, the
luminosity of the host spheroid and the intrinsic scatter of the M_bh-L
relation. Assuming supermassive black holes form via baryonic accretion, we
find that per cent of the Universe's baryons are currently locked up in
supermassive black holes. This result is consistent with our previous estimate
based on the M_bh-n (Sérsic index) relation.
Comment: 10 pages, 6 figures, accepted to MNRAS
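As a generic illustration of the fit-and-integrate step (the parameters below are invented placeholders, since the paper's best-fit values were lost from this record), a Schechter-like mass function integrates to a mass density like so:

import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# Illustrative Schechter parameters: normalization (Mpc^-3),
# characteristic mass (M_sun), and low-mass slope. Not the paper's fit.
phi_star, m_star, alpha = 2e-3, 1e8, -1.1

def phi(m):
    """Number density per unit mass, Schechter form."""
    x = m / m_star
    return (phi_star / m_star) * x**alpha * np.exp(-x)

# SMBH mass density: integrate m * phi(m) over a chosen mass range.
rho, _ = quad(lambda m: m * phi(m), 1e6, 1e10)
# Closed form over (0, inf): phi_star * m_star * Gamma(alpha + 2).
print(f"rho ~ {rho:.2e} M_sun Mpc^-3 "
      f"(Gamma check: {phi_star * m_star * gamma(alpha + 2):.2e})")

The closed-form check shows why the integral converges for any slope alpha > -2; the quoted early/late-type densities in the abstract come from exactly this kind of weighted integration.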
Exploring the moderation relationships among supply chain integration, procurement performance, and buyer-supplier trust
Procurement is a key function within the automotive supply chain, especially during the Brexit period. Supply chain integration has been widely applied within the manufacturing/automotive industry; however, the extant literature lacks exploration of its impact on procurement performance, and this relationship is closely correlated with trust between suppliers and buyers. This research empirically explores a three-way moderation effect among supply chain integration, supplier-buyer trust and procurement performance via 126 responses from UK automotive manufacturers.
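For readers unfamiliar with moderation analysis, the standard operationalization is an interaction term in a regression. The sketch below uses synthetic data and invented variable names (sci, trust, perf), not the study's survey instrument; the sample size merely mirrors the abstract's n = 126.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for survey data.
rng = np.random.default_rng(42)
n = 126
df = pd.DataFrame({"sci": rng.normal(size=n),     # supply chain integration
                   "trust": rng.normal(size=n)})  # buyer-supplier trust
df["perf"] = (0.5 * df["sci"] + 0.3 * df["trust"]
              + 0.4 * df["sci"] * df["trust"]     # built-in moderation effect
              + rng.normal(scale=0.5, size=n))

# "sci * trust" expands to sci + trust + sci:trust; the sci:trust
# coefficient is the moderation (interaction) effect of interest.
model = smf.ols("perf ~ sci * trust", data=df).fit()
print(model.params)

A significant interaction coefficient means the effect of integration on procurement performance depends on the level of trust, which is the hypothesis structure the abstract describes.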
Accuracy and Stability of Virtual Source Method for Numerical Simulations of Nonlinear Water Waves
The virtual source method (VSM) developed by Langfeld et al. (2016) is based upon the integral equations derived by using Green’s identity with Laplace’s equation for the velocity potential. These authors presented preliminary results using the method to simulate standing waves. In this paper, we numerically model a non-linear standing wave using the VSM to illustrate energy and volume conservation. Analytical formulas are derived to compute the volume and potential energy, while the kinetic energy is computed by numerical integration. Results are compared with both theory and the boundary element method (BEM).
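As a small check in the same spirit (a linear standing wave with invented parameters, not the paper's non-linear VSM computation), the analytic potential energy can be compared against numerical quadrature of the free-surface integral:

import numpy as np

# Free surface eta(x) = a*cos(k*x) over one wavelength L, at the instant
# of maximum displacement; rho, g, a, L are illustrative values.
rho, g = 1000.0, 9.81
a = 0.1
L = 2.0 * np.pi
k = 2.0 * np.pi / L
x = np.linspace(0.0, L, 2001)
eta = a * np.cos(k * x)

# PE relative to still water: (rho*g/2) * integral of eta^2 dx,
# evaluated with the trapezoidal rule.
f = eta**2
pe_numeric = 0.5 * rho * g * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))
pe_analytic = 0.25 * rho * g * a**2 * L  # mean of cos^2 over a period is 1/2
print(pe_numeric, pe_analytic)           # agree to quadrature accuracy

Agreement between the closed-form and quadrature values is the kind of energy-conservation diagnostic the paper uses to assess the VSM's accuracy and stability.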
A longitudinal analysis of judgement approaches to sustainability paradoxes
This research investigates how tourism executives heuristically navigate sustainable tourism paradoxes at a time of unprecedented global change. We do so longitudinally by applying a ‘then’ and ‘now’ perspective and structural narrative analysis to in-depth interview data collected in 2014 and again in 2022, posing the same questions to the same 12 world-renowned sustainable tourism executives. The research provides an original investigation of the paradox mindset needed to grapple with the complex challenges of carbon creation in travel, competing stakeholder needs, and managing growth with finite resources. Findings provide insight into sustainability paradoxes as mindsets vary between rejection, awareness and acceptance. Empathy ‘now’ replaces elitism ‘then’. Respondents reject the myth of sustainability sacrifice, instead acknowledging sustainability as a necessary driver for good business. Further, despite calls for greater ethical praxis, concrete action appears to fade in the face of self-interest and the ‘tourism saves’ mantra.