803 research outputs found
Maximizing Welfare in Social Networks under a Utility Driven Influence Diffusion Model
Motivated by applications such as viral marketing, the problem of influence
maximization (IM) has been extensively studied in the literature. The goal is
to select a small number of users to adopt an item such that it results in a
large cascade of adoptions by others. Existing works have three key
limitations. (1) They do not account for economic considerations of a user in
buying/adopting items. (2) Most studies on multiple items focus on competition,
with complementary items receiving limited attention. (3) For the network
owner, maximizing social welfare is important to ensure customer loyalty, which
is not addressed in prior work in the IM literature. In this paper, we address
all three limitations and propose a novel model called UIC that combines
utility-driven item adoption with influence propagation over networks. Focusing
on the mutually complementary setting, we formulate the problem of social
welfare maximization in this novel setting. We show that while the objective
function is neither submodular nor supermodular, surprisingly a simple greedy
allocation algorithm achieves a constant factor of the optimum
expected social welfare. We develop \textsf{bundleGRD}, a scalable version of
this approximation algorithm, and demonstrate, with comprehensive experiments
on real and synthetic datasets, that it significantly outperforms all
baselines. Comment: 33 pages.
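To make the greedy allocation concrete, here is a minimal sketch. It simplifies the paper's setting by treating an allocation as a set of (user, item) pairs rather than item bundles, and the names greedy_allocation, expected_welfare, and budget are illustrative assumptions, not details taken from the abstract; expected_welfare stands in for whatever estimator (e.g. Monte Carlo over the diffusion model) is available.

import itertools

def greedy_allocation(users, items, budget, expected_welfare):
    """Hypothetical greedy sketch: repeatedly add the (user, item) assignment
    with the largest marginal gain in estimated expected social welfare,
    until the budget is exhausted or no assignment improves welfare."""
    allocation = set()
    current = expected_welfare(allocation)
    for _ in range(budget):
        best_gain, best_pair = 0.0, None
        for pair in itertools.product(users, items):
            if pair in allocation:
                continue
            gain = expected_welfare(allocation | {pair}) - current
            if gain > best_gain:
                best_gain, best_pair = gain, pair
        if best_pair is None:          # no remaining assignment helps
            break
        allocation.add(best_pair)
        current += best_gain
    return allocation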
PROTECTION OF PRIVACY THROUGH MICROAGGREGATION
A proposal for maintaining privacy protection in large data bases by the use of partially aggregated data instead of the original individual data. Proper microaggregation techniques can serve to protect the confidential nature of the individual data with minimal information loss. Reference: Data Bases, Computers and the Social Sciences, R. Bisco (ed.), Wiley, 1970, pp. 261-272. Keywords: privacy, data protection, micro-aggregation.
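As a rough illustration of the idea (not the 1970 proposal's exact procedure), a univariate microaggregation pass can sort records on one attribute, form groups of at least k records, and release group means in place of the raw values; the group size k and the single-attribute sorting are assumptions made here for the sketch.

def microaggregate(values, k):
    """Sketch of univariate microaggregation: sort the records, partition them
    into consecutive groups of size >= k, and replace each value with its
    group mean, so no released value is traceable to fewer than k records."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    released = [0.0] * len(values)
    for start in range(0, len(order), k):
        group = order[start:start + k]
        if len(group) < k and start > 0:   # fold a short tail into the previous group
            group = order[start - k:]
        mean = sum(values[i] for i in group) / len(group)
        for i in group:
            released[i] = mean
    return released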
Precision Jet Substructure from Boosted Event Shapes
Jet substructure has emerged as a critical tool for LHC searches, but studies
so far have relied heavily on shower Monte Carlo simulations, which formally
approximate QCD at leading-log level. We demonstrate that systematic
higher-order QCD computations of jet substructure can be carried out by
boosting global event shapes by a large momentum Q, and accounting for effects
due to finite jet size, initial-state radiation (ISR), and the underlying event
(UE) as 1/Q corrections. In particular, we compute the 2-subjettiness
substructure distribution for boosted Z -> q qbar events at the LHC at
next-to-next-to-next-to-leading-log order. The calculation is greatly
simplified by recycling the known results for the thrust distribution in e+ e-
collisions. The 2-subjettiness distribution quickly saturates, becoming Q
independent for Q > 400 GeV. Crucially, the effects of jet contamination from
ISR/UE can be subtracted out analytically at large Q, without knowing their
detailed form. Amusingly, the Q=infinity and Q=0 distributions are related by a
scaling by e, up to next-to-leading-log order. Comment: 5 pages, 7 figures.
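For orientation, 2-subjettiness is a member of the standard N-subjettiness family of jet shapes; the normalization below follows the common definition and may differ from the conventions used in the paper.

\tau_N = \frac{1}{d_0}\sum_k p_{T,k}\,\min\bigl\{\Delta R_{1,k},\,\Delta R_{2,k},\,\ldots,\,\Delta R_{N,k}\bigr\},
\qquad d_0 = \sum_k p_{T,k}\,R_0 ,

where the sum runs over the jet constituents k, \Delta R_{i,k} is the distance of constituent k to candidate subjet axis i, and R_0 is the jet radius; small \tau_2 signals a two-pronged jet such as a boosted Z -> q qbar decay.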
Neutron Flux at the Gran Sasso Underground Laboratory Revisited
The neutron flux induced by radioactivity at the Gran Sasso underground
laboratory is revisited. We have performed calculations and Monte Carlo
simulations; the results offer an independent check to the available
experimental data reported by different authors, which vary rather widely. This
study gives detailed information on the expected spectrum and on the
variability of the neutron flux due to possible variations of the water content
of the environment. Comment: 14 pages, 3 figures, systematic uncertainties added.
On the Design of Cryptographic Primitives
The main objective of this work is twofold. On the one hand, it gives a brief
overview of the area of two-party cryptographic protocols. On the other hand,
it proposes new schemes and guidelines for improving the practice of robust
protocol design. To achieve this twofold goal, a tour through the two main
cryptographic primitives is carried out. Within
this survey, some of the most representative algorithms based on the Theory of
Finite Fields are provided and new general schemes and specific algorithms
based on Graph Theory are proposed.
Lift-and-Round to Improve Weighted Completion Time on Unrelated Machines
We consider the problem of scheduling jobs on unrelated machines so as to
minimize the sum of weighted completion times. Our main result is a
(3/2 - c)-approximation algorithm for some fixed c > 0, improving upon the
long-standing bound of 3/2 (independently due to Skutella, Journal of the ACM,
2001, and Sethuraman & Squillante, SODA, 1999). To do this, we first introduce
a new lift-and-project based SDP relaxation for the problem. This is necessary
as the previous convex programming relaxations have an integrality gap of 3/2.
Second, we give a new general bipartite-rounding procedure that produces
an assignment with certain strong negative correlation properties. Comment: 21 pages, 4 figures.
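To make the objective concrete, here is a small sketch (not the paper's algorithm) that evaluates the weighted completion time of a fixed job-to-machine assignment; it relies on the classical fact that, once the assignment is fixed, each machine should sequence its jobs in non-increasing order of w_j / p_ij (Smith's rule). The function name and data layout are assumptions made for illustration.

def weighted_completion_time(assignment, p, w):
    """Evaluate sum_j w_j * C_j on unrelated machines.

    assignment[j] = machine chosen for job j, p[i][j] = processing time of
    job j on machine i, w[j] = weight of job j. Each machine orders its jobs
    by Smith's rule (largest w_j / p_ij first), which is optimal per machine."""
    machines = {}
    for j, i in enumerate(assignment):
        machines.setdefault(i, []).append(j)
    total = 0.0
    for i, jobs in machines.items():
        jobs.sort(key=lambda j: w[j] / p[i][j], reverse=True)
        t = 0.0
        for j in jobs:
            t += p[i][j]            # completion time C_j of job j on machine i
            total += w[j] * t
    return total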
Limitations to Frechet's Metric Embedding Method
Frechet's classical isometric embedding argument has evolved to become a
major tool in the study of metric spaces. An important example of a Frechet
embedding is Bourgain's embedding. The authors have recently shown that for
every e>0 any n-point metric space contains a subset of size at least n^(1-e)
which embeds into l_2 with distortion O(\log(2/e) /e). The embedding we used is
non-Frechet, and the purpose of this note is to show that this is not
coincidental. Specifically, for every e>0, we construct arbitrarily large
n-point metric spaces, such that the distortion of any Frechet embedding into
l_p on subsets of size at least n^{1/2 + e} is \Omega((\log n)^{1/p}). Comment: 10 pages, 1 figure.
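A Frechet embedding assigns each point the vector of its distances to a family of subsets. The sketch below is a Bourgain-style randomized construction given only for illustration, using the standard sampling schedule (subsets of geometrically increasing size); the function name and parameters are assumptions, not anything specific to this note.

import random

def frechet_embedding(points, dist, coords_per_scale=4, seed=0):
    """Bourgain-style Frechet embedding sketch: each coordinate of a point x
    is its distance to a random subset A, i.e. min_{a in A} dist(x, a).
    `points` is a list; subsets are sampled at sizes 1, 2, 4, ..., n."""
    rng = random.Random(seed)
    n = len(points)
    subsets = []
    size = 1
    while size <= n:
        for _ in range(coords_per_scale):
            subsets.append(rng.sample(points, size))
        size *= 2
    return {x: [min(dist(x, a) for a in A) for A in subsets] for x in points}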
Approximate Quantum Fourier Transform and Decoherence
We discuss the advantages of using the approximate quantum Fourier transform
(AQFT) in algorithms which involve periodicity estimations. We analyse quantum
networks performing AQFT in the presence of decoherence and show that extensive
approximations can be made before the accuracy of AQFT (as compared with
regular quantum Fourier transform) is compromised. We show that for some
computations an approximation may imply better performance. Comment: 14 pages, 10 fig. (8 *.eps files). More information on
http://eve.physics.ox.ac.uk/QChome.html
http://www.physics.helsinki.fi/~kasuomin
http://www.physics.helsinki.fi/~kira/group.htm
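As background for what "approximate" means here, the AQFT circuit is the standard QFT circuit with the controlled phase rotations R_k dropped beyond a cutoff bandwidth m. The state-vector simulation below is a minimal numpy sketch of that construction (the function name and interface are ours); it is not the decoherence analysis of the paper.

import numpy as np

def aqft(state, n, m):
    """Approximate quantum Fourier transform on an n-qubit state vector,
    keeping only controlled rotations R_k = diag(1, exp(2*pi*i/2^k)) with
    k <= m; m = n reproduces the exact QFT."""
    psi = np.asarray(state, dtype=complex).reshape((2,) * n)
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    for i in range(n):                       # qubit 0 holds the most significant bit
        psi = np.moveaxis(np.tensordot(H, psi, axes=([1], [i])), 0, i)
        for j in range(i + 1, n):
            k = j - i + 1
            if k > m:                        # the approximation: drop fine rotations
                continue
            idx = [slice(None)] * n
            idx[i] = idx[j] = 1              # controlled-R_k (control j, target i)
            psi[tuple(idx)] *= np.exp(2j * np.pi / 2 ** k)
    psi = np.transpose(psi, list(reversed(range(n))))   # final bit-reversal swaps
    return psi.reshape(-1)

# Sanity check for m = n (exact QFT): for a basis vector e_x of length 2**n,
# aqft(e_x, n, n) should match np.fft.ifft(e_x) * np.sqrt(2**n).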
Distance and the pattern of intra-European trade
Given an undirected graph G = (V, E) and a subset of terminals T ⊆ V, the element-connectivity κ'_G(u, v) of two terminals u, v ∈ T is the maximum number of u-v paths that are pairwise disjoint in both edges and non-terminals V \ T (the paths need not be disjoint in terminals). Element-connectivity is more general than edge-connectivity and less general than vertex-connectivity. Hind and Oellermann [21] gave a graph reduction step that preserves the global element-connectivity of the graph. We show that this step also preserves local connectivity, that is, all the pairwise element-connectivities of the terminals. We give two applications of this reduction step to connectivity and network design problems.
• Given a graph G and disjoint terminal sets T1, T2, ..., Tm, we seek a maximum number of element-disjoint Steiner forests where each forest connects each Ti. We prove that if each Ti is k-element-connected then there exist Ω(k / (log h log m)) element-disjoint Steiner forests, where h = |∪i Ti|. If G is planar (or, more generally, has fixed genus), we show that there exist Ω(k) Steiner forests. Our proofs are constructive, giving poly-time algorithms to find these forests; these are the first non-trivial algorithms for packing element-disjoint Steiner forests.
• We give a very short and intuitive proof of a spider-decomposition theorem of Chuzhoy and Khanna [12] in the context of the single-sink k-vertex-connectivity problem; this yields a simple and alternative analysis of an O(k log n) approximation.
Our results highlight the effectiveness of the element-connectivity reduction step; we believe it will find more applications in the future
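Element-connectivity itself can be computed with a single max-flow once every non-terminal vertex is split into an in/out pair of unit capacity. The networkx-based sketch below is only meant to make the definition concrete; the function name and interface are ours.

import networkx as nx

def element_connectivity(G, terminals, u, v):
    """kappa'_G(u, v): maximum number of u-v paths pairwise disjoint in edges
    and non-terminal vertices, via a unit-capacity max-flow in which each
    non-terminal vertex is split (terminals are left uncapacitated)."""
    terminals = set(terminals)
    into = lambda x: x if x in terminals else (x, "in")
    outof = lambda x: x if x in terminals else (x, "out")
    D = nx.DiGraph()
    for x in G.nodes:
        if x not in terminals:
            D.add_edge((x, "in"), (x, "out"), capacity=1)   # vertex capacity 1
    for a, b in G.edges:
        D.add_edge(outof(a), into(b), capacity=1)           # edge capacity 1,
        D.add_edge(outof(b), into(a), capacity=1)           # both directions
    return nx.maximum_flow_value(D, outof(u), into(v))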
The Age-Redshift Relation for Standard Cosmology
We present compact, analytic expressions for the age-redshift relation
for standard Friedmann-Lemaître-Robertson-Walker (FLRW)
cosmology. The new expressions are given in terms of incomplete Legendre
elliptic integrals and evaluate much faster than by direct numerical
integration. Comment: 13 pages, 3 figures.
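The relation being reduced to elliptic integrals is the standard FLRW age integral; in the usual notation (our choice of symbols, neglecting radiation, and not necessarily the paper's conventions),

t(z) = \frac{1}{H_0}\int_{z}^{\infty}\frac{dz'}{(1+z')\sqrt{\Omega_m(1+z')^3+\Omega_k(1+z')^2+\Omega_\Lambda}} ,

and the paper's contribution is to evaluate this integral in closed form via incomplete Legendre elliptic integrals rather than numerically.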