Dynamic Computation of Network Statistics via Updating Schema
In this paper we derive an updating scheme for calculating important network statistics such as the degree and the clustering coefficient, aiming to reduce the amount of computation needed to track the evolving behavior of large networks and, more importantly, to provide efficient methods for potential use in modeling the evolution of networks. Using the updating scheme, the network statistics can be computed and updated much faster than by recalculating them from scratch each time the network changes. The update formulas can also be used to determine which edge or node leads to the extremal change of a network statistic, providing a way of predicting or designing the evolution rules of networks.
Comment: 17 pages, 6 figures
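As a rough illustration of the idea (a minimal sketch, not the paper's exact formulas), the snippet below maintains degrees and per-node triangle counts incrementally, so that adding one edge costs a single set intersection of the two endpoints' neighbor lists rather than a full recount; local clustering coefficients then follow directly from the cached counts. All names are illustrative.

```python
from collections import defaultdict

class IncrementalGraph:
    """Toy incremental tracker for degree, triangle count and local
    clustering; per-edge update cost is O(min(deg(u), deg(v)))
    instead of a full recomputation over the graph."""

    def __init__(self):
        self.adj = defaultdict(set)   # node -> set of neighbors
        self.tri = defaultdict(int)   # node -> number of triangles through it

    def add_edge(self, u, v):
        if v in self.adj[u] or u == v:
            return
        common = self.adj[u] & self.adj[v]   # each common neighbor closes a triangle
        for w in common:
            self.tri[w] += 1
        self.tri[u] += len(common)
        self.tri[v] += len(common)
        self.adj[u].add(v)
        self.adj[v].add(u)

    def degree(self, u):
        return len(self.adj[u])

    def clustering(self, u):
        d = self.degree(u)
        return 2.0 * self.tri[u] / (d * (d - 1)) if d > 1 else 0.0
```

Because the update also reveals how many triangles a candidate edge would close before it is inserted, the same bookkeeping can be used to pick the edge with the extremal effect on the statistic, as the abstract suggests.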
Eigenvalue Estimation of Differential Operators
We demonstrate how linear differential operators could be emulated by a
quantum processor, should one ever be built, using the Abrams-Lloyd algorithm.
Given a linear differential operator of order 2S, acting on functions
psi(x_1,x_2,...,x_D) with D arguments, the computational cost required to
estimate a low order eigenvalue to accuracy Theta(1/N^2) is
Theta((2(S+1)(1+1/nu)+D)log N) qubits and O(N^{2(S+1)(1+1/nu)} (D log N)^c)
gate operations, where N is the number of points to which each argument is
discretized, and nu and c are implementation-dependent constants of O(1). Optimal
classical methods require Theta(N^D) bits and Omega(N^D) gate operations to
perform the same eigenvalue estimation. The Abrams-Lloyd algorithm thereby
leads to exponential reduction in memory and polynomial reduction in gate
operations, provided the domain has sufficiently large dimension D >
2(S+1)(1+1/nu). In the case of Schrodinger's equation, ground state energy
estimation of two or more particles can in principle be performed with fewer
quantum mechanical gates than classical gates.
Comment: significant content revisions: more algorithm details and a brief analysis of convergence
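To make the quoted scalings concrete, the toy script below plugs example parameters into the leading-order resource counts stated in the abstract, suppressing all hidden constants and polylog factors (the specific values of S, N and nu are illustrative choices, not from the paper):

```python
import math

# Illustrative only: evaluate the leading-order resource counts above.
S, N, nu = 1, 1024, 1.0                  # order-2S = 2 operator, 1024 points per axis
gate_exp = 2 * (S + 1) * (1 + 1 / nu)    # quantum gates ~ N^gate_exp (up to polylog)

for D in (2, 4, 8, 12):
    qubits = (gate_exp + D) * math.log2(N)   # Theta((2(S+1)(1+1/nu)+D) log N)
    print(f"D={D:2d}: ~{qubits:.0f} qubits vs ~N^D = 2^{D*10} classical bits; "
          f"quantum gates ~N^{gate_exp:.0f} vs classical ~N^{D}")
```

With these parameters 2(S+1)(1+1/nu) = 8, so the quantum gate count wins once D exceeds 8, while the memory advantage (logarithmic versus exponential in N) holds for every D.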
Fast linear algebra is stable
In an earlier paper, we showed that a large class of fast recursive matrix multiplication algorithms is stable in a normwise sense, and that in fact if multiplication of n-by-n matrices can be done by any algorithm in O(n^{omega + eta}) operations for any eta > 0, then it can be done stably in O(n^{omega + eta}) operations for any eta > 0. Here we extend this result to show that essentially all standard linear algebra operations, including LU decomposition, QR decomposition, linear equation solving, matrix inversion, solving least squares problems, (generalized) eigenvalue problems and the singular value decomposition, can also be done stably (in a normwise sense) in O(n^{omega + eta}) operations.
Comment: 26 pages; final version; to appear in Numerische Mathematik
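For readers unfamiliar with the class of algorithms in question, here is a minimal numpy sketch of Strassen's recursive multiplication, the prototypical fast recursive scheme covered by the stability result (restricted to n a power of two; the cutoff to dense multiplication is an arbitrary implementation choice):

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Strassen's recursive n-by-n multiplication (n a power of two),
    using 7 half-size products instead of 8."""
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    m = n // 2
    A11, A12, A21, A22 = A[:m, :m], A[:m, m:], A[m:, :m], A[m:, m:]
    B11, B12, B21, B22 = B[:m, :m], B[:m, m:], B[m:, :m], B[m:, m:]
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:m, :m] = M1 + M4 - M5 + M7
    C[:m, m:] = M3 + M5
    C[m:, :m] = M2 + M4
    C[m:, m:] = M1 - M2 + M3 + M6
    return C

# Sanity check: A = np.random.rand(256, 256); B = np.random.rand(256, 256)
# np.allclose(strassen(A, B), A @ B) should hold up to roundoff.
```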
Approximating Spectral Impact of Structural Perturbations in Large Networks
Determining the effect of structural perturbations on the eigenvalue spectra
of networks is an important problem because the spectra characterize not only
their topological structures, but also their dynamical behavior, such as
synchronization and cascading processes on networks. Here we develop a theory for estimating the change of the largest eigenvalue of the adjacency matrix or the extreme eigenvalues of the graph Laplacian when a small but otherwise arbitrary set of links is added to or removed from the network. We demonstrate the effectiveness
of our approximation schemes using both real and artificial networks, showing
in particular that we can accurately obtain the spectral ranking of small
subgraphs. We also propose a local iterative scheme which computes the relative
ranking of a subgraph using only the connectivity information of its neighbors
within a few links. Our results may not only contribute to our theoretical
understanding of dynamical processes on networks, but also lead to practical
applications in ranking subgraphs of real complex networks.
Comment: 9 pages, 3 figures, 2 tables
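A minimal sketch in the spirit of such approximation schemes, using standard first-order perturbation theory for a symmetric adjacency matrix (not necessarily the authors' exact formulas): with u the unit leading eigenvector, adding the undirected links {(i, j)} changes the largest eigenvalue by approximately the sum of 2 u_i u_j over the added links.

```python
import numpy as np

def estimate_delta_lambda(A, added_edges):
    """First-order estimate of the change in the largest adjacency
    eigenvalue when undirected links are added: d(lambda) ~ u^T dA u,
    with u the unit leading eigenvector of the symmetric matrix A."""
    _, vecs = np.linalg.eigh(A)       # eigh returns eigenvalues in ascending order
    u = vecs[:, -1]                   # leading eigenvector
    return sum(2.0 * u[i] * u[j] for i, j in added_edges)
```

Comparing this estimate against an exact recomputation of the spectrum on small test graphs is a quick way to see the regime (small perturbations) in which such approximations rank candidate subgraphs correctly.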
Smooth analysis of the condition number and the least singular value
Let a be a complex random variable with mean zero and bounded variance. Let N_n be the random matrix of size n whose entries are iid copies of a, and let M be a fixed matrix of the same size. The goal of this paper is to give a general estimate for the condition number and least singular value of the matrix M + N_n, generalizing an earlier result of Spielman and Teng for the case when a is gaussian.
Our investigation reveals the interesting fact that the "core" matrix M does play a role in tail bounds for the least singular value of M + N_n. This does not occur in the Spielman-Teng analysis, where a is gaussian. Consequently, our general estimate involves the norm ||M||. In the special case when ||M|| is relatively small, this estimate is nearly optimal and extends or refines existing results.
Comment: 20 pages. An erratum to the published version has been added
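A quick numerical probe of this setting (the diagonal core matrix M and the 1/sqrt(n) noise normalization are illustrative choices, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
M = np.diag(np.linspace(0.0, 5.0, n))          # fixed "core" matrix (illustrative)
N = rng.standard_normal((n, n)) / np.sqrt(n)   # iid entries, mean zero, bounded variance
s = np.linalg.svd(M + N, compute_uv=False)     # singular values, descending
print("least singular value:", s[-1])
print("condition number:   ", s[0] / s[-1])
```

Repeating the experiment over many draws of N and over core matrices of different norms gives an empirical view of how ||M|| enters the tail behavior of the least singular value.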
Flexible and Robust Privacy-Preserving Implicit Authentication
Implicit authentication consists of a server authenticating a user based on
the user's usage profile, instead of/in addition to relying on something the
user explicitly knows (passwords, private keys, etc.). While implicit
authentication makes identity theft by third parties more difficult, it
requires the server to learn and store the user's usage profile. Recently, the
first privacy-preserving implicit authentication system was presented, in which
the server does not learn the user's profile. It uses an ad hoc two-party computation protocol to compare the user's freshly sampled features against the user's encrypted stored profile. The protocol requires storing the usage
profile and comparing against it using two different cryptosystems, one of them
order-preserving; furthermore, features must be numerical. We present here a
simpler protocol based on set intersection that has the advantages of: i)
requiring only one cryptosystem; ii) not leaking the relative order of fresh
feature samples; iii) being able to deal with any type of features (numerical
or non-numerical).
Keywords: Privacy-preserving implicit authentication, privacy-preserving set
intersection, implicit authentication, active authentication, transparent
authentication, risk mitigation, data brokers.
Comment: IFIP SEC 2015 - Intl. Information Security and Privacy Conference, May 26-28, 2015, IFIP AICT, Springer, to appear
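The matching logic behind such a scheme can be sketched in plaintext as a set-intersection similarity score; in the actual protocol this comparison is carried out under encryption via privacy-preserving set intersection, so the server never sees the raw feature sets. The feature strings and threshold below are purely illustrative.

```python
def implicit_auth_score(fresh_features, stored_profile):
    """Plaintext sketch of the set-intersection comparison only; the real
    scheme evaluates this privately, without revealing either set."""
    fresh, profile = set(fresh_features), set(stored_profile)
    return len(fresh & profile) / len(fresh | profile)  # Jaccard similarity

# Accept the user if the overlap with the stored usage profile is high enough.
score = implicit_auth_score({"wifi:home", "app:mail", "cell:4521"},
                            {"wifi:home", "app:mail", "app:maps"})
authenticated = score >= 0.5
```

Note that because the comparison is pure set intersection, the features can be arbitrary strings, which is exactly why the scheme handles non-numerical features where the earlier order-preserving construction could not.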
The Asymptotics of Wilkinson's Iteration: Loss of Cubic Convergence
One of the most widely used methods for eigenvalue computation is the QL iteration with Wilkinson's shift: here the shift s is the eigenvalue of the bottom 2x2 principal minor closest to the corner entry. It has been a long-standing conjecture that the rate of convergence of the algorithm is cubic. In contrast, we show that there exist matrices for which the rate of convergence is strictly quadratic. More precisely, let T_X be the 3x3 matrix having only two nonzero entries, (T_X)_{12} = (T_X)_{21} = 1, and let T_L be the set of real, symmetric tridiagonal matrices with the same spectrum as T_X. There exists a neighborhood U of T_X in T_L which is invariant under Wilkinson's shift strategy with the following properties. For T_0 in U, the sequence of iterates (T_k) exhibits either strictly quadratic or strictly cubic convergence to zero of the entry (T_k)_{23}. In fact, quadratic convergence occurs exactly when lim T_k = T_X. Let X be the union of such quadratically convergent sequences (T_k): the set X has Hausdorff dimension 1 and is a union of disjoint arcs X^sigma meeting at T_X, where sigma ranges over a Cantor set.
Comment: 20 pages, 8 figures. Some passages rewritten for clarity
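For concreteness, one explicit QR step with Wilkinson's shift on a symmetric tridiagonal matrix can be sketched as follows (the QL variant analyzed in the paper is the mirror image; production codes use implicit bulge chasing rather than an explicit factorization):

```python
import numpy as np

def wilkinson_shift(T):
    # Eigenvalue of the bottom 2x2 principal minor closest to the corner entry.
    a, b, c = T[-2, -2], T[-2, -1], T[-1, -1]
    if b == 0.0:
        return c
    d = (a - c) / 2.0
    s = 1.0 if d >= 0 else -1.0
    return c - s * b * b / (abs(d) + np.hypot(d, b))

def qr_step(T):
    """One explicit QR iteration with Wilkinson's shift; repeated steps
    drive the last off-diagonal entry of a symmetric tridiagonal T to zero."""
    n = T.shape[0]
    mu = wilkinson_shift(T)
    Q, R = np.linalg.qr(T - mu * np.eye(n))
    return R @ Q + mu * np.eye(n)
```

Iterating qr_step and monitoring the off-diagonal entry next to the corner shows the generically cubic decay; the paper exhibits initial matrices for which that decay is only quadratic.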
Three dimensional numerical relativity: the evolution of black holes
We report on a new 3D numerical code designed to solve the Einstein equations
for general vacuum spacetimes. This code is based on the standard 3+1 approach
using cartesian coordinates. We discuss the numerical techniques used in
developing this code, and its performance on massively parallel and vector
supercomputers. As a test case, we present evolutions for the first 3D black
hole spacetimes. We identify a number of difficulties in evolving 3D black
holes and suggest approaches to overcome them. We show how special treatment of
the conformal factor can lead to more accurate evolution, and discuss
techniques we developed to handle black hole spacetimes in the absence of
symmetries. Many different slicing conditions are tested, including geodesic,
maximal, and various algebraic conditions on the lapse. With current
resolutions, limited by computer memory sizes, we show that with certain lapse conditions we can evolve the black hole to about t = 50M, where M is the
black hole mass. Comparisons are made with results obtained by evolving
spherical initial black hole data sets with a 1D spherically symmetric code. We
also demonstrate that an "apparent horizon locking shift" can be used to prevent the development of large gradients in the metric functions that result from singularity avoiding time slicings. We compute the mass of the apparent horizon in these spacetimes, and find that in many cases it can be conserved to within about 5% throughout the evolution with our techniques and current resolution.
Comment: 35 pages, LaTeX with RevTeX 3.0 macros. 27 postscript figures taking 7 MB of space, uuencoded and gz-compressed into a 2MB uufile. Also available at http://jean-luc.ncsa.uiuc.edu/Papers/ and mpeg simulations at http://jean-luc.ncsa.uiuc.edu/Movies/. Submitted to Physical Review D
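As a toy analogue of the explicit finite-difference time stepping such 3+1 evolution codes use (nothing Einstein-specific; grid sizes and initial data are purely illustrative), a leapfrog update for the 1D wave equation looks like:

```python
import numpy as np

# Toy analogue only: leapfrog differencing of the 1D wave equation on a
# periodic grid, illustrating the explicit time-stepping pattern used by
# evolution codes (the Einstein equations themselves are far more involved).
nx = 400
dx = 1.0 / nx
dt = 0.5 * dx                            # CFL-limited time step
x = np.linspace(0.0, 1.0, nx, endpoint=False)
phi = np.exp(-100.0 * (x - 0.5) ** 2)    # initial Gaussian pulse
phi_old = phi.copy()                     # zero initial velocity

for _ in range(500):
    lap = np.roll(phi, 1) - 2.0 * phi + np.roll(phi, -1)   # periodic Laplacian
    phi_new = 2.0 * phi - phi_old + (dt / dx) ** 2 * lap
    phi_old, phi = phi, phi_new
```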
Stellar GADGET: A smoothed particle hydrodynamics code for stellar astrophysics and its application to Type Ia supernovae from white dwarf mergers
Mergers of two carbon-oxygen white dwarfs have long been suspected to be
progenitors of Type Ia Supernovae. Here we present our modifications to the
cosmological smoothed particle hydrodynamics code Gadget to apply it to stellar
physics including but not limited to mergers of white dwarfs. We demonstrate a
new method to map a one-dimensional profile of an object in hydrostatic
equilibrium to a stable particle distribution. We use the code to study the
effect of initial conditions and resolution on the properties of the merger of
two white dwarfs. We compare mergers with approximate and exact binary initial
conditions and find that exact binary initial conditions lead to a much more
stable binary system but there is no difference in the properties of the actual
merger. In contrast, we find that resolution is a critical issue for
simulations of white dwarf mergers. Carbon-burning hotspots, which may lead to a detonation in the so-called violent merger scenario, emerge only in simulations with sufficiently high resolution, independent of the type of binary initial conditions. We conclude that simulations of white dwarf mergers which attempt to investigate their potential for Type Ia supernovae should be carried out with at least 10^6 particles.
Comment: 11 pages, 6 figures, accepted for publication in MNRAS
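A minimal inverse-CDF sketch of mapping a 1D profile to particles (equal-mass particles with isotropic random angles; the paper's actual mapping method involves additional care to produce a relaxed, stable distribution in hydrostatic equilibrium):

```python
import numpy as np

def map_profile_to_particles(r, m_enc, n_particles, seed=42):
    """Draw radii for n equal-mass particles by inverting the cumulative
    enclosed-mass profile m_enc(r), then assign isotropic angles.
    Minimal sketch only; stability of the result is not addressed here."""
    rng = np.random.default_rng(seed)
    u = rng.random(n_particles) * m_enc[-1]   # uniform in enclosed mass
    radii = np.interp(u, m_enc, r)            # invert the monotone profile m_enc(r)
    costheta = rng.uniform(-1.0, 1.0, n_particles)
    phi = rng.uniform(0.0, 2.0 * np.pi, n_particles)
    sintheta = np.sqrt(1.0 - costheta ** 2)
    return radii[:, None] * np.stack([sintheta * np.cos(phi),
                                      sintheta * np.sin(phi),
                                      costheta], axis=1)
```

Sampling uniformly in enclosed mass rather than in radius is what makes equal-mass particles reproduce the density profile; a relaxation phase would still be needed before using such initial conditions in a merger simulation.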
