Truss Decomposition in Massive Networks
The k-truss is a type of cohesive subgraph proposed recently for the study
of networks. While the problem of computing most cohesive subgraphs is NP-hard,
the k-truss can be computed in polynomial time. Compared with the k-core, which
is also efficient to compute, the k-truss represents the "core" of a k-core: it
keeps the key information of the k-core while filtering out its less important
parts. However, existing algorithms for computing the k-truss are inefficient
for handling today's massive networks. We first improve the existing in-memory
algorithm for computing the k-truss in networks of moderate size. Then, we
propose two I/O-efficient algorithms to handle massive networks that cannot fit
in main memory. Our experiments on real datasets verify the efficiency of our
algorithms and the value of the k-truss.
Comment: VLDB201
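
To make the notion concrete, the following is a minimal Python sketch (illustrative only, not the improved in-memory or I/O-efficient algorithms of the paper) of the standard support-peeling computation: the support of an edge is the number of triangles containing it, and the k-truss keeps exactly those edges whose support stays at least k-2 as low-support edges are repeatedly removed. The function name k_truss and the edge-list input format are assumptions made for the example.

from collections import defaultdict

def k_truss(edges, k):
    # Build an undirected adjacency structure.
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # support(u, v) = number of common neighbours of u and v, i.e. triangles on the edge.
    support = {tuple(sorted((u, v))): len(adj[u] & adj[v]) for u, v in edges}
    changed = True
    while changed:
        changed = False
        for (u, v), s in list(support.items()):
            if s < k - 2:                      # edge cannot belong to the k-truss
                adj[u].discard(v)
                adj[v].discard(u)
                del support[(u, v)]
                changed = True
        for u, v in support:                   # refresh supports of the surviving edges
            support[(u, v)] = len(adj[u] & adj[v])
    return sorted(support)                     # edges of the k-truss

# Example: in a triangle with one pendant edge, only the triangle survives in the 3-truss:
# k_truss([(1, 2), (2, 3), (1, 3), (3, 4)], 3) == [(1, 2), (1, 3), (2, 3)]
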
Temporal Graph Traversals: Definitions, Algorithms, and Applications
A temporal graph is a graph in which connections between vertices are active
at specific times, and such temporal information leads to completely new
patterns and knowledge that are not present in a non-temporal graph. In this
paper, we study traversal problems in a temporal graph. Graph traversals, such
as DFS and BFS, are basic operations for processing and studying a graph. While
both DFS and BFS are well-known, simple concepts, it is non-trivial to adapt the
same notions from a non-temporal graph to a temporal graph. We analyze the
difficulties of defining temporal graph traversals and propose new definitions
of DFS and BFS for a temporal graph. We investigate the properties of temporal
DFS and BFS, and propose efficient algorithms with optimal complexity. In
particular, we also study important applications of temporal DFS and BFS. We
verify the efficiency and importance of our graph traversal algorithms on
real-world temporal graphs.
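
To illustrate why traversal in a temporal graph differs from ordinary BFS, the sketch below implements one common notion of temporal reachability, an earliest-arrival scan in which a path may only use edges in non-decreasing time order. It is a simplified illustration, not necessarily the temporal DFS/BFS definitions proposed in the paper, and the function name earliest_arrival and the (u, v, t) edge format are assumptions of the example.

def earliest_arrival(edges, source, t_start=0):
    # edges: iterable of (u, v, t), meaning the connection u -> v is active at time t.
    # A temporal path must traverse its edges in non-decreasing time order.
    arrival = {source: t_start}
    for u, v, t in sorted(edges, key=lambda e: e[2]):   # scan edges in time order
        if u in arrival and arrival[u] <= t:            # u is reachable no later than time t
            arrival[v] = min(arrival.get(v, float("inf")), t)
    return arrival                                      # earliest visiting time of each reachable vertex

# Example: the edge (2, 4) active at time 0 is useless, because vertex 2 is only reached at time 1:
# earliest_arrival([(1, 2, 1), (2, 3, 2), (2, 4, 0)], 1) == {1: 0, 2: 1, 3: 2}
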
Scalable Algorithms for Tractable Schatten Quasi-Norm Minimization
The Schatten-p quasi-norm is often used in place of the standard nuclear norm
to approximate the rank function more accurately.
However, existing Schatten-p quasi-norm minimization algorithms involve
singular value decomposition (SVD) or eigenvalue decomposition (EVD) in each
iteration, and thus may become very slow and impractical for large-scale
problems. In this paper, we first define two tractable Schatten quasi-norms,
i.e., the Frobenius/nuclear hybrid and bi-nuclear quasi-norms, and then prove
that they are in essence the Schatten-2/3 and 1/2 quasi-norms, respectively,
which lead to the design of very efficient algorithms that only need to update
two much smaller factor matrices. We also design two efficient proximal
alternating linearized minimization algorithms for solving representative
matrix completion problems. Finally, we provide the global convergence and
performance guarantees for our algorithms, which have better convergence
properties than existing algorithms. Experimental results on synthetic and
real-world data show that our algorithms are more accurate than the
state-of-the-art methods, and are orders of magnitude faster.
Comment: 16 pages, 5 figures, Appears in Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI), Phoenix, Arizona, USA, pp. 2016--2022, 201
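
For reference, the Schatten-p quasi-norm of a matrix X with singular values sigma_i(X), and the classical factored characterization of the nuclear norm (the p = 1 case), can be written as

\[
\|X\|_{S_p} = \Big(\sum_i \sigma_i^p(X)\Big)^{1/p}, \quad 0 < p \le 1,
\qquad
\|X\|_* = \|X\|_{S_1}
        = \min_{X = U V^{\top}} \|U\|_F \, \|V\|_F
        = \min_{X = U V^{\top}} \tfrac{1}{2}\big(\|U\|_F^2 + \|V\|_F^2\big).
\]

As their names suggest, the Frobenius/nuclear hybrid and bi-nuclear quasi-norms of the abstract are analogous factored surrogates in which the factors U and V are measured with Frobenius and/or nuclear norms, and the paper proves that they coincide with the Schatten-2/3 and 1/2 quasi-norms; optimizing over the two much smaller factors is what removes the per-iteration SVD/EVD of the full matrix.
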
Accelerated Variance Reduced Stochastic ADMM
Recently, many variance-reduced stochastic alternating direction method of
multipliers (ADMM) methods (e.g., SAG-ADMM, SDCA-ADMM and SVRG-ADMM) have made
exciting progress, such as achieving linear convergence rates for strongly convex
problems. However, the best known convergence rate for general convex problems
is O(1/T), as opposed to the O(1/T^2) rate of accelerated batch algorithms, where T is
the number of iterations. Thus, there still remains a gap in convergence rates
between existing stochastic ADMM and batch algorithms. To bridge this gap, we
introduce the momentum acceleration trick for batch optimization into the
stochastic variance-reduced gradient based ADMM (SVRG-ADMM), which leads to an
accelerated method, ASVRG-ADMM. Then we design two different momentum-term
update rules for the strongly convex and general convex cases. We prove that
ASVRG-ADMM converges linearly for strongly convex problems. Besides having a
per-iteration complexity as low as that of existing stochastic ADMM methods, ASVRG-ADMM
improves the convergence rate on general convex problems from O(1/T) to
O(1/T^2). Our experimental results show the effectiveness of ASVRG-ADMM.
Comment: 16 pages, 5 figures, Appears in Proceedings of the 31st AAAI Conference on Artificial Intelligence (AAAI), San Francisco, California, USA, pp. 2287--2293, 201
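
For context, stochastic ADMM methods of this family target the linearly constrained composite problem below, with f given as a finite sum over n component functions, and SVRG-style variants replace the full gradient by the standard variance-reduced estimator shown; this is background only, and the paper's specific momentum-term update rules are not reproduced here.

\[
\min_{x,\,y} \; f(x) + g(y) \quad \text{s.t.} \quad A x + B y = c,
\qquad f(x) = \frac{1}{n} \sum_{i=1}^{n} f_i(x),
\]
\[
\widetilde{\nabla} f(x_k) = \nabla f_{i_k}(x_k) - \nabla f_{i_k}(\tilde{x}) + \nabla f(\tilde{x}),
\]

where i_k is drawn uniformly at random and \tilde{x} is a snapshot point refreshed at the start of each epoch; the estimator is unbiased, and its variance diminishes as the iterates approach the optimum, which keeps the per-iteration cost low while still permitting fast convergence.
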
Theoretical Study of Pressure Broadening of Lithium Resonance Lines by Helium Atoms
Quantum mechanical calculations of the emission and absorption profiles of the
lithium 2s-2p resonance line under the influence of a helium perturbing gas are
performed. We use carefully constructed potential energy surfaces and
transition dipole moments to compute the emission and absorption coefficients
at temperatures from 200 to 3000 K at wavelengths between 500 nm and 1000 nm.
Contributions from quasi-bound states are included. The resulting red and blue
wing profiles are compared with previous theoretical calculations and with an
experiment carried out at a temperature of 670 K.
Comment: 10 figures