Non-power-of-Two FFTs: Exploring the Flexibility of the Montium TP
Coarse-grain reconfigurable architectures, like the Montium TP, have proven to be a very successful approach for low-power and high-performance computation of regular digital signal processing algorithms. This paper presents the implementation of a class of non-power-of-two FFTs to discover the limitations and flexibility of the Montium TP for less regular algorithms. A non-power-of-two FFT is less regular than a traditional power-of-two FFT. The results of the implementation show the processing time, accuracy, energy consumption and flexibility of the implementation.
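To illustrate why non-power-of-two transform lengths are awkward for hardware tuned to radix-2 schedules, a direct DFT that works for any length N can be sketched as follows (illustration only; the Montium TP work maps optimized mixed-radix FFT kernels, not this naive O(N^2) form):

```python
import cmath

def dft(x):
    """Direct O(N^2) DFT, valid for any length N, including
    non-powers-of-two. Each output bin k is the inner product of the
    input with the complex exponential exp(-2*pi*i*k*t/N)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]
```

For power-of-two N this recursion collapses into the familiar radix-2 butterflies; for other lengths, mixed-radix factorizations of N are needed, which is exactly the irregularity the paper probes.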
Network Sampling: From Static to Streaming Graphs
Network sampling is integral to the analysis of social, information, and
biological networks. Since many real-world networks are massive in size,
continuously evolving, and/or distributed in nature, the network structure is
often sampled in order to facilitate study. For these reasons, a more thorough
and complete understanding of network sampling is critical to support the field
of network science. In this paper, we outline a framework for the general
problem of network sampling, by highlighting the different objectives,
population and units of interest, and classes of network sampling methods. In
addition, we propose a spectrum of computational models for network sampling
methods, ranging from the traditionally studied model based on the assumption
of a static domain to a more challenging model that is appropriate for
streaming domains. We design a family of sampling methods based on the concept
of graph induction that generalize across the full spectrum of computational
models (from static to streaming) while efficiently preserving many of the
topological properties of the input graphs. Furthermore, we demonstrate how
traditional static sampling algorithms can be modified for graph streams for
each of the three main classes of sampling methods: node, edge, and
topology-based sampling. Our experimental results indicate that our proposed
family of sampling methods more accurately preserves the underlying properties
of the graph for both static and streaming graphs. Finally, we study the impact
of network sampling algorithms on the parameter estimation and performance
evaluation of relational classification algorithms.
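The streaming edge-based methods discussed above build on one-pass sampling with bounded memory. A minimal sketch of that core idea, reservoir sampling over an edge stream (the paper's graph-induction variants add further structure on top; function names here are illustrative, not from the paper):

```python
import random

def stream_sample_edges(edge_stream, k, seed=0):
    """Reservoir-sample k edges from a stream in one pass with O(k) memory.

    After processing i+1 edges, every edge seen so far is retained with
    equal probability k/(i+1), which is the invariant streaming samplers
    need when the stream length is unknown in advance.
    """
    rng = random.Random(seed)
    reservoir = []
    for i, e in enumerate(edge_stream):
        if len(reservoir) < k:
            reservoir.append(e)          # fill phase: keep the first k edges
        else:
            j = rng.randrange(i + 1)     # replace a slot with prob. k/(i+1)
            if j < k:
                reservoir[j] = e
    return reservoir
```

A topology-preserving sampler would additionally induce the subgraph over the sampled edges' endpoints, which is the graph-induction step the abstract refers to.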
Fully Dynamic Algorithm for Top-k Densest Subgraphs
Given a large graph, the densest-subgraph problem asks to find a subgraph
with maximum average degree. When considering the top-$k$ version of this
problem, a na\"ive solution is to iteratively find the densest subgraph and
remove it in each iteration. However, such a solution is impractical due to
high processing cost. The problem is further complicated when dealing with
dynamic graphs, since adding or removing an edge requires re-running the
algorithm. In this paper, we study the top-$k$ densest-subgraph problem in the
sliding-window model and propose an efficient fully-dynamic algorithm. The
input of our algorithm consists of an edge stream, and the goal is to find the
node-disjoint subgraphs that maximize the sum of their densities. In contrast
to existing state-of-the-art solutions that require iterating over the entire
graph upon any update, our algorithm profits from the observation that updates
only affect a limited region of the graph. Therefore, the top-$k$ densest
subgraphs are maintained by only applying local updates. We provide a
theoretical analysis of the proposed algorithm and show empirically that the
algorithm often generates denser subgraphs than state-of-the-art competitors.
Experiments show an improvement in efficiency of up to five orders of magnitude
compared to state-of-the-art solutions. Comment: 10 pages, 8 figures, accepted at CIKM 201
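The impractical baseline the abstract contrasts with, iteratively finding and removing the densest subgraph, can be sketched using greedy peeling (Charikar's 2-approximation) as a stand-in for exact densest-subgraph extraction; function names are illustrative, not from the paper:

```python
from collections import defaultdict

def densest_subgraph(edges):
    """Greedy peeling 2-approximation: repeatedly delete a minimum-degree
    node and remember the intermediate node set with the best density
    |E(S)|/|S|. O(n^2) for clarity, not speed."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    nodes = set(adj)
    m = sum(len(adj[u]) for u in nodes) // 2
    best, best_density = set(nodes), m / max(len(nodes), 1)
    while len(nodes) > 1:
        u = min(nodes, key=lambda x: len(adj[x]))  # peel a min-degree node
        for v in adj[u]:
            adj[v].discard(u)
        m -= len(adj[u])
        adj.pop(u)
        nodes.remove(u)
        d = m / len(nodes)
        if d > best_density:
            best, best_density = set(nodes), d
    return best, best_density

def top_k_densest(edges, k):
    """Naive top-k baseline: extract the densest subgraph, remove its
    nodes, and repeat k times, rerunning the whole algorithm each round."""
    remaining = list(edges)
    out = []
    for _ in range(k):
        if not remaining:
            break
        s, d = densest_subgraph(remaining)
        out.append((s, d))
        remaining = [(u, v) for u, v in remaining if u not in s and v not in s]
    return out
```

The fully-dynamic algorithm in the paper avoids exactly this rerun-from-scratch cost by updating only the region of the graph an edge insertion or deletion touches.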
Embedding Principal Component Analysis for Data Reduction in Structural Health Monitoring on Low-Cost IoT Gateways
Principal component analysis (PCA) is a powerful data reduction method for
Structural Health Monitoring. However, its computational cost and data memory
footprint pose a significant challenge when PCA has to run on limited-capability
embedded platforms in low-cost IoT gateways. This paper presents a
memory-efficient parallel implementation of the streaming History PCA
algorithm. On our dataset, it achieves a 10x compression factor and 59x memory
reduction with less than 0.15 dB degradation in the reconstructed
signal-to-noise ratio (RSNR) compared to standard PCA. Moreover, the algorithm
benefits from parallelization on multiple cores, achieving a maximum speedup of
4.8x on a Samsung ARTIK 710.
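The compression idea behind PCA-based data reduction can be sketched in a few lines: project each d-dimensional sample onto the leading principal component and store one scalar per sample (a d:1 reduction). This pure-Python sketch uses power iteration and is an assumption-laden illustration, not the paper's streaming History PCA implementation:

```python
import math
import random

def top_component(data, iters=200, seed=0):
    """Leading principal component via power iteration on the
    (unnormalized) covariance matrix X^T X of mean-centered data."""
    n, d = len(data), len(data[0])
    mean = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - mean[j] for j in range(d)] for row in data]
    rng = random.Random(seed)
    v = [rng.random() for _ in range(d)]
    for _ in range(iters):
        # w = (X^T X) v, computed as X^T (X v) without forming X^T X
        proj = [sum(x[j] * v[j] for j in range(d)) for x in centered]
        w = [sum(proj[i] * centered[i][j] for i in range(n)) for j in range(d)]
        norm = math.sqrt(sum(c * c for c in w)) or 1.0
        v = [c / norm for c in w]
    return mean, v

def compress(data, mean, v):
    """One scalar (the projection score) per d-dimensional sample."""
    return [sum((row[j] - mean[j]) * v[j] for j in range(len(v)))
            for row in data]

def reconstruct(scores, mean, v):
    """Rank-1 reconstruction: mean + score * component."""
    return [[mean[j] + s * v[j] for j in range(len(v))] for s in scores]
```

Keeping more components trades compression factor against reconstruction SNR, which is the knob behind the 10x / 0.15 dB figures quoted above.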
Space- and Time-Efficient Algorithm for Maintaining Dense Subgraphs on One-Pass Dynamic Streams
While in many graph mining applications it is crucial to handle a stream of
updates efficiently in terms of {\em both} time and space, not much was known
about achieving such algorithms. In this paper we study this issue for a
problem which lies at the core of many graph mining applications, called the {\em
densest subgraph problem}. We develop an algorithm that achieves time- and
space-efficiency for this problem simultaneously. To the best of our knowledge,
it is one of the first of its kind for graph problems.
In a graph $G = (V, E)$, the "density" of a subgraph induced by a subset of
nodes $S \subseteq V$ is defined as $|E(S)|/|S|$, where $E(S)$ is the set of
edges in $E$ with both endpoints in $S$. In the densest subgraph problem, the
goal is to find a subset $S$ of nodes that maximizes the density of the
corresponding induced subgraph. For any $\epsilon > 0$, we present a dynamic
algorithm that, with high probability, maintains a $(4+\epsilon)$-approximation
to the densest subgraph problem under a sequence of edge insertions and
deletions in a graph with $n$ nodes. It uses $\tilde{O}(n)$ space, and has an
amortized update time of $\tilde{O}(1)$ and a query time of $\tilde{O}(1)$. Here,
$\tilde{O}$ hides a $O(\poly\log_{1+\epsilon} n)$ term. The approximation ratio
can be improved to $(2+\epsilon)$ at the cost of increasing the query time to
$\tilde{O}(n)$. It can be extended to a $(2+\epsilon)$-approximation
sublinear-time algorithm and a distributed-streaming algorithm. Our algorithm
is the first streaming algorithm that can maintain the densest subgraph in {\em
one pass}. The previously best algorithm in this setting required $O(\log n)$
passes [Bahmani, Kumar and Vassilvitskii, VLDB'12]. The space required by our
algorithm is tight up to a polylogarithmic factor. Comment: A preliminary version of this paper appeared in STOC 201
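The density measure this abstract optimizes is straightforward to state in code; a minimal sketch of the definition (a hypothetical helper, not part of the paper's algorithm):

```python
def density(edges, s):
    """Density |E(S)|/|S| of the subgraph induced by node set s:
    edges with both endpoints inside s, divided by |s|."""
    e_s = sum(1 for u, v in edges if u in s and v in s)
    return e_s / len(s)
```

The dynamic algorithm's job is to keep an approximately density-maximizing set $S$ current as edges arrive and depart, without rescanning the whole graph.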
Compressed Online Dictionary Learning for Fast fMRI Decomposition
We present a method for fast resting-state fMRI spatial decompositions of
very large datasets, based on the reduction of the temporal dimension before
applying dictionary learning on concatenated individual records from groups of
subjects. Introducing a measure of correspondence between spatial
decompositions of rest fMRI, we demonstrate that time-reduced dictionary
learning produces results as reliable as non-reduced decompositions. We also
show that this reduction significantly improves computational scalability.