Group-In: Group Inference from Wireless Traces of Mobile Devices
This paper proposes Group-In, a wireless scanning system that detects static
or mobile groups of people in indoor or outdoor environments. Group-In
collects only wireless traces from Bluetooth-enabled mobile devices for group
inference. The key problem addressed in this work is to detect not only
static groups but also moving groups, using a multi-phased approach based
only on noisy Received Signal Strength Indicator (RSSI) values observed by
multiple wireless
decentralized schemes to process the sparse and noisy wireless data, and
leverage graph-based clustering techniques for group detection from short-term
and long-term aspects. Group-In provides two outcomes: 1) group detection in
short time intervals such as two minutes and 2) long-term linkages such as a
month. To verify the performance, we conduct two experimental studies. One
consists of 27 controlled scenarios in the lab environments. The other is a
real-world scenario where we place Bluetooth scanners in an office environment,
and employees carry beacons for more than one month. Both the controlled and
real-world experiments achieve high-accuracy group detection in short time
intervals and flexibility in sampling, as measured by the Jaccard index and
pairwise similarity coefficient.
Comment: This work has been funded by the EU Horizon 2020 Programme under
Grant Agreements No. 731993 AUTOPILOT and No. 871249 LOCUS projects. The
content of this paper does not reflect the official opinion of the EU.
Responsibility for the information and views expressed therein lies entirely
with the authors. Proc. of ACM/IEEE IPSN'20, 202
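As a rough illustration of the graph-based clustering idea, the sketch below links devices whose RSSI readings across scanners are similar and reports connected components as groups. The data, threshold, and similarity measure are all hypothetical; this is not the paper's actual multi-phased pipeline.

```python
import numpy as np
from itertools import combinations

def group_detect(rssi, thresh=6.0):
    """Cluster devices whose per-scanner RSSI readings are close,
    a crude stand-in for Group-In's graph-based clustering.

    rssi: dict mapping device id -> array of RSSI values, one per scanner.
    Devices whose readings differ by less than `thresh` dB on average
    are linked; connected components of that graph are the groups.
    """
    devices = list(rssi)
    adj = {d: set() for d in devices}
    # Build the similarity graph from pairwise mean absolute RSSI difference.
    for a, b in combinations(devices, 2):
        if np.mean(np.abs(rssi[a] - rssi[b])) < thresh:
            adj[a].add(b)
            adj[b].add(a)
    # Connected components = detected groups.
    groups, seen = [], set()
    for d in devices:
        if d in seen:
            continue
        stack, comp = [d], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        groups.append(sorted(comp))
    return groups
```

For example, two devices seen at similar strength by both scanners end up in one group, while a device with a very different signature forms its own group.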
Edge-enhancing Filters with Negative Weights
In [DOI:10.1109/ICMEW.2014.6890711], a graph-based denoising is performed by
projecting the noisy image to a lower dimensional Krylov subspace of the graph
Laplacian, constructed using nonnegative weights determined by distances
between image data corresponding to image pixels. We extend the construction
of the graph Laplacian to the case where some graph weights can be negative.
Removing the positivity constraint provides a more accurate inference of a
graph model behind the data, and thus can improve the quality of filters for
graph-based signal processing, e.g., denoising, compared to the standard
construction, without affecting the costs.
Comment: 5 pages; 6 figures. Accepted to the IEEE GlobalSIP 2015 conference.
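To illustrate the idea, the sketch below builds a path-graph Laplacian for a 1-D signal with weights that are allowed to go negative for dissimilar neighbours, then denoises by projecting onto the lowest Laplacian eigenvectors. Both the weight formula and the eigenvector projection are illustrative stand-ins: the paper works on images and projects onto a Krylov subspace instead.

```python
import numpy as np

def laplacian(signal, sigma=1.0, allow_negative=True):
    """Path-graph Laplacian with weights from value distances between
    neighbouring samples.  With allow_negative=True the weight
    1 - 2*d**2/sigma can dip below zero for dissimilar neighbours
    (an illustrative choice, not the paper's formula); otherwise a
    nonnegative Gaussian kernel weight exp(-d**2/sigma) is used."""
    n = len(signal)
    W = np.zeros((n, n))
    for i in range(n - 1):
        d = signal[i] - signal[i + 1]
        w = 1.0 - 2.0 * d * d / sigma if allow_negative else np.exp(-d * d / sigma)
        W[i, i + 1] = W[i + 1, i] = w
    return np.diag(W.sum(axis=1)) - W

def spectral_denoise(noisy, k=6, **kw):
    """Project the noisy signal onto the k smoothest Laplacian
    eigenvectors (a spectral low-pass; the paper projects onto a
    Krylov subspace of the Laplacian instead)."""
    noisy = np.asarray(noisy, float)
    _, vecs = np.linalg.eigh(laplacian(noisy, **kw))
    U = vecs[:, :k]              # the k lowest-frequency graph modes
    return U @ (U.T @ noisy)
```

On a smooth signal corrupted by a high-frequency component, the projection keeps the smooth part and discards most of the oscillation.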
Learning multifractal structure in large networks
Generating random graphs to model networks has a rich history. In this paper,
we analyze and improve upon the multifractal network generator (MFNG)
introduced by Palla et al. We provide a new result on the probability of
subgraphs existing in graphs generated with MFNG. From this result it follows
that we can quickly compute moments of an important set of graph properties,
such as the expected number of edges, stars, and cliques. Specifically, we show
how to compute these moments in time complexity independent of the size of the
graph and the number of recursive levels in the generative model. We leverage
this theory in a new method-of-moments algorithm for fitting large networks to
MFNG. Empirically, this new approach effectively simulates properties of
several social and information networks. In terms of matching subgraph counts,
our method outperforms similar algorithms used with the Stochastic Kronecker
Graph model. Furthermore, we present a fast approximation algorithm to generate
graph instances following the multifractal structure. The approximation
scheme is an improvement over previous methods, which ran in time complexity
quadratic in the number of vertices. Combined, our method of moments and fast
sampling scheme provide the first scalable framework for effectively modeling
large networks with MFNG.
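The moment computation can be made concrete for the simplest statistic, the expected edge count. Under the standard MFNG setup, where each node draws k i.i.d. categories and a pair links with the product over levels of the corresponding linking probabilities, the per-pair expectation factorises as (ℓᵀPℓ)ᵏ, so the cost does not grow with the graph size. A minimal sketch, with the formula stated under those assumptions:

```python
import numpy as np
from math import comb

def expected_edges(P, ell, k, n):
    """Expected edge count of an MFNG graph.

    P:   m x m symmetric linking-probability matrix.
    ell: length-m category probabilities.
    k:   number of recursive levels.
    n:   number of nodes.

    Each node draws k i.i.d. categories; a pair links with the product
    over levels of P entries, so E[edge] = (ell' P ell)**k per pair and
    the computation is independent of the generated graph's size.
    """
    per_pair = float(ell @ P @ ell) ** k
    return comb(n, 2) * per_pair
```

For example, with P = [[0.9, 0.1], [0.1, 0.9]], ell = [0.5, 0.5], k = 2 and n = 100, the per-pair probability is 0.5² = 0.25, giving 4950 · 0.25 = 1237.5 expected edges.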
A Convolutional Neural Network model based on Neutrosophy for Noisy Speech Recognition
Convolutional neural networks are sensitive to unknown noise conditions at
test time, so their performance degrades on noisy-data classification tasks,
including noisy speech recognition. In this research, a new convolutional
neural network (CNN) model with data-uncertainty handling, referred to as
NCNN (Neutrosophic Convolutional Neural Network), is proposed for the
classification task. Here, speech signals are used as input data and their
noise is modeled as uncertainty. Using the speech spectrogram, a definition
of uncertainty is proposed in the neutrosophic (NS) domain: uncertainty is
computed for each time-frequency point of the spectrogram, treated like a
pixel, yielding an uncertainty matrix of the same size as the spectrogram.
In the next step, a CNN classification model with two parallel paths is
proposed. The speech spectrogram is the input to the first path and the
uncertainty matrix to the second; the outputs of the two paths are combined
to compute the final output of the classifier. To show the effectiveness of
the proposed method, it has been compared with a conventional CNN on the
isolated words of the Aurora2 dataset. The proposed method achieves an
average accuracy of 85.96 on noisy training data. It is more robust against
Car, Airport, and Subway noises, with accuracies of 90, 88, and 81 on test
sets A, B, and C, respectively. Results show that the proposed method
outperforms the conventional CNN, with improvements of 6, 5, and 2 percentage
points on test sets A, B, and C, respectively. This means the proposed method
is more robust against noisy data and handles such data effectively.
Comment: International Conference on Pattern Recognition and Image Analysis
(IPRIA 2019)
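A per-point uncertainty matrix can be sketched as follows. The paper's exact neutrosophic indeterminacy definition is not given here, so normalised local variance over a small window is used as a stand-in, producing one value per time-frequency point of the spectrogram.

```python
import numpy as np

def uncertainty_matrix(spec, win=3):
    """One uncertainty value per time-frequency point of a spectrogram.

    The paper defines indeterminacy in the neutrosophic domain; here,
    normalised local variance over a win x win window is an illustrative
    stand-in.  Returns an array of the same shape as spec, in [0, 1].
    """
    S = np.asarray(spec, float)
    pad = win // 2
    padded = np.pad(S, pad, mode="edge")   # replicate border values
    out = np.empty_like(S)
    for f in range(S.shape[0]):
        for t in range(S.shape[1]):
            out[f, t] = padded[f:f + win, t:t + win].var()
    m = out.max()
    return out / m if m > 0 else out       # normalise to [0, 1]
```

In the two-path model this matrix would be fed to the second path alongside the raw spectrogram in the first.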
LexRank: Graph-based Lexical Centrality as Salience in Text Summarization
We introduce a stochastic graph-based method for computing relative
importance of textual units for Natural Language Processing. We test the
technique on the problem of Text Summarization (TS). Extractive TS relies on
the concept of sentence salience to identify the most important sentences in a
document or set of documents. Salience is typically defined in terms of the
presence of particular important words or in terms of similarity to a centroid
pseudo-sentence. We consider a new approach, LexRank, for computing sentence
importance based on the concept of eigenvector centrality in a graph
representation of sentences. In this model, a connectivity matrix based on
intra-sentence cosine similarity is used as the adjacency matrix of the graph
representation of sentences. Our system, based on LexRank, ranked first in
more than one task in the recent DUC 2004 evaluation. In this paper we
present a detailed analysis of our approach and apply it to a larger data set
including data from earlier DUC evaluations. We discuss several methods to
compute centrality using the similarity graph. The results show that
degree-based methods (including LexRank) outperform both centroid-based methods
and other systems participating in DUC in most of the cases. Furthermore, the
LexRank with threshold method outperforms the other degree-based techniques
including continuous LexRank. We also show that our approach is quite
insensitive to the noise in the data that may result from an imperfect topical
clustering of documents.
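A minimal sketch of thresholded LexRank: build a cosine-similarity graph over sentence vectors (plain toy vectors stand in for tf-idf sentence representations), keep edges above a threshold, and run a damped power iteration for eigenvector centrality, as in PageRank.

```python
import numpy as np

def lexrank(vecs, threshold=0.7, d=0.85, iters=100):
    """Thresholded LexRank sketch.

    vecs: one vector per sentence (stand-in for tf-idf vectors).
    Returns one centrality score per sentence, summing to 1.
    """
    V = np.asarray(vecs, float)
    V = V / np.linalg.norm(V, axis=1, keepdims=True)
    A = (V @ V.T > threshold).astype(float)   # thresholded cosine graph
    M = A / A.sum(axis=1, keepdims=True)      # row-stochastic walk matrix
    n = len(V)
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - d) / n + d * (M.T @ r)       # damped power iteration
    return r / r.sum()
```

A sentence similar to many others (high degree in the similarity graph) receives a higher score than an outlier connected to few sentences.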
Laplacian Mixture Modeling for Network Analysis and Unsupervised Learning on Graphs
Laplacian mixture models identify overlapping regions of influence in
unlabeled graph and network data in a scalable and computationally efficient
way, yielding useful low-dimensional representations. By combining Laplacian
eigenspace and finite mixture modeling methods, they provide probabilistic or
fuzzy dimensionality reductions or domain decompositions for a variety of input
data types, including mixture distributions, feature vectors, and graphs or
networks. Provable optimal recovery using the algorithm is analytically shown
for a nontrivial class of cluster graphs. Heuristic approximations for scalable
high-performance implementations are described and empirically tested.
Connections to PageRank and community detection in network analysis demonstrate
the wide applicability of this approach. The origins of fuzzy spectral methods,
beginning with generalized heat or diffusion equations in physics, are reviewed
and summarized. Comparisons to other dimensionality reduction and clustering
methods for challenging unsupervised machine learning problems are also
discussed.
Comment: 13 figures, 35 references
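As a toy illustration of working in the Laplacian eigenspace, the sketch below embeds each node by its entry in the Fiedler vector and squashes it into soft two-cluster memberships with a logistic. The logistic squash is a stand-in for the paper's finite mixture fit in the eigenspace.

```python
import numpy as np

def fuzzy_spectral_membership(A, sharpness=5.0):
    """Soft two-cluster memberships from the Laplacian eigenspace.

    A: symmetric adjacency matrix.  Embeds nodes with the Fiedler
    vector (second-smallest Laplacian eigenvector) and maps it through
    a logistic; the paper fits a finite mixture model in the eigenspace
    instead, so this is only a sketch.  Returns P(cluster 1) per node.
    """
    L = np.diag(A.sum(axis=1)) - A            # combinatorial Laplacian
    _, vecs = np.linalg.eigh(L)               # eigenvalues ascending
    fiedler = vecs[:, 1]
    return 1.0 / (1.0 + np.exp(-sharpness * fiedler))
```

On two triangles joined by a single bridge edge, the memberships fall on opposite sides of 0.5 for the two triangles, recovering the two regions of influence.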