Risks of Friendships on Social Networks
In this paper, we explore the risks that friends on social networks pose
through their friendship patterns, using real-life social network data and
starting from a previously defined risk model. In particular, we observe that
risks of friendships can be mined by analyzing users' attitudes towards friends of
friends. This allows us to give new insights into friendship and risk dynamics
on social networks.
Comment: 10 pages, 8 figures, 3 tables. To appear in the 2012 IEEE
International Conference on Data Mining (ICDM).
Operationalizing Individual Fairness with Pairwise Fair Representations
We revisit the notion of individual fairness proposed by Dwork et al. A
central challenge in operationalizing their approach is the difficulty in
eliciting a human specification of a similarity metric. In this paper, we
propose an operationalization of individual fairness that does not rely on a
human specification of a distance metric. Instead, we propose novel approaches
to elicit and leverage side-information on equally deserving individuals to
counter subordination between social groups. We model this knowledge as a
fairness graph, and learn a unified Pairwise Fair Representation (PFR) of the
data that captures both data-driven similarity between individuals and the
pairwise side-information in the fairness graph. We elicit fairness judgments from
a variety of sources, including human judgments for two real-world datasets on
recidivism prediction (COMPAS) and violent neighborhood prediction (Crime &
Communities). Our experiments show that the PFR model for operationalizing
individual fairness is practically viable.
Comment: To be published in the proceedings of the VLDB Endowment, Vol. 13,
Issue.
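The abstract does not spell out PFR's objective, but the idea of combining data-driven similarity with pairwise side-information can be pictured with a purely illustrative sketch (all names and the loss form here are ours, not the paper's): a representation is penalized both for drifting from the data and for separating individuals linked in the fairness graph.

```python
import numpy as np

def pfr_style_loss(Z, X, fair_pairs, gamma=1.0):
    """Toy objective in the spirit of a pairwise-fair representation:
    stay close to the original data X (data-driven similarity) while
    pulling together individuals linked in the fairness graph."""
    recon = np.sum((Z - X) ** 2)                       # fidelity to the data
    fair = sum(np.sum((Z[i] - Z[j]) ** 2) for i, j in fair_pairs)
    return recon + gamma * fair

X = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
# side-information: individuals 0 and 1 are equally deserving
loss_linked = pfr_style_loss(X, X, [(0, 1)])
loss_none = pfr_style_loss(X, X, [])
```

Minimizing such an objective would move linked individuals closer in the learned space, which is the intuition behind countering subordination between groups.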
Web Site Personalization based on Link Analysis and Navigational Patterns
The continuous growth in the size and use of the World Wide Web imposes new methods of design and development of on-line information services. The need to predict users’ needs in order to improve the usability and user retention of a web site is more than evident and can be addressed by personalizing it. Recommendation algorithms aim at proposing “next” pages to users based on their current visit and past users’ navigational patterns. In the vast majority of related algorithms, however, only the usage data are used to produce recommendations, disregarding the structural properties of the web graph. Thus important pages (in terms of PageRank authority score) may be underrated. In this work we present UPR, a PageRank-style algorithm which combines usage data and link analysis techniques to assign probabilities to web pages based on their importance in the web site’s navigational graph. We propose the application of a localized version of UPR (l-UPR) to personalized navigational sub-graphs for online web page ranking and recommendation. Moreover, we propose a hybrid probabilistic predictive model that combines Markov models with link analysis, using the link-analysis scores as prior probabilities. We prove, through experimentation, that this approach results in more objective and representative predictions than those produced by pure usage-based approaches.
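The blend of link structure and usage data that UPR describes can be illustrated with a minimal sketch (the weighting scheme, parameter names, and toy data below are ours, not the paper's actual UPR formulation): a PageRank-style power iteration whose transition probabilities interpolate between uniform out-link probabilities and observed click frequencies.

```python
import numpy as np

def usage_weighted_pagerank(adj, usage, beta=0.5, d=0.85, iters=100):
    """PageRank-style scores where each out-link's probability blends
    uniform link structure with observed usage (click) frequencies."""
    n = adj.shape[0]
    P = np.zeros((n, n))
    for i in range(n):
        links = adj[i] > 0
        if not links.any():
            P[i] = 1.0 / n                      # dangling page: jump anywhere
            continue
        struct = links / links.sum()            # uniform over out-links
        u = usage[i] * links
        usage_p = u / u.sum() if u.sum() > 0 else struct
        P[i] = beta * usage_p + (1 - beta) * struct
    r = np.full(n, 1.0 / n)
    for _ in range(iters):                      # power iteration
        r = (1 - d) / n + d * (r @ P)
    return r

# toy site: page 0 links to 1 and 2; users overwhelmingly click 0 -> 1
adj = np.array([[0, 1, 1], [0, 0, 1], [1, 0, 0]], float)
usage = np.array([[0, 9, 1], [0, 0, 5], [5, 0, 0]], float)
scores = usage_weighted_pagerank(adj, usage)
```

Setting beta to 0 recovers plain structural PageRank, while beta near 1 ranks almost entirely by navigational patterns; the abstract's point is that ignoring the structural side underrates authoritative pages.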
Graph Summarization
The continuous and rapid growth of highly interconnected datasets, which are
both voluminous and complex, calls for the development of adequate processing
and analytical techniques. One method for condensing and simplifying such
datasets is graph summarization. It denotes a series of application-specific
algorithms designed to transform graphs into more compact representations while
preserving structural patterns, query answers, or specific property
distributions. As this problem is common to several areas studying graph
topologies, different approaches, such as clustering, compression, sampling, or
influence detection, have been proposed, primarily based on statistical and
optimization methods. The focus of our chapter is to pinpoint the main graph
summarization methods, but especially to focus on the most recent approaches
and novel research trends on this topic, not yet covered by previous surveys.
Comment: To appear in the Encyclopedia of Big Data Technologies.
Data-Driven Shape Analysis and Processing
Data-driven methods play an increasingly important role in discovering
geometric, structural, and semantic relationships between 3D shapes in
collections, and applying this analysis to support intelligent modeling,
editing, and visualization of geometric data. In contrast to traditional
approaches, a key feature of data-driven approaches is that they aggregate
information from a collection of shapes to improve the analysis and processing
of individual shapes. In addition, they are able to learn models that reason
about properties and relationships of shapes without relying on hard-coded
rules or explicitly programmed instructions. We provide an overview of the main
concepts and components of these techniques, and discuss their application to
shape classification, segmentation, matching, reconstruction, modeling and
exploration, as well as scene analysis and synthesis, through reviewing the
literature and relating the existing works with both qualitative and numerical
comparisons. We conclude our report with ideas that can inspire future research
in data-driven shape analysis and processing.
Comment: 10 pages, 19 figures.
An Incremental Construction of Deep Neuro Fuzzy System for Continual Learning of Non-stationary Data Streams
Existing fuzzy neural networks (FNNs) are mostly developed under a shallow
network configuration, yielding lower generalization power than deep
structures. This paper
proposes a novel self-organizing deep FNN, namely DEVFNN. Fuzzy rules can be
automatically extracted from data streams or removed if they play limited role
during their lifespan. The structure of the network can be deepened on demand
by stacking additional layers using a drift detection method which not only
detects the covariate drift, variations of input space, but also accurately
identifies the real drift, dynamic changes of both feature space and target
space. DEVFNN is developed under the stacked generalization principle via the
feature augmentation concept where a recently developed algorithm, namely
gClass, drives the hidden layer. It is equipped with an automatic feature
selection method which controls activation and deactivation of input attributes
to induce varying subsets of input features. A deep network simplification
procedure is put forward using the concept of hidden layer merging to prevent
uncontrollable growth in the dimensionality of the input space caused by the
feature augmentation approach to building a deep network structure. DEVFNN
works in a sample-wise fashion and is compatible with data stream
applications. The efficacy of DEVFNN has been thoroughly evaluated using seven
datasets with non-stationary properties under the prequential test-then-train
protocol. It has been compared with four popular continual learning algorithms
and its shallow counterpart where DEVFNN demonstrates improvement of
classification accuracy. Moreover, it is also shown that the concept drift
detection method is an effective tool to control the depth of network structure
while the hidden layer merging scenario is capable of simplifying the network
complexity of a deep network with negligible compromise of generalization
performance.
Comment: This paper has been published in IEEE Transactions on Fuzzy Systems.
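The prequential test-then-train protocol mentioned above is simple to state: each arriving sample is first used for evaluation, then handed to the learner for an update, so accuracy is always measured on data the model has not yet seen. A generic sketch (the learner interface and the majority-class baseline here are illustrative, not DEVFNN itself):

```python
def prequential_accuracy(model, stream):
    """Test-then-train: predict each sample before learning from it,
    so accuracy reflects performance on genuinely unseen data."""
    correct = 0
    for t, (x, y) in enumerate(stream, start=1):
        if model.predict(x) == y:   # test first ...
            correct += 1
        model.learn(x, y)           # ... then train
    return correct / t

class MajorityClass:
    """Tiny incremental learner: always predicts the most frequent
    label seen so far (a common data-stream baseline)."""
    def __init__(self):
        self.counts = {}
    def predict(self, x):
        return max(self.counts, key=self.counts.get) if self.counts else None
    def learn(self, x, y):
        self.counts[y] = self.counts.get(y, 0) + 1

stream = [((i,), 0) for i in range(8)] + [((i,), 1) for i in range(2)]
acc = prequential_accuracy(MajorityClass(), stream)
```

Because every sample is scored before it is learned, this protocol naturally penalizes a model that adapts too slowly to concept drift, which is why it is the standard evaluation regime for non-stationary streams.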
LinkCluE: A MATLAB Package for Link-Based Cluster Ensembles
Cluster ensembles have emerged as a powerful meta-learning paradigm that provides improved accuracy and robustness by aggregating several input data clusterings. In particular, link-based similarity methods have recently been introduced with performance superior to the conventional co-association approach. This paper presents a MATLAB package, LinkCluE, that implements the link-based cluster ensemble framework. A variety of functional methods for evaluating clustering results, based on both internal and external criteria, are also provided. Additionally, the underlying algorithms are described herein, together with sample uses of the package on interesting real and synthetic datasets.
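For context on the co-association baseline the abstract contrasts against, here is a minimal sketch (not LinkCluE's own code, which is MATLAB): the co-association matrix records the fraction of base clusterings that place each pair of points in the same cluster, and serves as the pairwise similarity that the ensemble is then cut from.

```python
import numpy as np

def co_association(clusterings):
    """Fraction of base clusterings that assign each pair of points
    to the same cluster (the classic cluster-ensemble similarity)."""
    n = len(clusterings[0])
    S = np.zeros((n, n))
    for labels in clusterings:
        for i in range(n):
            for j in range(n):
                if labels[i] == labels[j]:
                    S[i, j] += 1
    return S / len(clusterings)

# three base clusterings of five points (label values are arbitrary)
runs = [[0, 0, 1, 1, 1],
        [0, 0, 0, 1, 1],
        [1, 1, 0, 0, 0]]
S = co_association(runs)
```

Link-based methods refine exactly this matrix: rather than counting only direct co-membership, they also credit pairs whose clusters are themselves similar across the ensemble.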
VFCFinder: Seamlessly Pairing Security Advisories and Patches
Security advisories are the primary channel of communication for discovered
vulnerabilities in open-source software, but they often lack crucial
information. Specifically, 63% of vulnerability database reports are missing
their patch links, also referred to as vulnerability fixing commits (VFCs).
This paper introduces VFCFinder, a tool that generates the top-five ranked set
of VFCs for a given security advisory using Natural Language-Programming
Language (NL-PL) models. VFCFinder yields a 96.6% recall for finding the
correct VFC within the Top-5 commits, and an 80.0% recall for the Top-1 ranked
commit. VFCFinder generalizes to nine different programming languages and
outperforms state-of-the-art approaches by 36 percentage points in terms of
Top-1 recall. As a practical contribution, we used VFCFinder to backfill over
300 missing VFCs in the GitHub Security Advisory (GHSA) database. All of the
VFCs were accepted and merged into the GHSA database. In addition to
demonstrating a practical pairing of security advisories to VFCs, our general
open-source implementation will allow vulnerability database maintainers to
drastically improve data quality, supporting efforts to secure the software
supply chain.
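The ranking task VFCFinder addresses can be pictured with a much simpler lexical baseline (this sketch and its toy data are ours, not the paper's NL-PL model): score each candidate commit message against the advisory text by cosine similarity over word counts and return the top-ranked commits.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two texts as bags of words."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_k_commits(advisory, commits, k=5):
    """Rank candidate commit messages by lexical similarity to the
    advisory text; a crude stand-in for a learned NL-PL ranker."""
    return sorted(commits, key=lambda c: cosine(advisory, c), reverse=True)[:k]

advisory = "heap buffer overflow in png decoder allows remote code execution"
commits = [
    "fix heap buffer overflow in png decoder",
    "update changelog for release",
    "refactor build scripts",
]
ranked = top_k_commits(advisory, commits)
```

A learned NL-PL model goes further by matching advisory prose against code changes themselves, not just commit messages, which is what makes cross-language generalization possible.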