Influence Maximization Meets Efficiency and Effectiveness: A Hop-Based Approach
Influence Maximization is an extensively studied problem that aims to select
a set of initial seed nodes in Online Social Networks (OSNs) so as to spread
influence as widely as possible. However, it remains an open challenge to
design fast and accurate algorithms that find solutions in large-scale OSNs.
Prior Monte-Carlo-simulation-based methods are slow and not scalable, while
other heuristic algorithms lack theoretical guarantees and have been shown to
produce poor solutions in many cases. In this paper, we propose hop-based
algorithms that easily scale to millions of nodes and billions of edges.
Unlike previous heuristics, our hop-based approaches provide certain
theoretical guarantees. Experimental evaluations with real OSN datasets
demonstrate the efficiency and effectiveness of our algorithms.
Comment: Extended version of the conference paper at ASONAM 2017, 11 pages
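The hop-based idea can be illustrated with a minimal sketch: estimate a seed set's spread by counting the seeds plus their expected one-hop activations under the independent cascade model, and select seeds greedily by marginal gain. This is an invented illustration, not the paper's algorithm; the graph, probabilities, and function names are made up.

```python
def one_hop_spread(graph, probs, seeds):
    """Estimate spread as |seeds| plus expected one-hop activations."""
    covered = set(seeds)
    total = float(len(seeds))
    for s in seeds:
        for v in graph.get(s, []):
            if v not in covered:
                total += probs[(s, v)]
    return total

def greedy_seeds(graph, probs, k):
    """Greedily add the node with the largest marginal one-hop gain."""
    seeds = []
    nodes = set(graph) | {v for nbrs in graph.values() for v in nbrs}
    for _ in range(k):
        base = one_hop_spread(graph, probs, seeds)
        best, best_gain = None, -1.0
        for u in nodes - set(seeds):
            gain = one_hop_spread(graph, probs, seeds + [u]) - base
            if gain > best_gain:
                best, best_gain = u, gain
        seeds.append(best)
    return seeds

# Toy network: a -> b, a -> c, b -> d, with edge activation probabilities.
graph = {"a": ["b", "c"], "b": ["d"]}
probs = {("a", "b"): 0.5, ("a", "c"): 0.5, ("b", "d"): 0.4}
print(greedy_seeds(graph, probs, 1))  # -> ['a']
```

Higher-hop variants refine the estimate by also summing expected activations along two-hop paths, trading a little extra computation for accuracy.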
Towards Profit Maximization for Online Social Network Providers
Online Social Networks (OSNs) attract billions of users to share information
and communicate, and viral marketing has emerged as a new way to promote
product sales through them. An OSN provider is often hired by an advertiser to
conduct viral marketing campaigns. The OSN provider generates revenue from the
commission paid by the advertiser, which is determined by the spread of its
product information. Meanwhile, to propagate influence, the activities
performed by users, such as viewing video ads, normally incur a diffusion cost
to the OSN provider. In this paper, we aim to find a seed set that optimizes a
new profit metric combining the benefit of influence spread with the cost of
influence propagation for the OSN provider. Under many diffusion models, our
profit metric is the difference between two submodular functions, which is
challenging to optimize as it is neither submodular nor monotone. We design a
general two-phase framework to select seeds for profit maximization and develop
several bounds to measure the quality of the seed set constructed. Experimental
results with real OSN datasets show that our approach achieves high
approximation guarantees and significantly outperforms the baseline algorithms,
including state-of-the-art influence maximization algorithms.
Comment: INFOCOM 2018 (Full version), 12 pages
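Why non-monotonicity changes the picture can be shown with a hypothetical sketch, not the paper's two-phase framework: when profit is spread benefit minus a per-seed cost, a greedy routine must stop as soon as no candidate yields positive marginal profit. The audience sets and prices below are made up.

```python
def profit(benefit, cost, spread_fn, seeds):
    """Benefit per influenced user minus cost per seeded user."""
    return benefit * spread_fn(seeds) - cost * len(seeds)

def greedy_profit(nodes, benefit, cost, spread_fn):
    seeds = []
    while True:
        base = profit(benefit, cost, spread_fn, seeds)
        best, best_gain = None, 0.0
        for u in nodes:
            if u in seeds:
                continue
            gain = profit(benefit, cost, spread_fn, seeds + [u]) - base
            if gain > best_gain:
                best, best_gain = u, gain
        if best is None:  # no candidate improves profit: stop early
            return seeds
        seeds.append(best)

# Toy "spread": each seed reaches a fixed, possibly overlapping audience.
audiences = {"a": {"a", "x", "y"}, "b": {"b", "x"}, "c": {"c"}}

def spread(seeds):
    reached = set()
    for s in seeds:
        reached |= audiences[s]
    return len(reached)

print(greedy_profit(list(audiences), 1.0, 1.5, spread))  # -> ['a']
```

Here seeding "b" or "c" never pays for itself, so the returned set is smaller than what a pure influence maximizer would choose.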
Parallel and distributed algorithms
We introduce the papers submitted to the special issue of Concurrency and Computation: Practice and Experience on parallel and distributed algorithms.
Predicting the Local Response of Metastatic Brain Tumor to Gamma Knife Radiosurgery by Radiomics With a Machine Learning Method
Purpose: The current study proposed a model to predict the response of brain metastases (BMs) treated by Gamma Knife radiosurgery (GKRS) using a machine learning (ML) method with radiomics features. The model can serve as a decision-support tool for clinicians seeking the most desirable treatment outcome.
Methods and Materials: Using MR image data acquired with a FLASH (3D fast low-angle shot) scanning protocol with gadolinium (Gd) contrast-enhanced T1-weighting, the local response (LR) of 157 metastatic brain tumors was categorized into two groups (Group I: responder; Group II: non-responder). We performed a radiomics analysis of those tumors, resulting in more than 700 features. To build a machine learning model, we first used least absolute shrinkage and selection operator (LASSO) regression to reduce the radiomics features to the minimum number useful for prediction. A prediction model was then constructed using a neural network (NN) classifier with 10 hidden layers and rectified linear unit activation, and the trained model was evaluated with five-fold cross-validation. For the final evaluation, the NN model was applied to a set of data not used for model creation. The accuracy, sensitivity, and area under the receiver operating characteristic curve (AUC) of the LR prediction model were analyzed. The performance of the ML model was compared with a visual evaluation method, in which the LR of tumors was predicted by examining the enhancement pattern of the tumor on MR images.
Results: LASSO analysis of the training data identified seven radiomics features useful for the classification. The accuracy and sensitivity of the visual evaluation method were 44% and 54%, respectively, whereas those of the proposed NN model were 78% and 87%, with an AUC of 0.87.
Conclusions: The proposed NN model using the radiomics features can help physicians gain a more realistic expectation of the treatment outcome than the traditional method. Portions of the current study were presented as an e-poster at the 19th Leksell Gamma Knife Society Meeting, Dubai, UAE, March 4–8, 2018, and as a short oral talk at the 2019 ASTRO Annual Meeting, Chicago, IL, September 15–18, 2019.
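The LASSO-then-NN pipeline described above can be sketched with scikit-learn on synthetic data. Everything below is an invented stand-in: the feature matrix, the alpha value, and the layer sizes are not the study's settings, only the two-step structure (sparse feature selection, then a ReLU network on the surviving features) mirrors the method.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic stand-in for radiomics features: 100 tumors x 20 features,
# with signal only in the first two columns.
X = rng.normal(size=(100, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Step 1: LASSO drives uninformative coefficients to exactly zero,
# leaving a small feature subset.
selected = np.flatnonzero(Lasso(alpha=0.1).fit(X, y).coef_)

# Step 2: a ReLU neural network trained on the surviving features only
# (the study used 10 hidden layers; the widths here are arbitrary).
model = Pipeline([
    ("scale", StandardScaler()),
    ("nn", MLPClassifier(hidden_layer_sizes=(16,) * 10, activation="relu",
                         max_iter=2000, random_state=0)),
]).fit(X[:, selected], y)
print("kept features:", list(selected))
```

Fitting the selector on the training split only, as the study does, avoids leaking test information into the feature subset.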
Efficient Approximation Algorithms for Adaptive Seed Minimization
As a dual problem of influence maximization, the seed minimization problem
asks for the minimum number of seed nodes needed to influence a required
number of users in a given social network. Existing algorithms for seed
minimization mostly consider the non-adaptive setting, where all seed nodes are
selected in one batch without observing how they may influence other users. In
this paper, we study seed minimization in the adaptive setting, where the seed
nodes are selected in several batches, such that the choice of a batch may
exploit information about the actual influence of the previous batches. We
propose a novel algorithm, ASTI, which addresses the adaptive seed
minimization problem within a bounded expected running time and offers an
approximation guarantee in expectation, where both bounds are expressed in
terms of the targeted number of influenced nodes, the size of each seed node
batch, and a user-specified parameter. To the best of our
knowledge, ASTI is the first algorithm that provides such an approximation
guarantee without incurring prohibitive computation overhead. With extensive
experiments on a variety of datasets, we demonstrate the effectiveness and
efficiency of ASTI over competing methods.
Comment: A short version of the paper appeared in the 2019 International
Conference on Management of Data (SIGMOD '19), June 30–July 5, 2019,
Amsterdam, Netherlands. ACM, New York, NY, USA, 18 pages
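The adaptive setting can be illustrated with a minimal batch-by-batch loop: commit a batch of seeds, observe the realized cascade, then choose the next batch excluding users already influenced. This is an invented degree-based heuristic for illustration, not ASTI, and the toy graph and probabilities are made up.

```python
import random

def simulate_cascade(graph, probs, seeds, rng):
    """One live-edge realization of the independent cascade model."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        u = frontier.pop()
        for v in graph.get(u, []):
            if v not in active and rng.random() < probs[(u, v)]:
                active.add(v)
                frontier.append(v)
    return active

def adaptive_minimize(graph, probs, nodes, eta, b, rng):
    """Add batches of b seeds until at least eta users are influenced."""
    influenced, seeds = set(), []
    while len(influenced) < eta:
        # Pick the b nodes with the most not-yet-influenced out-neighbors.
        candidates = sorted(
            (n for n in nodes if n not in influenced),
            key=lambda n: -len(set(graph.get(n, [])) - influenced),
        )
        batch = candidates[:b]
        seeds.extend(batch)
        influenced |= simulate_cascade(graph, probs, batch, rng)  # observe
    return seeds, influenced

# Toy graph with deterministic edges, so the observed cascade is exact.
graph = {"a": ["b", "c"], "c": ["d"], "e": ["d"]}
probs = {("a", "b"): 1.0, ("a", "c"): 1.0, ("c", "d"): 1.0, ("e", "d"): 1.0}
seeds, influenced = adaptive_minimize(graph, probs, list("abcde"), 4, 1,
                                      random.Random(0))
print(sorted(influenced))  # -> ['a', 'b', 'c', 'd']
```

The observation step is what the non-adaptive setting lacks: because the first batch's cascade is seen before the next choice, no budget is wasted re-seeding users who were already reached.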
Interactivity-Constrained Server Provisioning in Large-Scale Distributed Virtual Environments
Maintaining interactivity is one of the key challenges in distributed virtual environments (DVEs). In this paper, we consider a new problem, termed the interactivity-constrained server provisioning problem, whose goal is to minimize the number of distributed servers needed to achieve a prespecified level of interactivity. We identify and formulate two variants of this new problem and show that they are both NP-hard via reductions from the set covering problem. We then propose several computationally efficient approximation algorithms for solving the problem. The main algorithms exploit dependencies among distributed servers to make provisioning decisions. We conduct extensive experiments to evaluate the performance of the proposed algorithms. Specifically, we use both static Internet latency data, available from prior measurements and topology generators, and recent dynamic latency data collected via our own large-scale deployment of a DVE performance monitoring system over PlanetLab. The results show that the newly proposed algorithms, which take into account inter-server dependencies, significantly outperform the well-established set covering algorithm for both problem variants.
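The set-covering baseline mentioned above can be sketched with the classic greedy rule: repeatedly pick the server site covering the most still-uncovered clients, which yields the standard logarithmic approximation factor. The server names, latencies, and 100 ms interactivity threshold below are invented.

```python
def greedy_set_cover(clients, coverage):
    """Pick server sites until all clients are covered, greedily by new coverage."""
    uncovered, chosen = set(clients), []
    while uncovered:
        site = max(coverage, key=lambda s: len(coverage[s] & uncovered))
        if not coverage[site] & uncovered:
            raise ValueError("some clients cannot be covered")
        chosen.append(site)
        uncovered -= coverage[site]
    return chosen

# A server "covers" a client if their latency is under the 100 ms threshold.
latency_ms = {
    ("s1", "c1"): 40, ("s1", "c2"): 60, ("s1", "c3"): 150,
    ("s2", "c3"): 70, ("s2", "c4"): 80,
    ("s3", "c4"): 90,
}
coverage = {}
for (server, client), ms in latency_ms.items():
    if ms < 100:
        coverage.setdefault(server, set()).add(client)

clients = {"c1", "c2", "c3", "c4"}
print(greedy_set_cover(clients, coverage))  # -> ['s1', 's2']
```

The paper's contribution is to beat this baseline by exploiting inter-server dependencies, which the plain greedy rule above ignores.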