Storytelling Security: User-Intention Based Traffic Sanitization
Malicious software (malware) with a decentralized communication infrastructure, such as peer-to-peer botnets, is difficult to detect. In this paper, we describe a traffic-sanitization method for identifying malware-triggered outbound connections from a personal computer. Our solution correlates user activities with the content of outbound traffic. Our key observation is that user-initiated outbound traffic typically has corresponding human inputs, i.e., keystrokes or mouse clicks. Our analysis of the causal relations between user inputs and packet payloads enables the efficient enforcement of inter-packet dependencies at the application level.
We formalize our approach within the framework of a protocol-state machine. We define new application-level traffic-sanitization policies that enforce the inter-packet dependencies. The dependencies are derived from the transitions among protocol states that involve both user actions and network events. We refer to our methodology as storytelling security.
We demonstrate a concrete realization of our methodology in the context of a peer-to-peer file-sharing application and describe its use in blocking the traffic of P2P bots on a host. We implement and evaluate our prototype on the Windows operating system in both online and offline deployment settings. Our experimental evaluation, along with case studies of real-world P2P applications, demonstrates the feasibility of verifying the inter-packet dependencies. Our deep packet inspection incurs overhead on the outbound network flow. Our solution can also be used as an offline collect-and-analyze tool.
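To make the idea concrete, here is a minimal sketch, not the authors' implementation, of user-intention based traffic gating: an outbound packet is allowed only when it either satisfies an inter-packet dependency recorded by the protocol-state machine or can be attributed to a fresh human input. The class name, packet-type strings, and the two-second freshness window are illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation) of user-intention based
# traffic gating. All names, packet types, and the freshness window are
# illustrative assumptions.
import time
from collections import deque

INPUT_WINDOW_SEC = 2.0  # assumed freshness window for keystrokes/mouse clicks

class TrafficSanitizer:
    def __init__(self):
        self.user_inputs = deque()  # timestamps of recent human inputs
        self.expected = set()       # packet types permitted by the protocol state

    def on_user_input(self, ts=None):
        """Record a keystroke or mouse click."""
        self.user_inputs.append(ts if ts is not None else time.time())

    def _has_fresh_input(self, now):
        while self.user_inputs and now - self.user_inputs[0] > INPUT_WINDOW_SEC:
            self.user_inputs.popleft()  # drop stale inputs
        return bool(self.user_inputs)

    def on_outbound_packet(self, pkt_type, now=None):
        """Return True if the packet is user-intended, False to block it."""
        now = now if now is not None else time.time()
        if pkt_type in self.expected:       # inter-packet dependency satisfied
            return True
        if self._has_fresh_input(now):      # user-initiated protocol transition
            self.user_inputs.popleft()      # one input authorizes one request
            if pkt_type == "search_request":
                # a user-triggered search permits the follow-up download request
                self.expected.add("download_request")
            return True
        return False                        # no user intent, no prior dependency

s = TrafficSanitizer()
s.on_user_input()
assert s.on_outbound_packet("search_request")    # user-triggered
assert s.on_outbound_packet("download_request")  # inter-packet dependency
assert not s.on_outbound_packet("bot_beacon")    # blocked: no user intent
```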
Energy-saving Resource Allocation by Exploiting the Context Information
Improving the energy efficiency of wireless systems by exploiting context information has received attention recently as the smartphone market keeps expanding. In this paper, we devise an energy-saving resource allocation policy for multiple base stations (BSs) serving non-real-time traffic by exploiting three levels of context information, where the background traffic is assumed to occupy partial resources. Based on the solution of a total-energy minimization problem with perfect future information, a context-aware BS sleeping, scheduling, and power allocation policy is proposed by estimating the required future information from the three levels of context information. Simulation results show that our policy provides significant gains over policies that do not exploit any context information. Moreover, different levels of context information play different roles in saving energy and reducing transmission outage.

Comment: To be presented at IEEE PIMRC 2015, Hong Kong. This work was supported by the National Natural Science Foundation of China under Grant 61120106002 and the National Basic Research Program of China under Grant 2012CB31600
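To illustrate the trade-off such a policy navigates, the sketch below, with assumed parameter values rather than the paper's model, picks how many slots a single BS stays awake to deliver a non-real-time file by its deadline: staying awake longer lowers the required transmit power via the Shannon rate-power relation but pays static power over more slots.

```python
# Minimal sketch (not the paper's algorithm) of the BS sleeping / power
# allocation trade-off for non-real-time traffic. All constants and the
# even-rate schedule are illustrative assumptions.
W = 10e6          # bandwidth left over by background traffic (Hz), assumed
N0 = 4e-21        # noise power spectral density (W/Hz), assumed
G = 1e-13         # average channel gain (~130 dB path loss), assumed
P_STATIC = 100.0  # BS static power while active (W), assumed
P_SLEEP = 10.0    # BS sleep-mode power (W), assumed
SLOT = 1.0        # slot length (s)

def tx_power(rate_bps):
    """Transmit power for a given rate from the Shannon capacity formula."""
    return N0 * W * (2.0 ** (rate_bps / W) - 1.0) / G

def best_schedule(bits, deadline_slots):
    """Choose how many slots the BS stays awake to deliver `bits` in time."""
    best = (float("inf"), 0)
    for n_active in range(1, deadline_slots + 1):
        rate = bits / (n_active * SLOT)                  # spread load evenly
        energy = (n_active * (P_STATIC + tx_power(rate))
                  + (deadline_slots - n_active) * P_SLEEP) * SLOT
        best = min(best, (energy, n_active))
    return best

energy, n_active = best_schedule(bits=200e6, deadline_slots=10)
print(f"stay awake {n_active}/10 slots, total energy {energy:.0f} J")
```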
Prediction of local particle pollution level based on artificial neural network
Citizens are eager to know the local pollution level so that they can protect themselves from air pollution. Since real-time measurement at every location is very expensive, a statistical model based on an artificial neural network (ANN) is applied in this research. The model estimates the particle pollution level from several influencing factors, including the background pollution level, weather conditions, urban morphology, and local pollution sources. Readings from regulatory monitoring sites are taken as the background level. Field measurements at 20 locations are conducted to provide the targets for the output layer of the ANN model. The average relative error of the predictions compared with the measurements is 9.24% for PM10 and 18.90% for PM2.5.
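A minimal sketch of such a model, with an assumed feature layout and synthetic data standing in for the field measurements, might look as follows; the hidden-layer size and library choice are our illustrative assumptions, not the study's configuration.

```python
# Minimal sketch (assumed architecture, not the study's exact model) of an
# MLP mapping the influencing factors above to local PM levels.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Features: [background PM, wind speed, temperature, street canyon ratio,
#            local source intensity] -- placeholders for the factor groups.
X = rng.random((200, 5))
# Synthetic targets standing in for the field measurements: [PM10, PM2.5].
y = X @ np.array([[0.8, 1.1], [-0.3, -0.2], [0.1, 0.05],
                  [0.4, 0.6], [0.5, 0.7]]) + 0.05 * rng.standard_normal((200, 2))

scaler = StandardScaler().fit(X)
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(scaler.transform(X), y)              # multi-output regression
print(model.predict(scaler.transform(X[:1])))  # predicted [PM10, PM2.5]
```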
Analysis of Crowdsourced Sampling Strategies for HodgeRank with Sparse Random Graphs
Crowdsourcing platforms are now extensively used for conducting subjective
pairwise comparison studies. In this setting, a pairwise comparison dataset is
typically gathered via random sampling, either \emph{with} or \emph{without}
replacement. In this paper, we use tools from random graph theory to analyze
these two random sampling methods for the HodgeRank estimator. Using the
Fiedler value of the graph as a measure of estimator stability
(informativeness), we provide a new estimate of the Fiedler value for these two
random graph models. In the asymptotic limit as the number of vertices tends to
infinity, we prove the validity of the estimate. Based on our findings, for a
small number of items to be compared, we recommend a two-stage sampling
strategy where a greedy sampling method is used initially and random sampling
\emph{without} replacement is used in the second stage. When a large number of
items is to be compared, we recommend random sampling with replacement as this
is computationally inexpensive and trivially parallelizable. Experiments on
synthetic and real-world datasets support our analysis.
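As an illustration of the quantity being analyzed, the following sketch, ours rather than the paper's code, builds the comparison graph's Laplacian for pairs sampled with and without replacement and computes its Fiedler value; the item count and sampling budget are assumptions.

```python
# Minimal sketch (our illustration, not the paper's code): Fiedler value of
# comparison graphs sampled with vs. without replacement.
import numpy as np
from itertools import combinations

def fiedler_value(n_items, edges):
    """Second-smallest eigenvalue of the (weighted) graph Laplacian."""
    L = np.zeros((n_items, n_items))
    for i, j in edges:                 # repeated edges accumulate weight
        L[i, i] += 1.0; L[j, j] += 1.0
        L[i, j] -= 1.0; L[j, i] -= 1.0
    return np.sort(np.linalg.eigvalsh(L))[1]

rng = np.random.default_rng(0)
n, m = 20, 60                          # items and sampled pairs, assumed
pairs = list(combinations(range(n), 2))

with_repl = [pairs[k] for k in rng.integers(len(pairs), size=m)]
without_repl = [pairs[k] for k in rng.choice(len(pairs), size=m, replace=False)]

# Sampling with replacement spends budget on repeated pairs, so the graph
# sampled without replacement is typically better connected (larger value).
print(fiedler_value(n, with_repl), fiedler_value(n, without_repl))
```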
HodgeRank with Information Maximization for Crowdsourced Pairwise Ranking Aggregation
Recently, crowdsourcing has emerged as an effective paradigm for
human-powered large-scale problem solving in various domains. However, the
task requester usually has a limited budget, so it is desirable to have a
policy that allocates the budget wisely to achieve better quality. In this
paper, we study the principle of information maximization for active sampling
strategies in the framework of HodgeRank, an approach based on Hodge
Decomposition of pairwise ranking data with multiple workers. The principle
exhibits two scenarios of active sampling: Fisher information maximization that
leads to unsupervised sampling based on a sequential maximization of graph
algebraic connectivity without considering labels; and Bayesian information
maximization that selects samples with the largest information gain from prior
to posterior, which gives a supervised sampling involving the labels collected.
Experiments show that the proposed methods boost the sampling efficiency as
compared to traditional sampling schemes and are thus valuable to practical
crowdsourcing experiments.

Comment: Accepted by AAAI201
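The unsupervised (Fisher information) scenario reduces to sequentially maximizing the graph's algebraic connectivity, which can be sketched directly; the greedy loop below is our illustration with assumed problem sizes, not the authors' code.

```python
# Minimal sketch (our illustration) of unsupervised active sampling: greedily
# query the pair whose edge most increases algebraic connectivity.
import numpy as np
from itertools import combinations

def fiedler(L):
    """Algebraic connectivity: second-smallest Laplacian eigenvalue."""
    return np.sort(np.linalg.eigvalsh(L))[1]

def with_edge(L, i, j):
    """Return a copy of the Laplacian with edge (i, j) added."""
    L = L.copy()
    L[i, i] += 1.0; L[j, j] += 1.0
    L[i, j] -= 1.0; L[j, i] -= 1.0
    return L

n_items, budget = 10, 15               # assumed problem size
L = np.zeros((n_items, n_items))
queries = []
for _ in range(budget):
    # Greedy step: pick the pair maximizing the resulting Fiedler value.
    i, j = max(combinations(range(n_items), 2),
               key=lambda e: fiedler(with_edge(L, *e)))
    L = with_edge(L, i, j)
    queries.append((i, j))
print(queries)
```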
Sparse Recovery via Differential Inclusions
In this paper, we recover sparse signals from their noisy linear measurements
by solving nonlinear differential inclusions, which is based on the notion of
inverse scale space (ISS) developed in applied mathematics. Our goal here is to
bring this idea to address a challenging problem in statistics, \emph{i.e.}
finding the oracle estimator which is unbiased and sign-consistent using
dynamics. We call our dynamics \emph{Bregman ISS} and \emph{Linearized Bregman
ISS}. A well-known shortcoming of LASSO and any convex regularization
approaches lies in the bias of estimators. However, we show that under proper
conditions, there exists a bias-free and sign-consistent point on the solution
paths of such dynamics, which corresponds to a signal that is the unbiased
estimate of the true signal and whose entries have the same signs as those of
the true signal, \emph{i.e.} the oracle estimator. Therefore, their solution
paths are regularization paths better than the LASSO regularization path, since
the points on the latter path are biased when sign-consistency is reached. We
also show how to efficiently compute their solution paths in both continuous
and discretized settings: the full solution paths can be exactly computed piece
by piece, and a discretization leads to \emph{Linearized Bregman iteration},
which is a simple iterative thresholding rule and easy to parallelize.
Theoretical guarantees such as sign-consistency and minimax optimal $\ell_2$-error
bounds are established in both continuous and discrete settings for specific
points on the paths. Early-stopping rules for identifying these points are
given. The key treatment relies on the development of differential inequalities
for differential inclusions and their discretizations, which extends the
previous results and leads to exponentially fast recovery of sparse signals
before selecting wrong ones.

Comment: In Applied and Computational Harmonic Analysis, 201
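Since the discretization named above, the Linearized Bregman iteration, is a simple iterative thresholding rule, it can be sketched in a few lines. This is the standard textbook form under assumed problem sizes and a conservative step size, not the paper's exact implementation.

```python
# Minimal sketch of the Linearized Bregman iteration: a gradient step on the
# residual followed by soft thresholding. Sizes and parameters are assumptions.
import numpy as np

def shrink(z, lam):
    """Soft thresholding: sign(z) * max(|z| - lam, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def linearized_bregman(A, y, kappa=5.0, n_iter=3000):
    alpha = 1.0 / (kappa * np.linalg.norm(A, 2) ** 2)  # conservative step size
    z = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = kappa * shrink(z, 1.0)        # threshold, then scale
        z += alpha * A.T @ (y - A @ x)    # gradient step on the residual
    return kappa * shrink(z, 1.0)

rng = np.random.default_rng(0)
n, p, s = 50, 200, 5                      # measurements, dimension, sparsity
A = rng.standard_normal((n, p)) / np.sqrt(n)
x_true = np.zeros(p); x_true[:s] = rng.choice([-2.0, 2.0], size=s)
y = A @ x_true + 0.01 * rng.standard_normal(n)

x_hat = linearized_bregman(A, y)
print(np.linalg.norm(x_hat - x_true))     # small recovery error expected
```

In practice the path would be monitored and stopped early, per the paper's early-stopping rules, rather than run for a fixed number of iterations.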