Analysis and evaluation of SafeDroid v2.0, a framework for detecting malicious Android applications
Android smartphones have become a vital component of the daily routine of millions of people, running a plethora of applications available in the official and alternative marketplaces. Although there are many security mechanisms to scan and filter malicious applications, malware is still able to reach the devices of many end-users. In this paper, we introduce the SafeDroid v2.0 framework, a flexible, robust, and versatile open-source solution for statically analysing Android applications, based on machine learning techniques. The main goal of our work, besides the automated production of fully sufficient prediction and classification models in terms of maximum accuracy scores and minimum negative errors, is to offer an out-of-the-box framework that Android security researchers can employ to experiment efficiently in search of effective solutions: SafeDroid v2.0 makes it possible to test many different combinations of machine learning classifiers, with a high degree of freedom and flexibility in the choice of features to consider, such as dataset balance and dataset selection. The framework also provides a server for generating experiment reports, and an Android application for verifying the produced models in real-life scenarios. An extensive campaign of experiments is also presented to show how competitive solutions can be found efficiently: the results of our experiments confirm that SafeDroid v2.0 can achieve very good performance, even with highly unbalanced dataset inputs, and always with very limited overhead.
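The classifier-comparison workflow the abstract describes can be sketched as follows. This is a minimal illustration, not SafeDroid's actual code: the synthetic, unbalanced dataset stands in for real statically extracted features (e.g., requested permissions or API calls), and the classifier choices and function names are assumptions.

```python
# Sketch of the classifier-comparison idea behind SafeDroid v2.0 (not the
# framework's actual code): train several candidate classifiers on binary-ish
# feature vectors and compare their held-out accuracy. The synthetic,
# unbalanced dataset below stands in for real extracted static features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def evaluate_classifiers(X, y, seed=0):
    """Fit each candidate classifier and return its held-out accuracy."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=seed)
    candidates = {
        "random_forest": RandomForestClassifier(n_estimators=100, random_state=seed),
        "logistic_regression": LogisticRegression(max_iter=1000),
        "knn": KNeighborsClassifier(n_neighbors=5),
    }
    return {name: accuracy_score(y_te, clf.fit(X_tr, y_tr).predict(X_te))
            for name, clf in candidates.items()}

# Unbalanced toy dataset: roughly 90% "benign" and 10% "malicious" samples.
X, y = make_classification(n_samples=600, n_features=40, weights=[0.9, 0.1],
                           random_state=0)
scores = evaluate_classifiers(X, y)
```

In a real evaluation, accuracy alone is misleading on unbalanced data, which is why the framework also tracks negative errors; metrics such as F1 or false-negative rate would be swapped in here.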
Twitlang(er): interactions modeling language (and interpreter) for Twitter
Online social networks are widespread means of enacting interactive collaboration among people by, e.g., planning events, diffusing information, and enabling discussions. Twitter provides one of the most illustrative examples of how people can effectively interact without resorting to traditional communication media. For example, the platform has acted as a unique medium for reliable communication in emergencies and for organizing cooperative mass actions. This use of Twitter in a cooperative, possibly critical, setting calls for a more precise awareness of the dynamics regulating message spreading. To this aim, we designed Twitlang, a formal language to model interactions among Twitter accounts. The operational semantics associated with the language allows users to clearly and precisely determine the effects of actions performed by Twitter accounts, such as posting, retweeting, replying to, or deleting tweets. The language has been implemented in the form of a Maude interpreter, Twitlanger, which takes a language term as input and, automatically or interactively, explores the computations arising from the term. By relying on this interpreter, automatic verification of communication properties of Twitter accounts can be carried out via the analysis tools provided by the Maude framework.
Better Safe Than Sorry: An Adversarial Approach to Improve Social Bot Detection
The arms race between spambots and spambot-detectors is made of several cycles
(or generations): a new wave of spambots is created (and new spam is spread),
new spambot filters are derived, and old spambots mutate (or evolve) into new
species. Recently, with the diffusion of the adversarial learning approach, a
new practice is emerging: purposely manipulating target samples in order to
build stronger detection models. Here, we manipulate generations of Twitter
social bots to obtain - and study - their possible future evolutions, with the
aim of eventually deriving more effective detection techniques. In detail, we
propose and experiment with a novel genetic algorithm for the synthesis of
online accounts. The algorithm allows the creation of synthetic evolved
versions of current state-of-the-art social bots. Results demonstrate that
synthetic bots effectively evade current detection techniques. However, they
provide all the elements needed to improve such techniques, making possible a
proactive approach to the design of social bot detection systems.
Comment: This is the pre-final version of a paper accepted @ 11th ACM
Conference on Web Science, June 30-July 3, 2019, Boston, U
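The general shape of such a genetic algorithm can be sketched as below. This is a generic, minimal illustration, not the paper's actual algorithm: the three-letter action alphabet and the fitness function (which merely rewards non-repetitive behavior) are stand-in assumptions.

```python
# Minimal genetic-algorithm sketch in the spirit of the paper (not its actual
# algorithm): each candidate bot is a string over a toy action alphabet
# (A = tweet, C = reply, T = retweet), and the fitness function is a stand-in
# that rewards behavior mixing actions instead of repeating one.
import random

ACTIONS = "ACT"

def fitness(genome: str) -> float:
    """Toy objective: fraction of positions differing from the previous one
    (repetitive, spam-like behavior scores low)."""
    changes = sum(a != b for a, b in zip(genome, genome[1:]))
    return changes / (len(genome) - 1)

def evolve(pop_size=30, length=40, generations=25, seed=1):
    rng = random.Random(seed)
    pop = ["".join(rng.choice(ACTIONS) for _ in range(length))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 3]           # elitism: keep the best third
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)
            cut = rng.randrange(1, length)      # one-point crossover
            child = list(p1[:cut] + p2[cut:])
            i = rng.randrange(length)           # point mutation
            child[i] = rng.choice(ACTIONS)
            children.append("".join(child))
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
```

In the adversarial setting of the paper, the fitness function would instead be the (inverted) score of a target bot detector, so that evolution drives the synthetic accounts toward evasion.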
DDoS-Capable IoT Malwares: comparative analysis and Mirai Investigation
The Internet of Things (IoT) revolution has not only carried the astonishing promise to interconnect a whole generation of traditionally “dumb” devices, but also brought to the Internet the menace of billions of badly protected and easily hackable objects. Not surprisingly, this sudden flood of fresh and insecure devices has fueled older threats, such as Distributed Denial of Service (DDoS) attacks. In this paper, we first propose an updated and comprehensive taxonomy of DDoS attacks, together with a number of examples of how this classification maps to real-world attacks. Then, we outline the current situation of DDoS-enabled malwares in IoT networks, highlighting how recent data support our concerns about the growing popularity of these malwares. Finally, we give a detailed analysis of the general framework and the operating principles of Mirai, the most disruptive DDoS-capable IoT malware seen so far.
Evaluation of Professional Cloud Password Management Tools
Strong passwords have been preached for decades. However, many regular users of IT systems resort to simple and repetitive passwords, especially nowadays in the “service era”. To help alleviate this problem, a new class of software has grown popular: password managers. Since their introduction, password managers have slowly been migrating into the cloud. In this paper we review and analyze current professional password managers in the cloud. We discuss several functional and non-functional requirements to evaluate existing solutions, and we sum up their strengths and weaknesses. The main conclusion is that a silver-bullet solution is not available yet and that this type of tool still deserves a significant research effort from the privacy and security community.
A New Model for Testing IPv6 Fragment Handling
Since the origins of the Internet, various vulnerabilities exploiting the IP
fragmentation process have plagued the IPv4 protocol, many leading to a wide
range of attacks. IPv6 modified the handling of fragmentation and introduced a
specific extension header, without solving the related problems, as proved by
extensive literature. One of the primary sources of problems has been
overlapping fragments, which result in unexpected or malicious packets when
reassembled. To overcome the problems related to fragmentation, the authors of
RFC 5722 decided that IPv6 hosts MUST silently drop overlapping fragments.
Since then, several studies have proposed methodologies to check whether IPv6
hosts accept overlapping fragments and are still vulnerable to related attacks.
However, some of the above methodologies have not been proven complete or
sufficiently accurate. In this paper we propose a novel model to check IPv6
fragmentation handling, specifically suited to the reassembly strategies of
modern operating systems. Previous models, indeed, considered the OS reassembly
policy as byte-based. However, nowadays, reassembly policies are
fragment-based, making previous models inadequate. Our model leverages the
commutative property of the checksum, simplifying the whole assessment process.
Starting from this new model, we were able to better evaluate the RFC 5722 and
RFC 9099 compliance of modern operating systems with respect to fragmentation
handling. Our results suggest that IPv6 fragmentation can still be considered a
threat and that more effort is needed to solve the related security issues.
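The commutative property the model leverages can be illustrated with the Internet checksum's ones' complement sum: partial sums over 16-bit-aligned fragments combine, in any order, to the sum of the whole payload (values agree modulo 0xFFFF, the modulus of ones' complement arithmetic). A small sketch of the property itself, not of the paper's testing tool:

```python
# Illustration of the checksum property exploited by the model: the Internet
# checksum's ones' complement sum is commutative and associative, so summing
# 16-bit-aligned fragments in any order yields the sum of the full payload
# (modulo 0xFFFF).
import random
from functools import reduce

def oc_sum(data: bytes) -> int:
    """Ones' complement sum of 16-bit big-endian words with end-around carry.
    Assumes even-length input (a 16-bit-aligned fragment)."""
    assert len(data) % 2 == 0
    s = 0
    for i in range(0, len(data), 2):
        s += (data[i] << 8) | data[i + 1]
        s = (s & 0xFFFF) + (s >> 16)      # fold the carry back in
    return s

def oc_add(a: int, b: int) -> int:
    """Combine two partial sums with end-around carry."""
    s = a + b
    return (s & 0xFFFF) + (s >> 16)

rng = random.Random(42)
payload = bytes(rng.randrange(256) for _ in range(512))
fragments = [payload[i:i + 64] for i in range(0, len(payload), 64)]
rng.shuffle(fragments)                    # fragment order does not matter
combined = reduce(oc_add, (oc_sum(f) for f in fragments))
```

This is why a tester can reason about the checksum of a reassembled packet from the checksums of the individual fragments, regardless of the order in which the target host processes them.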
Scalable and automated Evaluation of Blue Team cyber posture in Cyber Ranges
Cyber ranges are virtual training ranges that have emerged as indispensable
environments for conducting secure exercises and simulating real or
hypothetical scenarios. These complex computational infrastructures enable the
simulation of attacks, facilitating the evaluation of defense tools and
methodologies and the development of novel countermeasures against threats. One
of the main challenges of cyber range scalability is exercise evaluation, which
often requires the manual intervention of human operators, the White Team. This
paper proposes a novel approach that uses Blue and Red Team reports and
well-known databases to automate the evaluation and assessment of exercise
outcomes, overcoming the limitations of existing assessment models. Our
proposal encompasses the evaluation of various aspects and metrics, explicitly
emphasizing Blue Teams' actions and strategies, and allows the automated
generation of their cyber posture.
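One way such report-driven scoring can work is sketched below. This is an illustrative assumption, not the paper's actual model: the Red Team report is reduced to a set of executed technique identifiers, the Blue Team report to the set it detected, and one posture metric is simply detection coverage. The IDs are placeholders.

```python
# Sketch of report-driven scoring in the spirit of the proposal (not the
# paper's actual model): the Red Team report lists the technique IDs it
# executed, the Blue Team report lists the techniques it detected, and one
# posture metric is the detection coverage. IDs below are illustrative.
def blue_team_coverage(red_report: set[str], blue_report: set[str]) -> float:
    """Fraction of executed techniques that the Blue Team detected."""
    if not red_report:
        return 1.0                      # nothing was executed, nothing missed
    return len(red_report & blue_report) / len(red_report)

red = {"T1059", "T1566", "T1021", "T1486"}      # techniques executed
blue = {"T1059", "T1566", "T1078"}              # techniques detected
score = blue_team_coverage(red, blue)           # 2 detected out of 4 -> 0.5
```

A full implementation would weight techniques by severity (e.g., via a public vulnerability or technique database) and add metrics such as detection latency, but the set-intersection core stays the same.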
The paradigm-shift of social spambots: Evidence, theories, and tools for the arms race
Recent studies on social media spam and automation provide anecdotal
evidence of the rise of a new generation of spambots, so-called social
spambots. Here, for the first time, we extensively study this novel phenomenon
on Twitter and we provide quantitative evidence that a paradigm shift exists in
spambot design. First, we measure current Twitter's capabilities of detecting
the new social spambots. Later, we assess the human performance in
discriminating between genuine accounts, social spambots, and traditional
spambots. Then, we benchmark several state-of-the-art techniques proposed in
the academic literature. Results show that neither Twitter, nor humans, nor
cutting-edge applications are currently capable of accurately detecting the new
social spambots. Our results call for new approaches capable of turning the
tide in the fight against this rising phenomenon. We conclude by reviewing the
latest literature on spambot detection and we highlight an emerging common
research trend based on the analysis of collective behaviors. Insights derived
from both our extensive experimental campaign and survey shed light on the most
promising directions of research and lay the foundations for the arms race
against the novel social spambots. Finally, to foster research on this novel
phenomenon, we make publicly available to the scientific community all the
datasets used in this study.
Comment: To appear in Proc. 26th WWW, 2017, Companion Volume (Web Science
Track, Perth, Australia, 3-7 April, 2017)
DNA-inspired online behavioral modeling and its application to spambot detection
We propose a strikingly novel, simple, and effective approach to model online
user behavior: we extract and analyze digital DNA sequences from users' online
actions, and we use Twitter as a benchmark to test our proposal. We obtain an
incisive and compact DNA-inspired characterization of user actions. Then, we
apply standard DNA analysis techniques to discriminate between genuine and
spambot accounts on Twitter. An experimental campaign supports our proposal,
showing its effectiveness and viability. To the best of our knowledge, we are
the first to identify and adapt DNA-inspired techniques for online user
behavioral modeling. While Twitter spambot detection is a specific use case on
a specific social medium, our proposed methodology is platform- and
technology-agnostic, hence paving the way for diverse behavioral
characterization tasks.
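The encode-then-analyze idea can be sketched as follows. This is a minimal illustration under assumptions: the three-letter action alphabet is illustrative, and the longest common substring stands in for "standard DNA analysis techniques" in general (sequences that many accounts share at length hint at automated, coordinated behavior).

```python
# Minimal sketch of the digital-DNA idea: encode each timeline action as a
# character (here A = tweet, C = reply, T = retweet, an illustrative alphabet)
# and compare accounts with a standard sequence-analysis primitive, the
# longest common substring.
def encode(actions: list[str]) -> str:
    alphabet = {"tweet": "A", "reply": "C", "retweet": "T"}
    return "".join(alphabet[a] for a in actions)

def lcs_len(s: str, t: str) -> int:
    """Length of the longest common substring (dynamic programming)."""
    best = 0
    prev = [0] * (len(t) + 1)
    for a in s:
        cur = [0] * (len(t) + 1)
        for j, b in enumerate(t, 1):
            if a == b:
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best

bot1 = encode(["tweet", "retweet", "retweet"] * 5)         # scripted, repetitive
bot2 = encode(["reply"] + ["tweet", "retweet", "retweet"] * 5)
human = encode(["tweet", "reply", "tweet", "retweet", "reply", "tweet"])
```

The two scripted timelines share a long substring, while the varied one shares only short fragments with them; thresholding that shared length across a group of accounts is one simple discrimination rule.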
Domain-specific queries and Web search personalization: some investigations
Major search engines deploy personalized Web results to enhance users'
experience, by showing them data supposed to be relevant to their interests.
Even if this process may bring benefits to users while browsing, it also raises
concerns about the selection of the search results. In particular, users may be
unknowingly trapped by search engines in protective information bubbles, called
"filter bubbles", which can have the undesired effect of separating users from
information that does not fit their preferences. This paper builds on early
results on the quantification of personalization in Google search query
results. Inspired by previous works, we have carried out experiments consisting
of search queries performed by a battery of Google accounts with differently
prepared profiles. By matching query results, we quantify the level of
personalization, according to the topics of the queries and the profiles of the
accounts. This work reports initial results and is a first step toward a more
extensive investigation to measure Web search personalization.
Comment: In Proceedings WWV 2015, arXiv:1508.0338
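One simple way to match query results across profiles, in the spirit of earlier personalization-quantification studies, is the Jaccard overlap of the returned URL sets. This is an illustrative metric, not necessarily the one used in the paper, and the URLs are placeholders.

```python
# Sketch of one way to "match query results" and quantify personalization:
# the Jaccard overlap between the result URLs that two differently profiled
# accounts receive for the same query. 1.0 means identical result sets; lower
# values indicate more personalization. URLs are placeholders.
def jaccard(results_a: list[str], results_b: list[str]) -> float:
    a, b = set(results_a), set(results_b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

profile_sports = ["example.org/news", "example.org/match", "example.org/team"]
profile_tech = ["example.org/news", "example.org/gpu", "example.org/kernel"]
overlap = jaccard(profile_sports, profile_tech)   # 1 shared URL of 5 -> 0.2
```

Rank-aware variants (e.g., weighting agreement at top positions more heavily) capture the fact that reordering the same ten links is also a form of personalization.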