WARDOG: Awareness detection watchdog for Botnet infection on the host device
Botnets nowadays constitute one of the most dangerous security threats worldwide. High volumes of infected
machines are controlled by a malicious entity and perform coordinated cyber-attacks. The problem will become even worse in
the era of the Internet of Things (IoT), as the number of insecure devices will increase exponentially. This paper
presents WARDOG, an awareness and digital forensic system that informs the end-user of a botnet infection, exposes the
botnet infrastructure, and captures verifiable data that can be utilized in a court of law. The responsible authority gathers all
information and automatically generates unified documentation for the case. The document contains undisputed forensic
information, tracking all involved parties and their roles in the attack. The deployed security mechanisms and the overall
administration setting ensure non-repudiation of performed actions and enforce accountability. The provided properties are
verified through theoretical analysis. In a simulated environment, the effectiveness of the proposed solution in mitigating botnet
operations is also tested against real attack strategies that have been captured by the FORTHcert honeypots, overcoming
state-of-the-art solutions. Moreover, a preliminary version is implemented on real computers and IoT devices, highlighting the
low computational and communication overheads of WARDOG in the field.
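The non-repudiation and accountability properties described above hinge on evidence records that no involved party can silently alter after the fact. One generic way to obtain this, sketched below under the assumption of a hash-chained log (an illustration, not WARDOG's actual mechanism; the class and record layout are invented for this example):

```python
import hashlib
import json

# Illustrative sketch: a hash-chained evidence log. Each record commits to
# the previous record's digest, so any later tampering breaks the chain.
class EvidenceLog:
    def __init__(self):
        self.records = []

    def append(self, party: str, action: str) -> str:
        prev = self.records[-1]["digest"] if self.records else "0" * 64
        payload = json.dumps({"party": party, "action": action, "prev": prev},
                             sort_keys=True)
        digest = hashlib.sha256(payload.encode()).hexdigest()
        self.records.append({"party": party, "action": action,
                             "prev": prev, "digest": digest})
        return digest

    def verify(self) -> bool:
        # Recompute every digest from the chain start; any edit is detected.
        prev = "0" * 64
        for rec in self.records:
            payload = json.dumps({"party": rec["party"], "action": rec["action"],
                                  "prev": prev}, sort_keys=True)
            if rec["prev"] != prev or \
               hashlib.sha256(payload.encode()).hexdigest() != rec["digest"]:
                return False
            prev = rec["digest"]
        return True
```

In a real deployment each digest would additionally be signed by the recording party, which is what turns tamper evidence into non-repudiation.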
Solving key design issues for massively multiplayer online games on peer-to-peer architectures
Massively Multiplayer Online Games (MMOGs) are increasing in both popularity and
scale on the Internet and are predominantly implemented by Client/Server architectures.
While such a classical approach to distributed system design offers many benefits, it suffers
from significant technical and commercial drawbacks, primarily reliability and scalability
costs. This realisation has sparked recent research interest in adapting MMOGs
to Peer-to-Peer (P2P) architectures.
This thesis identifies six key design issues to be addressed by P2P MMOGs, namely
interest management, event dissemination, task sharing, state persistency, cheating mitigation,
and incentive mechanisms. Design alternatives for each issue are systematically
compared, and their interrelationships discussed. How well representative P2P MMOG
architectures fulfil the design criteria is also evaluated. It is argued that although P2P
MMOG architectures are developing rapidly, their support for task sharing and incentive
mechanisms still needs to be improved.
The design of a novel framework for P2P MMOGs, Mediator, is presented. It employs a
self-organising super-peer network over a P2P overlay infrastructure, and addresses the
six design issues in an integrated system. The Mediator framework is extensible, as it
supports flexible policy plug-ins and can accommodate the introduction of new super-peer
roles. Key components of this framework have been implemented and evaluated
with a simulated P2P MMOG.
As the Mediator framework relies on super-peers for computational and administrative
tasks, membership management is crucial, e.g. to allow the system to recover from
super-peer failures. A new technology for this, namely Membership-Aware Multicast
with Bushiness Optimisation (MAMBO), has been designed, implemented and evaluated.
It reuses the communication structure of a tree-based application-level multicast
to track group membership efficiently. Evaluation of a demonstration application shows
that MAMBO is able to quickly detect and handle peers joining and leaving. Compared
to a conventional supervision architecture, MAMBO is more scalable, yet incurs
lower communication overhead. Besides MMOGs, MAMBO is suitable for other P2P
applications, such as collaborative computing and multimedia streaming.
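The core idea of reusing a multicast tree's communication structure for membership tracking can be sketched minimally; the class below is an illustrative assumption in the spirit of MAMBO, not the published design:

```python
# Illustrative sketch: membership tracked along existing multicast-tree
# edges. Each node can enumerate its subtree, so a join or leave becomes
# visible at the root without any extra supervision links.
class TreeNode:
    def __init__(self, name, parent=None):
        self.name, self.parent, self.children = name, parent, []
        if parent is not None:
            parent.children.append(self)  # joining reuses the tree edge

    def subtree_members(self):
        members = {self.name}
        for child in self.children:
            members |= child.subtree_members()
        return members

    def handle_leave(self):
        # A departed node is detected by its parent (e.g. via missed
        # heartbeats on the multicast link) and pruned from the tree.
        if self.parent is not None:
            self.parent.children.remove(self)
```

A real system would also reattach orphaned subtrees, which is where the bushiness optimisation of the tree shape matters.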
This thesis also presents the design, implementation and evaluation of a novel task
mapping infrastructure for heterogeneous P2P environments, Deadline-Driven Auctions
(DDA). DDA is primarily designed to support NPC host allocation in P2P MMOGs, and
specifically in the Mediator framework. However, it can also support the sharing of computational
and interactive tasks with various deadlines in general P2P applications. Experimental
and analytical results demonstrate that DDA efficiently allocates computing
resources for large numbers of real-time NPC tasks in a simulated P2P MMOG with approximately
1000 players. Furthermore, DDA supports gaming interactivity by keeping
the communication latency among NPC hosts and ordinary players low. It also supports
flexible matchmaking policies, and can motivate application participants to contribute
resources to the system.
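The auction logic behind deadline-driven allocation can be sketched in a few lines; the bid format and winner-selection rule below are illustrative assumptions, not the published DDA protocol:

```python
# Illustrative sketch of a deadline-driven auction: each candidate host
# bids its estimated completion time, and the task is awarded to the
# lowest-latency bidder among those that can meet the deadline. This
# keeps NPC tasks both schedulable and close (in latency) to players.
def allocate(task_deadline_ms, bids):
    """bids: list of (host, est_completion_ms, latency_ms) tuples."""
    feasible = [b for b in bids if b[1] <= task_deadline_ms]
    if not feasible:
        return None  # no host can meet the deadline; task must be refused
    return min(feasible, key=lambda b: b[2])[0]
```

Filtering on the deadline first and ranking on latency second mirrors the abstract's two goals: real-time feasibility and gaming interactivity.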
Reliable and timely event notification for publish/subscribe services over the internet
The publish/subscribe paradigm is gaining attention for the development of several applications in wide area networks (WANs) due to its intrinsic time, space, and synchronization decoupling properties, which meet the scalability and asynchrony requirements of those applications. However, while communication in a WAN may be affected by the unpredictable behavior of the network, with messages that can be dropped or delayed, existing publish/subscribe solutions pay little attention to these issues. On the contrary, applications such as business intelligence, critical infrastructures, and financial services require delivery guarantees with strict temporal deadlines. In this paper, we propose a framework that enforces both reliability and timeliness for publish/subscribe services over a WAN. Specifically, we combine two different approaches: gossiping, to retrieve missing packets in case of incomplete information, and network coding, to reduce the number of retransmissions and, consequently, the latency. We provide an analytical model that describes the information recovery capabilities of our algorithm and a simulation-based study, taking into account a real workload from the Air Traffic Control domain, which shows that the proposed solution ensures reliable event notification over a WAN within a reasonably bounded time window. © 2013 IEEE
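The network-coding half of the recovery scheme can be illustrated with the simplest possible instance, a single XOR repair packet per batch; this is a generic sketch of the idea, not the paper's exact coding scheme:

```python
from functools import reduce

# Illustrative sketch: one XOR-coded repair packet per batch lets a
# subscriber recover any single missing packet of that batch, replacing
# per-packet retransmissions with one coded transmission.
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def make_repair(packets):
    """Publisher side: XOR all packets of the batch into one repair packet."""
    return reduce(xor_bytes, packets)

def recover(received, repair):
    """Subscriber side: XOR the repair packet with everything that did
    arrive; the result is the one packet that was lost."""
    return reduce(xor_bytes, received, repair)
```

Because XOR is its own inverse, `repair ^ p1 ^ p3` collapses to the missing `p2`; this is why one coded retransmission can serve many subscribers that each lost a different packet.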
Atomic-SDN: Is Synchronous Flooding the Solution to Software-Defined Networking in IoT?
The adoption of Software Defined Networking (SDN) within traditional networks
has provided operators the ability to manage diverse resources and easily
reconfigure networks as requirements change. Recent research has extended this
concept to IEEE 802.15.4 low-power wireless networks, which form a key
component of the Internet of Things (IoT). However, the multiple traffic
patterns necessary for SDN control make it difficult to apply this approach to
these highly challenging environments. This paper presents Atomic-SDN, a highly
reliable and low-latency solution for SDN in low-power wireless. Atomic-SDN
introduces a novel Synchronous Flooding (SF) architecture capable of
dynamically configuring SF protocols to satisfy complex SDN control
requirements, and draws from the authors' previous experiences in the IEEE EWSN
Dependability Competition, where SF solutions have consistently outperformed
other entries. Using this approach, Atomic-SDN presents considerable
performance gains over other SDN implementations for low-power IoT networks. We
evaluate Atomic-SDN through simulation and experimentation, and show how
utilizing SF techniques provides latency and reliability guarantees to SDN
control operations as the local mesh scales. We compare Atomic-SDN against
other SDN implementations based on the IEEE 802.15.4 network stack, and
establish that Atomic-SDN improves SDN control by orders of magnitude across
latency, reliability, and energy-efficiency metrics.
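A back-of-the-envelope model (an illustrative assumption, not taken from the paper) shows why the per-hop redundancy of synchronous flooding can yield the reliability gains the abstract reports, compared with plain multi-hop unicast:

```python
# Illustrative model: with per-link delivery probability p, unicast over
# h hops succeeds only if every hop succeeds (p ** h). A flood that gives
# each node r independent reception opportunities per hop succeeds per
# hop with 1 - (1 - p) ** r, so its end-to-end reliability degrades far
# more slowly as the mesh scales.
def unicast_reliability(p, hops):
    return p ** hops

def flood_reliability(p, hops, redundancy):
    per_hop = 1 - (1 - p) ** redundancy
    return per_hop ** hops
```

With p = 0.9 over 5 hops, unicast lands near 0.59 while a redundancy-3 flood stays above 0.99, which is the qualitative gap the evaluation measures.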
Trade-off among timeliness, messages and accuracy for large-scale information management
The increasing amount of data and the number of nodes in large-scale environments
require new techniques for information management. Examples of such environments
are the decentralized infrastructures of Computational Grid and Computational
Cloud applications. These large-scale applications need different kinds
of aggregated information such as resource monitoring, resource discovery or economic
information. The challenge of providing timely and accurate information
in large-scale environments arises from the distribution of the information. Reasons
for delays in distributed information systems include long information transmission
times due to the distribution, churn, and failures.
A problem of large-scale applications such as peer-to-peer (P2P) systems is the increasing
retrieval time of the information due to the decentralization of the data
and the failure proneness. However, many applications need a timely information
provision. Another problem is an increasing network consumption when the application
scales to millions of users and data. Using approximation techniques allows
reducing the retrieval time and the network consumption. However, the usage of
approximation techniques decreases the accuracy of the results. Thus, the remaining
problem is to offer a trade-off in order to solve the conflicting requirements of
fast information retrieval, accurate results and low messaging cost.
Our goal is to reach a self-adaptive decision mechanism to offer a trade-off
among the retrieval time, the network consumption and the accuracy of the result.
Self-adaption enables distributed software to modify its behavior based on
changes in the operating environment. In large-scale information systems that use
hierarchical data aggregation, we apply self-adaptation to control the approximation
used for the information retrieval, reducing the network consumption and
the retrieval time. The hypothesis of the thesis is that approximation techniques can reduce the retrieval time and the network consumption while guaranteeing an
accuracy of the results that respects user-defined priorities.
First, the research presented here addresses the problem of a trade-off among
timely information retrieval, accurate results and low messaging cost by proposing
a summarization algorithm for resource discovery in P2P-content networks.
After identifying how summarization can improve the discovery process, we propose
an algorithm which uses a precision-recall metric to compare the accuracy
and to offer a user-driven trade-off. Second, we propose an algorithm that applies
self-adaptive decision making on each node. The decision is whether to prune
the query and return the result immediately instead of continuing the query. The pruning
reduces the retrieval time and the network consumption at the cost of a lower accuracy
in contrast to continuing the query. The algorithm uses an analytic hierarchy
process to assess the user’s priorities and to propose a trade-off in order to satisfy
the accuracy requirements with a low message cost and a short delay.
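The pruning decision can be sketched with plain normalized weights standing in for the output of the analytic hierarchy process; the criteria names and scores below are illustrative assumptions:

```python
# Illustrative sketch: user priorities (normalized weights over delay,
# message cost, and accuracy) score the two options "prune the query now"
# versus "continue the query"; the higher weighted score wins. In the
# thesis the weights would come from an analytic hierarchy process.
def decide(weights, prune_scores, continue_scores):
    """Each argument maps criterion -> value in [0, 1]; higher is better."""
    prune = sum(weights[c] * prune_scores[c] for c in weights)
    cont = sum(weights[c] * continue_scores[c] for c in weights)
    return "prune" if prune >= cont else "continue"
```

A delay-sensitive user prunes early and accepts approximate results, while an accuracy-sensitive user lets the query run on, which is exactly the user-driven trade-off the abstract describes.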
A quantitative analysis evaluates our presented algorithms with a simulator,
which is fed with real data of a network topology and the nodes’ attributes. The
usage of a simulator instead of the prototype allows evaluation at a large scale
of several thousand nodes. The algorithm for content summarization is evaluated
with half a million resources and with different query types. The self-adaptive
algorithm is evaluated with a simulator of several thousand nodes
that are created from real data. A qualitative analysis addresses the integration
of the simulator’s components in existing market frameworks for Computational
Grid and Cloud applications.
The proposed content summarization algorithm reduces the information retrieval
time from a logarithmic increase to a constant factor. Furthermore, the
message size is reduced significantly by applying the summarization technique.
For the user, a precision-recall metric allows defining the relation between the retrieval
time and the accuracy. The self-adaptive algorithm reduces the number of
messages needed from an exponential increase to a constant factor. At the same
time, the retrieval time is reduced to a constant factor under an increasing number
of nodes. Finally, the algorithm delivers the data with the required accuracy
adjusting the depth of the query according to the network conditions.
Location based services in wireless ad hoc networks
In this dissertation, we investigate location based services in wireless ad hoc networks from four different aspects - i) location privacy in wireless sensor networks (privacy), ii) end-to-end secure communication in randomly deployed wireless sensor networks (security), iii) quality versus latency trade-off in content retrieval under ad hoc node mobility (performance) and iv) location clustering based Sybil attack detection in vehicular ad hoc networks (trust). The first contribution of this dissertation is in addressing location privacy in wireless sensor networks. We propose a non-cooperative sensor localization algorithm showing how an external entity can stealthily invade the location privacy of sensors in a network. We then design a location privacy preserving tracking algorithm for defending against such adversarial localization attacks. Next we investigate secure end-to-end communication in randomly deployed wireless sensor networks. Here, due to lack of control over sensors' locations post deployment, pre-fixing pairwise keys between sensors is not feasible, especially under larger-scale random deployments. Towards this premise, we propose differentiated key pre-distribution for secure end-to-end communication, and show how it improves existing routing algorithms. Our next contribution is in addressing the quality versus latency trade-off in content retrieval under ad hoc node mobility. We propose a two-tiered architecture for efficient content retrieval in such environments. Finally we investigate Sybil attack detection in vehicular ad hoc networks. A Sybil attacker can create and use multiple counterfeit identities, undermining trust in a vehicular ad hoc network, and then easily escape the location of the attack, avoiding detection. We propose a location based clustering of nodes leveraging vehicle platoon dispersion for detection of Sybil attacks in vehicular ad hoc networks --Abstract, page iii
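The platoon-dispersion intuition behind the Sybil detector can be sketched as follows; the 1-D positions, dispersion threshold, and data shapes are illustrative assumptions, not the dissertation's algorithm:

```python
import statistics

# Illustrative sketch: positions claimed by genuine vehicles disperse over
# time as a platoon spreads out, while Sybil identities forged by a single
# attacker stay tightly clustered around the attacker's own position.
def is_sybil_group(position_traces, min_dispersion=5.0):
    """position_traces: one list of 1-D positions (metres) per identity,
    sampled at the same timesteps. Flag the identity group as Sybil if
    the spread of their claimed positions never exceeds the threshold."""
    spreads = [statistics.pstdev(positions_at_t)
               for positions_at_t in zip(*position_traces)]
    return max(spreads) < min_dispersion
```

The detector only needs position claims the vehicles already broadcast, which is why it can catch an attacker even after it has left the attack location.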