104 research outputs found
Twenty years of rewriting logic
Rewriting logic is a simple computational logic that can naturally express both concurrent computation and logical deduction with great generality. This paper provides a gentle, intuitive introduction to its main ideas, as well as a survey of the work that many researchers have carried out over the last twenty years in advancing: (i) its foundations; (ii) its semantic framework and logical framework uses; (iii) its language implementations and its formal tools; and (iv) its many applications to automated deduction, software and hardware specification and verification, security, real-time and cyber-physical systems, probabilistic systems, bioinformatics and chemical systems.
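The core idea can be illustrated with a toy rewriter (an assumed, simplified example; real rewriting-logic systems such as Maude work over algebraic terms and support concurrent rule application): rules are applied repeatedly until a normal form is reached.

```python
# Toy string rewriting (illustrative only): repeatedly apply rewrite rules
# until no rule matches, i.e. until the term reaches a normal form.

def normalize(term, rules):
    """rules is a list of (lhs, rhs) pairs; each step replaces one lhs."""
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            if lhs in term:
                term = term.replace(lhs, rhs, 1)
                changed = True
    return term

# A single simplification rule: "+0" can always be erased.
print(normalize("x+0+0", [("+0", "")]))  # → "x"
```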
A policy-based architecture for virtual network embedding (PhD thesis)
Network virtualization is a technology that enables multiple virtual instances to coexist on a common physical network infrastructure. This paradigm fostered new business models, allowing infrastructure providers to lease or share their physical resources. Each virtual network is isolated and can be customized to support a new class of customers and applications. To this end, infrastructure providers need to embed virtual networks on their infrastructure. Virtual network embedding is the (NP-hard) problem of matching constrained virtual networks onto a physical network. Heuristics to solve the embedding problem have exploited several policies under different settings. For example, centralized solutions have been devised for small enterprise physical networks, while distributed solutions have been proposed over larger federated wide-area networks. In this thesis we present a policy-based architecture for the virtual network embedding problem. By policy, we mean a variant aspect of any of the three (invariant) embedding mechanisms: physical resource discovery, virtual network mapping, and allocation on the physical infrastructure. Our architecture adapts to different scenarios by instantiating appropriate policies, and has bounds on embedding efficiency and on embedding convergence time, over a single provider or across multiple federated providers. The performance of representative novel and existing policy configurations is compared via extensive simulations and over a prototype implementation. We also present an object model as a foundation for a protocol specification, and we release a testbed to enable users to test their own embedding policies and to run applications within their virtual networks. The testbed uses a Linux system architecture to reserve virtual node and link capacities.
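As a hedged illustration of what a mapping policy might look like (a minimal sketch under assumed names, not the thesis's actual algorithm), a greedy node-mapping policy could place the most demanding virtual nodes first:

```python
# Hypothetical greedy node-mapping policy (illustrative sketch): map each
# virtual node onto the feasible physical node with the most residual CPU,
# placing the most demanding virtual nodes first to reduce fragmentation.

def greedy_node_mapping(virtual_demands, physical_capacity):
    """virtual_demands: {vnode: cpu}; physical_capacity: {pnode: cpu}."""
    residual = dict(physical_capacity)
    mapping = {}
    for vnode, demand in sorted(virtual_demands.items(),
                                key=lambda kv: -kv[1]):
        candidates = [p for p, cap in residual.items() if cap >= demand]
        if not candidates:
            return None  # embedding request rejected
        best = max(candidates, key=lambda p: residual[p])
        mapping[vnode] = best
        residual[best] -= demand
    return mapping

print(greedy_node_mapping({"a": 4, "b": 2}, {"x": 5, "y": 3}))
# → {'a': 'x', 'b': 'y'}
```

Swapping this greedy rule for another heuristic is exactly the kind of policy variation such an architecture is meant to isolate.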
Self-organizing Network Optimization via Placement of Additional Nodes
The main research area of the International Graduate
Communication (GS Mobicom) at Ilmenau University of Technology is
communication in disaster scenarios. Due to a disaster or an accident, the
network infrastructure can be damaged or even completely destroyed.
However, available communication networks play a vital role during the
rescue activities especially for the coordination of the rescue teams and
for the communication between their members. Such a communication service
can be provided by a Mobile Ad-Hoc Network (MANET). One of the typical
problems of a MANET is network partitioning, when separate groups of nodes
become isolated from each other. One possible solution for this problem is
the placement of additional nodes in order to reconstruct the communication
links between isolated network partitions. The primary goal of this work is
the research and development of algorithms and methods for the placement of
additional nodes. The focus of this research lies on the investigation of
distributed algorithms for the placement of additional nodes, which use
only the information from the nodes’ local environment and thus form a
self-organizing system. However, given the specifics of using the system
in a disaster scenario, global information about the topology of the
network to be recovered may be known or collected in advance. In this case,
it is of course reasonable to use this information to calculate
the placement positions more precisely. The work provides the description,
the implementation details and the evaluation of a self-organizing system
which is able to recover from network partitioning in both situations.
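One simple placement strategy (an assumed illustration, not the thesis's algorithm) is to detect the partitions and place a relay at the midpoint of the two closest nodes belonging to different partitions:

```python
import itertools
import math

# Illustrative relay-placement sketch (assumed approach): group nodes into
# partitions by transmission radius, then bridge the two closest partitions.

def connected_partitions(nodes, radius):
    """Group 2-D node positions into partitions; nodes within `radius` link."""
    parts = []
    for n in nodes:
        linked = [p for p in parts
                  if any(math.dist(n, m) <= radius for m in p)]
        merged = [n] + [m for p in linked for m in p]
        parts = [p for p in parts if p not in linked] + [merged]
    return parts

def relay_position(nodes, radius):
    """Midpoint of the closest cross-partition node pair, or None."""
    parts = connected_partitions(nodes, radius)
    if len(parts) < 2:
        return None  # network is already connected
    a, b = min(((x, y) for p, q in itertools.combinations(parts, 2)
                for x in p for y in q),
               key=lambda pair: math.dist(*pair))
    return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)

print(relay_position([(0, 0), (1, 0), (5, 0)], 1.5))  # → (3.0, 0.0)
```

A distributed variant would compute such positions from each node's local neighbourhood only, which is the self-organizing setting the thesis focuses on.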
Distributed eventual leader election in the crash-recovery and general omission failure models.
102 p.
Distributed applications are present in many aspects of everyday life. Banking, healthcare and transportation are examples of such applications. These applications are built on top of distributed systems. Roughly speaking, a distributed system is composed of a set of processes that collaborate with one another to achieve a common goal. When building such systems, designers have to cope with several issues, such as different synchrony assumptions and failure occurrence. Distributed systems must ensure that the delivered service is trustworthy. Agreement problems compose a fundamental class of problems in distributed systems. All agreement problems follow the same pattern: all processes must agree on some common decision. Most agreement problems can be considered particular instances of the Consensus problem, and hence can be solved by reduction to consensus. However, a fundamental impossibility result, known as FLP, states that in an asynchronous distributed system it is impossible to achieve consensus deterministically when at least one process may fail. A way to circumvent this obstacle is to use unreliable failure detectors. A failure detector encapsulates the synchrony assumptions of the system, providing (possibly incorrect) information about process failures. A particular failure detector, called Omega, has been shown to be the weakest failure detector for solving consensus with a majority of correct processes. Informally, Omega provides an eventual leader election mechanism.
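Informally, the Omega guarantee can be mimicked in a few lines (an assumed, highly simplified sketch; a real Omega implementation must cope with message delays and false suspicions):

```python
# Simplified Omega sketch (illustrative): each process elects the smallest
# process id it does not currently suspect. Once suspicions stabilize on
# exactly the crashed processes, all correct processes trust the same leader.

def omega_leader(process_ids, suspected):
    """Return this process's current leader estimate, or None."""
    alive = [p for p in process_ids if p not in suspected]
    return min(alive) if alive else None

# Three processes; process 1 is currently suspected of having crashed.
print(omega_leader([1, 2, 3], suspected={1}))  # → 2
```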
Securely sharing dynamic medical information in e-health
This thesis introduced an infrastructure for sharing dynamic medical data between mixed health care providers in a secure way, which could benefit the health care system as a whole. The study's results on universal data sharing were demonstrated in a set of varied patient information system prototypes.
Improving Intrusion Prevention, Detection and Response
In the face of a wide range of attacks, Intrusion Detection Systems (IDS) and other Internet
security tools represent potentially valuable safeguards to identify and combat the problems
facing online systems. However, despite the fact that a variety of commercial and open source
solutions are available across a range of operating systems and network platforms, it is notable
that the deployment of IDS is often markedly lower than that of other well-known network security
countermeasures, and the tools may often be used in an ineffective manner.
This thesis considers the challenges that users may face while using IDS, by conducting a web-based
questionnaire to assess these challenges. The challenges used in the questionnaire
were gathered from the well-established literature. Participants' responses varied between
selecting and rejecting individual items as challenges, but all of the listed challenges
were confirmed to be regarded as problems in the IDS field.
The aim of the research is to propose a novel set of Human Computer Interaction-Security
(HCI-S) usability criteria based on the findings of the web-based questionnaire. Moreover,
these criteria were inspired by previous literature in the field of HCI. The novelty of the
criteria is that they focus on the security aspects. The new criteria were promising when
applied to Norton 360, a well-known Internet security suite. Testing the alerts issued by
security software was the initial step before testing other security software. Hence, a set
of security software packages was selected and some alerts were triggered as a result of a
penetration test conducted within a test-bed environment using the network scanner Nmap. The
findings reveal that four of the HCI-S usability criteria were not fully addressed by all of these
security software packages.
Another aim of this thesis is to consider the development of a prototype to address the HCI-S
usability criteria that seem to be overlooked in the existing security solutions. The thesis
reports a practical user trial whose promising findings point towards a proper solution to
this problem. For instance, to take advantage of previous security decisions, it
would be desirable for a system to consider the user's previous decisions on similar alerts, and
modify alerts accordingly to account for the user's previous behaviour. Moreover, in order to
give users a level of flexibility, it is important to enable them to make informed decisions, and
to be able to recover from them if needed. It is important to address the proposed criteria that
enable users to confirm or recover from the impact of their decisions, maintain an awareness of
system status at all times, and offer responses that match users' expectations.
The outcome of the current study is a proposed set of 16 HCI-S usability criteria that can be
used to design and to assess security alerts issued by any Internet security suite. These criteria
are not equally important; their importance varies between high, medium and low.
The embassy of the Arab Republic of Egypt (cultural centre & educational bureau) in Londo
Descoberta de recursos para sistemas de escala arbitrárias (Resource discovery for arbitrary-scale systems)
Doutoramento em Informática (PhD in Informatics). Large scale distributed computing technologies such as Cloud, Grid, Cluster
and HPC supercomputers are progressing along with the revolutionary emergence
of many-core designs (e.g. GPU, CPUs on single die, supercomputers
on chip, etc.) and significant advances in networking and interconnect solutions.
In future, computing nodes with thousands of cores may be connected
together to form a single transparent computing unit which hides from applications
the complexity and distributed nature of these many core systems. In
order to efficiently benefit from all the potential resources in such large scale
many-core-enabled computing environments, resource discovery is the vital
building block to maximally exploit the capabilities of all distributed heterogeneous
resources through precisely recognizing and locating those resources
in the system. The efficient and scalable resource discovery is challenging for
such future systems where the resources and the underlying computation and
communication infrastructures are highly-dynamic, highly-hierarchical and
highly-heterogeneous. In this thesis, we investigate the problem of resource
discovery with respect to the general requirements of arbitrary scale future
many-core-enabled computing environments. The main contribution of this
thesis is to propose Hybrid Adaptive Resource Discovery (HARD), a novel
efficient and highly scalable resource-discovery approach which is built upon
a virtual hierarchical overlay based on self-organization and self-adaptation
of processing resources in the system, where the computing resources are
organized into distributed hierarchies according to a proposed hierarchical
multi-layered resource description model. Operationally, at each layer, it
consists of a peer-to-peer architecture of modules that, by interacting with
each other, provide a global view of the resource availability in a large,
dynamic and heterogeneous distributed environment. The proposed resource
discovery model provides the adaptability and flexibility to perform complex
querying by supporting a set of significant querying features (such as
multi-dimensional, range and aggregate querying) while supporting exact
and partial matching, both for static and dynamic object contents. The
simulation shows that HARD can be applied to arbitrary scales of dynamicity,
both in terms of complexity and of scale, positioning this proposal as a
proper architecture for future many-core systems. We also contributed to
propose a novel resource management scheme for future systems which
efficiently can utilize distributed resources in a fully decentralized fashion.
Moreover, leveraging discovery components (RR-RPs) enables our resource
management platform to dynamically find and allocate available resources
that guarantee the QoS parameters on demand.
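The querying features described above can be illustrated with a small matcher (a hypothetical sketch under assumed names, not HARD's actual protocol):

```python
# Illustrative multi-dimensional range matching with exact and partial
# matches, in the spirit of the querying features described above.

def match(resource, query):
    """resource: {attr: value}; query: {attr: (low, high)}.
    Returns 'exact' if every constraint holds, 'partial' if only some do,
    and None if no constraint is satisfied."""
    satisfied = [attr for attr, (low, high) in query.items()
                 if attr in resource and low <= resource[attr] <= high]
    if len(satisfied) == len(query):
        return "exact"
    return "partial" if satisfied else None

node = {"cores": 64, "mem_gb": 256}
print(match(node, {"cores": (32, 128), "mem_gb": (512, 1024)}))  # → partial
```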
Security in peer-to-peer communication systems
P2PSIP (Peer-to-Peer Session Initiation Protocol) is a protocol developed by the IETF (Internet Engineering Task Force) for the establishment, completion and modification of communication sessions that emerges as a complement to SIP (Session Initiation Protocol) in environments where the original SIP protocol may fail for technical, financial, security, or social reasons. In order to do so, P2PSIP systems replace the entire server architecture of the original SIP systems used for the registration and location of users with a structured P2P network that distributes these functions among all the user agents that are part of the system. This new architecture, as with any emerging system, presents a completely new set of security problems whose analysis, the subject of this thesis, is of crucial importance for its secure development and future standardization.
Starting with a study of the state of the art in network security and continuing with more specific systems such as SIP and P2P, we identify the most important security services within the architecture of a P2PSIP communication system: access control, bootstrap, routing, storage and communication. Once the security services have been identified, we conduct an analysis of the attacks that can affect each of them, as well as a study of the existing countermeasures that can be used to prevent or mitigate these attacks. Based on the presented attacks and the weaknesses found in the existing measures to prevent them, we design specific solutions to improve the security of P2PSIP communication systems. To this end, we focus on the service that stands as the cornerstone of P2PSIP communication systems' security: access control. Among the newly designed solutions stand out: a certification model based on the segregation of the identity of users and nodes, a model for secure access control for on-the-fly P2PSIP systems,
and an authorization framework for P2PSIP systems built on the recently published Internet Attribute Certificate Profile for Authorization.
Finally, based on the existing measures and the new solutions designed, we define a set of security recommendations that should be considered for the design, implementation and maintenance of P2PSIP communication systems.
Detecting worm mutations using machine learning
Worms are malicious programs that spread over the Internet without human intervention. Since worms generally spread faster than humans can respond, the only viable defence is to automate their detection.
Network intrusion detection systems typically detect worms by examining packet or flow logs for known signatures. Not only does this approach mean that new worms cannot be detected until the corresponding signatures are created, but that mutations of known worms will remain undetected because each mutation will usually have a different signature. The intuitive and seemingly most effective solution is to write more generic signatures, but this has been found to increase false alarm rates and is thus impractical.
This dissertation investigates the feasibility of using machine learning to automatically detect mutations of known worms. First, it investigates whether Support Vector Machines can detect mutations of known worms.
Support Vector Machines have been shown to be well suited to pattern recognition tasks such as text categorisation and hand-written digit recognition. Since detecting worms is effectively a pattern recognition problem, this work investigates how well Support Vector Machines perform at this task.
The second part of this dissertation compares Support Vector Machines to other machine learning techniques in detecting worm mutations.
Gaussian Processes, unlike Support Vector Machines, automatically return confidence values as part of their result. Since confidence values can be used to reduce false alarm rates, this dissertation determines how Gaussian Processes compare to Support Vector Machines in terms of detection accuracy. For further comparison, this work also compares Support Vector Machines to K-nearest neighbours, known for its simplicity and solid results in other domains.
The third part of this dissertation investigates the automatic generation of training data. Classifier accuracy depends on good quality training data -- the wider the training data spectrum, the higher the classifier's accuracy.
This dissertation describes the design and implementation of a worm mutation generator whose output is fed to the machine learning techniques as training data. This dissertation then evaluates whether the training data can be used to train classifiers of sufficiently high quality to detect worm mutations.
The findings of this work demonstrate that Support Vector Machines can be used to detect worm mutations, and that the optimal configuration for detection of worm mutations is to use a linear kernel with unnormalised bi-gram frequency counts. Moreover, the results show that Gaussian Processes and Support Vector Machines exhibit similar accuracy on average in detecting worm mutations, while K-nearest neighbours consistently produces lower quality predictions. The generated worm mutations are shown to be of sufficiently high quality to serve as training data.
Combined, the results demonstrate that machine learning is capable of accurately detecting mutations of known worms.
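The reported optimal configuration (a linear kernel over unnormalised bi-gram frequency counts) implies a very simple feature-extraction step; a minimal sketch of that step (the SVM itself would come from any standard library):

```python
from collections import Counter

# Unnormalised bi-gram frequency counts over a byte payload: count each
# consecutive byte pair. These raw count vectors would then be fed to a
# linear-kernel SVM for classification.

def bigram_counts(payload):
    """Return a Counter mapping (byte, byte) pairs to raw frequencies."""
    return Counter(zip(payload, payload[1:]))

features = bigram_counts(b"ABABC")
print(features[(65, 66)])  # the byte pair "AB" occurs twice → 2
```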
- …