Practical applications of performance modelling of security protocols using PEPA
PhD Thesis. The trade-off between security and performance has become an intriguing area in recent years in both the security and performance communities. As the security aspects of security protocol research are fully fledged, this thesis is devoted to a performance study of these protocols. The long-term objective is to translate formal definitions of security protocols into formal performance models automatically, which can then be analysed by relevant techniques. In this thesis, we take a preliminary step by studying five typical security protocols and exploring the methodology for constructing and analysing their models using the Markovian process algebra PEPA. Through these case studies, an initial framework for the performance analysis of security protocols is established.
Firstly, a key distribution centre is investigated. The basic model suffers from the commonly encountered state-space explosion problem, so we apply some efficient solution techniques, including model reduction and ordinary differential equation (ODE) based fluid-flow analysis. Finally, we evaluate a utility function for this secure key exchange model. We then explore two non-repudiation protocols. Mean value analysis is applied to a class of PEPA models and compared with an ODE approximation. After that, an optimistic non-repudiation protocol with an off-line trusted third party is studied. The PEPA model is formulated using a concept of multi-threaded servers with functional rates. The final case study is a cross-realm Kerberos protocol. A simplified technique of aggregation with an ODE approximation is performed for efficient analysis. All these modelling and analysis methods are illustrated through numerical examples.
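The ODE-based fluid-flow idea mentioned above can be illustrated with a minimal sketch: discrete component counts in a PEPA-style population model are treated as continuous variables whose evolution follows the apparent rates of shared actions. The two-state client behaviour, the server pool, and all rates and population sizes below are illustrative assumptions, not the thesis's actual key-distribution-centre model.

```python
# Fluid-flow (ODE) approximation in the spirit of PEPA population models:
# component counts become continuous variables, and shared ("cooperating")
# actions flow at a min-based apparent rate.

def apparent_rate(pop_a, pop_b, rate):
    # min-based apparent rate of a shared action between two populations
    return rate * min(pop_a, pop_b)

def simulate(n_clients=100.0, n_servers=5.0, r_req=1.0, r_serve=4.0,
             dt=0.001, t_end=10.0):
    """Euler integration of a two-state client population (idle/waiting)
    sharing a 'serve' action with a fixed pool of servers."""
    idle, waiting = n_clients, 0.0
    t = 0.0
    while t < t_end:
        req_flow = r_req * idle                              # idle -> waiting
        serve_flow = apparent_rate(waiting, n_servers, r_serve)  # waiting -> idle
        idle += dt * (serve_flow - req_flow)
        waiting += dt * (req_flow - serve_flow)
        t += dt
    return idle, waiting

idle, waiting = simulate()
# at steady state the two flows balance: r_req*idle = r_serve*min(waiting, n_servers)
```

With these assumed rates the populations settle where the request and service flows balance, which is the kind of fixed point the fluid analysis extracts without enumerating the discrete state space.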
Performance modelling of applications in a smart environment
PhD Thesis. In today’s world, advanced computing technology is widely used to improve our living conditions and facilitate people’s daily activities. Smart environment technology, including various kinds of smart devices and intelligent systems, is being researched to provide an advanced, intelligent, easy, and comfortable living environment. This thesis investigates several technologies relevant to the design of a smart environment, and also explores different modelling approaches, including formal methods and discrete event simulation.
The core contents of the thesis include performance evaluation of scheduling
policies and capacity planning strategies. The main contribution is in developing a
modelling approach for smart hospital environments. This thesis also provides
valuable experience in the formal modelling and the simulation of large scale
systems.
The chief findings are that the dynamic scheduling policy proves to be the most efficient approach in the scheduling process, and that a particular capacity scheme is verified as optimal for achieving high work efficiency under the condition of limited human resources.
The main methods used for the performance modelling are Performance Evaluation
Process Algebra (PEPA) and discrete event simulation. A great many modelling tasks were completed with these methods. For the analysis, we adopt both numerical analysis based on the PEPA models and statistical measurements in the simulation.
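The comparison of scheduling policies can be sketched with a toy discrete-event-style simulation: tasks arrive as a stream and are routed either dynamically to the least-loaded worker or statically in round-robin order. The arrival and service rates, worker count, and the exact policy definitions are assumptions for illustration, not the thesis's hospital model.

```python
import random

def simulate(policy, n_tasks=2000, n_workers=3, seed=1):
    """Minimal event-driven sketch: each task is routed to a worker queue on
    arrival; 'dynamic' picks the worker that frees up earliest, 'static'
    uses round robin. Returns the mean waiting time."""
    rng = random.Random(seed)
    free_at = [0.0] * n_workers        # time at which each worker next idles
    t, total_wait = 0.0, 0.0
    for i in range(n_tasks):
        t += rng.expovariate(2.0)      # exponential inter-arrival, rate 2
        service = rng.expovariate(1.0) # exponential service, mean 1
        if policy == "dynamic":
            w = min(range(n_workers), key=lambda j: free_at[j])
        else:                          # static round robin
            w = i % n_workers
        start = max(t, free_at[w])     # task waits if its worker is busy
        total_wait += start - t
        free_at[w] = start + service
    return total_wait / n_tasks
```

Under these assumed rates the dynamic (least-loaded) policy yields a lower mean wait than the static one, mirroring the direction of the thesis's finding.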
Emergency warning messages dissemination in vehicular social networks: A trust based scheme
To ensure users' safety on the road, a plethora of dissemination schemes for Emergency Warning Messages (EWMs) have been proposed in vehicular networks. However, the issue of false alarms triggered by malicious users still poses serious challenges, such as disruption of vehicular traffic especially on highways leading to precarious effects. This paper proposes a novel Trust based Dissemination Scheme (TDS) for EWMs in Vehicular Social Networks (VSNs) to solve the aforementioned issue. To ensure the authenticity of EWMs, we exploit the user-post credibility network for identifying true and false alarms. Moreover, we develop a reputation mechanism by calculating a trust-score for each node based on its social-utility, behavior, and contribution in the network. We utilize the hybrid architecture of VSNs by employing social-groups based dissemination in Vehicle-to-Infrastructure (V2I) mode, whereas nodes' friendship-network in Vehicle-to-Vehicle (V2V) mode. We analyze the proposed scheme for accuracy by extensive simulations under varying malicious nodes ratio in the network. Furthermore, we compare the efficiency of TDS with state-of-the-art dissemination schemes in VSNs for delivery ratio, transmission delay, number of transmissions, and hop-count. The experimental results validate the significant efficacy of TDS in accuracy and aforementioned network parameters. © 2019 Elsevier Inc
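The trust-score component of the reputation mechanism can be sketched as a weighted combination of the three factors the paper names (social-utility, behavior, and contribution). The weights, the [0, 1] normalisation, and the acceptance threshold below are illustrative assumptions, not the paper's actual formula.

```python
def trust_score(social_utility, behavior, contribution,
                weights=(0.4, 0.4, 0.2)):
    """Weighted trust score over the three factors; clamped to [0, 1].
    Inputs are assumed to be normalised to [0, 1]."""
    ws, wb, wc = weights
    score = ws * social_utility + wb * behavior + wc * contribution
    return max(0.0, min(1.0, score))

def accept_ewm(sender_score, threshold=0.5):
    # relay an Emergency Warning Message only from sufficiently trusted nodes
    return sender_score >= threshold
```

A node with consistently honest behavior accumulates a higher score and its EWMs are relayed; a node below the threshold is ignored, limiting false-alarm propagation.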
A multifaceted formal analysis of end-to-end encrypted email protocols and cryptographic authentication enhancements
Largely owing to cryptography, modern messaging tools (e.g., Signal) have reached a considerable degree of sophistication, balancing advanced security features with high usability. This has not been the case for email, which however, remains the most pervasive and interoperable form of digital communication. As sensitive information (e.g., identification documents, bank statements, or the message in the email itself) is frequently exchanged by this means, protecting the privacy of email communications is a justified concern which has been emphasized in the last years.
A great deal of effort has gone into the development of tools and techniques for providing email communications with privacy and security, requirements that were not originally considered. Yet, drawbacks across several dimensions hinder the development of a global solution that would strengthen security while maintaining the standard features that we expect from email clients.
In this thesis, we present improvements to security in email communications. Relying on formal methods and cryptography, we design and assess security protocols and analysis techniques, and propose enhancements to implemented approaches for end-to-end secure email communication.
In the first part, we propose a methodical process relying on code reverse engineering, which we use to abstract the specifications of two end-to-end security protocols from a secure email solution (called pEp); then, we apply symbolic verification techniques to analyze such protocols with respect to privacy and authentication properties. We also introduce a novel formal framework that enables a system's security analysis aimed at detecting flaws caused by possible discrepancies between the user's and the system's assessment of security. Security protocols, along with user perceptions and interaction traces, are modeled as transition systems; socio-technical security properties are defined as formulas in computation tree logic (CTL), which can then be verified by model checking.
Finally, we propose a protocol that aims at securing a password-based authentication system designed to detect the leakage of a password database from a code-corruption attack.
In the second part, the insights gained from the analysis in Part I allow us to propose both theoretical and practical solutions for improving security and usability aspects, primarily of email communication, but from which secure messaging solutions can benefit too. The first enhancement concerns the use of password-authenticated key exchange (PAKE) protocols for entity authentication in peer-to-peer decentralized settings, as a replacement for out-of-band channels; this brings provable security to the so far empirical process, and enables the implementation of further security and usability properties (e.g., forward secrecy, secure secret retrieval). A second idea concerns the protection of weak passwords at rest and in transit, for which we propose a scheme based on the use of a one-time password; furthermore, we consider potential approaches for improving this scheme.
The research presented here was conducted as part of an industrial partnership between SnT/University of Luxembourg and pEp Security S.A.
Resilience-Building Technologies: State of Knowledge -- ReSIST NoE Deliverable D12
This document is the first product of work package WP2, "Resilience-building and -scaling technologies", in the programme of jointly executed research (JER) of the ReSIST Network of Excellence.
Developing a distributed electronic health-record store for India
The DIGHT project is addressing the problem of building a scalable and highly available information store for the Electronic Health Records (EHRs) of the over one billion citizens of India.
Modelling and Quantitative Analysis of Performance vs Security Trade-offs in Computer Networks: An investigation into the modelling and discrete-event simulation analysis of performance vs security trade-offs in computer networks, based on combined metrics and stochastic activity networks (SANs)
Performance modelling and evaluation has long been considered of paramount
importance to computer networks from design through development, tuning and
upgrading. These networks, however, have evolved significantly since their first introduction
a few decades ago. The Ubiquitous Web in particular with fast-emerging
unprecedented services has become an integral part of everyday life. However, all this comes at the cost of substantially increased security risks, and cybercrime is now a pervasive threat to today’s internet-dependent societies. Given the frequency and variety of attacks, as well as the threat of new, more sophisticated and destructive future attacks, security has become a prevalent and mounting concern in the design and management of computer networks; it is now equally as important as performance, if not more so.
Unfortunately, there is no one-size-fits-all solution to security challenges. One security
defence system can only help to battle against a certain class of security threats. For overall security, a holistic approach including both reactive and proactive
security measures is commonly suggested. As such, network security may have
to combine multiple layers of defence: at the edge, within the network, and in its constituent nodes.
Performance and security, however, are inextricably intertwined as security measures
require considerable amounts of computational resources to execute. Moreover, in
the absence of appropriate security measures, frequent security failures are likely
to occur, which may catastrophically affect network performance, not to mention
serious data breaches among many other security related risks.
In this thesis, we study optimisation problems for the trade-offs between performance and security, analogous to those between performance and dependability. While
performance metrics are widely studied and well-established, those of security are
rarely defined in a strict mathematical sense. We therefore aim to conceptualise and
formulate security by analogy with dependability so that, like performance, it can
be modelled and quantified.
Having employed a stochastic modelling formalism, we propose a new model for a
single node of a generic computer network that is subject to various security threats.
We believe this nodal model captures both performance and security aspects of a
computer node more realistically, in particular the intertwinements between them.
We adopt a simulation-based modelling approach in order to identify, on the basis
of combined metrics, optimal trade-offs between performance and security and facilitate
more sophisticated trade-off optimisation studies in the field.
We find that system parameters can be found that optimise these abstract combined metrics while being optimal neither for performance nor for security individually. Based on the proposed simulation modelling framework, credible numerical experiments are carried out, indicating the scope for further extensions towards a systematic performance vs security tuning of computer networks.
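The central observation, that a combined metric can peak at a parameter setting which is optimal for neither performance nor security alone, can be sketched with assumed metric shapes. The functional forms, the 50/50 weighting, and the single "security effort" parameter below are illustrative assumptions, not the thesis's SAN-based model.

```python
def performance(x):
    # normalised throughput; degrades with security effort x in [0, 1]
    # (assumed linear shape, best at x = 0)
    return 1.0 - 0.6 * x

def security(x):
    # one minus residual risk; improves with effort
    # (assumed quadratic shape, best at x = 1)
    return 1.0 - (1.0 - x) ** 2

def combined(x, w=0.5):
    # abstract combined metric; the equal weighting is an assumption
    return w * performance(x) + (1.0 - w) * security(x)

# sweep the security-effort parameter and pick the combined optimum
best_value, best_x = max((combined(i / 100), i / 100) for i in range(101))
```

Under these shapes the combined optimum sits strictly inside (0, 1): moving to either individual optimum (x = 0 for performance, x = 1 for security) lowers the combined metric, which is exactly the trade-off behaviour the thesis quantifies.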
Investigations into Elasticity in Cloud Computing
The pay-as-you-go model supported by existing cloud infrastructure providers
is appealing to most application service providers to deliver their
applications in the cloud. Within this context, elasticity of applications has
become one of the most important features in cloud computing. This elasticity
enables real-time acquisition/release of compute resources to meet application
performance demands. In this thesis we investigate the problem of delivering
cost-effective elasticity services for cloud applications.
Traditionally, application-level elasticity addresses the question of how to scale applications up and down to meet their performance requirements, but does not adequately address minimising the cost of using the service. With this limitation in mind, we propose a scaling
approach that makes use of cost-aware criteria to detect the bottlenecks within
multi-tier cloud applications, and scale these applications only at bottleneck
tiers to reduce the costs incurred by consuming cloud infrastructure resources.
Our approach is generic for a wide class of multi-tier applications, and we
demonstrate its effectiveness by studying the behaviour of an example
electronic commerce site application.
Furthermore, we consider the characteristics of the algorithm for
implementing the business logic of cloud applications, and investigate the
elasticity at the algorithm level: when dealing with large-scale data under
resource and time constraints, the algorithm's output should be elastic with
respect to the resources consumed. We propose a novel framework to guide the development of elastic algorithms that adapt to the available budget while guaranteeing that the quality of the output, e.g. prediction accuracy for classification tasks, improves monotonically with the budget used.
A Trust Management Framework for Vehicular Ad Hoc Networks
The inception of Vehicular Ad Hoc Networks (VANETs) provides an opportunity for road users and public infrastructure to share information that improves the operation of roads and the driver experience. However, such systems can be vulnerable to malicious external entities and legitimate users. Trust management is used to address attacks from legitimate users in accordance with a user’s trust score. Trust models evaluate messages to assign rewards or punishments. This can be used to influence a driver’s future behaviour or, in extremis, block the driver. With receiver-side schemes, various methods are used to evaluate trust, including reputation computation, neighbour recommendations, and storing historical information. However, these methods incur overhead and add a delay when deciding whether to accept or reject messages. In this thesis, we propose a novel Tamper-Proof Device (TPD) based trust framework for managing the trust of multiple drivers at the sender-side vehicle, which updates trust scores and stores and protects information from malicious tampering. The TPD also regulates, rewards, and punishes each specific driver, as required. Furthermore, the trust score determines the classes of message that a driver can access. Dissemination of feedback is only required when there is an attack (conflicting information). A Road-Side Unit (RSU) rules on a dispute, using either the sum of products of trust and feedback or official vehicle data if available. These “untrue attacks” are resolved by an RSU using collaboration, and then providing a fixed amount of reward and punishment, as appropriate. Repeated attacks are addressed by incremental punishments and potentially driver access-blocking when conditions are met. The lack of sophistication in this fixed RSU assessment scheme is then addressed by a novel fuzzy logic-based RSU approach. This determines a fairer level of reward and punishment based on the severity of the incident, the driver's past behaviour, and RSU confidence.
The fuzzy RSU controller assesses judgements in such a way as to encourage drivers to improve their behaviour. Although any driver can lie in any situation, we believe that trustworthy drivers are more likely to remain so, and vice versa. We capture this behaviour in a Markov chain model of sender and reporter driver behaviours, in which a driver's truthfulness is influenced by their trust score and trust state. For each trust state, the driver's likelihood of lying or honesty is set by a probability distribution that differs between states. This framework is analysed in Veins using various classes of vehicles under different traffic conditions. Results confirm that the framework operates effectively in the presence of untrue and inconsistent attacks. Correct functioning is confirmed by the system appropriately classifying incidents when clarifier vehicles send truthful feedback. The framework is also evaluated against a centralized reputation scheme, and the results demonstrate that it outperforms the reputation approach in terms of reduced communication overhead and shorter response time. Next, we perform a set of experiments to evaluate the performance of the fuzzy assessment in Veins. The fuzzy and fixed RSU assessment schemes are compared, and the results show that the fuzzy scheme produces better overall driver behaviour. The Markov chain driver behaviour model is also examined when changing the initial trust score of all drivers.
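The state-dependent honesty idea can be sketched as a small Markov chain simulation: a driver moves between trust states as each report is rewarded or punished, and the probability of lying depends on the current state. The three states, the lying probabilities, and the one-step reward/punishment transitions are illustrative assumptions, not the thesis's calibrated model.

```python
import random

def simulate_driver(steps=10000, seed=7):
    """Simulate one driver whose chance of lying depends on their trust
    state; honest reports move the driver up a state, lies move it down.
    Returns the long-run fraction of lying reports."""
    p_lie = {"low": 0.5, "medium": 0.2, "high": 0.05}  # assumed per-state values
    order = ["low", "medium", "high"]
    rng = random.Random(seed)
    state, lies = "medium", 0
    for _ in range(steps):
        lied = rng.random() < p_lie[state]
        lies += lied
        i = order.index(state)
        if lied:                  # punished: drop a trust state
            state = order[max(i - 1, 0)]
        else:                     # rewarded: climb a trust state
            state = order[min(i + 1, len(order) - 1)]
    return lies / steps
```

Because honesty is rewarded and lying punished, the chain spends most of its time in the high-trust state, so the long-run lying rate settles near that state's small lying probability; this is the "trustworthy drivers tend to remain so" behaviour the model is designed to capture.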
Building robust and modular question answering systems
Over the past few years, significant progress has been made in QA systems due to the availability of annotated datasets on a large scale and the impressive advancements in large-scale pre-trained language models. Despite these successes, the black-box nature of end-to-end trained QA systems makes them hard to interpret and control. When these systems encounter inputs that deviate from their training data distribution or are subjected to adversarial perturbations, their performance tends to deteriorate by a large margin. Furthermore, they may occasionally produce unanticipated results, potentially leading to confusion among users. Additionally, this deficiency in robustness and interpretability poses challenges when deploying such models in real-world scenarios.
In this dissertation, we aim to build robust QA systems by explicitly decomposing various QA tasks into distinct sub-modules, each responsible for a particular aspect of the overall QA process. Through this decomposition, we seek to achieve improved performance in terms of both the system's ability to handle diverse and challenging inputs (robustness) and its capacity to provide transparent and explainable reasoning (interpretability).
We argue that utilizing these sub-modules can substantially improve the robustness and interpretability of QA systems. In the first half of this dissertation, we introduce three sub-modules to mitigate the dataset artifacts that models learn from datasets. These sub-modules also enable us to examine and exert explicit control over the intermediate outputs. In the first work, to address question answering that requires multi-hop reasoning, we propose a chain extractor, which extracts the reasoning chains necessary for models to derive the final answer. The reasoning chains not only prevent the model from exploiting reasoning shortcuts but also provide an explanation of how the answer is derived. In the second work, we incorporate an alignment layer between the question and the context before generating the answer. This alignment layer can help us interpret the models' behavior and improve robustness in adversarial settings. In the third work, we add an answer verifier after QA models generate the answer. This verifier can boost QA models' prediction confidence across several different domains and help us spot cases where QA models predict the right answer for the wrong reason by utilizing external NLI datasets and models.
In the second half of this dissertation, we tackle the problem of complex fact-checking in the real world by treating it as a modularized QA task. We first decompose a complex claim into several yes-no subquestions whose answer directly contributes to the veracity of the claim. Then, each sub-question is fed into a commercial search engine to retrieve relevant documents. Additionally, we extract the relevant snippets in the retrieved documents and use a GPT3-based summarizer to generate the core evidence for checking the claim. We show that the decompositions can play an important role in both evidence retrieval and veracity composition of an explainable fact-checking system. Also, we show the GPT3-based evidence summarizer generates faithful summaries of documents most of the time indicating it can be used as an
effective part of the pipeline. Moreover, we annotate a dataset, ClaimDecomp, containing 1,200 complex claims and their decompositions. We believe that this dataset can further promote building explainable fact-checking systems and analyzing complex claims in the real world.
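The veracity-composition step of such a pipeline can be sketched with a simple rule: each yes-no sub-question comes with the answer that would support the claim, and the verdict is composed from how many predicted answers match. This all-or-nothing rule is an illustrative stand-in for the learned composition described in the dissertation.

```python
def compose_veracity(answers):
    """answers: list of (predicted_answer, answer_that_supports_claim)
    pairs, each 'yes' or 'no'. Returns a coarse claim-level verdict."""
    matches = sum(a == s for a, s in answers)
    if matches == len(answers):
        return "supported"          # every sub-answer backs the claim
    if matches == 0:
        return "refuted"            # every sub-answer contradicts it
    return "partially supported"    # mixed evidence
```

For example, a claim decomposed into two sub-questions whose retrieved answers both match the supporting answers is labelled "supported", while mixed answers yield the intermediate verdict.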