Light-Weight Techniques for Improving the Controllability and Efficiency of ISA-Level Fault Injection Tools
ISA-level fault injection, i.e. the injection of bit-flip faults into Instruction Set Architecture (ISA) registers and main memory words, is widely used for studying the impact of transient and intermittent hardware faults. ISA-level fault injection tools can be characterized by different properties such as repeatability, observability, reachability, intrusiveness, efficiency and controllability. This paper presents two pre-injection analysis techniques that improve controllability and efficiency using object code analysis. To improve controllability, we propose a technique for identifying the type of data that is stored in a potential target location. This allows the user to selectively direct fault injections at addresses, data and/or control information. Experimental results show that this technique successfully identified the data type of 84-100% of the target locations in 8 programs. The second technique improves efficiency by fault pruning, i.e., by avoiding the injection of faults that are known a priori to be detected by the tested system. This technique leverages the fact that faults in certain bits of the program counter and the stack pointer are always detected by machine exceptions. We show that excluding these bits from the fault space can significantly prune the fault space and reduce the time it takes to conduct a fault injection campaign.
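The fault-pruning idea can be illustrated with a small sketch. This is a toy model, not the paper's tool; the 32-bit program-counter width and the 1 MiB code region are illustrative assumptions:

```python
CODE_SIZE = 1 << 20   # 1 MiB code region (illustrative assumption)
PC_BITS = 32          # program-counter width (illustrative assumption)

def prunable_pc_bits(code_size: int, pc_bits: int) -> list[int]:
    # Bits at or above the highest address bit of the code region:
    # flipping any of them maps a valid PC outside the region, which
    # the hardware detects with a machine exception, so these faults
    # can be excluded from the fault space a priori.
    used = (code_size - 1).bit_length()   # address bits actually used
    return [b for b in range(pc_bits) if b >= used]

def inject_bit_flip(value: int, bit: int) -> int:
    # The single bit-flip fault model used in ISA-level fault injection.
    return value ^ (1 << bit)

pruned = prunable_pc_bits(CODE_SIZE, PC_BITS)
print(f"{len(pruned)}/{PC_BITS} PC bits prunable for a "
      f"{CODE_SIZE // (1 << 20)} MiB code region")
```

Flipping any program-counter bit above the highest address bit of the code region necessarily produces an out-of-range address, so the resulting machine exception makes such faults detectable a priori and safe to skip.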
Process-Aware Defenses for Cyber-Physical Systems
Increasing connectivity is exposing safety-critical systems to cyberattacks that can cause real physical damage and jeopardize human lives. With billions of IoT devices added to the Internet every year, the cybersecurity landscape is drastically shifting from IT systems and networks to systems that comprise both cyber and physical components, commonly referred to as cyber-physical systems (CPS). The difficulty of applying classical IT security solutions in CPS environments has given rise to new security techniques known as process-aware defense mechanisms, which are designed to monitor industrial processes supervised and controlled by cyber elements and to protect them from sabotage attempts via cyberattacks. In this thesis, we critically examine the emerging CPS-driven cybersecurity landscape and investigate how process-aware defenses can contribute to the sustainability of highly connected cyber-physical systems by making them less susceptible to crippling cyberattacks. We introduce a novel data-driven, model-free methodology for real-time monitoring of physical processes to detect and report suspicious behaviour before damage occurs. We show that our model-free approach is very lightweight, does not require detailed specifications, and is applicable in various CPS environments, including IoT systems and networks. We further design, implement, evaluate, and deploy process-aware techniques, study their efficacy and applicability in real-world settings, and address their deployment challenges.
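A model-free process monitor of the kind described can be sketched generically. This is an illustrative example, not the thesis's actual algorithm; the class name, parameters and thresholding rule are assumptions chosen only to show the idea of learning a baseline from the data stream itself:

```python
# Generic sketch of a lightweight, model-free process monitor: track an
# exponentially weighted moving average (EWMA) of a sensor signal and flag
# readings that deviate beyond a band learned from the stream itself.
class EwmaMonitor:
    def __init__(self, alpha: float = 0.1, k: float = 6.0):
        self.alpha, self.k = alpha, k   # smoothing factor, band width
        self.mean = None                # running EWMA of the signal
        self.var = 0.0                  # running EWMA of squared deviations

    def observe(self, x: float) -> bool:
        """Return True if reading x looks anomalous."""
        if self.mean is None:           # first sample initialises the state
            self.mean = x
            return False
        dev = x - self.mean
        anomalous = self.var > 0 and dev * dev > self.k ** 2 * self.var
        # Update running statistics only with non-anomalous readings,
        # so an attacker cannot slowly poison the learned baseline.
        if not anomalous:
            self.mean += self.alpha * dev
            self.var = (1 - self.alpha) * (self.var + self.alpha * dev * dev)
        return anomalous

mon = EwmaMonitor()
readings = [20.0, 20.1, 19.9, 20.2, 20.0, 35.0]   # last value: injected spike
flags = [mon.observe(r) for r in readings]
print(flags)   # → [False, False, False, False, False, True]
```

The monitor needs no physical model of the process, only the observed signal, which is what makes this family of detectors lightweight and broadly deployable.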
Comunicações confiáveis sem-fios para redes veiculares (Dependable Wireless Communications for Vehicular Networks)
Vehicular communications are a promising field of research, with numerous
potential services that can enhance traffic experience. Road safety is the
most important objective behind the development of wireless vehicular networks,
since many current accidents and fatalities could be avoided if vehicles
had the ability to share information with each other, with the road-side
infrastructure and with other road users.
A future with safe, efficient and comfortable road transportation systems is
envisaged by the different traffic stakeholders - users, manufacturers, road
operators and public authorities. Cooperative Intelligent Transportation Systems
(ITS) applications will contribute to achieving this goal, alongside other
technological progress such as automated driving or improved road infrastructure
based on advanced sensing and the Internet of Things (IoT) paradigm.
Despite these significant benefits, the design of vehicular communications
systems poses difficult challenges, mainly due to the very dynamic environments
in which they operate. To meet the safety-critical requirements of such
scenarios, careful planning is necessary so that trustworthy system behaviour
can be achieved. Dependability and real-time
systems concepts provide essential tools to handle this challenging task of
enabling determinism and fault-tolerance in vehicular networks.
This thesis aims to address some of these issues by proposing architectures
and implementing mechanisms that improve the dependability levels of real-time
vehicular communications. The developed strategies always try to preserve
the required system flexibility, a fundamental property in such unpredictable
scenarios, where unexpected events may occur and force the system
to quickly adapt to the new circumstances.
The core contribution of this thesis focuses on the design of a fault-tolerant architecture
for infrastructure-based vehicular networks. It encompasses a set
of mechanisms that allow error detection and fault-tolerant behaviour both in
the mobile and static nodes of the network. Road-side infrastructure plays
a key role in this context, since it provides the support for coordinating all
communications taking place in the wireless medium. Furthermore, it is also
responsible for admission control policies and exchanging information with the
backbone network. The proposed methods rely on a deterministic medium
access control (MAC) protocol that provides real-time guarantees in wireless
channel access, ensuring that communications take place before a given deadline.
However, the presented solutions are generic and can be easily adapted
to other protocols and wireless technologies.
Interference mitigation techniques, mechanisms to enforce fail-silent behaviour
and redundancy schemes are introduced in this work, so that vehicular
communications systems may present higher dependability levels. In addition
to this, all of these methods are included in the design of vehicular network
components, guaranteeing that the real-time constraints are still fulfilled.
In conclusion, wireless vehicular networks hold the potential to drastically improve
road safety. However, these systems should present dependable behaviour
in order to reliably prevent the occurrence of catastrophic events under
all possible traffic scenarios.
Programa Doutoral em Telecomunicações
A reactive architecture for cloud-based system engineering
PhD Thesis. Software system engineering is increasingly practised over globally distributed locations, a practice termed Global Software Development (GSD). GSD has become a business necessity mainly because of the
scarcity of resources, cost, and the need to locate development closer to
the customers. GSD is highly dependent on requirements management,
but system requirements continuously change. Poorly managed change in
requirements affects the overall cost, schedule and quality of GSD projects.
It is particularly challenging to manage and trace such changes, and hence
we require a rigorous requirement change management (RCM) process.
RCM is not trivial even in collocated software development, and the presence of geographical, cultural, social and temporal factors makes it
profoundly difficult in GSD. Existing RCM methods do not take these
issues into consideration. Considering the state-of-the-art
in RCM, design and analysis of architecture, and cloud accountability,
this work contributes:
1. an alternative and novel mechanism for effective information and
knowledge-sharing towards RCM and traceability.
2. a novel methodology for the design and analysis of small-to-medium
size cloud-based systems, with a particular focus on the trade-off of
quality attributes.
3. a dependable framework that facilitates the RCM and traceability
method for cloud-based system engineering.
4. a novel methodology for assuring cloud accountability in terms of
dependability.
5. a cloud-based framework to facilitate the cloud accountability methodology.
The results show a traceable RCM linkage between system engineering
processes and stakeholder requirements for cloud-based GSD projects,
which is better than existing approaches. The results also show improved dependability assurance of systems interfacing with the unpredictable cloud environment. We conclude that RCM with
a clear focus on traceability, facilitated by a dependable
framework, improves the chance of developing a cloud-based GSD project
successfully.
Certifications of Critical Systems – The CECRIS Experience
In recent years, a considerable amount of effort has been devoted, both in industry and academia, to the development, validation and verification of critical systems, i.e. those systems whose malfunctions or failures reach a critical level, both in terms of risk to human life and in terms of economic impact. Certifications of Critical Systems – The CECRIS Experience documents the main insights on cost-effective verification and validation processes that were gained during work in the European research project CECRIS (acronym for Certification of Critical Systems). The objective of the research was to tackle the challenges of certification by focusing on the aspects that are most difficult and most important for the current and future critical-systems industry: the effective use of methodologies, processes and tools. The CECRIS project took a step forward in the growing field of development, verification, validation and certification of critical systems. Starting from both the scientific and the industrial state-of-the-art methodologies for system development, and from the impact of their usage on the verification, validation and certification of critical systems, the project developed strategies and techniques, supported by automatic or semi-automatic tools and methods, and set guidelines to support engineers during the planning of the verification and validation phases.
Hardware-Aware Algorithm Designs for Efficient Parallel and Distributed Processing
The introduction and widespread adoption of the Internet of Things, together with emerging new industrial applications, bring new requirements in data processing. Specifically, the need for timely processing of data that arrives at high rates challenges the traditional cloud computing paradigm, in which data collected at various sources is sent to the cloud for processing. To address this challenge, processing algorithms and infrastructure are distributed from the cloud to multiple tiers of computing, closer to the sources of data. This creates a wide range of devices for algorithms to be deployed on and for software designs to adapt to.
In this thesis, we investigate how hardware-aware algorithm designs on a variety of platforms lead to implementations that efficiently utilize the underlying resources. We design, implement and evaluate new techniques for representative applications that involve the whole spectrum of devices, from resource-constrained sensors in the field to highly parallel servers. At each tier of processing capability, we identify key architectural features that are relevant for applications and propose designs that use these features to achieve high-rate, timely and energy-efficient processing.
In the first part of the thesis, we focus on high-end servers and employ two main approaches to achieve high-throughput processing: vectorization and thread parallelism. We employ vectorization for pattern matching algorithms used in security applications, showing that re-thinking algorithm design to better utilize the resources available on the deployment platform, such as vector processing units, can bring significant speedups in processing throughput. We then show how thread-aware data distribution and proper inter-thread synchronization enable scalability, especially for high-rate network traffic monitoring. We design a parallelization scheme for sketch-based algorithms that summarize traffic information, which allows them to ingest incoming data at high rates and answer queries on that data efficiently, without overheads.
In the second part of the thesis, we target the intermediate tier of computing devices and focus on typical examples of the hardware found there. We show how single-board computers with embedded accelerators can handle the computationally heavy part of applications, demonstrated specifically on pattern matching for security-related processing. We further identify key hardware features that affect the performance of pattern matching algorithms on such devices, present a co-evaluation framework to compare algorithms, and design a new algorithm that efficiently utilizes these hardware features.
In the last part of the thesis, we shift the focus to the low-power, resource-constrained tier of processing devices. We target wireless sensor networks and study distributed data processing algorithms where the processing happens on the same devices that generate the data. Specifically, we focus on a continuous monitoring algorithm (geometric monitoring) that aims to minimize communication between nodes. By deploying this algorithm under realistic environments, we demonstrate that the interplay between the network protocol and the application plays an important role in this layer of devices. Based on this observation, we co-design a continuous monitoring application with a modern network stack and augment it further with an in-network aggregation technique, showing that awareness of the underlying network stack is important to realize the full potential of the continuous monitoring algorithm.
The techniques and solutions presented in this thesis contribute to better utilization of hardware characteristics across a wide spectrum of platforms. We apply these techniques to problems that are representative of current and upcoming applications and contribute an outlook on emerging possibilities that can build on the results of the thesis.
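The sketch-based summarization mentioned above can be illustrated with a standard count-min sketch. This is a generic textbook structure, not the thesis's specific parallel design; the class, parameter values and the two-"thread" merge scenario are illustrative assumptions. The key property it shows is that per-thread sketches over disjoint slices of a stream merge by element-wise addition, keeping the hot update path free of synchronization:

```python
import random

class CountMinSketch:
    def __init__(self, width: int = 1024, depth: int = 4, seed: int = 42):
        # Sketches built with the same seed share hash salts and can be merged.
        rng = random.Random(seed)
        self.width, self.depth = width, depth
        self.salts = [rng.getrandbits(64) for _ in range(depth)]
        self.rows = [[0] * width for _ in range(depth)]

    def _cols(self, key):
        # One column index per row, derived from the per-row salt.
        return [hash((salt, key)) % self.width for salt in self.salts]

    def update(self, key, count: int = 1):
        for row, col in zip(self.rows, self._cols(key)):
            row[col] += count

    def query(self, key) -> int:
        # Count-min only over-estimates: the row minimum bounds the true count.
        return min(row[col] for row, col in zip(self.rows, self._cols(key)))

    def merge(self, other: "CountMinSketch"):
        # Element-wise addition merges sketches built on disjoint sub-streams.
        for mine, theirs in zip(self.rows, other.rows):
            for i, v in enumerate(theirs):
                mine[i] += v

# Two "threads" summarise disjoint slices of a packet stream, then merge.
a, b = CountMinSketch(), CountMinSketch()
for _ in range(300):
    a.update("10.0.0.1")
for _ in range(200):
    b.update("10.0.0.1")
a.merge(b)
print(a.query("10.0.0.1"))   # prints 500
```

Because each worker touches only its private sketch, updates need no locks; a single merge at query time pays the synchronization cost once, which is the general motivation for thread-local sketching schemes.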
Mathematics in Software Reliability and Quality Assurance
This monograph concerns the mathematical aspects of software reliability and quality assurance and consists of 11 technical papers in this emerging area. Included are the latest research results related to formal methods and design, automatic software testing, software verification and validation, coalgebra theory, automata theory, hybrid systems, and software reliability modeling and assessment.