Validation of approximate dependability models of a RAID architecture with orthogonal organization
RAID (Redundant Array of Inexpensive Disks) systems are widely used in storage servers, and level-5 RAID is one of the most popular RAID architectures. Numerical analysis of exact Markovian dependability models of a level-5 RAID architecture with orthogonal organization is unfeasible for many realistic model parameters due to the size of the resulting state space. In this paper we develop approximate dependability models with small state spaces for a level-5 RAID architecture with orthogonal organization. We consider two measures: steady-state unavailability and unreliability.
The models encompass disk hot spares and imperfect disk reconstruction. Using bounding techniques, we analyze the accuracy of the models and show that they are extremely accurate.
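The flavor of such a dependability model can be sketched with a deliberately simplified three-state Markov chain for a single parity group, far smaller than the paper's orthogonal-organization models; all parameters and rates below are illustrative assumptions, not values from the paper.

```python
# Sketch of a three-state Markov dependability model for one RAID-5 parity
# group (illustrative parameters): state 0 = all n disks OK, state 1 = one
# disk failed (rebuilding), state 2 = data loss. lam = per-disk failure
# rate, mu = disk repair rate, delta = restore rate after loss (per hour).
n, lam, mu, delta = 8, 1e-5, 1.0 / 24, 1.0 / 72

# Balance equations give the steady-state probabilities up to normalization:
#   pi1/pi0 = n*lam / (mu + (n-1)*lam)
#   pi2/pi0 = (n-1)*lam * (pi1/pi0) / delta
r1 = n * lam / (mu + (n - 1) * lam)
r2 = (n - 1) * lam * r1 / delta
pi0 = 1.0 / (1.0 + r1 + r2)
pi = [pi0, pi0 * r1, pi0 * r2]

unavailability = pi[2]  # data is inaccessible only in the loss state
print(f"steady-state unavailability ~ {unavailability:.3e}")
```

Even this toy chain shows why exact models explode: modelling every disk, spare, and reconstruction stage individually multiplies the state count, which is what motivates the approximate models above.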
Redundant disk arrays: Reliable, parallel secondary storage
During the past decade, advances in processor and memory technology have given rise to increases in computational performance that far outstrip increases in the performance of secondary storage technology. Built from emerging small-disk technology, disk arrays match the cost, volume, and capacity of current disk subsystems while, by leveraging parallelism, delivering many times their performance. Unfortunately, arrays of small disks may have much higher failure rates than the single large disks they replace. Redundant arrays of inexpensive disks (RAID) use simple redundancy schemes to provide high data reliability. The data encoding, performance, and reliability of redundant disk arrays are investigated. Organizing redundant data into a disk array is treated as a coding problem. Among the alternatives examined, codes as simple as parity are shown to effectively correct single, self-identifying disk failures
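The parity idea mentioned above can be illustrated in a few lines (block contents here are made up): the lost block of a stripe is recovered as the XOR of the surviving data blocks and the parity block.

```python
# Sketch of RAID-style parity: a lost block is rebuilt by XOR-ing the
# surviving data blocks with the parity block. Block contents are
# illustrative, not from the paper.
from functools import reduce

def parity(blocks):
    """Bytewise XOR of equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"\x01\x02", b"\x0f\x10", b"\xaa\x55"]
p = parity(data)

# Simulate losing data[1] and rebuilding it from the survivors plus parity.
rebuilt = parity([data[0], data[2], p])
assert rebuilt == data[1]
print("recovered:", rebuilt.hex())
```

This works because XOR is its own inverse, which is what makes parity sufficient for single, self-identifying failures.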
Dependable Embedded Systems
This Open Access book introduces readers to many new techniques for enhancing and optimizing reliability in embedded systems, which have emerged particularly within the last five years. The book introduces the most prominent reliability concerns from today's point of view and roughly recapitulates the progress made by the community so far. Unlike books that focus on a single abstraction level, such as the circuit level or the system level alone, this book addresses reliability challenges across different levels, from the physical level all the way to the system level (cross-layer approaches). It aims to demonstrate how new hardware/software co-design solutions can effectively mitigate reliability degradation such as transistor aging, process variation, temperature effects, and soft errors. It provides readers with the latest insights into novel, cross-layer methods and models for the dependability of embedded systems; describes cross-layer approaches that leverage reliability through techniques proactively designed with respect to techniques at other layers; and explains run-time adaptation and concepts of self-organization for achieving error resiliency in complex, future many-core systems
Security Analysis of System Behaviour - From "Security by Design" to "Security at Runtime"
The Internet today provides the environment for novel applications and
processes which may evolve well beyond their pre-planned scope and
purpose. Security analysis is growing in complexity with the increase
in functionality, connectivity, and dynamics of current electronic
business processes. Technical processes within critical
infrastructures also have to cope with these developments. To tackle
the complexity of the security analysis, the application of models is
becoming standard practice. However, model-based support for security
analysis is not only needed in pre-operational phases but also during
process execution, in order to provide situational security awareness
at runtime.
This cumulative thesis provides three major contributions to modelling
methodology.
Firstly, this thesis provides an approach for model-based analysis and
verification of security and safety properties in order to support
fault prevention and fault removal in system design or redesign.
Furthermore, some construction principles for the design of
well-behaved scalable systems are given.
The second topic is the analysis of the exposure of vulnerabilities
in the software components of networked systems to exploitation by
internal or external threats. This kind of fault forecasting allows
the security assessment of alternative system configurations and
security policies. Validation and deployment of security policies
that minimise the attack surface can now improve fault tolerance and
mitigate the impact of successful attacks.
Thirdly, the approach is extended to runtime applicability. An
observing system monitors an event stream from the observed system
with the aim of detecting faults - deviations from the specified
behaviour or security compliance violations - at runtime.
Furthermore, knowledge about the expected behaviour given by an
operational model is used to predict faults in the near
future. Building on this, a holistic security management strategy is
proposed. The architecture of the observing system is described and
the applicability of model-based security analysis at runtime is
demonstrated utilising processes from several industrial scenarios.
The results of this cumulative thesis are provided by 19 selected
peer-reviewed papers
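The runtime-monitoring idea described above, detecting deviations of an observed event stream from an operational model, can be sketched as follows; the model, event names, and structure are illustrative assumptions, not taken from the thesis.

```python
# Minimal sketch of an observing system: events from the observed system
# are checked against the allowed transitions of an operational model, and
# any transition the model does not permit is reported as a fault.
MODEL = {  # state -> {event: next_state}
    "idle":   {"login": "authed"},
    "authed": {"read": "authed", "write": "authed", "logout": "idle"},
}

def monitor(events, state="idle"):
    violations = []
    for i, ev in enumerate(events):
        nxt = MODEL.get(state, {}).get(ev)
        if nxt is None:
            violations.append((i, state, ev))  # deviation from the model
        else:
            state = nxt
    return violations

print(monitor(["login", "read", "write", "logout", "write"]))
# the final "write", attempted in state "idle", is reported as a violation
```

The same model-walking loop is the natural hook for prediction: once the current state is known, the model also tells which events are expected next.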
Partial replication in the database state machine
Doctoral Thesis in Informatics - Knowledge Area of Programming Technologies
Enterprise information systems are nowadays commonly structured as multi-tier
architectures and invariably built on top of database management systems responsible
for the storage and provision of the entire business data. Database management
systems therefore play a vital role in today's organizations: the overall
system dependability directly depends on their reliability and availability.
Replication is a well known technique to improve dependability. By maintaining
consistent replicas of a database one can increase its fault tolerance and simultaneously
improve the system's performance by splitting the workload among the
replicas.
In this thesis we address these issues by exploiting the partial replication of databases.
We target large scale systems where replicas are distributed across wide
area networks aiming at both fault tolerance and fast local access to data. In particular,
we envision information systems of multinational organizations presenting
strong access locality in which fully replicated data should be kept to a minimum
and a judicious placement of replicas should be able to allow the full recovery of
any site in case of failure.
Our research builds on work on database replication algorithms based on group
communication protocols, in particular multi-master certification-based protocols. At
the core of these protocols resides a total order multicast primitive responsible for
establishing a total order of transaction execution.
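The certification step at the heart of such protocols can be sketched as follows. This is a simplified, illustrative check, not the thesis's exact procedure: after total-order delivery, every replica deterministically aborts a transaction if any transaction that committed after its snapshot wrote an item it read.

```python
# Hedged sketch of certification-based replication: transactions arrive in
# total order, and each replica certifies them deterministically by
# checking read/write conflicts against transactions committed since the
# incoming transaction's snapshot. All names are illustrative.
committed = []  # list of (commit_seq, write_set)

def certify(start_seq, read_set, write_set, seq):
    # Abort if a transaction committed after our snapshot wrote something
    # we read; otherwise commit and record our own write-set.
    for cseq, ws in committed:
        if cseq > start_seq and ws & read_set:
            return False
    committed.append((seq, frozenset(write_set)))
    return True

assert certify(0, {"x"}, {"y"}, seq=1) is True   # first txn commits
assert certify(0, {"y"}, {"z"}, seq=2) is False  # read "y", written by txn 1
assert certify(1, {"y"}, {"z"}, seq=3) is True   # snapshot taken after txn 1
```

Because every replica sees the same total order and runs the same check, all replicas reach the same commit/abort decision without extra communication.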
A well known performance optimization in local area networks exploits the fact
that the definitive total order of messages often closely follows the spontaneous
network order, making it possible to optimistically proceed in parallel with
the ordering protocol. Unfortunately, this optimization is invalidated in wide area
networks, precisely when the increased latency would make it more useful. To
overcome this we present a novel total order protocol with optimistic delivery for
wide area networks. Our protocol uses local statistical estimates to independently
produce a message order that closely matches the definitive one, thus allowing
optimistic execution in real wide area networks.
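The intuition behind estimate-based optimistic ordering can be sketched in a few lines. This is an illustration of the idea only, not the thesis's protocol: each receiver keeps a per-sender latency estimate and optimistically orders messages by estimated send time, letting the definitive total order later confirm or correct the guess.

```python
# Illustrative sketch: compensate per-sender latency so that the locally
# guessed order approximates the definitive total order. The latency
# estimates and message data are assumptions for the example.
import heapq

latency_est = {"A": 0.120, "B": 0.035}  # assumed per-sender latencies (s)

def optimistic_order(arrivals):
    """arrivals: list of (recv_time, sender, msg); returns guessed order."""
    heap = [(recv - latency_est[snd], msg) for recv, snd, msg in arrivals]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

# B's message arrives first, but A's estimated send time is earlier, so the
# optimistic order places A's message ahead of B's.
print(optimistic_order([(1.000, "B", "m2"), (1.050, "A", "m1")]))
```

The better the statistical estimates track real link latencies, the more often the optimistic guess matches the definitive order, and the more work can safely proceed in parallel with the ordering protocol.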
Handling partial replication within a certification based protocol is also particularly
challenging as it directly impacts the certification procedure itself. Depending
on the approach, the added complexity may actually defeat the purpose
of partial replication. We devise, implement and evaluate two variations of the
Database State Machine protocol, discussing their benefits and adequacy for the
workload of the standard TPC-C benchmark.
Fundação para a Ciência e a Tecnologia (FCT) - ESCADA (POSI/CHS/33792/2000)
Proactive software rejuvenation solution for web environments on virtualized platforms
The availability of Information Technologies for everything, from everywhere, at all times is a growing requirement. We use Information Technologies for everything from common and social tasks to critical tasks like managing nuclear power plants or even the International Space Station (ISS). However, the availability of IT infrastructures is still a huge challenge nowadays. A quick look at the news turns up reports of corporate outages affecting millions of users and impacting the revenue and image of the companies.
It is well known that, currently, computer system outages are more often due to software faults than hardware faults. Several studies have reported that one of the causes of unplanned software outages is the software aging phenomenon. This term refers to the accumulation of errors, usually causing resource contention, during long-running application executions, like web applications, which normally causes applications/systems to hang or crash. Gradual performance degradation may also accompany software aging. Software aging is often related to memory bloat/leaks, unterminated threads, data corruption, unreleased file locks or overruns. Several examples of software aging can be found in the industry.
The work presented in this thesis aims to offer a proactive and predictive software rejuvenation solution for Internet services against software aging caused by resource exhaustion. To this end, we first present a threshold-based proactive rejuvenation to avoid the consequences of software aging. This first approach has some limitations, the most important being the need to know a priori the resource or resources involved in the crash and their critical values. Moreover, some expertise is needed to set the threshold value that triggers the rejuvenation action. Due to these limitations, we have evaluated the use of Machine Learning to overcome the weaknesses of our first approach and obtain a proactive and predictive solution.
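A threshold-based rejuvenation trigger of the kind described above can be sketched as follows; the monitored metric, window size, threshold, and action are all assumptions for illustration, not the thesis's configuration.

```python
# Minimal sketch of threshold-based proactive rejuvenation: sample a
# resource metric and trigger a rejuvenation action (e.g. restarting the
# aged application instance) when the recent average crosses a critical
# value that must be chosen a priori, which is the limitation noted above.
def check_rejuvenation(samples, threshold, window=3):
    """Trigger when the mean of the last `window` samples exceeds threshold."""
    if len(samples) < window:
        return False
    return sum(samples[-window:]) / window > threshold

memory_mb = [512, 530, 610, 705, 790, 860]  # aging: usage creeps upward
if check_rejuvenation(memory_mb, threshold=750):
    print("rejuvenate: restart the aged application instance")
```

The hard part is not the check but picking the metric and threshold, which is exactly what motivates replacing this rule with a learned predictive model.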
Finally, the current and increasing tendency to use virtualization technologies to improve resource utilization has turned traditional data centers into virtualized data centers or platforms. We have used a Mathematical Programming approach to virtual machine allocation and migration to optimize the resources, accepting as many services as possible on the platform while, at the same time, guaranteeing the availability (via our software rejuvenation proposal) of the deployed services against software aging.
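A toy version of the allocation problem described above can make the formulation concrete. The hosts, demands, and objective below are illustrative assumptions; a real deployment would hand this to a mathematical-programming solver, but exhaustive search suffices at this size.

```python
# Toy allocation problem: choose which services to accept and on which
# host, maximising the number accepted without exceeding any host's
# capacity. Data are made up for the example.
from itertools import product

hosts = {"h1": 8, "h2": 6}                  # CPU capacity per host
demand = {"s1": 4, "s2": 4, "s3": 3, "s4": 5}

def best_allocation():
    best = (0, {})
    options = list(hosts) + [None]          # None = reject the service
    for choice in product(options, repeat=len(demand)):
        plan = dict(zip(demand, choice))
        load = {h: 0 for h in hosts}
        for svc, h in plan.items():
            if h is not None:
                load[h] += demand[svc]
        if all(load[h] <= hosts[h] for h in hosts):
            accepted = sum(h is not None for h in plan.values())
            if accepted > best[0]:
                best = (accepted, plan)
    return best

accepted, plan = best_allocation()
print(accepted, plan)
```

With these numbers the total demand (16) exceeds total capacity (14), so at most three services can be accepted; the availability constraint from the rejuvenation proposal would appear as additional constraints on this same model.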
The thesis is supported by an exhaustive experimental evaluation that proves the effectiveness and feasibility of our proposals for current systems
Resilience-Building Technologies: State of Knowledge -- ReSIST NoE Deliverable D12
This document is the first product of work package WP2, "Resilience-building and -scaling technologies", in the programme of jointly executed research (JER) of the ReSIST Network of Excellence