Report on the 6th ADBIS’2002 conference
The 6th East European Conference ADBIS 2002 was held on September 8-11, 2002 in Bratislava, Slovakia. It was organised by the Slovak University of Technology in Bratislava (and, in particular, its Faculty of Electrical Engineering and Information Technology) in co-operation with ACM SIGMOD, the Moscow ACM SIGMOD Chapter, and the Slovak Society for Computer Science. The call for papers attracted 115 submissions from 35 countries. The international program committee, consisting of 43 researchers from 21 countries, selected 25 full papers and 4 short papers for a monograph volume published by Springer-Verlag. Besides those 29 regular papers, the volume also includes 3 invited papers presented at the conference as invited lectures. Additionally, 20 papers were selected for the Research Communications volume. The authors of accepted papers come from 22 countries on 4 continents, indicating the truly international recognition of the ADBIS conference series. The conference had 104 registered participants from 22 countries and included invited lectures, tutorials, and regular sessions. This report describes the goals of the conference and summarizes the issues discussed during the sessions.
Partial replication in the database state machine
Doctoral thesis in Informatics - Branch of Knowledge in Programming Technologies. Enterprise information systems are nowadays commonly structured as multi-tier architectures and invariably built on top of database management systems responsible for the storage and provision of the entire business data. Database management systems therefore play a vital role in today's organizations: the overall system dependability depends directly on their reliability and availability.
Replication is a well-known technique to improve dependability. By maintaining consistent replicas of a database one can increase its fault tolerance and simultaneously improve the system's performance by splitting the workload among the replicas.
In this thesis we address these issues by exploiting the partial replication of databases. We target large-scale systems where replicas are distributed across wide area networks, aiming at both fault tolerance and fast local access to data. In particular, we envision information systems of multinational organizations with strong access locality, in which fully replicated data should be kept to a minimum and a judicious placement of replicas should allow the full recovery of any site in case of failure.
Our research builds on database replication algorithms based on group communication protocols, specifically multi-master certification-based protocols. At the core of these protocols lies a total order multicast primitive responsible for establishing a total order of transaction execution.
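The certification step such protocols rely on can be sketched as follows. This is a minimal illustrative model, not the thesis's implementation; all names (`Transaction`, `Certifier`) and the logical-clock bookkeeping are assumptions made for the sketch: a transaction delivered in total order commits only if no concurrently committed transaction wrote an item it read.

```python
# Minimal sketch of certification-based replication (illustrative names, not
# the thesis's actual implementation). Every replica runs the same certify()
# in total-order delivery order, so all replicas reach the same decision.

class Transaction:
    def __init__(self, start_ts, read_set, write_set):
        self.start_ts = start_ts          # logical time when the transaction began
        self.read_set = set(read_set)     # items read during optimistic execution
        self.write_set = set(write_set)   # items written

class Certifier:
    def __init__(self):
        self.committed = []               # history of (commit_ts, write_set)
        self.clock = 0                    # logical commit counter

    def certify(self, txn):
        """Called on every replica, in the order fixed by total order multicast."""
        for commit_ts, write_set in self.committed:
            # Conflict: a transaction committed after txn started and wrote
            # something txn read, so txn's reads are stale.
            if commit_ts > txn.start_ts and write_set & txn.read_set:
                return False              # abort
        self.clock += 1
        self.committed.append((self.clock, txn.write_set))
        return True                       # commit

c = Certifier()
t1 = Transaction(start_ts=0, read_set={"x"}, write_set={"x"})
t2 = Transaction(start_ts=0, read_set={"x"}, write_set={"y"})
print(c.certify(t1))  # True: nothing committed yet
print(c.certify(t2))  # False: t1 wrote "x", which t2 read concurrently
```

Because every replica applies the same deterministic check in the same delivery order, no replica-to-replica coordination is needed beyond the total order multicast itself.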
A well-known performance optimization in local area networks exploits the fact that the definitive total order of messages often closely follows the spontaneous network order, making it possible to optimistically proceed in parallel with the ordering protocol. Unfortunately, this optimization is invalidated in wide area networks, precisely where the increased latency would make it most useful. To overcome this we present a novel total order protocol with optimistic delivery for wide area networks. Our protocol uses local statistical estimates to independently derive an order of messages that closely matches the definitive one, thus allowing optimistic execution in real wide area networks.
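The idea of correcting the spontaneous (arrival) order with per-sender delay estimates can be sketched as a toy model. This is not the thesis's protocol; the exponential averaging, the timestamps, and all names are assumptions of the sketch, and real clocks would not be perfectly synchronized:

```python
# Toy sketch of optimistic delivery driven by local statistical estimates
# (illustrative only): each process tracks an exponentially averaged network
# delay per sender and tentatively orders messages by estimated send time,
# ahead of the definitive total order.

class OptimisticOrderer:
    def __init__(self, alpha=0.2):
        self.alpha = alpha        # smoothing factor for the delay estimate
        self.delay = {}           # sender -> exponentially averaged delay
        self.pending = []         # (estimated_send_time, message)

    def receive(self, sender, send_ts, recv_ts, msg):
        observed = recv_ts - send_ts
        old = self.delay.get(sender, observed)
        self.delay[sender] = old + self.alpha * (observed - old)
        # Subtract the estimated delay to approximate when the message left.
        est_send = recv_ts - self.delay[sender]
        self.pending.append((est_send, msg))

    def optimistic_order(self):
        # Tentative order; it must be confirmed (or rolled back) once the
        # ordering protocol delivers the definitive total order.
        return [m for _, m in sorted(self.pending)]

o = OptimisticOrderer()
o.receive("near", send_ts=5, recv_ts=15, msg="m2")   # arrives first (low delay)
o.receive("far", send_ts=0, recv_ts=100, msg="m1")   # arrives second (high delay)
print(o.optimistic_order())  # ['m1', 'm2']: estimated send order, not arrival order
```

Note how the far sender's message arrives last but is tentatively ordered first, which is exactly the correction the spontaneous network order fails to make over wide area links.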
Handling partial replication within a certification-based protocol is also particularly challenging, as it directly impacts the certification procedure itself. Depending on the approach, the added complexity may actually defeat the purpose of partial replication. We devise, implement and evaluate two variations of the Database State Machine protocol, discussing their benefits and adequacy with the workload of the standard TPC-C benchmark.
Fundação para a Ciência e a Tecnologia (FCT) - ESCADA (POSI/CHS/33792/2000)
Blockchain in maritime cybersecurity
Blockchain technologies can be used for many different purposes, from handling large amounts of data to creating better solutions for privacy protection, user authentication and a tamper-proof ledger, which leads to growing interest among industries. Smart contracts, fog nodes and different consensus methods create a scalable environment to secure multi-party connections with equal trust in participating nodes' identities. Different blockchains offer multiple methodologies to use in different environments. This thesis has focused on the Ethereum-based open-source solutions that best fit the remote pilotage environment.
Autonomous vehicular networks and remotely operable devices have been a popular research topic in the last few years. Remote pilotage in the maritime environment is presumed to reach its full potential with fully autonomous vessels within ten years, which makes the topic interesting for all researchers. However, cybersecurity in these environments is especially important because incidents can lead to financial loss, reputational damage, loss of customer and industry trust, and environmental damage. These complex environments also have multiple attack vectors because of the systems' wireless nature. Denial-of-service (DoS), man-in-the-middle (MITM), message or executable code injection, authentication tampering and GPS spoofing are among the most common attacks against large IoT systems. This is why blockchain can be used to create a tamper-proof environment with no single point of failure.
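The tamper-evidence property that makes a ledger attractive here comes from hash chaining, which can be shown in a few lines. This is a generic sketch of the principle, not any of the protocols evaluated in the thesis; the vessel-telemetry payloads are invented for illustration:

```python
# Minimal sketch of a hash-chained, tamper-evident log (the core idea behind
# a blockchain ledger, stripped of consensus and signatures): each block
# commits to the previous block's hash, so altering any historical entry
# breaks every later link.

import hashlib
import json

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append(chain, data):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "data": data})

def verify(chain):
    # Recompute every link; any edit to an earlier block changes its hash
    # and no longer matches the "prev" stored in its successor.
    for i in range(1, len(chain)):
        if chain[i]["prev"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
append(chain, "vessel 42: heading 180")       # illustrative telemetry
append(chain, "vessel 42: heading 170")
print(verify(chain))                          # True
chain[0]["data"] = "vessel 42: heading 090"   # tamper with history
print(verify(chain))                          # False: the chain no longer links up
```

In a distributed deployment each participant holds a copy of the chain, so a single compromised node cannot silently rewrite history, which is the "no single point of failure" argument above.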
After extensive research into the best-performing blockchain technologies, Ethereum seemed the most preferable for a decentralised maritime environment. In comparison to most 2021 blockchain technologies, which have focused on financial industries and cryptocurrencies, Ethereum has focused on decentralizing applications within many different industries. This thesis examines three Ethereum-based blockchain protocol solutions and one operating system for these protocols. All have different features that add to the base blockchain technology, but after extensive comparison two of these protocols perform better in terms of concurrency and privacy. Hyperledger Fabric and Quorum provide many ways of tackling privacy, concurrency and parallel-execution issues with consistently high throughput levels. However, Hyperledger Fabric has far better throughput and concurrency management. This makes the combination of the Firefly operating system with the Hyperledger Fabric blockchain protocol the most preferable solution in a complex remote pilotage fairway environment.
Modeling a Consortium-based Distributed Ledger Network with Applications for Intelligent Transportation Infrastructure
Emerging distributed-ledger networks are changing the landscape for environments of low trust among participating entities. Implementing such technologies in transportation infrastructure communications and operations would enable, in a secure fashion, decentralized collaboration among entities who do not fully trust each other. This work models a transportation records and events data collection system enabled by a Hyperledger Fabric blockchain network and simulated using a transportation environment modeling tool. A distributed vehicle records management use case is shown with the capability to detect and prevent unauthorized vehicle odometer tampering. Another use case studied is that of vehicular data collected during the event of an accident. It relies on broadcast data collected from the Vehicle Ad-hoc Network (VANET) and submitted as witness reports from nearby vehicles or road-side units that observed the event taking place or detected misbehaving activity by vehicles involved in the accident. Mechanisms for the collection, validation, and corroboration of the reported data, which may prove crucial for vehicle accident forensics, are described and their implementation is discussed. A performance analysis of the network under various loads is conducted, with results suggesting that tailored endorsement policies are an effective mechanism to improve overall network throughput for a given channel. The experimental testbed shows that Hyperledger Fabric and other distributed ledger technologies hold promise for the collection of transportation data and the collaboration of applications and services that consume it.
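The endorsement policies mentioned above gate which peers must sign off on a transaction before it is accepted. The following is a simplified Python model of how such a policy combinator works, not Fabric's actual API; the organization MSP identifiers (`DMVMSP`, `StationAMSP`, `StationBMSP`) are invented for the vehicle-records scenario:

```python
# Sketch of how a Fabric-style endorsement policy gates a transaction
# (simplified model): a transaction is valid only if the set of collected
# endorsements satisfies the channel's policy expression, e.g. something in
# the spirit of Fabric's AND('Org1MSP.peer', OR('Org2MSP.peer', ...)).

def AND(*subpolicies):
    return lambda endorsers: all(p(endorsers) for p in subpolicies)

def OR(*subpolicies):
    return lambda endorsers: any(p(endorsers) for p in subpolicies)

def Org(msp_id):
    # Satisfied when a peer of the given organization has endorsed.
    return lambda endorsers: msp_id in endorsers

# Hypothetical policy: the DMV must endorse every odometer update, plus at
# least one of two inspection stations.
policy = AND(Org("DMVMSP"), OR(Org("StationAMSP"), Org("StationBMSP")))

print(policy({"DMVMSP", "StationAMSP"}))       # True: DMV plus one station
print(policy({"StationAMSP", "StationBMSP"}))  # False: DMV endorsement missing
```

Tailoring such policies trades assurance for cost: requiring fewer endorsing organizations per transaction means fewer signatures to collect and verify, which is consistent with the throughput improvement reported above.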
Distributed Operating Systems
Distributed operating systems have many aspects in common with centralized ones, but they also differ in certain ways. This paper is intended as an introduction to distributed operating systems, and especially to current university research about them. After a discussion of what constitutes a distributed operating system and how it is distinguished from a computer network, various key design issues are discussed. Then several examples of current research projects are examined in some detail, namely, the Cambridge Distributed Computing System, Amoeba, V, and Eden. © 1985, ACM. All rights reserved.
Securing the Internet at the Exchange Points
Master's thesis, Informatics Engineering (Architecture, Systems and Computer Networks), 2022, Universidade de Lisboa, Faculdade de Ciências. BGP, the Border Gateway Protocol, is the inter-domain routing protocol that glues the Internet together. Despite its importance, it has well-known security problems. Frequently, the BGP infrastructure is the target of prefix hijacking and path manipulation attacks. These attacks disrupt the normal functioning of the Internet by either redirecting traffic, potentially allowing eavesdropping, or even preventing it from reaching its destination altogether, affecting availability.
These problems result from the lack of a fundamental security mechanism: the ability to validate the information in routing announcements. Specifically, BGP authenticates neither the prefix origin nor the validity of the announced routes. This means that an intermediate network that intercepts a BGP announcement can maliciously announce an IP prefix that it does not own as its own, or insert a bogus path to a prefix with the goal of intercepting traffic.
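The missing origin check can be illustrated with a sketch of route-origin validation against a table of route origin authorizations (ROAs), in the spirit of RPKI. This is an illustration of the kind of validation BGP lacks, not the mechanism this thesis proposes; real RPKI validation operates on signed objects, not a plain lookup table, and the prefix and AS numbers below are documentation examples:

```python
# Sketch of route-origin validation against ROAs (illustrative): an
# announcement is "valid" only if some ROA authorizes its origin AS for a
# covering prefix within the ROA's maximum length; covered but unauthorized
# announcements (e.g. a hijack) are "invalid"; uncovered ones are "unknown".

import ipaddress

# ROA table: (authorized prefix, max prefix length, authorized origin AS)
roas = [
    (ipaddress.ip_network("192.0.2.0/24"), 24, 64500),
]

def validate(prefix_str, origin_as):
    prefix = ipaddress.ip_network(prefix_str)
    covered = False
    for roa_prefix, max_len, roa_as in roas:
        if prefix.subnet_of(roa_prefix):
            covered = True
            if origin_as == roa_as and prefix.prefixlen <= max_len:
                return "valid"
    return "invalid" if covered else "unknown"

print(validate("192.0.2.0/24", 64500))  # valid: matches the ROA
print(validate("192.0.2.0/24", 64666))  # invalid: another AS announces the prefix
```

Without any such check, the router simply believes the announcement, which is precisely what makes prefix hijacking possible.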
Several solutions have been proposed in the past, but they all have limitations, of which the most severe is arguably the requirement to perform drastic changes to the existing BGP infrastructure (i.e., requiring the replacement of existing equipment). In addition, most solutions require widespread adoption to be effective. Finally, they typically require secure communication channels between the participating routers, which entails computationally intensive cryptographic verification capabilities that are normally unavailable in this type of equipment.
With these challenges in mind, this thesis proposes to investigate the possibility of improving BGP security by leveraging the software-defined networking (SDN) technology that is increasingly common at Internet Exchange Points (IXPs). These interconnection facilities are single locations that typically connect hundreds to thousands of networks, working as Internet "middlemen" ideally placed to implement inter-network mechanisms, such as security, without requiring changes to the network operators' infrastructure. Our key idea is to include a secure channel between IXPs that, by running in the SDN server that controls these modern infrastructures, avoids the cryptographic requirements in the routers. In our solution, the secure communication channel implements a distributed ledger (a blockchain), for decentralized trust and its other inherent guarantees. The rationale is that by increasing trust and avoiding expensive infrastructure updates, we hope to create incentives for operators to adopt these new IXP-enhanced security services.
ACE: Asynchronous and Concurrent Execution of Complex Smart Contracts
Smart contracts are programmable, decentralized and transparent financial applications. Because smart contract platforms typically support Turing-complete programming languages, such systems are often said to enable arbitrary applications. However, the current permissionless smart contract systems impose heavy restrictions on the types of computations that can be implemented. For example, the globally-replicated and sequential execution model of Ethereum requires low gas limits that make many computations infeasible.
In this paper, we propose a novel system called ACE whose main goal is to enable more complex smart contracts on permissionless blockchains. ACE is based on an off-chain execution model where the contract issuers appoint a set of service providers to execute the contract code independently of the consensus layer. The primary advantage of ACE over previous solutions is that it allows one contract to safely call another contract that is executed by a different set of service providers. Thus, ACE is the first solution to enable off-chain execution of interactive smart contracts with flexible trust assumptions. Our evaluation shows that ACE enables smart contracts several orders of magnitude more complex than standard Ethereum.
Letter from the Special Issue Editor
Editorial work for DEBULL on a special issue about data management on Storage Class Memory (SCM) technologies.
An Introduction to Database Systems
This textbook introduces the basic concepts of database systems. These concepts are presented through numerous examples in modeling and design. The material in this book is geared to an introductory course in database systems offered at the junior or senior level of Computer Science. It could also be used in a first-year graduate course in database systems, focusing on a selection of the advanced topics in the latter chapters.