Scalable low latency consensus for blockchains
Master's thesis, Segurança Informática, Universidade de Lisboa, Faculdade de Ciências, 2021.
State machine replication (SMR) is a classical technique for implementing consistent and fault-tolerant replicated services. This type of system is usually built on top of consensus protocols that have high throughput but have problems scaling to settings with a large number of participants or wide-area scenarios, due to the number of messages that must be exchanged to reach consensus.
We propose ProBFT (Probabilistic Byzantine Fault Tolerance), a consensus protocol specifically designed to tackle the scalability problem of BFT protocols. ProBFT is a consensus protocol with optimal latency (three communication steps, as in PBFT) but with a reduced number of messages exchanged in each phase (O(n√n) instead of PBFT's O(n²)). ProBFT is a probabilistic protocol built on top of well-known primitives, such as probabilistic Byzantine quorums and verifiable random functions, and provides high probabilities of safety and liveness when an overwhelming majority of replicas is correct.
We also propose a state machine replication protocol called PROBER (PRObabilistic ByzantinE Replication) that builds on top of two consensus protocols, ProBFT and PBFT. PROBER uses ProBFT to provide fast, probabilistic replies to clients and uses PBFT to eventually commit the history of operations deterministically, guaranteeing that the system will not roll back requests after such a commit. This periodic deterministic commit allows clients to enjoy the low latency provided by ProBFT while still having the guarantees provided by a deterministic protocol.
We provide a detailed description of both protocols and analyse the probabilities of safety and liveness as a function of the current number of Byzantine replicas.
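The message-complexity gap claimed above can be sketched numerically. The snippet below is an illustration only, not ProBFT's actual sampling rule: a SHA-256 hash stands in for a verifiable random function, and the sample size ⌊√n⌋ is an assumption for the O(n√n) per-phase total.

```python
import hashlib
import math

def sample_peers(replica_id: int, round_no: int, n: int) -> list:
    """Pseudo-randomly pick ~sqrt(n) peers for one phase.

    A hash of (replica, round) stands in for a verifiable random
    function (VRF); a real protocol would use an actual VRF so the
    selection is publicly verifiable.
    """
    k = math.isqrt(n)  # probabilistic-quorum sample size, O(sqrt(n))
    seed = hashlib.sha256(f"{replica_id}:{round_no}".encode()).digest()
    rng = int.from_bytes(seed, "big")
    candidates = [p for p in range(n) if p != replica_id]
    peers = []
    for _ in range(k):
        rng, idx = divmod(rng, len(candidates))
        peers.append(candidates.pop(idx))
    return peers

def messages_per_phase(n: int) -> tuple:
    """Total per-phase messages: ProBFT-style n*sqrt(n) vs PBFT's n*n."""
    return n * math.isqrt(n), n * n
```

With n = 100 replicas, each replica contacts 10 sampled peers per phase, so a phase costs 1,000 messages instead of PBFT's 10,000.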
Enforced Development Of The Earth's Atmosphere
We review some basic issues of the life-prescribed development of the Earth
system and the Earth's atmosphere, and discuss the unity of Earth's type of
life in physical and transcendental divisions. In the physical division, we
exemplify and substantiate the origin of atmospheric phenomena in the metabolic
pathways acquired by the Earth's life forms. We are especially concerned with
the emergence of pro-life superficial environments under elaboration of energy
transformations. An analysis of the coupling of elaborated ozone-oxygen
transformation and the Arctic bromine explosion is provided. Sensing is a
foundation of life and of the Earth's life. We offer our explanation of human-like
perception, reasoning and creativity. We suggest a number of propositions about
the association of the transcendental and physical divisions and the purpose of
existence. The study follows the tradition of natural philosophy. The paper is
suitable for popular reading.
Comment: 56 pages, incl. 6 excerpts from Plato, Leibniz, Goethe, Franz Marc
and Lynn Margulis
Some Garbage In - Some Garbage Out: Asynchronous t-Byzantine as Asynchronous Benign t-resilient system with fixed t-Trojan-Horse Inputs
We show that an asynchronous t-faults Byzantine system is equivalent to an
asynchronous t-resilient system in which, unbeknownst to all, the private inputs
of at most t processors were altered and installed by a malicious oracle.
The immediate ramification is that dealing with asynchronous Byzantine
systems does not call for new topological methods, as was recently employed by
various researchers: asynchronous Byzantine is a standard asynchronous system
with an input caveat. It also shows that of two recent independent investigations
of vector agreement, first in the Byzantine model and then in the
fail-stop model, one was superfluous: in these problems the change of
inputs allowed in the Byzantine case has no effect compared to the fail-stop case.
This result was motivated by the aim of casting any asynchronous system as a
synchronous system where all processors are correct and it is the communication
substrate, in the form of a message adversary, that misbehaves. Thus, in addition,
we get such a characterization for the asynchronous Byzantine system.
Comment: 14 pages
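The equivalence can be pictured with a toy reduction. This is an illustrative sketch, not the paper's construction: a simple majority rule stands in for an arbitrary benign t-resilient protocol, and an "oracle" tampers with up to t binary inputs before the run.

```python
import random

def benign_majority_agreement(inputs):
    """Stand-in for any benign agreement rule: adopt the most common value."""
    return max(set(inputs), key=inputs.count)

def byzantine_as_trojan_inputs(inputs, t, seed=0):
    """Model a t-Byzantine execution as a benign one with tampered inputs:
    an oracle overwrites the private inputs of at most t processors, then
    the (otherwise correct) benign protocol runs unchanged."""
    rng = random.Random(seed)
    tampered = list(inputs)
    for i in rng.sample(range(len(inputs)), t):
        tampered[i] = 1 - tampered[i]  # binary inputs, for simplicity
    return benign_majority_agreement(tampered)
```

With 10 processors all holding input 0 and t = 3 tampered inputs, the benign majority rule still decides 0: the Byzantine power reduces to "garbage in".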
Modelling growth/no growth interface of Zygosaccharomyces bailii in simulated acid sauces as a function of natamycin, xanthan gum and sodium chloride concentrations
Probabilistic microbial modelling using logistic regression was used to predict the growth/no growth (G/NG) interfaces of Zygosaccharomyces bailii in simulated acid sauces as a function of natamycin, xanthan gum (XG) and sodium chloride concentrations. Growth was assessed colorimetrically using 2-(4-iodophenyl)-3-(4-nitrophenyl)-5-phenyl-2H-tetrazolium chloride and 2-methoxy-1,4-naphthoquinone as detection reagents. The logistic regression model successfully predicted G/NG probability. The detection reagents used allowed the evaluation of G/NG interfaces in opaque systems, with excellent agreement with the plate count method. A natamycin concentration of 12 mg/L was needed to inhibit Z. bailii growth independently of the presence of XG and/or NaCl. Addition of 3.00 and 6.00% NaCl exerted an antagonistic effect on natamycin action. Furthermore, addition of 0.25 and 0.50% XG decreased natamycin and/or NaCl action. However, an increase in XG concentration to 1.00% decreased yeast growth. These results highlight the importance of the correct selection of stress factors applied to inhibit Z. bailii growth.
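A G/NG logistic model of this kind can be sketched in a few lines. The data and fitted coefficients below are synthetic placeholders, not the paper's results; only the modelling shape (growth probability as a logistic function of natamycin, XG and NaCl) follows the abstract.

```python
import math

def sigmoid(z):
    z = max(-60.0, min(60.0, z))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.05, epochs=3000):
    """Plain stochastic-gradient logistic regression (intercept + weights)."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            err = sigmoid(z) - yi
            w[0] -= lr * err
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * err * xj
    return w

def p_growth(w, natamycin, xg, nacl):
    """Predicted probability of growth at the given concentrations."""
    return sigmoid(w[0] + w[1] * natamycin + w[2] * xg + w[3] * nacl)

# Synthetic training set: columns are natamycin (mg/L), XG (%), NaCl (%);
# growth (1) is suppressed as natamycin rises, mimicking the study design.
X = [[2, 0.25, 1], [4, 0.25, 1], [6, 0.5, 3], [12, 0.5, 3],
     [12, 1.0, 6], [14, 1.0, 6], [1, 0.25, 0], [13, 0.25, 0]]
y = [1, 1, 1, 0, 0, 0, 1, 0]
w = fit_logistic(X, y)
```

Evaluating `p_growth` over a grid of concentrations then traces the G/NG interface as the 0.5-probability contour.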
Byzantine fault-tolerant agreement protocols for wireless Ad hoc networks
Doctoral thesis, Informática (Ciências da Computação), Universidade de Lisboa, Faculdade de Ciências, 2010.
The thesis investigates the problem of fault- and intrusion-tolerant consensus
in resource-constrained wireless ad hoc networks. This is a fundamental
problem in distributed computing because it abstracts the need
to coordinate activities among various nodes. It has been shown to be a
building block for several other important distributed computing problems
like state-machine replication and atomic broadcast.
The thesis begins by making a thorough performance assessment of existing
intrusion-tolerant consensus protocols, which shows that the performance
bottlenecks of current solutions are in part related to their system
modeling assumptions. Based on these results, the communication failure
model is identified as a model that simultaneously captures the reality
of wireless ad hoc networks and allows the design of efficient protocols.
Unfortunately, the model is subject to an impossibility result stating that
there is no deterministic algorithm that allows n nodes to reach agreement
if more than n − 2 omission transmission failures can occur in a communication
step. This result is valid even under strict timing assumptions (i.e.,
a synchronous system).
The thesis applies randomization techniques in increasingly weaker variants
of this model, until an efficient intrusion-tolerant consensus protocol
is achieved. The first variant simplifies the problem by restricting the
number of nodes that may be at the source of a transmission failure at
each communication step. An algorithm is designed that tolerates f dynamic
nodes at the source of faulty transmissions in a system with a total
of n ≥ 3f + 1 nodes.
The second variant imposes no restrictions on the pattern of transmission
failures. The proposed algorithm effectively circumvents the Santoro-Widmayer
impossibility result for the first time. It allows k out of n nodes
to decide despite up to ⌈n/2⌉(n − k) + k − 2 omission failures per communication
step. This algorithm also has the interesting property of guaranteeing
safety during arbitrary periods of unrestricted message loss.
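The bound that circumvents Santoro-Widmayer can be evaluated directly; the helper below simply computes the abstract's expression (the function name is illustrative).

```python
import math

def max_omission_faults(n: int, k: int) -> int:
    """Omission failures per communication step under which k of the
    n nodes can still decide: ceil(n/2) * (n - k) + k - 2."""
    return math.ceil(n / 2) * (n - k) + k - 2

# e.g. with n = 10 nodes and k = 4 deciders, a step tolerates
# 5 * 6 + 4 - 2 = 32 omission faults
```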
The final variant shares the same properties as the previous one, but relaxes
the model in the sense that the system is asynchronous and that a
static subset of nodes may be malicious. The obtained algorithm, called
Turquois, admits f < n/3 malicious nodes, and ensures progress in communication
steps where the number of omission failures is at most ⌈(n − f)/2⌉(n − k − f) + k − 2.
The algorithm is subject to a comparative performance evaluation against other
intrusion-tolerant protocols. The results show that, as the system scales, Turquois
outperforms the other protocols by more than an order of magnitude.
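The Turquois progress condition can likewise be computed directly. The function below is an illustrative reading of the abstract's bound, with a guard for the f < n/3 resilience requirement.

```python
import math

def turquois_progress_bound(n: int, f: int, k: int) -> int:
    """Omission faults per step under which k nodes still make progress
    with f malicious nodes: ceil((n - f)/2) * (n - k - f) + k - 2."""
    assert 3 * f < n, "Turquois requires f < n/3"
    return math.ceil((n - f) / 2) * (n - k - f) + k - 2

# e.g. n = 10, f = 3, k = 4: ceil(7/2) * 3 + 4 - 2 = 14 omission faults
```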
Can 100 Machines Agree?
Agreement protocols have been typically deployed at small scale, e.g., using
three to five machines. This is because these protocols seem to suffer from a
sharp performance decay. More specifically, as the size of a deployment---i.e.,
degree of replication---increases, the protocol performance greatly decreases.
There is not much experimental evidence for this decay in practice, however,
notably for larger system sizes, e.g., beyond a handful of machines.
In this paper we execute agreement protocols on up to 100 machines and
observe their performance decay. We consider well-known agreement protocols that are
part of mature systems, such as Apache ZooKeeper, etcd, and BFT-Smart, as well
as a chain and a novel ring-based agreement protocol which we implement
ourselves.
We provide empirical evidence that current agreement protocols execute
gracefully on 100 machines. We observe that throughput decay is initially sharp
(consistent with previous observations); but intriguingly---as each system
grows beyond a few tens of replicas---the decay dampens. For chain- and
ring-based replication, this decay is slower than for the other systems. The
positive takeaway from our evaluation is that mature agreement protocol
implementations can sustain out-of-the-box 300 to 500 requests per second when
executing on 100 replicas on a wide-area public cloud platform. Chain- and
ring-based replication can reach between 4K and 11K requests per second (up to
20x improvement), depending on the fault assumptions.
Distributed virtual environment scalability and security
Distributed virtual environments (DVEs) have been an active area of research and engineering for more than 20 years. The most widely deployed DVEs are network games such as Quake, Halo, and World of Warcraft (WoW), with millions of users and billions of dollars in annual revenue. Deployed DVEs remain expensive centralized implementations despite significant research outlining ways to distribute DVE workloads.
This dissertation shows previous DVE research evaluations are inconsistent with deployed DVE needs. Assumptions about avatar movement and proximity - fundamental scale factors - do not match WoW's workload, and likely the workload of other deployed DVEs. Alternate workload models are explored and preliminary conclusions presented. Using realistic workloads it is shown that a fully decentralized DVE cannot be deployed to today's consumers, regardless of its overhead.
Residential broadband speeds are improving, and this limitation will eventually disappear. When it does, appropriate security mechanisms will be a fundamental requirement for technology adoption.
A trusted auditing system ("Carbon") is presented which has good security, scalability, and resource characteristics for decentralized DVEs. When performing exhaustive auditing, Carbon adds 27% network overhead to a decentralized DVE with a WoW-like workload. This resource consumption can be reduced significantly, depending upon the DVE's risk tolerance.
Finally, the Pairwise Random Protocol (PRP) is described. PRP enables adversaries to fairly resolve probabilistic activities, an ability missing from most decentralized DVE security proposals.
Thus, this dissertation's contribution is to address two of the obstacles to deploying research on decentralized DVE architectures: first, the lack of evidence that research results apply to existing DVEs; second, the lack of security systems combining appropriate security guarantees with acceptable overhead.
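The dissertation's PRP is not specified in this abstract, but the problem it solves (two adversaries fairly resolving a probabilistic activity) is classically handled with a commit-reveal exchange, sketched below. This is a generic illustration under that assumption, not PRP itself.

```python
import hashlib
import secrets

def commit(value: int, nonce: bytes) -> str:
    """Hash commitment binding a party to its random contribution."""
    return hashlib.sha256(nonce + value.to_bytes(4, "big")).hexdigest()

def fair_roll(sides: int = 100) -> int:
    """Two mutually distrusting peers jointly draw a value in [0, sides).

    Each party commits to a secret value; commitments are exchanged
    before any reveal, so neither side can bias the combined result
    after seeing the other's contribution.
    """
    a_val, a_nonce = secrets.randbelow(2**31), secrets.token_bytes(16)
    b_val, b_nonce = secrets.randbelow(2**31), secrets.token_bytes(16)
    a_c, b_c = commit(a_val, a_nonce), commit(b_val, b_nonce)  # exchanged first
    # Reveal phase: each side verifies the other's opening before combining.
    assert commit(a_val, a_nonce) == a_c and commit(b_val, b_nonce) == b_c
    return (a_val ^ b_val) % sides
```

In a game setting the result might decide, say, whether an attack lands, without either client trusting the other's random number generator.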
Distributed Protocols with Threshold and General Trust Assumptions
Distributed systems today power almost all online applications. Consequently, a wide range of distributed protocols, such as consensus, and distributed cryptographic primitives are being researched and deployed in practice. This thesis addresses multiple aspects of distributed protocols and cryptographic schemes, enhancing their resilience, efficiency, and scalability.
Fundamental to every secure distributed protocol are its trust assumptions. These assumptions not only measure a protocol's resilience but also determine its scope of application as well as, in some sense, the expressiveness and freedom of the participating parties. Dominant in practice so far is the threshold setting, where at most f out of the n parties may fail in any execution. However, in this setting all parties are viewed as identical, making correlations indescribable. These constraints can be surpassed with general trust assumptions, which allow arbitrary sets of parties to fail in an execution. Despite significant theoretical efforts, relevant practical aspects of this setting are yet to be addressed. Our work fills this gap. We show how general trust assumptions can be efficiently specified, encoded, and used in distributed protocols and cryptographic schemes. Additionally, we investigate a consensus protocol and distributed cryptographic schemes with general trust assumptions. Moreover, we show how the general trust assumptions of different systems, with intersecting or disjoint sets of participants, can be composed into a unified system.
When it comes to decentralized systems, such as blockchains, efficiency and scalability are often compromised due to the total ordering of all user transactions. Guerraoui et al. (Distributed Computing, 2022) have contradicted the common design of major blockchains, proving that consensus is not required to prevent double-spending in a cryptocurrency. Modern blockchains support a variety of distributed applications beyond cryptocurrencies, which let users execute arbitrary code in a distributed and decentralized fashion. In this work we explore the synchronization requirements of a family of Ethereum smart contracts and formally establish the subsets of participants that need to synchronize their transactions.
Moreover, a common requirement of all asynchronous consensus protocols is randomness. A simple and efficient approach is to employ threshold cryptography for this. However, this necessitates in practice a distributed setup protocol, often leading to performance bottlenecks. Blum et al. (TCC 2020) propose a solution bypassing this requirement, which is, however, practically inefficient due to its use of fully homomorphic encryption. Recognizing that randomness for consensus does not need to be perfect (that is, always unpredictable and agreed upon), we propose a practical and concretely efficient protocol for randomness generation.
Lastly, this thesis addresses the issue of deniability in distributed systems. The problem arises from the fact that a digital signature authenticates a message for an indefinite period. We introduce a scheme that allows recipients to verify signatures while allowing plausible deniability for signers. This scheme transforms a polynomial commitment scheme into a digital signature scheme.
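A general (non-threshold) trust assumption can be encoded as an explicit collection of fail-prone sets. The sketch below checks the classical Q3 condition (no three fail-prone sets cover all parties), which is the standard requirement for Byzantine agreement in this setting; the encoding itself is an illustrative choice, not the thesis's representation.

```python
from itertools import combinations, combinations_with_replacement

def satisfies_q3(parties, fail_prone):
    """Q3 condition for a general adversary structure: no three fail-prone
    sets (possibly repeated) may together cover the whole party set."""
    for a, b, c in combinations_with_replacement(fail_prone, 3):
        if a | b | c >= parties:
            return False
    return True

parties = {"p1", "p2", "p3", "p4"}
# Threshold f = 1 out of n = 4, written as a general structure:
threshold_1_of_4 = [set(c) for c in combinations(sorted(parties), 1)]
# A correlated structure: p1 and p2 may fail together.
correlated = [{"p1", "p2"}, {"p3"}, {"p4"}]
```

Here `threshold_1_of_4` satisfies Q3 (it mirrors n > 3f), while `correlated` does not: its three sets jointly cover all four parties.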
Waddington's Landscapes in the Bacterial World
Conrad Waddington's epigenetic landscape, a visual metaphor for the development of multicellular organisms, is appropriate to depict the formation of phenotypic variants of bacterial cells. Examples of bacterial differentiation that result in morphological change have been known for decades. In addition, bacterial populations contain phenotypic cell variants that lack morphological change, and the advent of fluorescent protein technology and single-cell analysis has unveiled scores of examples. Cell-specific gene expression patterns can have a random origin or arise as a programmed event. When phenotypic cell-to-cell differences are heritable, bacterial lineages are formed. The mechanisms that transmit epigenetic states to daughter cells can have strikingly different levels of complexity, from the propagation of simple feedback loops to the formation of complex DNA methylation patterns. Game theory predicts that phenotypic heterogeneity can facilitate bacterial adaptation to hostile or unpredictable environments, serving either as a division of labor or as bet hedging that anticipates future challenges. Experimental observation confirms the existence of both types of strategies in the bacterial world.
España, Ministerio de Ciencia e Innovación, Grant BIO2016-75235-
Risk Measurement, Risk Management and Capital Adequacy in Financial Conglomerates
Is there something special, with respect to risk and capital, about a financial conglomerate that combines banking, insurance and potentially other financial and non-financial activities? To what degree is the risk of the whole less than the sum of its parts? This paper seeks to address these questions by evaluating the risk profile of a typical banking-insurance conglomerate, highlighting the key analytical issues relating to risk aggregation, and raising policy considerations. Risk aggregation is the main analytical hurdle to arriving at a composite risk picture. We propose a "building block" approach that aggregates risk at three successive levels in an organization (corresponding to the levels at which risk is typically managed). Empirically, diversification effects are greatest within a single risk factor (Level I), decrease at the business line level (Level II), and are smallest across business lines (Level III). Our estimates suggest that the incremental diversification benefits achievable at Level III are modest, around a 5-10% reduction in capital requirements, depending on business mix.
Keywords: economic capital, financial regulation, risk aggregation
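The diversification effect behind these estimates can be illustrated with the standard variance-covariance aggregation formula, sqrt(cᵀRc). The capital figures and the 0.5 correlation below are made-up inputs, not the paper's estimates.

```python
import math

def aggregate_capital(capitals, corr):
    """Variance-covariance aggregation: sqrt(c^T R c). With correlations
    below 1 the aggregate falls short of the simple sum of stand-alone
    capitals -- that shortfall is the diversification benefit."""
    n = len(capitals)
    total = 0.0
    for i in range(n):
        for j in range(n):
            total += capitals[i] * corr[i][j] * capitals[j]
    return math.sqrt(total)

# Hypothetical stand-alone capital for a banking and an insurance arm,
# with an assumed cross-business correlation of 0.5.
caps = [100.0, 60.0]
R = [[1.0, 0.5], [0.5, 1.0]]
agg = aggregate_capital(caps, R)            # sqrt(19600) = 140.0
benefit = 1 - agg / sum(caps)               # 12.5% below the sum of parts
```

Raising the correlation toward 1 shrinks `benefit` toward zero, mirroring the paper's finding that cross-business (Level III) diversification gains are modest.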