"Nested and Overlapping Regimes in the Transatlantic Banana Trade Dispute"
The decade-long transatlantic banana dispute was not a traditional trade conflict stemming from antagonistic producers’ interests. Instead, this article argues that the banana dispute is one of the most complex illustrations of the legal and political difficulties created by the nesting and overlapping of international institutions and commitments. The contested Europe-wide banana policy was an artifact of nesting: the fruit of efforts to reconcile the single market with Lomé obligations, which then ran afoul of WTO rules. Using counterfactual analysis, this article explores how the nesting of international commitments contributed to creating the dispute, provided forum-shopping opportunities that themselves complicated the options of decision-makers, and hindered resolution of what would otherwise have been a straightforward trade dispute. We then draw out implications from this case for the EU, an institution increasingly nested within multilateral mechanisms, and for the nesting of international institutions in general.
Improved Solutions for Multidimensional Approximate Agreement via Centroid Computation
In this paper, we present distributed fault-tolerant algorithms that
approximate the centroid of a set of n data points in Euclidean space. Our work
falls into the broader area of approximate multidimensional Byzantine
agreement. The standard approach used in existing algorithms is to agree on a
vector inside the convex hull of all correct vectors. This strategy discards
many possibly correct data points, so the algorithm does not necessarily agree
on a representative value; in fact, it bounds how closely the centroid can be
approximated in the synchronous case.
To find better approximation algorithms for the centroid, we investigate the
trade-off between the quality of the approximation, the resilience of the
algorithm, and the validity of the solution. For the synchronous case, we show
that it is possible to achieve a provable approximation of the centroid while
tolerating a bounded number of Byzantine data points. This approach, however,
does not give any guarantee on the validity of the solution. Therefore, we
develop a second approach that reaches an approximation of the centroid while
satisfying the standard validity condition for agreement protocols. We are even
able to restrict the validity condition to agreement inside the box of correct
data points, while achieving optimal resilience. For the asynchronous case, we
can adapt all three algorithms to reach the same approximation results (up to a
constant factor). Our results suggest that it is reasonable to study the
trade-off between validity conditions and the quality of the solution.
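The tension the abstract describes between convex-hull validity and representativeness can be illustrated with a coordinate-wise trimmed mean, a standard robust estimator for the centroid of data containing adversarial points. This is a generic robust-statistics sketch, not the algorithm from the paper; the function name and parameters are illustrative.

```python
# Sketch: coordinate-wise trimmed mean as a Byzantine-robust centroid
# estimate. In each dimension, the f smallest and f largest values are
# dropped before averaging, so up to f outliers per coordinate cannot
# drag the estimate arbitrarily far.

def trimmed_mean_centroid(points, f):
    """Estimate the centroid of `points` (equal-length lists),
    tolerating up to f adversarial points per coordinate."""
    if not points or len(points) <= 2 * f:
        raise ValueError("need more than 2*f points")
    d = len(points[0])
    centroid = []
    for i in range(d):
        coords = sorted(p[i] for p in points)
        kept = coords[f:len(coords) - f] if f > 0 else coords
        centroid.append(sum(kept) / len(kept))
    return centroid

# Honest points cluster near (1, 1); one Byzantine outlier tries to
# pull the centroid away, but is trimmed in each coordinate.
pts = [[1.0, 1.0], [1.2, 0.9], [0.8, 1.1], [100.0, -100.0]]
print(trimmed_mean_centroid(pts, f=1))  # stays close to (1, 1)
```

Note that the trimmed mean may return a point outside the convex hull of the honest inputs, which is exactly the kind of validity relaxation the abstract's "box of correct data points" condition addresses.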
Advanced information processing system: The Army fault tolerant architecture conceptual study. Volume 1: Army fault tolerant architecture overview
Digital computing systems needed for Army programs such as the Computer-Aided Low Altitude Helicopter Flight Program and the Armored Systems Modernization (ASM) vehicles are characterized by high computational throughput and input/output bandwidth, hard real-time response, high reliability and availability, and stringent maintainability, testability, and producibility requirements. In addition, such a system should be affordable to produce, procure, maintain, and upgrade. To address these needs, the Army Fault Tolerant Architecture (AFTA) is being designed and constructed under a three-year program comprising conceptual study, detailed design and fabrication, and demonstration and validation phases. Described here are the results of the conceptual study phase of the AFTA development: an introduction to the AFTA program, its objectives, and key elements of its technical approach; a format for representing mission requirements in a manner suitable for first-order AFTA sizing and analysis; a discussion of the current state of mission requirements acquisition for the targeted Army missions; and an overview of AFTA's architectural theory of operation.
CICM: A Collaborative Integrity Checking Blockchain Consensus Mechanism for Preserving the Originality of Data in the Cloud for Forensic Investigation
The originality of data is essential for obtaining correct results from forensic analysis. Data may be analysed to resolve disputes or review incidents by finding trends in the dataset that give clues to the cause of an issue, so specially designed, foolproof protection of data integrity is required for forensic purposes. This paper proposes the Collaborative Integrity Checking Mechanism (CICM) for securing the chain of custody of data in a blockchain. Existing consensus mechanisms are fault-tolerant, allowing a threshold of faults; CICM instead avoids faults by using a transparent 100% agreement process for validating the originality of data in a blockchain. A group of agreement actors checks and records the original status of data at its time of arrival, and acceptance requires general agreement by all participants in the consensus process. The solution was tested against practical Byzantine fault tolerance (PBFT), Zyzzyva, and hybrid Byzantine fault tolerance (hBFT) for efficacy in yielding correct results and for operational performance cost. A binomial distribution was used to examine CICM's efficacy: CICM recorded zero probability of failure, while the benchmarks recorded up to 8.44%. Throughput and latency were used to measure operational performance cost. Among the benchmarks, hBFT performed best; CICM achieved 30.61% higher throughput and 21.47% lower latency than hBFT. In the robustness-against-faults tests, CICM again outperformed hBFT, with 16.5% higher throughput and 14.93% lower latency in the worst-case fault scenario.
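The unanimous ("100% agreement") validation described above can be sketched in a few lines: a record is committed only if every agreement actor independently reports the same digest. The function and field names here are illustrative assumptions, not taken from the paper's implementation.

```python
# Sketch of CICM-style unanimous validation. Unlike a BFT quorum
# (e.g. 2f+1 of 3f+1 votes), a single disagreeing actor blocks the
# commit, so no faulty report is ever tolerated.
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def unanimous_commit(data: bytes, actor_digests: list) -> bool:
    """Commit only if every actor reports the digest of the original data."""
    expected = digest(data)
    return len(actor_digests) > 0 and all(d == expected for d in actor_digests)

record = b"evidence-file-v1"
honest = [digest(record)] * 4
assert unanimous_commit(record, honest)           # all agree: commit
tampered = honest[:3] + [digest(b"evidence-file-v2")]
assert not unanimous_commit(record, tampered)     # one dissent: reject
```

The trade-off is visible even in this toy: unanimity removes the fault threshold but means any single unavailable or malicious actor can stall acceptance.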
Threshold Encrypted Mempools: Limitations and Considerations
Encrypted mempools are a class of solutions aimed at preventing or reducing
negative externalities of MEV extraction using cryptographic privacy. Mempool
encryption aims to hide information related to pending transactions until a
block including the transactions is committed, targeting the prevention of
frontrunning and similar behaviour. Among the various methods of encryption,
threshold schemes are particularly interesting for the design of MEV mitigation
mechanisms, as their distributed nature and minimal hardware requirements
harmonize with a broader goal of decentralization.
This work looks beyond the formal and technical cryptographic aspects of
threshold encryption schemes to focus on the market and incentive implications
of implementing encrypted mempools as MEV mitigation techniques. In
particular, this paper argues that deploying such protocols without proper
consideration and understanding of their market impact invites several
undesired outcomes; the ultimate goal is to stimulate further analysis of this
class of solutions beyond purely cryptographic considerations. Included in the
paper
is an overview of a series of problems, various candidate solutions in the form
of mempool encryption techniques with a focus on threshold encryption,
potential drawbacks to these solutions, and Osmosis as a case study. The paper
targets a broad audience and remains agnostic to blockchain design where
possible while drawing from mostly financial examples
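The "threshold" property central to these schemes can be illustrated with minimal Shamir k-of-n secret sharing over a prime field: a decryption key is split so that any k committee members can jointly reconstruct it, while fewer than k learn nothing. This is a toy sketch with illustrative parameters, not one of the production schemes the paper surveys.

```python
# Minimal Shamir k-of-n secret sharing over GF(P). The key is f(0) of a
# random degree-(k-1) polynomial; shares are points (x, f(x)); any k
# shares recover f(0) by Lagrange interpolation at x = 0.
import random

P = 2**127 - 1  # a Mersenne prime, large enough for a toy key

def split(secret: int, k: int, n: int):
    """Return n shares (x, f(x)) with f(0) = secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

key = 0xC0FFEE
shares = split(key, k=3, n=5)
assert reconstruct(shares[:3]) == key    # any 3 of 5 shares recover the key
assert reconstruct(shares[1:4]) == key
```

In a threshold-encrypted mempool the reconstructed value would be the committee's decryption capability released after block commitment; the incentive questions the paper raises concern who sits on that committee and when they choose to decrypt.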
Distributed Storage with Strong Data Integrity based on Blockchain Mechanisms
Master's thesis in Computer Science.
A blockchain is a data structure that is an append-only chain of blocks. Each
block contains a set of transactions and has a cryptographic link back to
its predecessor. The cryptographic link serves to protect the integrity of
the blockchain. A key property of blockchain systems is that they allow
mutually distrusting entities to reach consensus on a unique order in which
transactions are appended. The most common usage of blockchains is in
cryptocurrencies such as Bitcoin.
In this thesis we use blockchain technology to design a scalable
architecture for a storage system that provides strong data integrity and
ensures the permanent availability of the data. We study recent literature in
blockchain and cryptography to identify the desired characteristics of such a
system. In comparison to similar systems, we gain increased performance by
designing ours around a permissioned blockchain, allowing only a predefined
set of nodes to write to the ledger. A prototype of the system is built on top
of existing open-source software. An experimental evaluation of the prototype
using different quorum sizes is also presented.
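The cryptographic back-link described in the abstract can be sketched directly: each block stores the hash of its predecessor, so altering any past block invalidates every later link. The field names here are illustrative, not taken from the thesis prototype.

```python
# Sketch of a hash-chained append-only ledger. Tampering with any block
# changes its hash and breaks the prev_hash link of its successor.
import hashlib
import json

def block_hash(block: dict) -> str:
    # Canonical JSON encoding so the hash is deterministic.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, transactions: list) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def verify_chain(chain: list) -> bool:
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append_block(chain, ["alice->bob:5"])
append_block(chain, ["bob->carol:2"])
assert verify_chain(chain)
chain[0]["transactions"] = ["alice->bob:500"]  # tamper with history
assert not verify_chain(chain)                 # link to the next block breaks
```

In a permissioned setting such as the one the thesis adopts, only the predefined node set may call the append path, which is what enables the performance gain over open-membership consensus.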
Equality, Social Welfare and Equal Protection
As my contribution to this forum, I thought I would try to make a few tentative distinctions concerning the various tasks judges and commentators seek to assign to the Equal Protection Clause. Approaching it from this perspective spares me the necessity of getting into what one of the earlier speakers described as the more Byzantine details of current equal protection doctrine. Such a discussion would inevitably lead to criticisms of the Judiciary and certain commentators, to comparisons between what some might call the liberal and conservative approaches, and to discussion concerning the needs of a changing and dynamic society.
Each of these topics would be an interesting subject in its own right, but I do not think that extended discussion of any or all of them will get us any closer to a clear understanding of what it is that the Equal Protection Clause is supposed to do.
Spartan Daily, April 30, 1974
Volume 62, Issue 43
https://scholarworks.sjsu.edu/spartandaily/5867/thumbnail.jp