Analysing and Improving Shard Allocation Protocols for Sharded Blockchains
Sharding is a promising approach to scaling permissionless blockchains. In a sharded blockchain, participants are split into groups, called shards, and each shard executes only part of the workload. Despite its wide adoption in permissioned systems, transferring such success to permissionless blockchains remains an open problem. In permissionless networks, participants may join and leave the system at any time, making load balancing challenging. In addition, the adversary in such networks can launch a single-shard takeover attack by compromising a single shard's consensus. To address these issues, participants should be securely and dynamically allocated into different shards. However, the protocol capturing this functionality, which we call shard allocation, is overlooked.
In this paper, we study shard allocation protocols for permissionless blockchains. We formally define the shard allocation protocol and propose an evaluation framework. We apply the framework to evaluate the shard allocation subprotocols of seven state-of-the-art sharded blockchains, and show that none of them is fully correct or achieves satisfactory performance. We attribute these deficiencies to their extreme choices between two performance metrics: self-balance and operability. We observe and prove the fundamental trade-off between these two metrics, and identify a new property, memory-dependency, that enables parameterisation over this trade-off. Based on these insights, we propose Wormhole, a correct and efficient shard allocation protocol with minimal security assumptions and parameterisable self-balance and operability. We implement Wormhole and evaluate its overhead and performance metrics in a network with 128 shards and 32768 nodes. The results show that Wormhole introduces little overhead, achieves self-balance and operability consistent with our theoretical analysis, and allows the system to recover quickly from load imbalance.
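The core idea of randomised shard allocation can be illustrated with a minimal sketch: each node is assigned to a shard by hashing its identity together with fresh per-epoch randomness, so no node (or adversary) can choose its shard. This is an illustration of the general technique only, not the paper's Wormhole protocol, and all names here are hypothetical.

```python
import hashlib

def allocate_shard(node_id: str, epoch_randomness: str, num_shards: int) -> int:
    """Assign a node to a shard by hashing its ID with per-epoch randomness.

    A minimal illustration of randomised allocation; Wormhole itself is more
    involved (parameterisable self-balance and operability via memory-dependency).
    """
    digest = hashlib.sha256(f"{epoch_randomness}:{node_id}".encode()).digest()
    return int.from_bytes(digest, "big") % num_shards

# With fresh randomness each epoch, nodes are reshuffled across shards,
# limiting an adversary's ability to concentrate in a single shard.
counts = [0] * 128
for i in range(32768):
    counts[allocate_shard(f"node-{i}", "epoch-7-beacon", 128)] += 1
assert sum(counts) == 32768  # every node lands in exactly one shard
```

Because the assignment is a deterministic function of public randomness, any participant can verify any other node's shard membership without interaction.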
TICK: Tiny Client for Blockchains
In Bitcoin-like systems, when a payee chooses to accept zero-confirmation transactions, it needs to verify the validity of the transaction. In particular, one of the steps is to verify that each referred output of the transaction has not previously been spent. The conventional lightweight client design can only support such an operation with complexity O(N), where N is the total number of transactions in the system. This is impractical for lightweight clients.
The latest proposals suggest summarising all the unspent outputs in an ordered Merkle tree. A light client can then request a proof of presence and/or absence of an element in the tree to prove whether a referred output has been spent, with complexity O(log(U)), where U is the total number of unspent outputs in the system. However, updating such an ordered Merkle tree is slow, making the system impractical: by our evaluation, when a new block is generated in Bitcoin, it takes more than one minute to update the ordered Merkle tree.
We propose a practical client, TICK, to solve this problem. TICK uses an AVL hash tree to store all the unspent outputs. The AVL hash tree can be updated in O(M*log(U)) time, where M is the number of elements that need to be inserted into or removed from the tree. By our evaluation, when a new block is generated, the AVL hash tree can be updated within a second. Similarly, a proof can also be generated in O(log(U)) time. Therefore, TICK brings negligible run-time overhead, and thus it is practical. Benefiting from the AVL hash tree, a storage-limited device can efficiently and cryptographically verify transactions. In addition, rather than requiring new miners to download the entire blockchain before mining, TICK allows new miners to download only a small portion of the data to start mining.
We implement TICK for Bitcoin and provide an experimental evaluation of its performance using the current Bitcoin blockchain data. Our results show that the proof for verifying whether an output of a transaction is spent is only several KB. The verification is very fast: generating a proof generally takes less than a millisecond, and verifying a proof takes even less time. In addition, to start mining, new miners only need to download several GB of data, rather than over 230 GB.
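The O(log U)-sized proofs a TICK-style client checks can be sketched with a generic Merkle inclusion proof: the client hashes from a leaf up to the committed root using the sibling hashes supplied by the prover. This is a simplified illustration of authenticated-tree proofs, not TICK's exact AVL hash tree format; the function names and proof encoding are assumptions.

```python
import hashlib

def h(data: bytes) -> bytes:
    """SHA-256 hash used for both leaves and internal nodes in this sketch."""
    return hashlib.sha256(data).digest()

def verify_inclusion(leaf: bytes, proof: list, root: bytes) -> bool:
    """Verify an O(log U)-sized inclusion proof.

    `proof` is a list of (sibling_hash, side) pairs from leaf to root;
    side 'L' means the sibling sits to the left of the current node.
    """
    node = h(leaf)
    for sibling, side in proof:
        node = h(sibling + node) if side == "L" else h(node + sibling)
    return node == root

# Tiny example: commit to four unspent outputs and prove the third one.
leaves = [b"utxo-0", b"utxo-1", b"utxo-2", b"utxo-3"]
l = [h(x) for x in leaves]
n01, n23 = h(l[0] + l[1]), h(l[2] + l[3])
root = h(n01 + n23)
proof_for_2 = [(l[3], "R"), (n01, "L")]
assert verify_inclusion(b"utxo-2", proof_for_2, root)
```

The client stores only the root (a few dozen bytes); the prover ships the leaf plus log U sibling hashes, which is why the proofs measured in the paper stay at several KB.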
CDDSA: Contrastive Domain Disentanglement and Style Augmentation for Generalizable Medical Image Segmentation
Generalization to previously unseen images with potential domain shifts and
different styles is essential for clinically applicable medical image
segmentation, and the ability to disentangle domain-specific and
domain-invariant features is key for achieving Domain Generalization (DG).
However, existing DG methods can hardly achieve effective disentanglement and thus high generalizability. To deal with this problem, we propose an efficient
Contrastive Domain Disentanglement and Style Augmentation (CDDSA) framework for
generalizable medical image segmentation. First, a disentanglement network is
proposed to decompose an image into a domain-invariant anatomical
representation and a domain-specific style code, where the former is sent to a
segmentation model that is not affected by the domain shift, and the
disentanglement network is regularized by a decoder that combines the anatomical
and style codes to reconstruct the input image. Second, to achieve better
disentanglement, a contrastive loss is proposed to encourage the style codes
from the same domain and different domains to be compact and divergent,
respectively. Third, to further improve generalizability, we propose a style
augmentation method based on the disentanglement representation to synthesize
images in various unseen styles with shared anatomical structures. Our method
was validated on a public multi-site fundus image dataset for optic cup and
disc segmentation and an in-house multi-site Nasopharyngeal Carcinoma Magnetic
Resonance Image (NPC-MRI) dataset for nasopharynx Gross Tumor Volume (GTVnx)
segmentation. Experimental results showed that the proposed CDDSA achieved
remarkable generalizability across different domains, and it outperformed
several state-of-the-art methods in domain-generalizable segmentation.
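The contrastive objective on style codes can be sketched as follows: codes from the same domain are pulled together, while codes from different domains are pushed at least a margin apart. This is a hypothetical NumPy sketch of that kind of pairwise contrastive loss; CDDSA's actual loss formulation and normalisation may differ.

```python
import numpy as np

def style_contrastive_loss(codes: np.ndarray, domains: np.ndarray,
                           margin: float = 1.0) -> float:
    """Average pairwise contrastive loss over style codes.

    Same-domain pairs pay their squared distance (compactness);
    cross-domain pairs pay a squared hinge below `margin` (divergence).
    """
    loss, pairs = 0.0, 0
    n = len(codes)
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(codes[i] - codes[j])
            if domains[i] == domains[j]:
                loss += d ** 2                     # compact within a domain
            else:
                loss += max(0.0, margin - d) ** 2  # divergent across domains
            pairs += 1
    return loss / pairs

# Two domains whose style codes are internally identical and far apart
# incur zero loss; spreading same-domain codes raises it.
codes = np.array([[0.0, 0.0], [0.0, 0.0], [5.0, 0.0], [5.0, 0.0]])
domains = np.array([0, 0, 1, 1])
assert style_contrastive_loss(codes, domains) == 0.0
```

Minimising such a loss makes the style space cluster by acquisition site, which is what lets the decoder recombine any anatomy with any style during augmentation.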
Passive Homodyne Phase Demodulation Technique Based on LF-TIT-DCM Algorithm for Interferometric Sensors
A passive homodyne phase demodulation technique based on a linear-fitting trigonometric-identity-transformation differential cross-multiplication (LF-TIT-DCM) algorithm is proposed. This technique relies on two interferometric signals whose interferometric phase difference is an odd multiple of π. It is able to demodulate phase signals with a large dynamic range and a wide frequency band. An anti-phase dual-wavelength demodulation system is built to validate the LF-TIT-DCM algorithm. Compared with the traditional quadrature dual-wavelength demodulation system using an ellipse-fitting DCM (EF-DCM) algorithm, the phase difference of the two interferometric signals in the anti-phase dual-wavelength demodulation system is set to π instead of π/2. This overcomes the drawback of EF-DCM that it cannot demodulate small signals, since for small signals the ellipse degenerates into a straight line and the ellipse-fitting algorithm is invalidated. Experimental results show that the dynamic range of the proposed anti-phase dual-wavelength demodulation system is much larger than that of the traditional quadrature dual-wavelength demodulation system. Moreover, the proposed anti-phase dual-wavelength demodulation system is hardly influenced by optical power, and the laser wavelength should be strictly limited to reduce the reference error.
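For context, the classic differential cross-multiplication (DCM) step that these schemes build on can be sketched for an ideal quadrature pair x = B·cos(φ), y = B·sin(φ): the combination x·dy/dt − y·dx/dt equals B²·dφ/dt, so integrating and normalising recovers the phase. This is the textbook quadrature DCM only, not the paper's LF-TIT-DCM variant, which instead works from two anti-phase (π-shifted) signals.

```python
import numpy as np

def dcm_demodulate(x: np.ndarray, y: np.ndarray, dt: float) -> np.ndarray:
    """Recover phase from quadrature signals x = B*cos(phi), y = B*sin(phi).

    Differential cross-multiplication: x*dy/dt - y*dx/dt = B^2 * dphi/dt.
    Dividing by x^2 + y^2 (= B^2) and integrating yields phi up to an offset.
    """
    dx = np.gradient(x, dt)
    dy = np.gradient(y, dt)
    b_sq = x ** 2 + y ** 2               # B^2, constant if amplitude is stable
    dphi = (x * dy - y * dx) / b_sq
    return np.cumsum(dphi) * dt          # numeric integration of dphi/dt

# Sanity check: a 10 Hz sinusoidal phase is recovered from its quadrature pair.
t = np.arange(0, 0.1, 1e-4)
phi = 0.5 * np.sin(2 * np.pi * 10 * t)
recovered = dcm_demodulate(np.cos(phi), np.sin(phi), 1e-4)
assert np.max(np.abs(recovered - phi)) < 0.05
```

Note the division by x² + y², which is what makes the output insensitive to overall optical power, the robustness property the abstract reports for the anti-phase system as well.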
Large-Dynamic-Range and High-Stability Phase Demodulation Technology for Fiber-Optic Michelson Interferometric Sensors
A large-dynamic-range and high-stability phase demodulation technology for fiber-optic Michelson interferometric sensors is proposed. This technology utilizes two output signals from a 2 × 2 fiber-optic coupler, the interferometric phase difference of which is π. A linear-fitting trigonometric-identity-transformation differential cross-multiplication (LF-TIT-DCM) algorithm is used to interrogate the phase signal from the two output signals of the coupler. The interferometric phase differences between the two output signals of 2 × 2 fiber-optic couplers with different coupling ratios are all equal to π, which ensures that the LF-TIT-DCM algorithm can be applied directly. A fiber-optic Michelson interferometric acoustic sensor is fabricated, and an acoustic signal testing system is built to validate the proposed phase demodulation technology. Experimental results show excellent linearity from 0.033 rad to 3.2 rad. Moreover, the influence of laser wavelength and optical power is investigated, and variation below 0.47 dB is observed at different sound pressure levels (SPLs). Long-term stability over thirty minutes is tested, and fluctuation is less than 0.36 dB. The proposed phase demodulation technology obtains a large dynamic range and high stability at rather low cost.
Proteomics analysis reveals differentially activated pathways that operate in peanut gynophores at different developmental stages
Specific proteins identified in S3 gynophores. (XLS 91 kb)