Tree-Chain: A Fast Lightweight Consensus Algorithm for IoT Applications
Blockchain has received tremendous attention in non-monetary applications,
including the Internet of Things (IoT), due to salient features such as
decentralization, security, auditability, and anonymity. Most conventional
blockchains rely on computationally expensive consensus algorithms, offer
limited throughput, and incur high transaction delays. In this paper, we
propose tree-chain, a scalable, fast blockchain instantiation that introduces
two levels of randomization among the validators: i) the transaction level,
where the validator of each transaction is selected randomly based on the most
significant characters of the hash function output (known as the consensus
code), and ii) the blockchain level, where each validator is randomly allocated
to a particular consensus code based on the hash of its public key. Tree-chain
introduces parallel chain branches, where each validator commits the
corresponding transactions to a unique ledger. Implementation results show that
tree-chain is runnable on low-resource devices and incurs low processing
overhead, achieving near real-time transaction settlement.
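The two randomization levels described above can be sketched as follows. This is a minimal illustration, not the paper's exact scheme: the validator names, the single-hex-character consensus codes, and the even partitioning of the hex alphabet are all assumptions made for brevity.

```python
import hashlib

HEX_CHARS = "0123456789abcdef"

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def assign_consensus_codes(public_keys):
    """Blockchain level: rank validators by the hash of their public
    key and split the hex alphabet among them, so each validator owns
    a random set of consensus codes."""
    ranked = sorted(public_keys, key=sha256_hex)
    per_validator = len(HEX_CHARS) // len(ranked)
    codes = {}
    for i, pk in enumerate(ranked):
        start = i * per_validator
        # The last validator absorbs any leftover characters.
        end = len(HEX_CHARS) if i == len(ranked) - 1 else start + per_validator
        codes[pk] = set(HEX_CHARS[start:end])
    return codes

def validator_for_tx(tx: bytes, codes):
    """Transaction level: the most significant hex character of the
    transaction hash selects the responsible validator."""
    msc = sha256_hex(tx)[0]
    for pk, chars in codes.items():
        if msc in chars:
            return pk
    raise ValueError("no validator covers this consensus code")

# Example: four hypothetical validators, one incoming transaction.
codes = assign_consensus_codes(
    [b"validator-A", b"validator-B", b"validator-C", b"validator-D"])
leader = validator_for_tx(b"tx-123", codes)
```

Because every validator owns a disjoint slice of the code space, validators commit their transactions to separate branches in parallel without contending for a single chain head.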
Reliability and fault tolerance in the European ADS project
After an introduction to the theory of reliability, this paper focuses on a
description of the linear proton accelerator proposed for the European ADS
demonstration project. Design issues are discussed and examples of cases of
fault tolerance are given.
Comment: 14 pages, contribution to the CAS - CERN Accelerator School: Course on High Power Hadron Machines; 24 May - 2 Jun 2011, Bilbao, Spain
Robust geometric forest routing with tunable load balancing
Although geometric routing is proposed as a memory-efficient alternative to traditional lookup-based routing and forwarding algorithms, it still lacks: i) adequate mechanisms to trade stretch against load balancing, and ii) robustness to cope with network topology changes.
The main contribution of this paper is a family of routing schemes, called Forest Routing, based on the principles of geometric routing but with flexible load-balancing characteristics. This flexibility is achieved by using an aggregation of greedy embeddings along with a configurable distance function. Incorporating link-load information in the forwarding layer enables load-balancing behavior while still attaining low path stretch. In addition, the proposed schemes are validated with regard to their resilience to network failures.
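The interplay between the geometric term and the load term can be sketched with a single greedy tree embedding. This is an illustrative assumption: the paper aggregates several embeddings, and the blending parameter `alpha` and path-style tree coordinates below are inventions for this sketch, not the paper's actual distance function.

```python
def tree_distance(u, v):
    """Hop distance between two nodes in a tree embedding, with
    coordinates given as root-to-node label paths (tuples)."""
    common = 0
    for a, b in zip(u, v):
        if a != b:
            break
        common += 1
    return (len(u) - common) + (len(v) - common)

def forwarding_cost(neighbor_coord, dest_coord, link_load, alpha=0.5):
    """Configurable distance: alpha trades path stretch (geometric
    term) against load balancing (link-load term)."""
    return (1 - alpha) * tree_distance(neighbor_coord, dest_coord) \
        + alpha * link_load

def next_hop(neighbors, dest_coord, alpha=0.5):
    """Greedy forwarding: pick the neighbor minimizing the blended
    cost. `neighbors` maps a coordinate to its current link load."""
    return min(neighbors,
               key=lambda c: forwarding_cost(c, dest_coord,
                                             neighbors[c], alpha))
```

With `alpha = 0`, this degenerates to plain greedy geometric routing (lowest stretch); raising `alpha` steers traffic away from loaded links at the cost of some stretch, which is the tunable trade-off the abstract describes.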
Test Set Diameter: Quantifying the Diversity of Sets of Test Cases
A common and natural intuition among software testers is that test cases need
to differ if a software system is to be tested properly and its quality
ensured. Consequently, much research has gone into formulating distance
measures for how test cases, their inputs and/or their outputs differ. However,
common to these proposals is that they are data type specific and/or calculate
the diversity only between pairs of test inputs, traces or outputs.
We propose a new metric to measure the diversity of sets of tests: the test
set diameter (TSDm). It extends our earlier, pairwise test diversity metrics
based on recent advances in information theory regarding the calculation of the
normalized compression distance (NCD) for multisets. An advantage is that TSDm
can be applied regardless of data type and on any test-related information, not
only the test inputs. A downside is the increased computational time compared
to competing approaches.
Our experiments on four different systems show that the test set diameter can
help select test sets with higher structural and fault coverage than random
selection even when only applied to test inputs. This can enable early test
design and selection, prior to even having a software system to test, and
complement other types of test automation and analysis. We argue that this
quantification of test set diversity creates a number of opportunities to
better understand software quality and provides practical ways to increase it.
Comment: In submission
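A compression-based diversity score in the spirit of the abstract can be sketched as follows. This is an assumption-laden illustration: it uses zlib as the compressor and the multiset NCD formula NCD1(X) = (C(X) - min_x C(x)) / max_x C(X \ {x}); the actual TSDm metric and its normalization details are those of the paper.

```python
import zlib

def C(data: bytes) -> int:
    """Approximate information content via compressed length."""
    return len(zlib.compress(data, 9))

def ncd_multiset(tests):
    """Multiset NCD over a list of test inputs (as bytes):
    (C(X) - min_x C(x)) / max_x C(X without x), where C(X) compresses
    the concatenation of the whole multiset."""
    whole = C(b"".join(tests))
    min_single = min(C(t) for t in tests)
    max_leave_one_out = max(
        C(b"".join(tests[:i] + tests[i + 1:])) for i in range(len(tests)))
    return (whole - min_single) / max_leave_one_out

# A set of dissimilar inputs should score higher (more diverse)
# than a set of near-identical inputs.
diverse = [b"alpha" * 20, b"bravo" * 20, b"charlie" * 20]
similar = [b"alpha" * 20, b"alpha" * 20, b"alpha" * 20]
```

Note that each evaluation compresses the multiset once per member, which is the increased computational cost the abstract acknowledges relative to pairwise approaches.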
Learning Tractable Probabilistic Models for Fault Localization
In recent years, several probabilistic techniques have been applied to
various debugging problems. However, most existing probabilistic debugging
systems use relatively simple statistical models, and fail to generalize across
multiple programs. In this work, we propose Tractable Fault Localization Models
(TFLMs) that can be learned from data, and probabilistically infer the location
of the bug. While most previous statistical debugging methods generalize over
many executions of a single program, TFLMs are trained on a corpus of
previously seen buggy programs, and learn to identify recurring patterns of
bugs. Widely-used fault localization techniques such as TARANTULA evaluate the
suspiciousness of each line in isolation; in contrast, a TFLM defines a joint
probability distribution over buggy indicator variables for each line. Joint
distributions with rich dependency structure are often computationally
intractable; TFLMs avoid this by exploiting recent developments in tractable
probabilistic models (specifically, Relational SPNs). Further, TFLMs can
incorporate additional sources of information, including coverage-based
features such as TARANTULA. We evaluate the fault localization performance of
TFLMs that include TARANTULA scores as features in the probabilistic model. Our
study shows that the learned TFLMs isolate bugs more effectively than previous
statistical methods or using TARANTULA directly.
Comment: Fifth International Workshop on Statistical Relational AI (StaR-AI 2015)
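The per-line TARANTULA score that TFLMs consume as a feature can be sketched directly; the formula below is the standard one from the TARANTULA literature, while the coverage counts in the example are made up for illustration.

```python
def tarantula(failed_cov, passed_cov, total_failed, total_passed):
    """susp(line) = fail_ratio / (fail_ratio + pass_ratio), where the
    ratios are the fractions of failing/passing tests covering the
    line. Scores range from 0 (covered only by passing tests) to 1
    (covered only by failing tests), each line scored in isolation."""
    fail_ratio = failed_cov / total_failed if total_failed else 0.0
    pass_ratio = passed_cov / total_passed if total_passed else 0.0
    if fail_ratio + pass_ratio == 0:
        return 0.0
    return fail_ratio / (fail_ratio + pass_ratio)

# Hypothetical coverage matrix from 2 failing and 10 passing tests:
# line -> (covered by #failing tests, covered by #passing tests)
coverage = {10: (2, 0), 11: (2, 8), 12: (1, 9)}
scores = {line: tarantula(f, p, total_failed=2, total_passed=10)
          for line, (f, p) in coverage.items()}
# Line 10 is covered only by failing tests, so it ranks highest.
print(max(scores, key=scores.get))  # → 10
```

Where TARANTULA stops at these isolated per-line scores, a TFLM treats them as observed features and infers a joint distribution over the buggy-line indicator variables, letting dependencies between lines sharpen the ranking.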