SECURITY RESEARCH FOR BLOCKCHAIN IN SMART GRID
Smart grid is a power supply system that uses digital communication technology to detect and react to local changes in power demand. Modern and future power supply systems require a distributed architecture for effective communication and management. Blockchain, a distributed technology, has been applied in many fields, e.g., cryptocurrency exchange, secure sharing of medical data, and personal identity protection. Much research has been done on applying blockchain to the smart grid. While blockchain has many advantages, such as security and freedom from third-party interference, it also has inherent disadvantages, such as an untrusted network environment, lack of data source privacy, and low network throughput. In this research, three systems are designed to tackle some of these problems in blockchain technology. In the first study, the Information-Centric Blockchain Model, we focus on data privacy. In this model, the transactions created by nodes in the network are categorized into separate groups, such as billing transactions, power generation transactions, etc. All transactions are first encrypted with the corresponding pairs of asymmetric keys, which guarantees that only the intended receivers can read the data, so data confidentiality is preserved. Second, all transactions are sent on behalf of their groups, which hides the data sources and thus preserves privacy. Our preliminary implementation verified the feasibility of the model, and our analysis demonstrates its effectiveness in securing data source privacy, increasing network throughput, and reducing storage usage. In the second study, we focus on increasing the network's trustworthiness in an untrusted environment. A reputation system is designed to evaluate all nodes' behaviors. The reputation of a node is evaluated on its computing power, online time, defense ability, function, and service quality.
The performance of a node affects its reputation scores, and a node's reputation scores are used to assess its qualifications, privileges, and job assignments. Our design is a relatively thorough, self-operated, closed-loop system. Continuous evaluation of all nodes' abilities and behaviors guarantees that only nodes with good scores are qualified to handle certain tasks. Thus, the reputation system helps enhance network security by preventing both internal and external attacks. A preliminary implementation and security analysis showed that the reputation model is feasible and enhances the blockchain system's security. In the third study, a countermeasure was designed for double spending. Double spending is one of the two most concerning security attacks in blockchain. In this study, one of the most reputable nodes is selected as the detection node, which keeps checking for conflicting transactions in two consecutive blocks. When a problematic transaction is discovered, two punishment transactions are created to punish the current attack and to deter future ones. Experiments show our design can detect double spending effectively while using much less detection time and fewer resources.
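The conflict check at the core of the third study can be sketched as follows. This is a minimal illustration, not the authors' actual design: the transaction fields (`txid`, `inputs`) and the rule "two transactions claiming the same input conflict" are assumptions for the example.

```python
from collections import defaultdict

def find_conflicts(block_a, block_b):
    """Flag transactions in two consecutive blocks that spend the same input."""
    spent = defaultdict(list)
    for tx in block_a + block_b:
        for inp in tx["inputs"]:
            spent[inp].append(tx["txid"])
    # any input claimed by more than one transaction is a double spend
    return {inp: txids for inp, txids in spent.items() if len(txids) > 1}

block_n = [{"txid": "t1", "inputs": ["coin9"]}]
block_n1 = [{"txid": "t2", "inputs": ["coin9"]}]  # conflicting spend of coin9
conflicts = find_conflicts(block_n, block_n1)
```

A detection node running such a scan only compares the two most recent blocks, which is why detection time and resource use stay low.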
Multi-round Master-Worker Computing: a Repeated Game Approach
We consider a computing system where a master processor assigns tasks for
execution to worker processors through the Internet. We model the workers'
decision of whether to comply (compute the task) or not (return a bogus result
to save the computation cost) as a mixed extension of a strategic game among
workers. That is, we assume that workers are rational in a game-theoretic
sense, and that they randomize their strategic choice. Workers are assigned
multiple tasks in subsequent rounds. We model the system as an infinitely
repeated game of the mixed extension of the strategic game. In each round, the
master decides stochastically whether to accept the answer of the majority or
verify the answers received, at some cost. Incentives and/or penalties are
applied to workers accordingly. Under the above framework, we study the
conditions in which the master can reliably obtain tasks results, exploiting
that the repeated games model captures the effect of long-term interaction.
That is, workers take into account that their behavior in one computation will
have an effect on the behavior of other workers in the future. Indeed, should a
worker be found to deviate from some agreed strategic choice, the remaining
workers would change their own strategy to penalize the deviator. Hence, being
rational, workers do not deviate. We identify analytically the parameter
conditions to induce a desired worker behavior, and we evaluate
experimentally the mechanisms derived from such conditions. We also compare the
performance of our mechanisms with a previously known multi-round mechanism
based on reinforcement learning.
Comment: 21 pages, 3 figures
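One round of the master's stochastic accept-or-verify policy can be sketched as below. The function name, the payoff values, and the flat audit cost are illustrative assumptions; the paper derives the actual parameter conditions analytically.

```python
import random

def master_round(answers, correct, p_audit, reward, penalty, audit_cost, rng=random):
    """One round of the master's policy: verify all answers with probability
    p_audit (paying audit_cost), otherwise accept the majority answer for free.
    Returns (accepted_answer, master_cost, per-worker payoffs)."""
    if rng.random() < p_audit:
        # audit: reward compliant workers, penalize deviators
        payoffs = [reward if a == correct else -penalty for a in answers]
        return correct, audit_cost, payoffs
    # no audit: trust the majority and reward whoever agrees with it
    majority = max(set(answers), key=answers.count)
    payoffs = [reward if a == majority else 0.0 for a in answers]
    return majority, 0.0, payoffs
```

In the repeated-game setting, the threat that future rounds may be audited (and that other workers will retaliate against a detected deviator) is what keeps rational workers compliant even when `p_audit` is small.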
Byzantine Attack and Defense in Cognitive Radio Networks: A Survey
The Byzantine attack in cooperative spectrum sensing (CSS), also known as the
spectrum sensing data falsification (SSDF) attack in the literature, is one of
the key adversaries to the success of cognitive radio networks (CRNs). In the
past couple of years, the research on the Byzantine attack and defense
strategies has gained worldwide increasing attention. In this paper, we provide
a comprehensive survey and tutorial on the recent advances in the Byzantine
attack and defense for CSS in CRNs. Specifically, we first briefly present the
preliminaries of CSS for general readers, including signal detection
techniques, hypothesis testing, and data fusion. Second, we analyze the spear
and shield relation between Byzantine attack and defense from three aspects:
the vulnerability of CSS to attack, the obstacles in CSS to defense, and the
games between attack and defense. Then, we propose a taxonomy of the existing
Byzantine attack behaviors and elaborate on the corresponding attack
parameters, which determine where, who, how, and when to launch attacks. Next,
from the perspectives of homogeneous or heterogeneous scenarios, we classify
the existing defense algorithms, and provide an in-depth tutorial on the
state-of-the-art Byzantine defense schemes, commonly known as robust or secure
CSS in the literature. Furthermore, we highlight the unsolved research
challenges and depict the future research directions.
Comment: Accepted by IEEE Communications Surveys and Tutorials
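The vulnerability of hard-decision data fusion to SSDF reports can be shown in a few lines. This is a toy sketch under assumed values, not a scheme from the survey: a simple majority rule at the fusion center, with two Byzantine nodes inverting their true local decisions.

```python
def fuse(reports):
    """Hard-decision majority fusion at the fusion center
    (1 = primary user present, 0 = channel vacant)."""
    return int(sum(reports) > len(reports) / 2)

honest = [1, 1, 1, 0, 1]                # true local sensing decisions
flipped = [1 - d for d in honest[:2]]   # two Byzantine nodes invert theirs
attacked = flipped + honest[2:]
```

Here the honest reports fuse to "present" while the falsified ones fuse to "vacant": a minority of attackers flips the global decision, which is exactly the obstacle the robust CSS schemes in the survey are designed to remove.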
Achieving reliability and fairness in online task computing environments
Mención Internacional en el título de doctor.
We consider online task computing environments such as volunteer computing platforms running
on BOINC (e.g., SETI@home) and crowdsourcing platforms such as Amazon Mechanical
Turk. We model the computations as an Internet-based task computing system under the master-worker
paradigm. A master entity sends tasks across the Internet, to worker entities willing to
perform a computational task. Workers execute the tasks, and report back the results, completing
the computational round. Unfortunately, workers are untrustworthy and might report an incorrect
result. Thus, the first research question we answer in this work is how to design a reliable master-worker
task computing system. We capture the workers’ behavior through two realistic models:
(1) the “error probability model” which assumes the presence of altruistic workers willing to
provide correct results and the presence of troll workers aiming at providing random incorrect
results. Both types of workers suffer from an error probability altering their intended response.
(2) The “rationality model” which assumes the presence of altruistic workers, always reporting
a correct result, the presence of malicious workers always reporting an incorrect result, and the
presence of rational workers following a strategy that will maximize their utility (benefit). The
rational workers can choose among two strategies: either be honest and report a correct result,
or cheat and report an incorrect result. Our two modeling assumptions on the workers’ behavior
are supported by an experimental evaluation we have performed on Amazon Mechanical Turk.
Given the error probability model, we evaluate two reliability techniques: (1) “voting” and (2)
“auditing” in terms of task assignments required and time invested for computing correctly a set
of tasks with high probability. Considering the rationality model, we take an evolutionary game
theoretic approach and we design mechanisms that eventually achieve a reliable computational
platform where the master receives the correct task result with probability one and with minimal
auditing cost. The designed mechanisms provide incentives to the rational workers, reinforcing
their strategy to a correct behavior, while they are complemented by four reputation schemes that
cope with malice. Finally, we also design a mechanism that deals with unresponsive workers by
keeping a reputation related to the workers’ response rate. The designed mechanism selects the
most reliable and active workers in each computational round. Simulations, among other things, depict
the trade-off between the master’s cost and the time the system needs to reach a state where
the master always receives the correct task result. The second research question we answer in
this work concerns the fair and efficient distribution of workers among the masters over multiple computational rounds. Masters with similar tasks are competing for the same set of workers at
each computational round. Workers must be assigned to the masters in a fair manner, i.e., to the
master that values a worker's contribution the most. We consider that a master might have a strategic
behavior, declaring a dishonest valuation on a worker in each round, in an attempt to increase its
benefit. This strategic behavior from the side of the masters might lead to unfair and inefficient assignments
of workers. Applying renowned auction mechanisms to solve the problem at hand can be
infeasible since monetary payments are required on the side of the masters. Hence, we present an
alternative mechanism for fair and efficient distribution of the workers in the presence of strategic
masters, without the use of monetary incentives. We show analytically that our designed mechanism
guarantees fairness, is socially efficient, and is truthful. Simulations favourably compare
our designed mechanism with two benchmark auction mechanisms.
This work has been supported by IMDEA Networks Institute and the Spanish Ministry of Education grant FPU2013-03792.
Programa Oficial de Doctorado en Ingeniería Matemática.
Presidente: Alberto Tarable.- Secretario: José Antonio Cuesta Ruiz.- Vocal: Juan Julián Merelo Guervós
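The response-rate reputation mechanism described above can be sketched as follows. The decay factor, the scoring rule, and the function names are hypothetical; the thesis defines its own reputation schemes.

```python
def update_reputation(rep, responded, correct, decay=0.9):
    """Exponential moving average of a worker's recent reliability;
    unresponsive or incorrect rounds pull the score toward zero."""
    hit = 1.0 if (responded and correct) else 0.0
    return decay * rep + (1 - decay) * hit

def select_workers(reputations, k):
    """Choose the k highest-reputation workers for the next round."""
    return sorted(reputations, key=reputations.get, reverse=True)[:k]

reps = {"w1": 0.9, "w2": 0.2, "w3": 0.5}
chosen = select_workers(reps, 2)
```

Because the average decays, a worker that stops responding is gradually pushed out of the selected set, which matches the goal of keeping only reliable and active workers in each round.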
Considering Human Aspects on Strategies for Designing and Managing Distributed Human Computation
A human computation system can be viewed as a distributed system in which the
processors are humans, called workers. Such systems harness the cognitive power
of a group of workers connected to the Internet to execute relatively simple
tasks, whose solutions, once grouped, solve a problem that systems equipped
with only machines could not solve satisfactorily. Examples of such systems are
Amazon Mechanical Turk and the Zooniverse platform. A human computation
application comprises a group of tasks, each of which can be performed by one
worker. Tasks might have dependencies on one another. In this study, we
propose a theoretical framework to analyze this type of application from a
distributed systems point of view. Our framework is established on three
dimensions that represent different perspectives in which human computation
applications can be approached: quality-of-service requirements, design and
management strategies, and human aspects. By using this framework, we review
human computation from the perspective of programmers seeking to improve the
design of human computation applications and managers seeking to increase the
effectiveness of human computation infrastructures in running such
applications. In doing so, besides integrating and organizing what has been
done in this direction, we also put into perspective the fact that the human
aspects of the workers in such systems introduce new challenges in terms of,
for example, task assignment, dependency management, and fault prevention and
tolerance. We discuss how they are related to distributed systems and other
areas of knowledge.
Comment: 3 figures, 1 table
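Dependency management, one of the challenges the framework highlights, amounts to scheduling tasks in an order that respects their prerequisites. A minimal sketch with a hypothetical three-task application, using the standard library's topological sorter:

```python
from graphlib import TopologicalSorter

# hypothetical dependency graph: task -> set of prerequisite tasks
deps = {
    "label": set(),
    "verify": {"label"},        # verification needs the labels
    "aggregate": {"label", "verify"},
}

# an order in which tasks can be handed to workers
order = list(TopologicalSorter(deps).static_order())
```

In a real platform the human aspects complicate this picture: a worker may abandon `verify` mid-round, forcing the scheduler to reassign it before `aggregate` can be released.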
Consensus Algorithms of Distributed Ledger Technology -- A Comprehensive Analysis
The most essential component of every Distributed Ledger Technology (DLT) is
the Consensus Algorithm (CA), which enables users to reach a consensus in a
decentralized and distributed manner. Numerous CAs exist, but their viability
for particular applications varies, making their trade-offs a crucial factor to
consider when implementing DLT in a specific field. This article provides a
comprehensive analysis of the various consensus algorithms used in distributed
ledger technologies (DLT) and blockchain networks. We cover an extensive array
of thirty consensus algorithms. Eleven attributes, including hardware
requirements, pre-trust level, and tolerance level, were used to generate
a series of comparison tables evaluating these consensus algorithms. In
addition, we discuss DLT classifications, the categories of certain consensus
algorithms, and provide examples of authentication-focused and
data-storage-focused DLTs. Furthermore, we analyze the pros and cons of
particular consensus algorithms, such as Nominated Proof of Stake (NPoS),
Bonded Proof of Stake (BPoS), and Avalanche. In conclusion, we discuss the
applicability of these consensus algorithms to various Cyber Physical System
(CPS) use cases, including supply chain management, intelligent transportation
systems, and smart healthcare.
Comment: 50 pages, 20 figures
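The proof-of-stake family discussed in the article (NPoS, BPoS, and relatives) shares one core primitive: picking the next block producer with probability proportional to stake. A bare-bones sketch of that lottery, with made-up node names and stake values:

```python
import random

def pick_validator(stakes, rng):
    """Stake-weighted lottery: each node's chance of producing the next
    block is proportional to its stake (a simplified PoS selection)."""
    r = rng.uniform(0, sum(stakes.values()))
    for node, stake in stakes.items():
        r -= stake
        if r <= 0:
            return node
    return node  # floating-point edge case

rng = random.Random(7)
stakes = {"a": 60, "b": 30, "c": 10}
counts = {n: 0 for n in stakes}
for _ in range(3000):
    counts[pick_validator(stakes, rng)] += 1
```

Running the lottery many times shows selections tracking the 60/30/10 stake split; real NPoS and BPoS schemes add nomination, bonding, and slashing on top of this primitive.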