52 research outputs found
New Methods of Uncheatable Grid Computing
Grid computing is the collection of computer resources from multiple locations to reach a common goal. We classify deceptive-detection schemes into two categories according to the task publisher's computing power, and then analyze the security of these schemes based on the characteristics of the computational task function. Building on double checking, we propose an improved scheme, the secondary-allocation double-check scheme, which trades additional time for greatly strengthened security. Finally, we analyze the common problem of high-value rare events, improve the deceptive-detection scheme due to [1], and put forward a new deceptive-detection scheme with better security and efficiency. This paper is a revised and expanded version of a paper entitled "Deceptive Detection and Security Reinforcement in Grid Computing" [2], presented at the 2013 5th International Conference on Intelligent Networking and Collaborative Systems, Xi'an, Shaanxi province, China, September 9-11, 2013.
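The double-check idea the abstract builds on can be sketched as follows: the publisher assigns the same task to two workers and accepts the answer only when the results agree. This is a minimal illustration, not the paper's actual protocol; the worker names and the `compute` function are hypothetical.

```python
import random

def double_check(task, workers, compute):
    """Assign one task to two randomly chosen workers and compare
    their results; a mismatch signals possible cheating."""
    w1, w2 = random.sample(workers, 2)
    r1, r2 = compute(w1, task), compute(w2, task)
    return r1 if r1 == r2 else None  # None -> re-allocate the task

# Toy demo: the hypothetical "lazy" worker returns a bogus constant.
def compute(worker, task):
    return 0 if worker == "lazy" else task * task

ok = double_check(7, ["a", "b"], compute)      # two honest workers agree
bad = double_check(7, ["a", "lazy"], compute)  # disagreement is detected
```

The scheme fails when both chosen workers collude on the same wrong answer, which is exactly the weakness the paper's secondary-allocation variant targets.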
Pipelined Algorithms to Detect Cheating in Long-Term Grid Computations
This paper studies pipelined algorithms for protecting distributed grid computations from cheating participants, who wish to be rewarded for tasks they receive but don't perform. We present improved cheater-detection algorithms that utilize natural delays that exist in long-term grid computations. In particular, we partition the sequence of grid tasks into two interleaved sequences of task rounds, and we show how to use those rounds to devise the first general-purpose scheme that can catch all cheaters, even when cheaters collude. The main idea of this algorithm might at first seem counter-intuitive: we have the participants check each other's work. A naive implementation of this approach would, of course, be susceptible to collusion attacks, but we show that by adapting efficient solutions to the parallel processor diagnosis problem, we can tolerate collusions of lazy cheaters, even if the number of such cheaters is a fraction of the total number of participants. We also include a simple economic analysis of cheaters in grid computations and a parameterization of the main deterrent that can be used against them: the probability of being caught.
Comment: Expanded version with an additional figure; ISSN 0304-397
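The two ingredients of the abstract, interleaving the task stream into two round sequences and having participants re-check each other's answers, can be sketched roughly as below. All names (`truth`, `checker_of`, the worker labels) are illustrative assumptions, not the paper's notation.

```python
def interleave(tasks):
    # Partition the task stream into two interleaved sequences of rounds,
    # so checking one sequence can overlap computation of the other.
    return tasks[0::2], tasks[1::2]

def cross_check(tasks, answers, recompute, checker_of):
    # Worker checker_of[w] redoes each task originally answered by w;
    # any disagreement flags the original worker as a suspect.
    suspects = set()
    for task, (worker, value) in zip(tasks, answers):
        if recompute(checker_of[worker], task) != value:
            suspects.add(worker)
    return suspects

# Toy demo: checkers here are honest, and "w2" submits a bogus value.
truth = lambda t: t + 1
def recompute(worker, task):
    return truth(task)

even, odd = interleave([1, 2, 3, 4, 5])
answers = [("w1", truth(1)), ("w2", 999), ("w1", truth(5))]
suspects = cross_check(even, answers, recompute, {"w1": "w3", "w2": "w3"})
```

The paper's contribution is precisely what this sketch omits: tolerating checkers that are themselves colluding cheaters, via techniques from parallel processor diagnosis.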
Foundations, Properties, and Security Applications of Puzzles: A Survey
Cryptographic algorithms have been used not only to create robust ciphertexts but also to generate cryptograms that, contrary to the classic goal of cryptography, are meant to be broken. These cryptograms, generally called puzzles, require a certain amount of resources to be solved, hence introducing a cost that is often regarded as a time delay, though it could involve other metrics as well, such as bandwidth. These powerful features have made puzzles the core of many security protocols, giving them increasing importance in the IT security landscape. The concept of a puzzle has subsequently been extended to other types of schemes that do not use cryptographic functions, such as CAPTCHAs, which are used to discriminate humans from machines. Overall, puzzles have experienced renewed interest with the advent of Bitcoin, which uses a CPU-intensive puzzle as proof of work. In this paper, we provide a comprehensive study of the most important puzzle construction schemes available in the literature, categorizing them according to several attributes, such as resource type, verification type, and applications. We have redefined the term puzzle by collecting and integrating the scattered notions used in different works, so as to cover all the existing applications. Moreover, we provide an overview of the possible applications, identifying key requirements and different design approaches. Finally, we highlight the features and limitations of each approach, providing a useful guide for the future development of new puzzle schemes.
Comment: This article has been accepted for publication in ACM Computing Surveys
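The CPU-intensive, Bitcoin-style puzzle mentioned above has a well-known shape: the solver searches for a nonce whose hash meets a difficulty target, while the verifier needs only one hash. A minimal sketch (the challenge string and the leading-zero-hex-digit difficulty measure are illustrative choices):

```python
import hashlib

def solve(challenge: bytes, difficulty: int) -> int:
    """Brute-force a nonce whose SHA-256 over the challenge has
    `difficulty` leading zero hex digits. Cost grows ~16**difficulty."""
    nonce = 0
    while True:
        h = hashlib.sha256(challenge + str(nonce).encode()).hexdigest()
        if h.startswith("0" * difficulty):
            return nonce
        nonce += 1

def verify(challenge: bytes, nonce: int, difficulty: int) -> bool:
    """Checking a claimed solution costs a single hash."""
    h = hashlib.sha256(challenge + str(nonce).encode()).hexdigest()
    return h.startswith("0" * difficulty)

nonce = solve(b"request-42", difficulty=2)  # cheap demo difficulty
```

This asymmetry (expensive to solve, cheap to verify) is the property the surveyed schemes vary along the axes of resource type and verification type.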
Secured Data Outsourcing in Cloud Computing
Cloud computing is a popular technology in the IT world; after the Internet, it is the biggest development the field has seen. Cloud computing uses the Internet to perform tasks on remote computers and is the next-generation architecture of the IT industry. It is related to several technologies, and the convergence of these technologies has come to be called cloud computing. It places application software and databases in huge data centers, where the supervision of the data and services may not be fully trusted. This unique attribute poses many new security challenges that have not been well understood. In this paper, we develop a system that allows customers to use a cloud server with various benefits and strong security, so that a customer storing sensitive data on the cloud server need not worry about its safety. We also protect the customer's account from malicious behavior by verifying results; this result-verification mechanism is highly efficient for both the cloud server and the cloud customer. Security analysis and experimental results show the immediate practicability of our mechanism design.
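One common way to make result verification cheap for the customer, sketched below under our own assumptions (the paper does not specify its mechanism here), is to spot-check a random sample of returned results by recomputing them locally:

```python
import random

def verify_sample(tasks, results, recompute, k=3, seed=None):
    """Recompute k randomly chosen outsourced results locally;
    any mismatch indicates the server returned a wrong answer."""
    rng = random.Random(seed)
    for i in rng.sample(range(len(tasks)), k):
        if recompute(tasks[i]) != results[i]:
            return False
    return True

# Toy demo: a hypothetical squaring workload with one corrupted result.
square = lambda x: x * x
tasks = list(range(10))
good = [square(t) for t in tasks]
bad = good[:]
bad[4] = -1
```

Sampling keeps the customer's cost at k recomputations regardless of workload size, at the price of only probabilistic detection of a cheating server.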
DOI: 10.17762/ijritcc2321-8169.150314
Studying Security Issues in HPC (Super Computer) Environment
HPC has evolved from being a buzzword to becoming one of the most exciting areas in Information Technology and Computer Science. Organizations are increasingly looking to HPC to improve operational efficiency, reduce expenditure over time, and increase computational power. Using supercomputers hosted at a particular location and connected to the Internet can reduce the installation of computational power and centralise it. A centralised system has some advantages and disadvantages over a distributed system, but we avoid discussing those issues and focus on HPC systems. HPC can also be used to build web and file servers and for cloud-computing applications. Due to its cluster-type architecture and high processing speed, we have found that it works far better and handles load much more efficiently than a series of normally configured desktops connected together for cloud-computing and network applications. In this paper we discuss issues related to the security of data and information in the context of HPC. Data and information are vulnerable to threats to their security and safety. The purpose of this paper is to present some practical security issues related to the High Performance Computing environment. Based on our observations of the security requirements of HPC, we discuss some existing security technologies used in HPC. Surveying the literature, we found that the existing techniques are not sufficient, and we discuss some of the key issues in this context. Lastly, we propose an approach to an appropriate solution using the Blowfish encryption and decryption algorithm. We hope that, with our proposed concepts, HPC applications will perform better and more safely. In the end, we propose a modified Blowfish algorithmic technique that attaches a random number generator to make the encryption-decryption technique more appropriate for our own HPC environment.
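The structural idea of "attaching a random number generator" to a cipher is that each message gets fresh random key material, so identical plaintexts never encrypt to identical ciphertexts. The sketch below illustrates only that structure; since Blowfish is not in the Python standard library, a SHA-256-based keystream XOR stands in for it, and this is emphatically not the paper's modified algorithm.

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Stand-in cipher for illustration only (NOT Blowfish): XOR the data
    # with a SHA-256 counter-mode keystream derived from the key.
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

def encrypt(master_key: bytes, plaintext: bytes) -> bytes:
    # Fresh random per-message material, the role played by the
    # attached random number generator in the proposed technique.
    nonce = secrets.token_bytes(16)
    return nonce + keystream_xor(master_key + nonce, plaintext)

def decrypt(master_key: bytes, blob: bytes) -> bytes:
    nonce, body = blob[:16], blob[16:]
    return keystream_xor(master_key + nonce, body)
```

Because the 16-byte nonce is random, encrypting the same plaintext twice yields different ciphertexts, which is the property the randomization is meant to buy.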
An (Almost) Constant-Effort Solution-Verification Proof-of-Work Protocol based on Merkle Trees
Cryptology ePrint Archive 2007/433. Proof-of-work schemes are economic measures to deter denial-of-service attacks: service requesters compute moderately hard functions that are easy to check by the provider. We present such a new scheme for solution-verification protocols. Although most schemes to date are probabilistic unbounded iterative processes with high variance of the requester effort, our Merkle tree scheme is deterministic, with an almost constant effort and null variance, and is computation-optimal.
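The Merkle-tree machinery the protocol relies on can be sketched as follows: hash the leaves pairwise up to a root, and verify membership of one leaf with a logarithmic-size authentication path. This sketch assumes a power-of-two number of leaves and is a generic Merkle tree, not the paper's specific proof-of-work construction.

```python
import hashlib

H = lambda *parts: hashlib.sha256(b"".join(parts)).digest()

def merkle_root(leaves):
    """Hash leaves pairwise, level by level, up to a single root."""
    level = [H(x) for x in leaves]
    while len(level) > 1:
        level = [H(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_path(leaves, index):
    """Authentication path: the sibling hash at each level, tagged with
    whether that sibling sits to the left of the current node."""
    level = [H(x) for x in leaves]
    path = []
    while len(level) > 1:
        sib = index ^ 1
        path.append((level[sib], sib < index))
        level = [H(level[i], level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify(leaf, path, root):
    """Recompute the root from one leaf and its path: O(log n) hashes."""
    node = H(leaf)
    for sibling, is_left in path:
        node = H(sibling, node) if is_left else H(node, sibling)
    return node == root

leaves = [b"a", b"b", b"c", b"d"]
root = merkle_root(leaves)
```

Verification cost is logarithmic in the number of leaves, which is what makes tree-based schemes attractive for cheap provider-side checking.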
Multi-round Master-Worker Computing: a Repeated Game Approach
We consider a computing system where a master processor assigns tasks for execution to worker processors through the Internet. We model each worker's decision of whether to comply (compute the task) or not (return a bogus result to save the computation cost) as a mixed extension of a strategic game among workers. That is, we assume that workers are rational in a game-theoretic sense and that they randomize their strategic choice. Workers are assigned multiple tasks in subsequent rounds, and we model the system as an infinitely repeated game of the mixed extension of the strategic game. In each round, the master decides stochastically whether to accept the answer of the majority or to verify the answers received, at some cost; incentives and/or penalties are applied to workers accordingly. Under this framework, we study the conditions under which the master can reliably obtain task results, exploiting the fact that the repeated-games model captures the effect of long-term interaction: workers take into account that their behavior in one computation will affect the behavior of other workers in the future. Indeed, should a worker be found to deviate from some agreed strategic choice, the remaining workers would change their own strategy to penalize the deviator; hence, being rational, workers do not deviate. We identify analytically the parameter conditions that induce a desired worker behavior, and we evaluate experimentally the mechanisms derived from these conditions. We also compare the performance of our mechanisms with a previously known multi-round mechanism based on reinforcement learning.
Comment: 21 pages, 3 figures
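The single-round incentive condition behind such mechanisms can be written down directly: honesty is a best response when the sure payoff of computing beats the gamble of cheating against a stochastic audit. The payoff structure below is a simplified assumption of ours, not the paper's exact utility model.

```python
def honest_is_best(p_audit, value, cost, penalty):
    """One-round payoff comparison (sketch): an honest worker earns
    `value` minus the computing `cost`; a cheater saves the cost but,
    with probability `p_audit`, is caught and fined `penalty`."""
    u_honest = value - cost
    u_cheat = (1 - p_audit) * value - p_audit * penalty
    return u_honest >= u_cheat
```

For example, with an audit probability of 0.2, reward 10, cost 1, and penalty 50, honesty dominates; with audits at 0.01, cost 5, and penalty 10, cheating pays. The repeated-game analysis strengthens this one-shot condition by letting future punishment by other workers substitute for part of the audit probability.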