
    Be Selfish and Avoid Dilemmas: Fork After Withholding (FAW) Attacks on Bitcoin

    In the Bitcoin system, participants are rewarded for solving cryptographic puzzles. To receive more consistent rewards over time, some participants organize mining pools and split the pool's rewards in proportion to each participant's contribution. However, several attacks threaten the ability to participate in pools. The block withholding (BWH) attack makes the pool reward system unfair by letting malicious participants receive unearned wages while only pretending to contribute work. When two pools launch BWH attacks against each other, they encounter the miner's dilemma: in a Nash equilibrium, the revenue of both pools is diminished. In another attack, called selfish mining, an attacker can unfairly earn extra rewards by deliberately generating forks. In this paper, we propose a novel attack called a fork after withholding (FAW) attack. FAW is not just another attack: the reward for an FAW attacker is always equal to or greater than that for a BWH attacker, and the attack is usable up to four times more often per pool than a BWH attack. When considering multiple pools (the current state of the Bitcoin network), the extra reward for an FAW attack is about 56% more than that for a BWH attack. Furthermore, when two pools execute FAW attacks against each other, the miner's dilemma may not hold: under certain circumstances, the larger pool can consistently win. More importantly, an FAW attack, while using intentional forks, does not suffer from the practicality issues of selfish mining. We also discuss partial countermeasures against the FAW attack, but finding a cheap and efficient countermeasure remains an open problem. As a result, we expect to see FAW attacks among mining pools.
    Comment: This paper is an extended version of a paper accepted to ACM CCS 2017.
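    To make the BWH/FAW reward comparison concrete, here is a minimal Monte Carlo sketch of the single-pool case in Python. The parameter names loosely follow the paper's notation (alpha for attacker power, beta for the victim pool's honest power, tau for the infiltration rate, c for the fork-win probability), but the state machine is a simplification for illustration, not the paper's closed-form analysis.

```python
import random

def relative_reward(alpha, beta, tau, c, attack, rounds=200_000, seed=1):
    """Monte Carlo estimate of an attacker's relative reward against a
    single victim pool. Simplified sketch, not the paper's exact model.

    alpha : attacker's total mining power (fraction of the network)
    beta  : victim pool's honest mining power
    tau   : fraction of alpha used to infiltrate the pool
    c     : probability the attacker's intentional fork wins (FAW only)
    attack: "bwh" or "faw"
    """
    rng = random.Random(seed)
    innocent = (1 - tau) * alpha       # attacker mining honestly on its own
    infiltrator = tau * alpha          # attacker mining inside the pool
    share = infiltrator / (beta + infiltrator)  # attacker's cut of pool payouts

    mine = pool = 0.0
    total = 0
    for _ in range(rounds):
        r = rng.random()
        if r < innocent:                           # attacker finds a public block
            mine += 1.0; total += 1
        elif r < innocent + beta:                  # honest pool miner finds one
            pool += 1.0; total += 1
        elif r < innocent + beta + infiltrator:    # infiltrator finds an FPoW
            if attack == "bwh":
                continue                           # BWH: withhold and discard
            # FAW: withhold the FPoW and wait for the next published block.
            while True:
                r2 = rng.random()
                if r2 < innocent:                  # attacker's own block wins;
                    mine += 1.0; total += 1        # withheld FPoW is dropped
                    break
                if r2 < innocent + beta:           # pool wins either way
                    pool += 1.0; total += 1
                    break
                if r2 < innocent + beta + infiltrator:
                    continue                       # extra FPoW: ignored here
                # External block published: release the FPoW, race the fork.
                if rng.random() < c:
                    pool += 1.0                    # attacker's branch wins
                total += 1
                break
        else:                                      # rest of the network wins
            total += 1
    return (mine + share * pool) / total

# With identical parameters, the FAW estimate should match or exceed BWH:
for attack in ("bwh", "faw"):
    print(attack, relative_reward(0.2, 0.2, 0.5, 0.5, attack))
```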

    Revisiting Binary Code Similarity Analysis using Interpretable Feature Engineering and Lessons Learned

    Binary code similarity analysis (BCSA) is widely used for diverse security applications such as plagiarism detection, software license violation detection, and vulnerability discovery. Despite the surging research interest in BCSA, it is significantly challenging to perform new research in this field for several reasons. First, most existing approaches focus only on the end result, namely increasing the success rate of BCSA, by adopting uninterpretable machine learning. Moreover, they each use their own benchmark, sharing neither the source code nor the entire dataset. Finally, researchers often use different terminologies or even use the same technique without properly citing the previous literature, which makes it difficult to reproduce or extend previous work. To address these problems, we take a step back from the mainstream and contemplate fundamental research questions for BCSA: why does a certain technique or feature show better results than others? Specifically, we conduct the first systematic study of the basic features used in BCSA by leveraging interpretable feature engineering on a large-scale benchmark. Our study reveals various useful insights into BCSA. For example, we show that a simple interpretable model with a few basic features can achieve results comparable to those of recent deep learning-based approaches. Furthermore, we show that the way binaries are compiled and the correctness of the underlying binary analysis tools can significantly affect BCSA performance. Lastly, we make all our source code and benchmarks public and suggest future directions for the field to help further research.
    Comment: 22 pages, under revision to Transactions on Software Engineering (July 2021)
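    As an illustration of what a simple interpretable model over basic features might look like, the following Python sketch scores two functions by the average relative difference of a few numeric features. The feature names (instruction, basic-block, CFG-edge, and call counts) and the scoring rule are assumptions chosen for the example, not the paper's exact feature set or model.

```python
from dataclasses import dataclass

@dataclass
class FuncFeatures:
    n_instructions: int   # hypothetical basic numeric features
    n_basic_blocks: int
    n_edges: int          # CFG edges
    n_calls: int          # call instructions

def relative_difference(a: float, b: float) -> float:
    """0.0 for identical values, approaching 1.0 as they diverge."""
    if a == b:
        return 0.0
    return abs(a - b) / max(abs(a), abs(b))

def similarity(f: FuncFeatures, g: FuncFeatures) -> float:
    """Average closeness across features; 1.0 means identical features."""
    feats = ["n_instructions", "n_basic_blocks", "n_edges", "n_calls"]
    diffs = [relative_difference(getattr(f, k), getattr(g, k)) for k in feats]
    return 1.0 - sum(diffs) / len(diffs)

# Same source function compiled two different ways (made-up numbers):
print(similarity(FuncFeatures(120, 14, 19, 6), FuncFeatures(95, 11, 15, 6)))
```

    Such a model is trivially interpretable: a low score can be traced back to exactly which feature diverged and by how much, which is the kind of analysis uninterpretable embeddings do not permit.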

    SocialCloud: Using Social Networks for Building Distributed Computing Services

    In this paper we investigate a new computing paradigm, called SocialCloud, in which computing nodes are governed by social ties derived from a bootstrapping, trust-possessing social graph. We investigate how this paradigm differs from existing computing paradigms, such as grid computing and conventional cloud computing. We show that the incentives to adopt this paradigm are intuitive and natural, and that the security and trust guarantees it provides are solid. We propose metrics for measuring the utility and advantage of this computing paradigm, and, using real-world social graphs and the structures of social traces, we investigate the potential of this paradigm for ordinary users. We study several design options and trade-offs, such as scheduling algorithms, centralization, and straggler handling, and show how they affect the utility of the paradigm. Interestingly, we conclude that whereas graphs known in the literature for their high trust properties do not serve distributed trusted computing algorithms such as Sybil defenses, owing to their weak algorithmic properties, such graphs are good candidates for our paradigm because of their self-load-balancing features.
    Comment: 15 pages, 8 figures, 2 tables
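    As one concrete reading of the scheduling design option, this Python sketch assigns a node's tasks round-robin to its immediate friends in a social graph, i.e., the only workers the trust model admits. The graph, the node names, and the round-robin policy are illustrative assumptions rather than the schedulers the paper evaluates.

```python
from itertools import cycle

# Hypothetical undirected social graph: node -> set of trusted friends.
social_graph = {
    "alice": {"bob", "carol", "dave"},
    "bob":   {"alice", "carol"},
    "carol": {"alice", "bob"},
    "dave":  {"alice"},
}

def schedule_tasks(owner: str, tasks: list[str]) -> dict[str, list[str]]:
    """Assign the owner's tasks round-robin over its friends (the workers).
    Only direct social ties are used, reflecting the trust-based paradigm."""
    workers = sorted(social_graph[owner])
    assignment: dict[str, list[str]] = {w: [] for w in workers}
    for task, worker in zip(tasks, cycle(workers)):
        assignment[worker].append(task)
    return assignment

print(schedule_tasks("alice", [f"task-{i}" for i in range(7)]))
```

    Because each node can only recruit its own neighborhood, load spreads along the graph's structure; this is the self-load-balancing behavior the conclusion refers to.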

    FastCPA: Efficient Correlation Power Analysis Computation with a Large Number of Traces

    Cryptographic algorithm implementations need to be secured against side-channel attacks. Correlation Power Analysis (CPA) is an efficient technique for recovering the secret key bytes of a cryptographic algorithm implementation by analyzing the power traces of its execution. Although CPA usually does not require many traces to recover secret key bytes, this no longer holds in a noisy environment, where the required number of traces can be very high. Computation time can then become a major concern when performing this attack or assessing an implementation's robustness against it. This article introduces FastCPA, a correlation computation targeting the same goal as regular CPA but based on power consumption vectors indexed by plaintext values. The main advantage of FastCPA is its fast execution time compared to regular CPA, especially when the number of traces is high: for 100,000 traces, the speedup factor varies from 70 to almost 200 depending on the number of samples. FastCPA's accuracy is analyzed in terms of the number of correct key bytes found under increasing noise; this analysis shows that FastCPA performs similarly to regular CPA for a high number of traces. The minimum number of traces required to obtain the correct key guess is also computed for 100,000 noisy traces, and FastCPA again obtains results similar to those of regular CPA. Finally, although FastCPA is more sensitive to plaintext values than regular CPA, this aspect can be neglected for a high number of traces.
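    The idea of correlating against power consumption vectors indexed by plaintext values can be sketched as follows: traces sharing the same plaintext byte are first averaged into at most 256 vectors, so the correlation step no longer scales with the trace count. The AES-style S-box target and Hamming-weight leakage model are common CPA assumptions, not details taken from this article, and the placeholder S-box below must be replaced by a real one.

```python
import numpy as np

SBOX = np.arange(256, dtype=np.uint8)  # placeholder permutation; use a real S-box
HW = np.unpackbits(np.arange(256, dtype=np.uint8)[:, None], axis=1).sum(axis=1)

def regular_cpa(traces, plaintexts):
    """traces: (n_traces, n_samples) floats; plaintexts: (n_traces,) uint8.
    Returns a (256, n_samples) Pearson correlation matrix, one row per guess."""
    tc = traces - traces.mean(axis=0)
    tnorm = np.linalg.norm(tc, axis=0)
    corr = np.empty((256, traces.shape[1]))
    for k in range(256):
        h = HW[SBOX[plaintexts ^ k]].astype(float)   # hypothetical leakage
        hc = h - h.mean()
        corr[k] = (hc @ tc) / (np.linalg.norm(hc) * tnorm)
    return corr

def fastcpa(traces, plaintexts):
    """Average traces per plaintext byte first, then correlate over the
    (at most) 256 averaged vectors instead of over every trace."""
    sums = np.zeros((256, traces.shape[1]))
    counts = np.bincount(plaintexts, minlength=256)
    np.add.at(sums, plaintexts, traces)              # accumulate per plaintext
    present = counts > 0
    means = sums[present] / counts[present][:, None]
    pts = np.nonzero(present)[0].astype(np.uint8)    # plaintext values seen
    mc = means - means.mean(axis=0)
    mnorm = np.linalg.norm(mc, axis=0)
    corr = np.empty((256, traces.shape[1]))
    for k in range(256):
        h = HW[SBOX[pts ^ k]].astype(float)
        hc = h - h.mean()
        corr[k] = (hc @ mc) / (np.linalg.norm(hc) * mnorm)
    return corr

# Key byte guess: the row with the highest absolute correlation peak, e.g.
# guess = np.abs(fastcpa(traces, plaintexts)).max(axis=1).argmax()
```

    The averaging pass touches each trace once; afterwards every key guess costs work proportional to 256 vectors rather than to the full trace count, which is consistent with the speedups growing as the number of traces grows.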

    First Experimental Result of Power Analysis Attacks on a FPGA Implementation of LEA

    The lightweight encryption algorithm (LEA) is a 128-bit block cipher introduced in 2013. It is based on addition, rotation, and XOR operations on 32-bit words. Because of this structure, it lets a range of devices achieve high encryption speed and low power consumption. However, side-channel attacks on LEA implementations have not been examined. In this study, we perform a power analysis attack on LEA. We implemented LEA with a 128-bit key on an FPGA in a straightforward manner. Our experimental results show that we can successfully retrieve the 128-bit master key by attacking the first-round encryption.
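    For context, a first-round power analysis of an ARX cipher such as LEA typically targets an intermediate value that mixes known plaintext words with unknown round-key words. The Python sketch below computes one natural target, the first output word of round 1 as given in the public LEA specification, together with its Hamming weight; using this value as the leakage model and recovering the 32-bit addition byte by byte are assumptions about a typical attack, not the exact procedure of this paper.

```python
MASK32 = 0xFFFFFFFF

def rol(x: int, r: int) -> int:
    """32-bit left rotation."""
    return ((x << r) | (x >> (32 - r))) & MASK32

def lea_round1_word0(p0: int, p1: int, rk0: int, rk1: int) -> int:
    """First output word of LEA round 1 per the public specification:
    ROL9((P[0] ^ RK[0]) + (P[1] ^ RK[1])) mod 2^32."""
    return rol(((p0 ^ rk0) + (p1 ^ rk1)) & MASK32, 9)

def hamming_weight(x: int) -> int:
    return bin(x).count("1")

# A CPA correlates hamming_weight(lea_round1_word0(...)) over many plaintexts
# with the measured power samples, ranking round-key candidates. Because the
# carries of the 32-bit addition couple neighboring bits, key material is
# typically recovered from the least significant byte upward.
```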