
    On Non-Parallelizable Deterministic Client Puzzle Scheme with Batch Verification Modes

    A (computational) client puzzle scheme enables a client to prove to a server that a certain amount of computing resources (CPU cycles and/or memory look-ups) has been dedicated to solving a puzzle. Researchers have identified a number of potential applications, such as constructing timed cryptography, fighting junk email, and protecting critical infrastructure from DoS attacks. In this paper, we first revisit this concept and formally define two properties, namely deterministic computation and parallel computation resistance. Our analysis shows that both properties are crucial for the effectiveness of client puzzle schemes in most application scenarios. We prove that the RSW client puzzle scheme, which is based on the repeated squaring technique, achieves both properties. Secondly, we introduce two batch verification modes for the RSW client puzzle scheme in order to improve the verification efficiency of the server, and investigate three methods for handling errors in batch verifications. Lastly, we show that client puzzle schemes can be integrated with reputation systems to further improve their effectiveness in practice.
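
    As a concrete illustration of the repeated-squaring construction underlying the RSW scheme, the minimal Python sketch below (toy-sized parameters; sympy is used only for prime generation) shows why verification is cheap for the puzzle setter, who knows phi(n) and can reduce the exponent, while the solver is forced through t sequential squarings that cannot be parallelized. This sketches the general technique, not the paper's batch-verification variant.

    # RSW-style time-lock puzzle: solve by t sequential modular squarings.
    import math
    import random
    from sympy import randprime

    def setup(bits=256, t=100_000):
        # The setter knows p and q, hence phi(n), and can verify cheaply.
        p = randprime(2**(bits - 1), 2**bits)
        q = randprime(2**(bits - 1), 2**bits)
        n, phi = p * q, (p - 1) * (q - 1)
        x = random.randrange(2, n)
        while math.gcd(x, n) != 1:        # ensure x is invertible mod n
            x = random.randrange(2, n)
        # Shortcut: x^(2^t) mod n == x^(2^t mod phi) mod n.
        expected = pow(x, pow(2, t, phi), n)
        return (n, x, t), expected

    def solve(puzzle):
        # Without phi(n), the solver must square t times in sequence;
        # each step depends on the last, so the work is non-parallelizable.
        n, x, t = puzzle
        y = x
        for _ in range(t):
            y = y * y % n
        return y

    puzzle, expected = setup()
    assert solve(puzzle) == expected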

    Using Spammers' Computing Resources for Volunteer Computing

    Spammers are continually looking to circumvent counter-measures that seek to slow them down. An immense amount of time and money is currently devoted to hiding spam, but not enough is devoted to effectively preventing it. One approach for preventing spam is to force the spammer's machine to solve a computational problem of varying difficulty before granting access. The idea is that suspicious or problematic requests are given difficult problems to solve while legitimate requests are allowed through with minimal computation. Unfortunately, most systems that employ this model waste the computing resources being used, as they are directed towards solving cryptographic problems that provide no societal benefit. While systems such as reCAPTCHA and FoldIt have allowed users to contribute solutions to useful problems interactively, an analogous solution for non-interactive proof-of-work does not exist. Towards this end, this paper describes MetaCAPTCHA and reBOINC, an infrastructure for supporting useful proof-of-work that is integrated into a web spam throttling service. The infrastructure dynamically issues CAPTCHAs and proof-of-work puzzles while ensuring that malicious users solve challenging puzzles. Additionally, it provides a framework that enables the computational resources of spammers to be redirected towards meaningful research. To validate the efficacy of our approach, prototype implementations based on OpenCV and BOINC are described that demonstrate the ability to harvest spammers' resources for beneficial purposes.
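
    The throttling idea above rests on puzzles whose difficulty scales with how suspicious a request looks. The sketch below illustrates that scaling with a conventional hashcash-style puzzle (find a nonce whose hash has enough leading zero bits); unlike MetaCAPTCHA/reBOINC, whose work is redirected to useful computation, this toy version burns cycles on hashing, and the names and scoring scheme are assumptions for illustration, not the system's API.

    # Variable-difficulty proof-of-work: harder puzzles for riskier requests.
    import hashlib
    import itertools
    import os

    def issue_challenge(suspicion_score: float, max_bits: int = 24):
        # More suspicious requests demand more leading zero bits.
        difficulty = max(4, int(suspicion_score * max_bits))
        return os.urandom(16), difficulty

    def leading_zero_bits(digest: bytes) -> int:
        bits = 0
        for byte in digest:
            if byte == 0:
                bits += 8
            else:
                bits += 8 - byte.bit_length()
                break
        return bits

    def solve(challenge: bytes, difficulty: int) -> int:
        # Brute-force a nonce; expected work doubles with each extra bit.
        for nonce in itertools.count():
            h = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
            if leading_zero_bits(h) >= difficulty:
                return nonce

    def verify(challenge: bytes, difficulty: int, nonce: int) -> bool:
        # Verification costs a single hash, regardless of difficulty.
        h = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        return leading_zero_bits(h) >= difficulty

    challenge, d = issue_challenge(suspicion_score=0.5)
    nonce = solve(challenge, d)
    assert verify(challenge, d, nonce)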

    Kadupul: Livin' on the edge with virtual currencies and time-locked puzzles

    Devices connected to the Internet today have a wide range of local communication channels available, such as Wi-Fi, Bluetooth, or NFC, as well as wired backhaul. In densely populated areas it is possible to create heterogeneous, multihop communication paths using a combination of these technologies, often transmitting data with lower latency than via a wired Internet connection. However, the potential for sharing meshed wireless radios in this way has never been realised due to the lack of economic incentives for individual nodes to do so. In this paper, we explore how virtual currencies such as Bitcoin might be used to provide an end-to-end incentive scheme that convinces forwarding nodes it is profitable to send packets on via the lowest-latency mechanism available. Clients inject a small amount of money to transmit a datagram, and forwarding engines compete to solve a time-locked puzzle that can be claimed by the node that delivers the result with the lowest latency. This approach naturally extends congestion control techniques to a surge pricing model when available bandwidth is low. We conclude by discussing several latency-sensitive applications that would benefit from this, such as video streaming and local augmented reality systems. The research leading to these results has received funding from the European Union's Seventh Framework Programme FP7/2007-2013 under the Trilogy 2 project, grant agreement no. 317756. This is the author-accepted manuscript; the final version is available from ACM via http://dx.doi.org/10.1145/2753488.275349
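
    The abstract does not give the pricing function, so the toy sketch below only illustrates the surge-pricing idea: the fee a client attaches to a datagram grows as available bandwidth approaches saturation. The functional form and constants are assumptions for illustration.

    # Toy surge-pricing curve: fee spikes as utilisation nears capacity.
    def surge_price(base_fee: float, available_bw: float, capacity: float,
                    exponent: float = 2.0) -> float:
        utilisation = min(1.0, 1.0 - available_bw / capacity)
        # Polynomial growth at moderate load, sharp spike near saturation.
        return base_fee * (1.0 + utilisation ** exponent
                           / max(1e-9, 1.0 - utilisation))

    # At 50% utilisation the fee is 1.5x base; near saturation it is ~100x.
    for bw in (100.0, 50.0, 10.0, 1.0):
        print(f"available={bw:5.1f}  fee={surge_price(0.001, bw, 100.0):.5f}")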

    Mitigating Distributed Denial of Service Attacks in an Anonymous Routing Environment: Client Puzzles and Tor

    Online intelligence operations use the Internet to gather information on the activities of U.S. adversaries. The security of these operations is paramount, and one way to avoid being linked to the Department of Defense (DoD) is to use anonymous communication systems. One such system, Tor, makes interactive TCP services anonymous. Tor uses the Transport Layer Security (TLS) protocol and is thus vulnerable to a distributed denial-of-service (DDoS) attack that can significantly delay data traversing the Tor network. This research uses client puzzles to mitigate TLS DDoS attacks. A novel puzzle protocol, the Memoryless Puzzle Protocol (MPP), is conceived, implemented, and analyzed for anonymity and DDoS vulnerabilities. Consequently, four new secondary DDoS and anonymity attacks are identified and defenses are proposed. Furthermore, analysis of the MPP identified and resolved two important shortcomings of the generalized client puzzle technique. Attacks that normally induce victim CPU utilization rates of 80-100% are reduced to below 70%. Also, the puzzle implementation allows user-data latency to be reduced by close to 50% during a large-scale attack. Finally, experimental results show that successful mitigation can occur without sending a puzzle to every requesting client: by adjusting the maximum puzzle strength, CPU utilization can be capped at 70% even when an arbitrary client has only a 30% chance of receiving a puzzle.
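
    The sketch below illustrates what "memoryless" can mean for a puzzle protocol: the server keeps no per-client state, instead re-deriving each challenge from an HMAC over the client address and a coarse timestamp, and it issues puzzles probabilistically with a capped strength, mirroring the 30% issuance and strength-cap figures quoted above. The construction is an assumed illustration of the stateless idea, not the actual MPP message format.

    # Stateless client-puzzle handshake: challenges are recomputed, not stored.
    import hashlib
    import hmac
    import os
    import random
    import time

    SERVER_KEY = os.urandom(32)
    PUZZLE_PROBABILITY = 0.30   # chance an arbitrary client gets a puzzle
    MAX_STRENGTH = 20           # cap on leading zero bits demanded

    def make_challenge(client_addr: str, strength: int) -> bytes:
        epoch = int(time.time()) // 60      # 60 s validity window
        msg = f"{client_addr}|{epoch}|{strength}".encode()
        return hmac.new(SERVER_KEY, msg, hashlib.sha256).digest()

    def maybe_issue(client_addr: str, load: float):
        # Issue a puzzle with fixed probability; strength scales with load.
        if random.random() > PUZZLE_PROBABILITY:
            return None                     # let the request through
        strength = min(MAX_STRENGTH, int(load * MAX_STRENGTH))
        return make_challenge(client_addr, strength), strength

    def verify(client_addr: str, strength: int, nonce: int) -> bool:
        # Recompute the challenge instead of looking it up: no server state.
        challenge = make_challenge(client_addr, strength)
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        return int.from_bytes(digest, "big") >> (256 - strength) == 0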

    Ease-of-Use of Tactile Interaction for Novice Older Adults

    Usability, particularly ease of use, is a major factor affecting the acceptance of technologies by older adults. Mobile devices offer great possibilities for well-being applications, but they are often equipped with touchscreens. In order to evaluate the ease of use of tactile interaction, this study compares the performance of 16 novice (mean age 74) and 8 experienced older adults (mean age 75) executing drag-and-drop interactions in tactile puzzle games on a smartphone and a tablet, with pen and fingers. Results show that novice users were able to accomplish the interactions accurately, although with longer completion times; there was no significant difference in accuracy errors.

    Big Data Application and System Co-optimization in Cloud and HPC Environment

    The emergence of big data requires powerful computational resources and memory subsystems that can be scaled efficiently to accommodate its demands. The cloud is a well-established computing paradigm that can offer customized computing and memory resources to meet the scalable demands of big data applications. In addition, the flexible pay-as-you-go pricing model offers opportunities for using large-scale resources at low cost and with no infrastructure maintenance burden. High performance computing (HPC), on the other hand, also has powerful infrastructure with the potential to support big data applications. In this dissertation, we explore application and system co-optimization opportunities to support big data in both cloud and HPC environments. Specifically, we exploit the unique features of both application and system to seek overlooked optimization opportunities, or to tackle challenges that are difficult to address by looking at the application or system individually. Based on the characteristics of the workloads and their underlying systems, we derive optimized deployment and runtime schemes for four workload categories: 1) memory intensive applications; 2) compute intensive applications; 3) both memory and compute intensive applications; 4) I/O intensive applications.

    When deploying memory intensive big data applications to public clouds, one important yet challenging problem is selecting a specific instance type whose memory capacity is large enough to prevent out-of-memory errors while the cost is minimized without violating performance requirements. In this dissertation, we propose two techniques for efficient deployment of big data applications with dynamic and intensive memory footprints in the cloud. The first approach builds a performance-cost model that can accurately predict how, and by how much, virtual memory size would slow down the application and, consequently, impact the overall monetary cost. The second approach employs a lightweight memory usage prediction methodology based on dynamic meta-models adjusted by the application's own traits. The key idea is to eliminate periodic checkpointing and migrate the application only when the predicted memory usage exceeds the physical allocation.

    When deploying compute intensive applications to the clouds, it is critical to make the applications scalable so that they can benefit from the massive cloud resources. In this dissertation, we first use the Kirchhoff law, one of the most widely used physical laws in engineering, as an example workload for our study. The key challenge of applying the Kirchhoff law to real-world applications at scale lies in the high, if not prohibitive, computational cost of solving a large number of nonlinear equations. We propose a high-performance deep-learning-based approach for Kirchhoff analysis, namely HDK. HDK employs two techniques to improve performance: (i) early pruning of unqualified input candidates, which simplifies the equations and selects a meaningful input data range; and (ii) parallelization of forward labelling, which executes steps of the problem in parallel.

    When it comes to applications that are both memory and compute intensive in clouds, we use a blockchain system as a benchmark. Existing blockchain frameworks exhibit a technical barrier for many users wishing to modify or test out new research ideas in blockchains. To make matters worse, many advantages of blockchain systems can be demonstrated only at large scales, which are not always available to researchers. In this dissertation, we develop an accurate and efficient emulation system to replay the execution of large-scale blockchain systems on tens of thousands of nodes in the cloud.

    For I/O intensive applications, we observe one important yet often neglected side effect of lossy scientific data compression. Lossy compression techniques have demonstrated promising results in significantly reducing scientific data size while guaranteeing compression error bounds, but the compressed data size is often highly skewed and thus impacts the performance of parallel I/O. Therefore, we believe it is critical to pay more attention to the unbalanced parallel I/O caused by lossy scientific data compression.
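
    The instance-selection problem described for memory-intensive workloads can be made concrete with a small sketch: pick the cheapest instance type whose predicted slowdown (from the footprint spilling past physical memory) stays within a performance bound. The instance specs, prices, and slowdown model below are illustrative assumptions, not the dissertation's trained models.

    # Cheapest instance whose predicted slowdown meets the performance bound.
    INSTANCE_TYPES = [          # (name, memory GiB, $/hour) -- assumed figures
        ("m.small",   8,  0.10),
        ("m.medium", 16,  0.19),
        ("m.large",  32,  0.38),
        ("m.xlarge", 64,  0.77),
    ]

    def predicted_slowdown(peak_mem: float, phys_mem: float) -> float:
        # Assumed model: runtime inflates as the footprint spills to swap.
        if peak_mem <= phys_mem:
            return 1.0
        return 1.0 + 4.0 * (peak_mem - phys_mem) / phys_mem

    def cheapest_instance(peak_mem: float, base_hours: float,
                          max_slowdown: float = 1.5):
        best = None
        for name, mem, price in INSTANCE_TYPES:
            slow = predicted_slowdown(peak_mem, mem)
            if slow > max_slowdown:
                continue                    # violates the performance bound
            cost = price * base_hours * slow  # longer runs cost more
            if best is None or cost < best[1]:
                best = (name, cost)
        return best

    # A 24 GiB footprint picks m.large: smaller types swap too heavily.
    print(cheapest_instance(peak_mem=24.0, base_hours=10.0))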