
    Achieving reliability and fairness in online task computing environments

    Mención Internacional en el título de doctor

    We consider online task computing environments such as volunteer computing platforms running on BOINC (e.g., SETI@home) and crowdsourcing platforms such as Amazon Mechanical Turk. We model the computations as an Internet-based task computing system under the master-worker paradigm. A master entity sends tasks across the Internet to worker entities willing to perform a computational task. Workers execute the tasks and report back the results, completing the computational round. Unfortunately, workers are untrustworthy and might report an incorrect result. Thus, the first research question we answer in this work is how to design a reliable master-worker task computing system. We capture the workers' behavior through two realistic models: (1) the "error probability model", which assumes the presence of altruistic workers willing to provide correct results and of troll workers aiming to provide random incorrect results; both types of workers suffer from an error probability that alters their intended response. (2) The "rationality model", which assumes the presence of altruistic workers, always reporting a correct result; malicious workers, always reporting an incorrect result; and rational workers, following a strategy that maximizes their utility (benefit). The rational workers can choose between two strategies: either be honest and report a correct result, or cheat and report an incorrect result. Our two modeling assumptions on the workers' behavior are supported by an experimental evaluation we have performed on Amazon Mechanical Turk. Given the error probability model, we evaluate two reliability techniques, (1) "voting" and (2) "auditing", in terms of the task assignments required and the time invested for correctly computing a set of tasks with high probability.
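As a minimal illustration of the voting technique under the error probability model (the worker count and error probability below are hypothetical; the thesis's actual analysis is more involved), the probability that a strict majority of independent, error-prone workers returns the correct result can be computed directly:

```python
from math import comb

def majority_correct_prob(n_workers: int, p_correct: float) -> float:
    """Probability that a strict majority of n_workers (odd) reports the
    correct result, each worker being correct independently with p_correct."""
    return sum(
        comb(n_workers, k) * p_correct**k * (1 - p_correct) ** (n_workers - k)
        for k in range((n_workers // 2) + 1, n_workers + 1)
    )

# With 5 workers, each correct with probability 0.8, the majority vote
# is correct with probability ~0.942 -- replication amplifies reliability
# at the cost of extra task assignments.
print(round(majority_correct_prob(5, 0.8), 3))
```

This also makes the trade-off the abstract mentions concrete: higher reliability requires more replicated task assignments per round.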
Considering the rationality model, we take an evolutionary game-theoretic approach and design mechanisms that eventually achieve a reliable computational platform in which the master receives the correct task result with probability one and with minimal auditing cost. The designed mechanisms provide incentives to the rational workers, reinforcing their strategy towards correct behavior, and are complemented by four reputation schemes that cope with malice. Finally, we also design a mechanism that deals with unresponsive workers by keeping a reputation related to each worker's response rate. The designed mechanism selects the most reliable and active workers in each computational round. Simulations, among other things, depict the trade-off between the master's cost and the time the system needs to reach a state where the master always receives the correct task result. The second research question we answer in this work concerns the fair and efficient distribution of workers among the masters over multiple computational rounds. Masters with similar tasks compete for the same set of workers in each computational round. Workers must be assigned to the masters in a fair manner, i.e., each worker should be assigned to the master that values its contribution the most. We consider that a master might behave strategically, declaring a dishonest valuation of a worker in each round in an attempt to increase its benefit. This strategic behavior on the side of the masters might lead to unfair and inefficient assignments of workers. Applying renowned auction mechanisms to the problem at hand can be infeasible, since monetary payments would be required on the side of the masters. Hence, we present an alternative mechanism for the fair and efficient distribution of workers in the presence of strategic masters, without the use of monetary incentives. We show analytically that our designed mechanism guarantees fairness, is socially efficient, and is truthful.
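The incentive idea can be sketched with a toy replicator-style dynamic (the payoff values, learning rule, and function name are illustrative assumptions, not the thesis's actual mechanism): a rational worker's cheating probability shrinks whenever auditing and punishment make honesty the strictly better strategy.

```python
def evolve_cheat_prob(audit_p, reward, punishment, cost,
                      p_cheat=0.5, learn_rate=0.05, rounds=500):
    """Replicator-style update: the worker's cheating probability moves in
    the direction of the payoff difference between cheating and honesty."""
    for _ in range(rounds):
        payoff_cheat = -audit_p * punishment   # caught with probability audit_p
        payoff_honest = reward - cost          # paid the reward, pays compute cost
        p_cheat += learn_rate * p_cheat * (1 - p_cheat) * (payoff_cheat - payoff_honest)
        p_cheat = min(1.0, max(0.0, p_cheat))
    return p_cheat

# When auditing and punishment make honesty strictly better, the cheating
# strategy dies out and the master eventually always receives correct results.
print(evolve_cheat_prob(audit_p=0.3, reward=1.0, punishment=2.0, cost=0.4))
```

Raising `audit_p` speeds convergence but costs the master more audits, which is the trade-off the simulations in the thesis depict.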
Simulations favourably compare our designed mechanism with two benchmark auction mechanisms.

This work has been supported by IMDEA Networks Institute and the Spanish Ministry of Education grant FPU2013-03792.

Programa Oficial de Doctorado en Ingeniería Matemática. President: Alberto Tarable. Secretary: José Antonio Cuesta Ruiz. Committee member: Juan Julián Merelo Guervó

    Crowdsourcing atop blockchains

    Traditional crowdsourcing systems, such as Amazon's Mechanical Turk (MTurk), though they once achieved great economic success, have to rely fully on third-party platforms to mediate between the requesters and the workers for basic utilities. These third parties have to be fully trusted to assist payments, resolve disputes, protect data privacy, manage user authentication, maintain service availability, etc. Nevertheless, numerous real-world incidents indicate how elusive it is to completely trust these platforms in reality, and reducing such over-reliance becomes desirable. In contrast to the arguably vulnerable centralized approaches, a public blockchain is a distributed and transparent global consensus computer that is highly robust. The blockchain is usually managed and replicated collectively by a large-scale peer-to-peer network, and is thus far more robust and can be trusted for correctness and availability. It therefore becomes enticing to build novel crowdsourcing applications atop blockchains to reduce the over-trust in third-party platforms. However, this fascinating new technology also brings new challenges, which were never as severe in the conventional centralized setting. The most serious issue is that the blockchain is usually maintained in the public Internet environment, with a broad attack surface open to anyone. This not only causes serious privacy and security issues, but also allows adversaries to exploit the attack surface to hamper more basic utilities. Worse still, most existing blockchains support only light on-chain computation, and a smart contract executed atop the decentralized consensus computer must be simple, which incurs serious feasibility problems. In reality, the privacy/security issues and the feasibility problem even restrain each other, creating serious tensions that hinder the broader adoption of blockchains.
The dissertation works through the non-trivial challenges of realizing secure yet practical decentralization (for urgent crowdsourcing use-cases) and lays down the foundation for this line of research. In sum, it makes the following major contributions. First, it identifies the security requirements needed in decentralized knowledge crowdsourcing (e.g., data privacy) and initiates the research of private decentralized crowdsourcing. In particular, the confidentiality of solicited data is indispensable to prevent free-riders from pirating others' submissions, thus ensuring the quality of the solicited knowledge. To this end, a generic private decentralized crowdsourcing framework is designed, analyzed, and implemented. Furthermore, this dissertation leverages concretely efficient cryptographic design to reduce the cost of the above generic framework. It focuses on decentralizing the special use-case of Amazon MTurk and conducts multiple special-purpose optimizations that remove needless generality to squeeze out performance. The implementation atop Ethereum demonstrates a handling cost even lower than MTurk's. In addition, it considers decentralized crowdsourcing of computing power for specific machine learning tasks. It lets a requester place deposits in the blockchain to recruit workers for a designated (randomized) program. If and only if these workers contribute their resources to compute correctly do they earn their well-deserved payments. For these goals, a simple yet useful incentive mechanism is developed atop the blockchain to deter rational workers from cheating. Finally, the research initiates the first systematic study on crowdsourcing blockchains' full nodes to assist superlight clients (e.g., mobile phones and IoT devices) in reading the blockchain's records.
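The deposit-based incentive can be illustrated with a toy off-chain model (the class name, verification predicate, and amounts are hypothetical; a real deployment would be an Ethereum smart contract): the requester locks funds, and a worker is paid if and only if its submission passes the contract's verification check.

```python
class EscrowContract:
    """Toy model of an on-chain escrow (not real Ethereum code): the
    requester locks a deposit, and each worker is paid only if its
    result passes the contract's verification predicate."""

    def __init__(self, requester, deposit, verify):
        self.requester = requester
        self.balance = deposit  # funds locked in the contract
        self.verify = verify    # e.g. checks a proof or a reference result
        self.payouts = {}

    def submit(self, worker, result, payment):
        """Pay `worker` iff the result verifies and funds remain."""
        if self.balance >= payment and self.verify(result):
            self.balance -= payment
            self.payouts[worker] = self.payouts.get(worker, 0) + payment
            return True
        return False            # incorrect work earns nothing

contract = EscrowContract("alice", deposit=100, verify=lambda r: r == 42)
assert contract.submit("bob", 42, payment=30)       # correct result: paid
assert not contract.submit("carol", 7, payment=30)  # incorrect: unpaid
```

The "if and only if" payment rule is what deters a rational worker from skipping the computation: cheating yields zero payment with certainty under this toy verification.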
This dissertation presents a novel generic solution through the lens of game-theoretic treatment, which solves the long-standing open problem of designing generic superlight clients for all blockchains.

    SGIRP: A Secure and Greedy Intersection-Based Routing Protocol for VANET using Guarding Nodes

    Vehicular Ad Hoc Network (VANET) is an advanced wireless technology in the field of wireless communication that provides better Intelligent Transportation Services (ITS). It is an emerging area of research in vehicular technology owing to its high mobility and frequent link disruption. VANET provides better road services to end users by providing safety to passengers and drivers. Multimedia sharing, e-shopping, safety systems, etc. are some of the ITS services provided by VANET. VANETs are strongly affected by the link disruption problem because of their high mobility and randomness. Security is also a major issue in VANETs nowadays, as attacks degrade network performance. In this thesis, we present a Secure and Greedy Intersection-Based Routing Protocol (SGIRP) to transmit data securely from the source (S) to the destination (D) along the shortest path. For this, we place Guarding Nodes (GNs) at every intersection to relay packets from one intersection to another in a secure manner. A GN helps in calculating the updated shortest paths to D, protects the network from malicious attacks by using an authentication scheme, and also recovers the network from Communication Voids (CV). The GN plays an important role in transmitting data from S to D in a fast and secure way. Finally, we evaluate our proposed SGIRP protocol by deriving and proving lemmas related to the protocol. It is also proved that the SGIRP protocol performs better than the GyTAR protocol in terms of a shorter time delay (T).
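The intersection-based shortest-path computation that the GNs maintain can be sketched as a standard Dijkstra pass over an intersection graph (the map, node names, and delay weights below are invented for illustration; SGIRP's actual metric, authentication, and void recovery are richer):

```python
import heapq

def shortest_route(graph, src, dst):
    """Dijkstra over an intersection graph: nodes are intersections (where
    guarding nodes relay packets) and edge weights model the expected
    forwarding delay along a road segment."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], dst  # walk predecessors back to the source
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1], dist[dst]

# Hypothetical road map: intersections A..D with per-segment delays.
roads = {"A": [("B", 2.0), ("C", 5.0)], "B": [("C", 1.0), ("D", 7.0)],
         "C": [("D", 2.0)]}
print(shortest_route(roads, "A", "D"))  # route A-B-C-D, total delay 5.0
```

In the protocol the path would be recomputed at each intersection from fresh traffic information, which is why the GNs keep "updated" shortest paths rather than a single precomputed route.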

    Election Security Is Harder Than You Think

    Recent years have seen the rise of nation-state interference in elections across the globe, making the ever-present need for more secure elections all the more dire. While certain common-sense approaches have been a typical response in the past, e.g. ``don't connect voting machines to the Internet'' and ``use a voting system with a paper trail'', known-good solutions to improving election security have languished in relative obscurity for decades. These techniques are only now finally being implemented at scale, and that implementation has brought the intricacies of sophisticated approaches to election security into full relief. This dissertation argues that while approaches to improving election security like paper ballots and post-election audits seem straightforward, in reality there are significant practical barriers to sufficient implementation. Overcoming these barriers is a necessary condition for an election to be secure, and while doing so is possible, it requires significant refinement of existing techniques. In order to better understand how election security technology can be improved, I first develop what it means for an election to be secure. I then delve into experimental results regarding voter-verified paper, discussing the challenges presented by paper ballots as well as some strategies to improve the security they can deliver. I examine the post-election audit ecosystem and propose a manifest improvement to audit workload analysis through parallelization. Finally, I show that even when all of these conditions are met (as in a vote-by-mail scenario), there are still wrinkles that must be addressed for an election to be truly secure.

PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/163272/1/matber_1.pd

    Security Evaluation of Practical Quantum Communication Systems

    Modern information and communication technology (ICT), including the Internet, smartphones, cloud computing, the global positioning system, e-commerce, e-health, global communications and the Internet of Things (IoT), all rely fundamentally - for identification, authentication, confidentiality and confidence - on cryptography. However, there is a high chance that most modern cryptographic protocols will be broken upon the arrival of quantum computers. This necessitates taking steps to make current ICT systems secure against quantum computers. This is a huge and time-consuming task, and there is a serious probability that quantum computers will arrive before it is complete. Hence, it is of utmost importance to understand the risk and start planning for the solution now. At this moment, there are two potential paths to a solution. One is the path of post-quantum cryptography: inventing classical cryptographic algorithms that are secure against quantum attacks. Although they are hoped to provide security against quantum attacks in most practical situations, there is no mathematical proof to guarantee unconditional security ('unconditional security' is a technical term meaning that security does not depend on a computational hardness assumption). This has driven many to choose the second path: quantum cryptography (QC). Quantum cryptography - utilizing the power of quantum mechanics - can guarantee unconditional security in theory. In practice, however, device behavior deviates from the modeled behavior, leading to side-channels that can be exploited by an adversary to compromise security. Thus, practical QC systems need to be security-evaluated - i.e., scrutinized and tested for possible vulnerabilities - before they are sold to customers or deployed at large scale. Unfortunately, this task has become more and more demanding as QC systems are being built in various styles, variants and forms in different parts of the globe.
Hence, standardization and certification of security evaluation methods are necessary. Also, a number of compatibility, connectivity and interoperability issues among QC systems require standardization and certification, which makes this an issue of the highest priority. In this thesis, several areas of practical quantum communication systems were scrutinized and tested for the purpose of standardization and certification. At the source side, the calibration mechanism of the outgoing mean photon number - a critical parameter for security - was investigated. As a prototype, the pulse-energy-monitoring system (PEMS) implemented in a commercial quantum key distribution (QKD) machine was chosen and its design validity was tested. It was found that the security of PEMS was based on flawed design logic and on conservative assumptions about Eve's ability. Our results pointed out the limitations of closed security standards developed inside a company and highlighted the need for developing - for security - open standards and testing methodologies in collaboration between research and industry. As my second project, I evaluated the security of a free-space QKD receiver prototype designed for long-distance satellite communication. The existence of a spatial-mode-efficiency-mismatch side-channel was experimentally verified and the attack's feasibility was tested. The work identified a methodology for checking the spatial-mode detector-efficiency mismatch in these types of receivers and showed a simple, implementable countermeasure to block this side-channel. Next, the feasibility of laser damage as a potential tool for eavesdropping was investigated. After testing on two different quantum communication systems, it was confirmed that laser damage has a high chance of compromising the security of a QC system. This work showed that a characterized and side-channel-free system is not necessarily secure, as side-channels can be created on demand.
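The mean photon number is security-critical because a weak coherent pulse carries a Poisson-distributed number of photons, and multi-photon pulses leak information (e.g., to photon-number-splitting attacks). A small sketch of this standard quantum-optics fact (generic physics, not the thesis's PEMS test procedure):

```python
from math import exp, factorial

def poisson_p(n: int, mu: float) -> float:
    """P(n photons) in a weak coherent pulse with mean photon number mu."""
    return exp(-mu) * mu**n / factorial(n)

def multiphoton_prob(mu: float) -> float:
    """Probability that a pulse carries 2+ photons (the security-critical tail)."""
    return 1.0 - poisson_p(0, mu) - poisson_p(1, mu)

# At mu = 0.1, only ~0.47% of pulses are multi-photon; if a flawed
# calibration lets mu drift to 1.0, that rises to ~26%.
print(multiphoton_prob(0.1), multiphoton_prob(1.0))
```

This is why a pulse-energy monitor that fails to bound the outgoing mean photon number undermines the security argument of the whole source.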
The results pointed out that the standardization and certification process must consider laser-damage-related security-critical issues and ensure that such attacks are prevented. Finally, the security proof assumptions of the detector-device-independent QKD (ddiQKD) protocol - which restricted the ability of an eavesdropper - were scrutinized. By introducing several eavesdropping schemes, we showed that ddiQKD security cannot be based on post-selected entanglement. Our results pointed out that testing the validity of assumptions is as important as testing hardware in the standardization and certification process. Several other projects were undertaken, including a security evaluation of a QKD system against a long-wavelength Trojan-horse attack, certification of a countermeasure against a particular attack, analysis of the effects of finite key size and imperfect state preparation in a commercial QKD system, and an experimental demonstration of quantum fingerprinting. All of these works are parts of an iterative process of standardization and certification that a new technology - in this case, quantum cryptography - must go through before being able to supersede the old technology - classical cryptography. I expect that after a few more iterations like the ones outlined in this thesis, the security of practical QC will advance to a state that can be called unconditional, and the technology will truly be able to win the trust needed to be deployed at large scale.

    Robust and cheating-resilient power auctioning on Resource Constrained Smart Micro-Grids

    The principle of Continuous Double Auctioning (CDA) is known to provide an efficient way of matching supply and demand among distributed selfish participants with limited information. However, the literature indicates that the classic CDA algorithms developed for grid-like applications are centralised and insensitive to processing resource capacity, which hinders their application on resource constrained smart micro-grids (RCSMG). An RCSMG loosely describes a micro-grid in which distributed generation and demand are controlled by selfish participants with limited information, limited power storage capacity and low literacy, who communicate over an unreliable infrastructure burdened by limited bandwidth and computationally weak devices. In this thesis, we design and evaluate a CDA algorithm for power allocation in an RCSMG. Specifically, we offer the following contributions towards power auctioning on RCSMGs. First, we extend the original CDA scheme to enable decentralised auctioning. We do this by integrating a token-based, mutual-exclusion (MUTEX) distributed primitive, which ensures that the CDA operates at a reasonably efficient time and message complexity of O(N) and O(log N) respectively, per critical section invocation (auction market execution). Our CDA algorithm scales better and avoids the single point of failure associated with centralised CDAs (which could be exploited adversarially to provoke a breakdown of the grid marketing mechanism). In addition, the decentralised approach in our algorithm can help eliminate the privacy and security concerns associated with centralised CDAs. Second, to handle CDA performance issues due to malfunctioning devices on an unreliable network (such as a lossy network), we extend our proposed CDA scheme to ensure robustness to failure. Using node redundancy, we modify the MUTEX protocol supporting our CDA algorithm to handle fail-stop and some Byzantine-type faults of sites.
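The core CDA matching step can be sketched as follows (a toy single-unit order book with a midpoint clearing price; the prices and function name are illustrative, and the thesis's decentralised scheme wraps each such market execution in the MUTEX critical section):

```python
import heapq

def cda_match(order, bids, asks):
    """Toy continuous double auction step: insert the incoming order and
    clear trades while the best bid meets or exceeds the best ask."""
    side, price = order
    if side == "bid":
        heapq.heappush(bids, -price)   # max-heap of bids via negation
    else:
        heapq.heappush(asks, price)    # min-heap of asks
    trades = []
    while bids and asks and -bids[0] >= asks[0]:
        bid, ask = -heapq.heappop(bids), heapq.heappop(asks)
        trades.append((bid + ask) / 2)  # midpoint clearing price
    return trades

bids, asks = [], []
cda_match(("bid", 10.0), bids, asks)        # no ask yet: no trade
cda_match(("ask", 12.0), bids, asks)        # spread still open: no trade
print(cda_match(("ask", 9.0), bids, asks))  # crosses the spread: trade at 9.5
```

In the decentralised version, the token-holding node would execute exactly this clearing logic inside its critical section before passing the token on.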
This yields a time complexity of O(N), where N is the number of cluster-head nodes, and a message complexity of O(log N + W), where W is the number of check-pointing messages. These results indicate that it is possible to add fault tolerance to a decentralised CDA, guaranteeing continued participation in the auction while retaining reasonable performance overheads. In addition, we propose a decentralised consumption scheduling scheme that complements the auctioning scheme in guaranteeing successful power allocation within the RCSMG. Third, since grid participants are self-interested, we must consider the issue of power theft, which is provoked when participants cheat. We propose threat models centred on cheating attacks aimed at foiling the extended CDA scheme. More specifically, we focus on the Victim Strategy Downgrade, Collusion by Dynamic Strategy Change, Profiling with Market Prediction, and Strategy Manipulation cheating attacks, which are carried out by internal adversaries (auction participants). Internal adversaries are participants who want to gain more benefits but have no interest in provoking a breakdown of the grid. However, their behaviour is dangerous because it could result in such a breakdown. Fourth, to mitigate these cheating attacks, we propose an exception handling (EH) scheme, in which sentinel agents use allocative efficiency and message overheads to detect and mitigate forms of cheating. Sentinel agents are tasked with monitoring trading agents to detect cheating and reprimand misbehaving participants. Overall, the message complexity expected under light demand is O(n log N). The detection and resolution algorithm is expected to run in linear time complexity O(M). Overall, the main aim of our study is achieved by designing a resilient and cheating-free CDA algorithm that is scalable and performs well on resource constrained micro-grids.
With the growing popularity of the CDA and its resource allocation applications, specifically in low-resourced micro-grids, this thesis highlights further avenues for future research. First, we intend to extend the decentralised CDA algorithm to allow participants' mobile phones to connect (and reconnect) at different shared smart meters. Such mobility should guarantee the desired CDA properties, reliability, and adequate security. Second, we seek to develop a simulation of the decentralised CDA based on the formal proofs presented in this thesis. Such a simulation platform can be used for future studies involving decentralised CDAs. Third, we seek an optimal and efficient way in which the decentralised CDA and the scheduling algorithm can be integrated and deployed in a low-resourced smart micro-grid. Such an integration is important for system developers interested in exploiting the benefits of the two schemes while maintaining system efficiency. Fourth, we aim to improve the cheating detection and mitigation mechanism by developing an intrusion tolerance protocol. Such a scheme will allow continued auctioning in the presence of cheating attacks while incurring low performance overheads, for applicability in an RCSMG.