Garbling Schemes and Applications

Abstract

The topic of this thesis is garbling schemes and their applications. A garbling scheme is a set of algorithms for realizing secure two-party computation. A party called the client possesses a private algorithm as well as a private input and would like to compute the algorithm on this input. However, the client might not have enough computational resources to evaluate the algorithm on the input on his own. The client therefore outsources the computation to another party, called the evaluator. Since the client wants to protect the algorithm and the input, he cannot simply send them to the evaluator. With a garbling scheme, the client can protect the privacy of the algorithm, the privacy of the input and possibly also the privacy of the output.

The increase in network-based applications has raised concerns about the privacy of user data. Therefore, privacy-preserving or privacy-enhancing techniques have gained interest in recent research. Garbling schemes seem to be an ideal solution for privacy-preserving applications. First of all, secure garbling schemes hide the algorithm and its input. Secondly, garbling schemes are known to have efficient implementations.

In this thesis, we propose two applications utilizing garbling schemes. The first application provides privacy-preserving electronic surveillance. The second extends electronic surveillance to more versatile monitoring, including health telemetry. This kind of application would be ideal for assisted living services.

In this work, we also present theoretical results related to garbling schemes. We present several new security definitions for garbling schemes that are of practical use. Traditionally, a garbled algorithm can be evaluated only once, on a single garbled input. In applications, however, the same function is often evaluated several times with different inputs. A recent solution based on fully homomorphic encryption provides arbitrarily reusable garbling schemes. The disadvantage of this approach is that arbitrary reuse cannot be implemented efficiently, owing to the inefficiency of fully homomorphic encryption.

We propose an alternative approach: instead of arbitrary reusability, the same garbled algorithm can be reused a limited number of times. This gives rise to a set of new security classes for garbling schemes. We prove several relations between the new and the established security definitions. As a result, we obtain a complex hierarchy which can be represented as a product of three directed graphs. The three graphs represent the different flavors of security: the security notion, the security model and the level of reusability.
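To give a concrete, if simplified, picture of how such a hierarchy arises, the sketch below (in Python) forms the product of three small directed implication graphs, one per flavor of security. The node names and edges are illustrative placeholders rather than the actual classes and relations proved in the thesis; the sketch only assumes that an implication between combined classes holds exactly when it holds in every component.

    from itertools import product

    # Illustrative component graphs; an edge a -> b is read as "a implies b".
    # The actual notions, models and reusability levels in the thesis may differ.
    notion = {"prv": set(), "obv": set(), "aut": set()}   # security notion
    model  = {"sim": {"ind"}, "ind": set()}                # security model
    reuse  = {"reuse_n": {"reuse_1"}, "reuse_1": set()}    # level of reusability

    def reachable(graph, start):
        # Every node reachable from start, including start itself.
        seen, stack = {start}, [start]
        while stack:
            for nxt in graph[stack.pop()]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

    def implies(a, b):
        # Combined class a implies combined class b iff it does so componentwise.
        return all(b_i in reachable(g, a_i)
                   for g, a_i, b_i in zip((notion, model, reuse), a, b))

    classes = list(product(notion, model, reuse))
    edges = [(a, b) for a in classes for b in classes if a != b and implies(a, b)]
    print(len(classes), "combined security classes,", len(edges), "implications")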
In addition to defining new security classes, we improve the definition of the side-information function, which has a central role in defining the security of a garbling scheme. The information that the garbled algorithm and the garbled input are allowed to leak depends on the representation of the algorithm. The established definition of side-information models the side-information of circuits perfectly but does not model the side-information of Turing machines equally well. The established model requires that the length of the argument, the length of the final result and the length of the function be efficiently computable from the side-information. Moreover, the side-information depends only on the function; in other words, the length of the argument, the length of the final result and the length of the function may depend only on the function.

For circuits this is a natural requirement, since the number of input wires gives the length of the argument, the number of output wires gives the length of the final result, and the number of gates and wires gives the size of the function. The description of a Turing machine, on the other hand, does not place any limit on the length of the argument. Therefore, side-information that depends only on the function cannot provide information about the length of the argument. To tackle this problem, we extend the model of side-information so that the side-information depends on both the function and the argument. The new model of side-information allows us to define further new security classes. We show that the old security classes are compatible with the new model of side-information. We also prove relations between the new security classes.
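The difference between the two models of side-information can be illustrated with a small sketch (in Python; the function and field names are hypothetical and chosen for illustration only). For a circuit, the lengths that the established model needs can be read off the function alone, whereas the description of a Turing machine does not determine the length of its argument, so the extended model takes the argument as a second parameter.

    # Established model: side-information is a function of the algorithm only.
    def phi_circuit(circuit):
        # A circuit fixes the length of the argument (input wires), the length
        # of the final result (output wires) and its own size (gates).
        return {
            "arg_len":    len(circuit["input_wires"]),
            "result_len": len(circuit["output_wires"]),
            "func_size":  len(circuit["gates"]),
        }

    # Extended model: side-information depends on the algorithm and the argument.
    def phi_extended(machine, argument):
        # A Turing machine's description places no bound on its argument,
        # so the length of the argument is read from the argument itself.
        return {
            "arg_len":    len(argument),
            "result_len": machine["result_len"],   # illustrative; could also depend on the argument
            "func_size":  len(machine["transitions"]),
        }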
