
    Dirty Paper Arbitrarily Varying Channel with a State-Aware Adversary

    In this paper, we take an arbitrarily varying channel (AVC) approach to examine the problem of writing on dirty paper in the presence of an adversary. We consider an additive white Gaussian noise (AWGN) channel with an additive white Gaussian state, where the state is known non-causally to the encoder and the adversary, but not to the decoder. We determine the randomized coding capacity of this AVC under the maximal probability of error criterion. Interestingly, it is shown that the jamming adversary disregards its state knowledge and chooses a white Gaussian channel input that is independent of the state.
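
    For intuition only (this is not necessarily the paper's exact theorem; the symbols P for the transmit power, Λ for the adversary's power, and N for the ambient noise variance are assumptions, not the paper's notation), the result described above is consistent with a dirty-paper-style capacity in which the known state is cancelled by the encoder and the adversary contributes only as extra white Gaussian noise:

```latex
% Illustrative AWGN-equivalent form; P, \Lambda, N are assumed symbols,
% not necessarily the paper's notation.
\[
  C \;=\; \tfrac{1}{2}\log_2\!\Bigl(1 + \frac{P}{N + \Lambda}\Bigr)
  \quad\text{bits per channel use.}
\]
```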

    On AVCs with Quadratic Constraints

    In this work we study an Arbitrarily Varying Channel (AVC) with quadratic power constraints on the transmitter and a so-called "oblivious" jammer (along with additional AWGN) under a maximum probability of error criterion, and no private randomness between the transmitter and the receiver. This is in contrast to similar AVC models under the average probability of error criterion considered in [1], and models wherein common randomness is allowed [2] -- these distinctions are important in some communication scenarios outlined below. We consider the regime where the jammer's power constraint is smaller than the transmitter's power constraint (in the other regime it is known that no positive rate is possible). For this regime we show the existence of stochastic codes (with no common randomness between the transmitter and receiver) that enable reliable communication at the same rate as when the jammer is replaced with AWGN with the same power constraint. This matches known information-theoretic outer bounds. In addition to being a stronger result than that in [1] (enabling recovery of the results therein), our proof techniques are also somewhat more direct, and hence may be of independent interest.
    Comment: A shorter version of this work will be sent to ISIT13, Istanbul. 8 pages, 3 figures.
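
    As a hedged illustration of the claimed regime and rate (the symbols P, Λ, and σ² for the transmitter's power, the jammer's power, and the AWGN variance are assumptions, not the paper's notation), the achievable rates described above take the AWGN-equivalent form:

```latex
% Illustrative only: rates achievable when the oblivious jammer is no
% stronger than the transmitter; symbols are assumed, not the paper's.
\[
  R \;<\; \tfrac{1}{2}\log_2\!\Bigl(1 + \frac{P}{\sigma^2 + \Lambda}\Bigr),
  \qquad \Lambda < P .
\]
```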

    An Empirical Analysis of Privacy in Cryptocurrencies

    Cryptocurrencies have emerged as an important technology over the past decade and have, undoubtedly, become blockchain's most popular application. Bitcoin has been by far the most popular of the thousands of cryptocurrencies that have been created. Some of the features that made Bitcoin such a fascinating technology are that its transactions are publicly available and permanently stored, and that anyone can access them. Despite this transparency, it was initially believed that Bitcoin provides anonymity to its users, since it allows them to transact under a pseudonym instead of their real identity. However, a long line of research has shown that this initial belief was false and that, given the appropriate tools, Bitcoin transactions can indeed be traced back to the real-life entities performing them. In this thesis, we survey the anonymity aspect of cryptocurrencies. We start with early works that made the first efforts to analyse how private this new technology was. We analyse anonymity both from the perspective of a passive observer who sees only the public, immutable record of transactions (the blockchain) and from that of an observer who also has access to network-layer information. We then look into projects that aim to enhance the anonymity provided by cryptocurrencies and analyse the evidence of how much they succeed in practice. In the first part of our own contributions we present our own take on Bitcoin's anonymity, inspired by the existing research. We extend existing heuristics and provide a novel methodology for measuring the confidence we have in our anonymity metrics, instead of treating the issue as binary, as in previous research. In the second part we provide the first full-scale empirical study measuring anonymity in Zcash, a cryptocurrency built with privacy guarantees based on well-established cryptography. We show that building a tool which provides anonymity in theory is very different from the privacy offered in practice once users start to transact with it. Finally, we look into a technology that is not a cryptocurrency itself but is built on top of Bitcoin as a so-called layer-2 solution, the Lightning Network. Again, our measurements reveal serious privacy concerns with this technology, some of them novel and highly applicable.
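
    As a minimal sketch of the kind of address-clustering heuristic such analyses build on, the snippet below implements the well-known common-input-ownership ("multi-input") heuristic with a union-find structure. It is illustrative only, not the thesis's code; the transaction layout and all names are assumptions.

```python
# Minimal sketch of the common-input-ownership ("multi-input") heuristic
# used in Bitcoin de-anonymisation studies. The transaction layout and
# names are illustrative; real analyses parse the full blockchain.

class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

def cluster_addresses(transactions):
    """Group addresses that co-appear as inputs of the same transaction,
    assuming they are controlled by the same entity."""
    uf = UnionFind()
    for tx in transactions:
        inputs = tx["inputs"]
        for addr in inputs[1:]:
            uf.union(inputs[0], addr)
    clusters = {}
    for tx in transactions:
        for addr in tx["inputs"]:
            clusters.setdefault(uf.find(addr), set()).add(addr)
    return list(clusters.values())

if __name__ == "__main__":
    txs = [
        {"inputs": ["addr_A", "addr_B"]},
        {"inputs": ["addr_B", "addr_C"]},
        {"inputs": ["addr_D"]},
    ]
    # Expected: two clusters, {addr_A, addr_B, addr_C} and {addr_D}
    print(cluster_addresses(txs))
```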

    Tradeoffs between Anonymity and Quality of Services in Data Networking and Signaling Games

    Timing analysis has long been used to compromise users' anonymity in networks. Even when data is encrypted, an adversary can track flows from sources to the corresponding destinations merely by using the correlation between the inter-packet timing on incoming and outgoing streams at intermediate routers. Anonymous network systems, where users communicate without revealing their identities, rely on the idea of Chaum mixing to hide 'networking information'. Chaum mixes are routers or proxy servers that randomly reorder outgoing packets to prevent an eavesdropper from tracking the flow of packets. The effectiveness of such mixing strategies is, however, diminished under constraints on network Quality of Service (QoS) such as memory, bandwidth, and fairness. In this work, two models for studying anonymity, packet-based anonymity and flow-based anonymity, are proposed to address these issues quantitatively, and the trade-off between network constraints and achieved anonymity is studied.

    The packet-based anonymity model is proposed to study short-burst traffic arrival models, such as web browsing. For packet-based anonymity, an information-theoretic investigation of mixes under memory and fairness constraints is established. Specifically, for memory-constrained mixes, the first single-letter characterization of the maximum achievable anonymity for a mix serving two users with equal arrival rates is provided. Further, for two users with unequal arrival rates the anonymity is expressed as the solution to a series of finite recursive equations. In addition, for more than two users and arbitrary arrival rates, a lower bound on the convergence rate of anonymity as the buffer size increases is derived, and it is shown that under certain arrival configurations the lower bound is tight. The adverse effect of the fairness requirement in data networking on anonymous networking is also studied using the packet-based anonymity model, and a novel temporal fairness index is proposed to compare the tradeoff between fairness and achieved anonymity for three diverse and popular fairness paradigms: First Come First Serve (FCFS), Fair Queuing, and the Proportional Method. It is shown that the FCFS and Fair Queuing algorithms have little inherent anonymity. A significant improvement in anonymity is therefore achieved by relaxing the fairness paradigms. The analysis of the relaxed FCFS criterion, in particular, is accomplished by modeling the problem as a Markov Decision Process (MDP). The proportional method of scheduling, while avoided in networks today, is shown to significantly outperform the other fair scheduling algorithms in anonymity, and is proven to be asymptotically optimal as the buffer size of the scheduler increases. (A toy simulation illustrating the buffer-size versus anonymity trade-off is sketched after this abstract.)

    The flow-based anonymity model is proposed to study long-stream traffic, such as media streaming. A detection-theoretic measure of anonymity is proposed to study the optimization of mixing strategies under network constraints for this model. Specifically, using the adversary's detection time as a metric, the effectiveness of mixing strategies is maximized under constraints on memory and throughput. A general game-theoretic model is proposed to study mixing strategies when an adversary is capable of capturing a fraction of incoming packets. For the proposed multistage game, the existence of a Nash equilibrium is proven, and the optimal strategies for the mix and the adversary are derived at the equilibrium.

    It is noted in this work that the major literature on anonymity on the Internet focuses on achieving anonymity using third parties such as mixes or onion routers, while the contribution of users' individual actions, such as accessing multiple websites to hide the targeted website or using multiple proxy servers to hide traffic routes, is overlooked. In this thesis, a signaling game model is proposed to study precisely these kinds of problems. Fundamentally, signaling games consist of two players, senders and receivers, and each sender belongs to one of multiple types. The users who seek anonymity are modeled as the senders of a signaling game, and their types are identified by the personal information that they want to hide. The eavesdroppers are modeled as the receivers. Senders transmit messages to receivers. The transmission of these messages can be seen as inevitable actions that a user has to take in daily life, such as the newspapers he/she subscribes to on the Internet or the online shopping he/she does, but these messages are liable to reveal the user's identity, such as his/her political affiliation or affluence level. The receiver (eavesdropper) uses these messages to infer the sender's type and takes optimal actions according to his belief about the sender's type. Senders choose their messages to increase their reward, given that they know the receivers' optimal policies for choosing actions based on the transmitted messages. However, sending messages that increase the senders' reward may reveal their type to the receivers, thus violating their privacy, and can later be used by the eavesdropper to harm them. In this work, the payoff of the signaling game is adjusted to incorporate the information revealed to an eavesdropper, so that this information leakage is minimized from the users' perspective. The existence of a Bayesian-Nash equilibrium is proven for the signaling games even after the incorporation of users' anonymity. It is also proven that the equilibrium point is unique if the desired anonymity is below a certain threshold.
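
    As a toy illustration of the buffer-size versus anonymity trade-off referenced above (this is not the thesis's information-theoretic model; all parameters and names are assumptions), the following Python sketch simulates a mix that releases a uniformly random buffered packet whenever its buffer overflows, and estimates the Shannon entropy of a tagged packet's departure position as a crude anonymity proxy.

```python
# Toy Monte-Carlo sketch (not the thesis's model): a mix with a bounded
# buffer releases a uniformly random buffered packet each time it would
# overflow. We estimate the entropy of a tagged packet's departure
# position; larger buffers yield higher entropy (more anonymity).
import math
import random
from collections import Counter

def departure_entropy(buffer_size, n_packets=100, tagged=50, trials=5000):
    counts = Counter()
    for _ in range(trials):
        buf, out = [], []
        for t in range(n_packets):
            buf.append(t)                        # packet t arrives
            if len(buf) > buffer_size:           # buffer full: release one at random
                out.append(buf.pop(random.randrange(len(buf))))
        while buf:                               # drain remaining packets
            out.append(buf.pop(random.randrange(len(buf))))
        counts[out.index(tagged)] += 1           # where did the tagged packet leave?
    probs = [c / trials for c in counts.values()]
    return -sum(p * math.log2(p) for p in probs)

if __name__ == "__main__":
    for b in (1, 2, 4, 8, 16):
        print("buffer=%2d  entropy ~ %.2f bits" % (b, departure_entropy(b)))
```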

    Protecting applications using trusted execution environments

    While cloud computing has been broadly adopted, companies that deal with sensitive data are still reluctant to do so due to privacy concerns or legal restrictions. Vulnerabilities in complex cloud infrastructures, resource sharing among tenants, and malicious insiders pose a real threat to the confidentiality and integrity of sensitive customer data. In recent years trusted execution environments (TEEs), hardware-enforced isolated regions that can protect code and data from the rest of the system, have become available as part of commodity CPUs. However, designing applications for execution within TEEs requires careful consideration of the elevated threats that come with running in a fully untrusted environment. Interaction with the environment should be minimised, but some cooperation with the untrusted host is required, e.g. for disk and network I/O, via a host interface. Implementing this interface while maintaining the security of sensitive application code and data is a fundamental challenge. This thesis addresses this challenge and discusses how TEEs can be leveraged to secure existing applications efficiently and effectively in untrusted environments. We explore this in the context of three systems that deal with the protection of TEE applications and their host interfaces.

    SGX-LKL is a library operating system that can run full unmodified applications within TEEs with a minimal general-purpose host interface. By providing broad system support inside the TEE, the reliance on the untrusted host can be reduced to a minimal set of low-level operations that cannot be performed inside the enclave. SGX-LKL transparently protects the host interface, including both disk and network I/O.

    Glamdring is a framework for the semi-automated partitioning of TEE applications into an untrusted and a trusted compartment. Based on source-level annotations, it uses either dynamic or static code analysis to identify sensitive parts of an application. Taking into account the objectives of a small TCB size and low host interface complexity, it defines an application-specific host interface and generates partitioned application code.

    EnclaveDB is a secure database using Intel SGX, based on a partitioned in-memory database engine. The core of EnclaveDB is its logging and recovery protocol for transaction durability. For this, it relies on the database log managed and persisted by the untrusted database server. EnclaveDB protects against advanced host interface attacks and ensures the confidentiality, integrity, and freshness of sensitive data.
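
    A minimal sketch of the general idea of protecting untrusted host storage behind a narrow interface, not SGX-LKL's or EnclaveDB's actual implementation: blocks handed to the host carry a MAC keyed with a secret that never leaves the enclave and are verified on read-back. Class and parameter names are illustrative; a real system would also encrypt blocks and defend against rollback.

```python
# Minimal sketch (not SGX-LKL's code) of an integrity-protected host
# storage interface: blocks handed to the untrusted host carry a MAC
# bound to the block index; reads are verified before use. A real system
# would also encrypt blocks and add freshness/rollback protection.
import hmac
import hashlib
import os

class ProtectedBlockStore:
    def __init__(self, untrusted_store):
        self._key = os.urandom(32)       # would be sealed inside the TEE
        self._host = untrusted_store     # e.g. a dict acting as untrusted disk

    def _mac(self, index, data):
        msg = index.to_bytes(8, "little") + data   # bind MAC to block index
        return hmac.new(self._key, msg, hashlib.sha256).digest()

    def write(self, index, data):
        self._host[index] = (data, self._mac(index, data))

    def read(self, index):
        data, tag = self._host[index]
        if not hmac.compare_digest(tag, self._mac(index, data)):
            raise ValueError("host returned tampered block %d" % index)
        return data

if __name__ == "__main__":
    store = ProtectedBlockStore(untrusted_store={})
    store.write(0, b"sensitive record")
    print(store.read(0))                             # verified read succeeds
    store._host[0] = (b"evil", store._host[0][1])    # simulate host tampering
    try:
        store.read(0)
    except ValueError as err:
        print("detected:", err)
```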

    Computer security by means of hardware-intrinsic authentication

    Advisors: Guido Costa Souza de Araújo, Mario Lúcio Côrtes, and Diego de Freitas Aranha. Doctoral thesis, Universidade Estadual de Campinas, Instituto de Computação (Doctorate in Computer Science).
    Abstract: This work presents Computer Security by Hardware-Intrinsic Authentication (CSHIA), a secure computer architecture for embedded systems that aims at providing authenticity and integrity for code and data. The work encompassed three phases: design, implementation, and security evaluation. In the design phase, we laid out the basic ideas behind CSHIA, namely how integrity and authenticity are guaranteed through the use of Physical Unclonable Functions (PUFs), and we proposed an algorithm to extract cryptographic keys from the intrinsic memories (caches) of processors. In the implementation phase, we made CSHIA's design more flexible, allowing different configurations without compromising security. We then evaluated CSHIA's performance and overheads, such as chip area, energy consumption, and additional memory, for multiple configurations. Finally, we evaluated the security of PUFs, which led us to develop a new side-channel attack that circumvents PUFs' uniqueness property through their architectural elements.
    Funding: FAPESP (2015/06829-2, 2016/25532-3); CNPq (147614/2014-7).
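
    As an illustrative sketch, not the thesis's key-extraction algorithm, the snippet below shows one common way a stable key can be derived from noisy memory-based PUF readings: repeated power-up readings are majority-voted per bit and the stabilised bit-string is hashed into a key. All function and parameter names are assumptions; practical designs use fuzzy extractors with helper data and error-correcting codes.

```python
# Illustrative sketch (not CSHIA's algorithm) of deriving a key from noisy
# PUF readings: majority-vote each bit across repeated power-up readings,
# then hash the stabilised bit-string into a fixed-length key.
import hashlib
import random

def simulate_puf_reading(reference_bits, bit_error_rate=0.05):
    """Model one noisy power-up reading of an SRAM/cache PUF."""
    return [b ^ (random.random() < bit_error_rate) for b in reference_bits]

def majority_vote(readings):
    n = len(readings)
    return [1 if sum(bits) * 2 > n else 0 for bits in zip(*readings)]

def derive_key(readings):
    stable = majority_vote(readings)
    usable = len(stable) - len(stable) % 8          # whole bytes only
    raw = bytes(
        int("".join(map(str, stable[i:i + 8])), 2)
        for i in range(0, usable, 8)
    )
    return hashlib.sha256(raw).hexdigest()

if __name__ == "__main__":
    reference = [random.getrandbits(1) for _ in range(256)]   # device fingerprint
    readings = [simulate_puf_reading(reference) for _ in range(15)]
    print("derived key:", derive_key(readings))
```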