
    Approximate Sparse Recovery: Optimizing Time and Measurements

    An approximate sparse recovery system consists of parameters $k$, $N$, an $m$-by-$N$ measurement matrix $\Phi$, and a decoding algorithm $\mathcal{D}$. Given a vector $x$, the system approximates $x$ by $\widehat{x} = \mathcal{D}(\Phi x)$, which must satisfy $\|\widehat{x} - x\|_2 \le C \|x - x_k\|_2$, where $x_k$ denotes the optimal $k$-term approximation to $x$. For each vector $x$, the system must succeed with probability at least 3/4. Among the goals in designing such systems are minimizing the number $m$ of measurements and the runtime of the decoding algorithm $\mathcal{D}$. In this paper, we give a system with $m = O(k \log(N/k))$ measurements, matching a lower bound up to a constant factor, and decoding time $O(k \log^c N)$, matching a lower bound up to $\log(N)$ factors. We also consider the encode time (i.e., the time to multiply $\Phi$ by $x$), the time to update measurements (i.e., the time to multiply $\Phi$ by a 1-sparse $x$), and the robustness and stability of the algorithm (adding noise before and after the measurements). Our encode and update times are optimal up to $\log(N)$ factors.
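    As an illustration of the guarantee above (a toy sketch, not the paper's decoder; the helper names `best_k_term` and `meets_guarantee` are made up): returning the best $k$-term approximation itself trivially satisfies the bound for any $C \ge 1$, since the error then equals $\|x - x_k\|_2$.

```python
import math

def best_k_term(x, k):
    """Best k-term approximation x_k: keep the k largest-magnitude entries."""
    keep = set(sorted(range(len(x)), key=lambda i: abs(x[i]), reverse=True)[:k])
    return [x[i] if i in keep else 0.0 for i in range(len(x))]

def l2(v):
    return math.sqrt(sum(t * t for t in v))

def meets_guarantee(x_hat, x, k, C):
    """The l2/l2 recovery guarantee: ||x_hat - x||_2 <= C * ||x - x_k||_2."""
    xk = best_k_term(x, k)
    err = l2([a - b for a, b in zip(x_hat, x)])
    opt = l2([a - b for a, b in zip(x, xk)])
    return err <= C * opt

x = [10.0, -7.0, 0.1, 0.05, -0.02]
x_hat = best_k_term(x, 2)  # returning x_k itself meets the bound for any C >= 1
print(meets_guarantee(x_hat, x, 2, 2.0))  # True
```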

    Symmetric-key Corruption Detection: When XOR-MACs Meet Combinatorial Group Testing

    We study a class of MACs, which we call corruption-detectable MACs, that can not only check the integrity of the whole message but also detect which part of the message is corrupted. This can be seen as an application of classical Combinatorial Group Testing (CGT) to message authentication. However, previous work on this application has an inherent limitation in communication cost. We present a novel approach that combines CGT with a class of linear MACs (XOR-MACs) to break this limit. Our proposal, XOR-GTM, has a significantly smaller communication cost than any previous scheme while keeping the same corruption-detection capability. Our numerical examples for a storage application show a reduction in communication by a factor of around 15 to 70 compared with previous schemes. XOR-GTM is parallelizable and is as efficient as standard MACs. We prove that XOR-GTM is secure under standard pseudorandomness assumptions.
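    A minimal sketch of the group-testing idea behind such schemes, assuming a toy test matrix and HMAC-SHA256 as a stand-in MAC (the decoder and helper names are illustrative, not the XOR-GTM construction itself): each test's tag is the XOR of its member items' MACs, and after corruption an item is suspected iff every test covering it fails.

```python
import hmac, hashlib
from functools import reduce

KEY = b"demo-key"

def mac(item: bytes) -> bytes:
    return hmac.new(KEY, item, hashlib.sha256).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Non-adaptive test matrix: rows are tests, columns are data items.
TESTS = [
    [0, 1],      # test 0 covers items 0 and 1
    [1, 2],
    [0, 2, 3],
    [3],
]

def aggregate_tags(items):
    """One XOR-aggregated tag per test (the XOR-MAC idea)."""
    return [reduce(xor, (mac(items[j]) for j in row)) for row in TESTS]

def locate_corruption(items, tags):
    """Naive group-testing decoder: an item is suspect iff every test
    containing it fails."""
    failing = {i for i, row in enumerate(TESTS)
               if reduce(xor, (mac(items[j]) for j in row)) != tags[i]}
    return [j for j in range(len(items))
            if all(i in failing for i, row in enumerate(TESTS) if j in row)]

items = [b"a", b"b", b"c", b"d"]
tags = aggregate_tags(items)
items[2] = b"X"                         # corrupt item 2
print(locate_corruption(items, tags))   # [2]
```

Note that this naive decoder recomputes a MAC for every (test, item) pair; the paper's contribution is precisely avoiding such recomputation while preserving the detection capability.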

    Efficient Message Authentication Codes with Combinatorial Group Testing

    A message authentication code (MAC) is a symmetric-key cryptographic function for authenticity. A standard MAC verification only tells whether the message is valid or invalid, so we cannot identify which part is corrupted in the case of an invalid message. In this paper we study a class of MAC functions that enables identifying the corrupted part, which we call group testing MAC (GTM). This can be seen as an application of classical (non-adaptive) combinatorial group testing to MACs. Although the basic concept of GTM (or its keyless variant) has been proposed in various application areas, such as data forensics and computer virus testing, prior work treats the underlying MAC function as a black box, and the exact computation cost of GTM seems to have been overlooked. In this paper, we study the computational aspect of GTM and show that a simple yet non-trivial extension of the parallelizable MAC (PMAC) enables O(m+t) computation for m data items and t tests, irrespective of the underlying test matrix, under a natural security model. This greatly improves efficiency over naively applying a black-box MAC for each test, which requires O(mt) time. Based on existing group testing methods, we also present experimental results for our proposal and observe that it runs as fast as taking a single MAC tag, with a speed-up over the conventional method by a factor of around 8 to 15 for m = 10^4 to 10^5 items.
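    The cost gap can be sketched in a few lines: the naive black-box scheme invokes the MAC once per (test, item) entry, while computing each item's MAC once and XOR-combining per test needs only m MAC calls plus cheap XORs. (The test matrix below is a made-up example; the actual O(m+t) bound comes from the PMAC-style construction in the paper, not from this simplification.)

```python
import hmac, hashlib
from functools import reduce

calls = 0  # count MAC invocations

def mac(item: bytes) -> bytes:
    global calls
    calls += 1
    return hmac.new(b"demo-key", item, hashlib.sha256).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

tests = [[0, 1], [1, 2], [0, 2, 3], [3]]
items = [b"a", b"b", b"c", b"d"]

# Naive black-box GTM: one MAC call per (test, item) entry -> O(mt).
calls = 0
naive = [reduce(xor, (mac(items[j]) for j in row)) for row in tests]
naive_calls = calls

# Single pass: MAC each item once, then XOR-combine per test -> O(m) calls.
calls = 0
macs = [mac(it) for it in items]
single = [reduce(xor, (macs[j] for j in row)) for row in tests]
single_calls = calls

print(naive_calls, single_calls)   # 8 4
print(naive == single)             # True: identical tags, fewer MAC calls
```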

    Using combinatorial group testing to solve integrity issues

    Master's dissertation - Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Ciência da Computação, Florianópolis, 2015. The use of electronic documents to share information is of fundamental importance, as is guaranteeing their integrity and authenticity. To prove ownership of, or agreement with, the content of a paper document, a person signs it. If the document is modified after signing, it is usually possible to locate the modifications through visible erasures. Similar techniques exist for digital documents, known as digital signatures, but properties such as identifying the modifications are lost. By determining which parts of a document were modified, the receiver could check whether those modifications occurred in important, irrelevant, or even expected parts of the document. In some applications a limited number of modifications is allowed, but it is necessary to keep track of where they occurred, as in electronic forms; in others, modifications are not allowed, but it is important to access the parts whose integrity is guaranteed, or even to use the location of the modifications for forensic investigation.
    Abstract: We consider the problem of partially ensuring the integrity and authenticity of signed data. Two scenarios are considered: the first is locating modifications in a signed document, and the second is locating invalid signatures in a set of individually signed data. In the first scenario we propose a digital signature scheme capable of locating modifications in a document. We divide the document to be signed into n blocks and assume a threshold d for the maximum number of modified blocks that the signature scheme can locate. We propose efficient algorithms for the signing and verification steps which yield a reasonably compact signature: for fixed d, we add only O(log n) hashes to the size of a traditional signature, while allowing the identification of up to d modified blocks. In the scenario of locating invalid signatures in a set of individually signed data, we introduce the concept of levels of signature aggregation. With this method the verifier can distinguish the valid data from the invalid, in contrast to traditional aggregation, where even a single invalid piece of data would invalidate the whole set. Moreover, the number of signatures transmitted is much smaller than in a batch verification method, which requires sending all the signatures individually. We consider an application to outsourced databases in which every stored tuple is individually signed. As the result of a query, the server returns n tuples and a set of t signatures aggregated by the server (with t much smaller than n). The querier performs at most t signature verifications to check the integrity of all n tuples; even if some of the tuples are invalid, it can identify exactly which ones are valid. We provide efficient algorithms to aggregate, verify, and identify the modified tuples. Both schemes are based on non-adaptive combinatorial group testing and cover-free matrices; we present detailed constructions of cover-free matrices from the literature and their application to the proposed schemes, along with complexity analyses and experimental results demonstrating their efficiency.

    Image-set, Temporal and Spatiotemporal Representations of Videos for Recognizing, Localizing and Quantifying Actions

    This dissertation addresses the problem of learning video representations, defined here as transforming a video so that its essential structure is made more visible or accessible for action recognition and quantification. In the literature, a video can be represented by a set of images, by modeling motion or temporal dynamics, or by a 3D graph with pixels as nodes. This dissertation proposes a set of models to localize, track, segment, recognize, and assess actions: (1) image-set models that aggregate subset features given by regularized, normalized CNNs; (2) image-set models based on inter-frame principal recovery and sparse coding of residual actions; (3) temporally local models with spatially global motion estimated by robust feature matching and local motion estimated by action detection with an added motion model; (4) spatiotemporal models, a 3D graph and a 3D CNN, that treat time as a space dimension; and (5) supervised hashing that jointly learns the embedding and the quantization. State-of-the-art performance is achieved on tasks such as quantifying facial pain and assessing human diving.
    The primary conclusions of this dissertation are as follows: (i) image sets can capture facial actions as a collective representation; (ii) sparse and low-rank representations can untangle expression, identity, and pose cues, and can be learned via an image-set model as well as a linear model; (iii) the norm is related to recognizability, and similarity metrics and loss functions matter; (iv) combining an MIL-based boosting tracker with a particle-filter motion model gives a good trade-off between appearance similarity and motion consistency; (v) segmenting objects locally makes it feasible to assign shape priors, and knowledge such as shape priors can be learned online from Web data with weak supervision; (vi) representing videos as 3D graphs works locally in both space and time, and 3D CNNs work effectively when given temporally meaningful clips; (vii) richly labeled images and videos help learn better hash functions, after learning binary embedding codes, than random projections. In addition, the models proposed for videos can be adapted to other sequential images, such as volumetric medical images, which are not covered in this dissertation.