
    Approximate Sparse Recovery: Optimizing Time and Measurements

    An approximate sparse recovery system consists of parameters $k, N$, an $m$-by-$N$ measurement matrix $\Phi$, and a decoding algorithm $\mathcal{D}$. Given a vector $x$, the system approximates $x$ by $\widehat{x} = \mathcal{D}(\Phi x)$, which must satisfy $\|\widehat{x} - x\|_2 \le C\,\|x - x_k\|_2$, where $x_k$ denotes the optimal $k$-term approximation to $x$. For each vector $x$, the system must succeed with probability at least 3/4. Among the goals in designing such systems are minimizing the number $m$ of measurements and the runtime of the decoding algorithm $\mathcal{D}$. In this paper, we give a system with $m = O(k\log(N/k))$ measurements, matching a lower bound up to a constant factor, and decoding time $O(k\log^c N)$, matching a lower bound up to $\log N$ factors. We also consider the encode time (i.e., the time to multiply $\Phi$ by $x$), the time to update measurements (i.e., the time to multiply $\Phi$ by a 1-sparse $x$), and the robustness and stability of the algorithm (adding noise before and after the measurements). Our encode and update times are optimal up to $\log N$ factors.
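    The guarantee above is easy to exercise end to end. The sketch below is only an illustration of the interface: it pairs an assumed Gaussian measurement matrix with orthogonal matching pursuit as a stand-in decoder (the paper's measurement ensemble and sublinear-time decoder are quite different), with all parameters chosen for the demo.

    ```python
    import numpy as np

    # Illustrative parameters only; the paper's construction is different.
    rng = np.random.default_rng(0)
    N, k = 1024, 8
    m = int(4 * k * np.log(N / k))              # m = O(k log(N/k)) measurements
    Phi = rng.normal(size=(m, N)) / np.sqrt(m)  # assumed Gaussian Phi

    x = np.zeros(N)
    x[rng.choice(N, k, replace=False)] = rng.normal(size=k)  # k-sparse input
    y = Phi @ x                                               # measurements

    def omp(Phi, y, k):
        """Orthogonal matching pursuit: a simple stand-in for the decoder D."""
        residual, support = y.copy(), []
        for _ in range(k):
            support.append(int(np.argmax(np.abs(Phi.T @ residual))))
            coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
            residual = y - Phi[:, support] @ coef
        x_hat = np.zeros(Phi.shape[1])
        x_hat[support] = coef
        return x_hat

    x_hat = omp(Phi, y, k)
    # x is exactly k-sparse, so ||x - x_k||_2 = 0 and recovery should be near-exact.
    print(np.linalg.norm(x_hat - x))
    ```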

    Symmetric-key Corruption Detection: When XOR-MACs Meet Combinatorial Group Testing

    We study a class of MACs, which we call corruption-detectable MACs, that can not only check the integrity of the whole message but also detect which parts of the message are corrupted. This can be seen as an application of classical Combinatorial Group Testing (CGT) to message authentication. However, previous work on this application has an inherent limitation in communication cost. We present a novel approach that combines CGT with a class of linear MACs (XOR-MACs) to break this limit. Our proposal, XOR-GTM, has a significantly smaller communication cost than any previous scheme while keeping the same corruption-detection capability. Our numerical examples for a storage application show a reduction in communication by a factor of around 15 to 70 compared with previous schemes. XOR-GTM is parallelizable and is as efficient as standard MACs. We prove that XOR-GTM is secure under standard pseudorandomness assumptions.
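    A minimal sketch of the underlying idea, assuming a toy 5x7 test matrix and HMAC-SHA256 as the PRF (none of this is the actual XOR-GTM construction): each test tag is the XOR of per-item PRF values, and the set of failing tests localizes the corrupted item.

    ```python
    import hmac, hashlib
    from functools import reduce

    KEY = b"demo-key"                       # assumed shared symmetric key

    def prf(key, index, item):
        return hmac.new(key, index.to_bytes(4, "big") + item, hashlib.sha256).digest()

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def test_tags(key, items, tests):
        # tests: list of index sets; a test's tag = XOR of its members' PRFs
        return [reduce(xor, (prf(key, i, items[i]) for i in t)) for t in tests]

    items = [b"block-%d" % i for i in range(7)]
    tests = [{0, 1, 2}, {0, 3, 4}, {1, 3, 5}, {2, 4, 5}, {0, 5, 6}]  # toy matrix
    tags = test_tags(KEY, items, tests)

    items[3] = b"tampered"                                   # corrupt one block
    failed = {j for j, t in enumerate(tests)
              if test_tags(KEY, items, [t])[0] != tags[j]}
    # With a d-disjunct matrix, the corrupted items are exactly those all of
    # whose tests failed; here only item 3 lies in every failed test.
    suspects = [i for i in range(len(items))
                if all(j in failed for j, t in enumerate(tests) if i in t)]
    print(failed, suspects)                                  # {1, 2} [3]
    ```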

    Efficient Message Authentication Codes with Combinatorial Group Testing

    A message authentication code (MAC) is a symmetric-key cryptographic function for authenticity. A standard MAC verification only tells whether a message is valid or invalid, so we cannot identify which part is corrupted when the message is invalid. In this paper we study a class of MAC functions that enables identification of the corrupted parts, which we call group testing MAC (GTM). This can be seen as an application of classical (non-adaptive) combinatorial group testing to MACs. Although the basic concept of GTM (or its keyless variant) has been proposed in various application areas, such as data forensics and computer virus testing, prior work treats the underlying MAC function as a black box, and the exact computational cost of GTM seems to have been overlooked. In this paper, we study the computational aspect of GTM and show that a simple yet non-trivial extension of a parallelizable MAC (PMAC) enables $O(m+t)$ computation for $m$ data items and $t$ tests, irrespective of the underlying test matrix, under a natural security model. This greatly improves efficiency over naively applying a black-box MAC for each test, which requires $O(mt)$ time. Based on existing group testing methods, we also present experimental results for our proposal and observe that it runs as fast as taking a single MAC tag, with a speed-up over the conventional method by a factor of around 8 to 15 for $m = 10^4$ to $10^5$ items.
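    The cost argument can be illustrated with a toy comparison, assuming XOR aggregation of per-item PRF values (the paper's PMAC-based construction is more subtle): computing the per-item PRFs once and reusing them across tests removes the per-test reprocessing that drives the naive $O(mt)$ bound.

    ```python
    import hmac, hashlib
    from functools import reduce

    KEY = b"demo-key"   # assumed key; names are illustrative, not the paper's API

    def prf(key, i, item):                  # one "expensive" primitive call
        return hmac.new(key, i.to_bytes(4, "big") + item, hashlib.sha256).digest()

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def naive_tags(key, items, tests):
        # Re-derives every member's PRF inside every test: one primitive call
        # per (item, test) membership, i.e. up to m*t calls for dense matrices.
        return [reduce(xor, (prf(key, i, items[i]) for i in t)) for t in tests]

    def fast_tags(key, items, tests):
        # m primitive calls up front, then only cheap XORs per test, so the
        # number of expensive calls is O(m + t)-like rather than O(mt).
        cache = [prf(key, i, it) for i, it in enumerate(items)]
        return [reduce(xor, (cache[i] for i in t)) for t in tests]

    items = [b"item-%d" % i for i in range(1000)]
    tests = [set(range(0, 1000, s)) for s in (1, 2, 4, 8)]  # overlapping toy matrix
    assert naive_tags(KEY, items, tests) == fast_tags(KEY, items, tests)
    ```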

    FAst in-network GraY failure detection for ISPs

    Avoiding packet loss is crucial for ISPs. Unfortunately, malfunctioning hardware at ISPs can cause long-lasting packet drops, known as gray failures, which are undetectable by existing monitoring tools. In this paper, we describe the design and implementation of FANcY, an ISP-targeted system that detects and localizes gray failures quickly and accurately. FANcY complements previous monitoring approaches, which are mainly tailored to low-delay networks such as data center networks and do not work at ISP scale. We experimentally confirm FANcY's ability to detect gray failures accurately within seconds, even when only tiny fractions of traffic experience losses. We also implement FANcY on an Intel Tofino switch, demonstrating how it enables fine-grained fast rerouting.
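    As rough intuition for counter-based loss localization (not FANcY's actual Tofino design), two vantage points can agree on measurement windows and compare packet counts; a persistent deficit on the order of the gray-failure loss rate flags the path. The toy below simulates this with assumed flow names and an assumed loss rate.

    ```python
    # Toy simulation of the counter-comparison idea behind in-network
    # gray-failure detection; every name and parameter here is illustrative.
    from collections import Counter
    import random

    random.seed(1)
    flows = [f"flow-{i}" for i in range(100)]

    def window(loss_rate):
        up, down = Counter(), Counter()
        for _ in range(10_000):
            f = random.choice(flows)
            up[f] += 1
            if random.random() >= loss_rate:   # gray failure: silent drops
                down[f] += 1
        return up, down

    def deficit(up, down):
        lost = sum(up.values()) - sum(down.values())
        return lost / sum(up.values())

    up, down = window(loss_rate=0.002)         # 0.2% loss: a "tiny fraction"
    print(f"estimated loss rate: {deficit(up, down):.4f}")
    ```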

    Novel framework for optimized digital forensic for mitigating complex image attacks

    Digital image forensics is becoming significantly more popular owing to the increasing use of images as a medium of information propagation. However, owing to the wide availability of image-editing tools and software, there are also increasing threats to image-content security. A review of the existing approaches for identifying tampering traces or artifacts shows that there is large scope for optimization to further enhance the processing. Therefore, this paper presents a novel framework that performs cost-effective optimization of a digital forensic technique, with the aim of accurately localizing the tampered area as well as offering the capability to mitigate attacks of various forms. The study outcome shows that the proposed system offers significantly better results than existing systems, demonstrating that a minor novelty in a design attribute can yield improvements in both accuracy and resilience toward all potential image threats.

    Using combinatorial group testing to solve integrity issues

    Master's dissertation - Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Ciência da Computação, Florianópolis, 2015.

    The use of electronic documents to share information is of fundamental importance, as is guaranteeing their integrity and authenticity. To prove that someone owns or agrees with the content of a paper document, that person must sign it. If the document is modified after signing, it is usually possible to locate the modifications through erasures. Similar techniques exist for digital documents, known as digital signatures, but properties such as the ability to identify modifications are lost. By determining which parts of a document were modified, the receiver of a message could check whether those modifications occurred in important, irrelevant, or even expected parts of the document. In some applications, such as electronic forms, a limited number of modifications is allowed, but it is necessary to keep track of where they occurred. In other applications modifications are not allowed, but it is important to be able to access the parts of the information whose integrity is guaranteed, or even to use the location of the modifications for investigation.

    We consider the problem of partially ensuring the integrity and authenticity of signed data. Two scenarios are considered: the first is locating modifications in a signed document, and the second is locating invalid signatures in a set of individually signed data. In the first scenario we propose a digital signature scheme capable of locating modifications in a document. We divide the document to be signed into n blocks and assume a threshold d for the maximum number of modified blocks that the signature scheme can locate. We propose efficient algorithms for the signing and verification steps that yield a reasonably compact signature: for fixed d, we increase the size of a traditional signature by O(log n) hashes while allowing the identification of up to d modified blocks. In the scenario of locating invalid signatures in a set of individually signed data, we introduce the concept of levels of signature aggregation. With this method the verifier can distinguish valid data from invalid data, in contrast to traditional aggregation, where even a single invalid piece of data invalidates the whole set. Moreover, the number of signatures transmitted is much smaller than in a batch verification method, which requires sending all the signatures individually. We consider an application to outsourced databases in which every stored tuple is individually signed. As the result of a query, the database server returns n tuples and a set of t signatures it has aggregated (with t much smaller than n). The querier performs at most t signature verifications to check the integrity of all n tuples; even if some tuples are invalid, it can identify exactly which ones are valid. We provide efficient algorithms to aggregate and verify the signatures and to identify the modified tuples. Both schemes are based on non-adaptive combinatorial group testing and cover-free matrices. We present detailed constructions of cover-free matrices from the literature and their application in the proposed schemes, together with complexity analyses and experimental results that confirm their efficiency.
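    A minimal sketch of the first scenario, with an assumed toy matrix in place of a real cover-free construction: group hashes chosen by the matrix would be signed alongside the document, and the groups that fail verification localize the modified blocks.

    ```python
    import hashlib

    # Illustrative only; the thesis' concrete cover-free constructions and
    # signing algorithms are richer than this toy.
    def h(data):
        return hashlib.sha256(data).hexdigest()

    blocks = [b"block-%d" % i for i in range(7)]
    groups = [{0, 1, 2}, {0, 3, 4}, {1, 3, 5}, {2, 4, 5}, {0, 5, 6}]  # toy matrix

    def group_hashes(blocks, groups):
        return [h(b"".join(blocks[i] for i in sorted(g))) for g in groups]

    signed = group_hashes(blocks, groups)    # these hashes would be signed

    blocks[4] = b"edited"                    # receiver sees a modified block
    failed = {j for j, g in enumerate(groups)
              if group_hashes(blocks, [g])[0] != signed[j]}
    # A block is reported modified iff all groups containing it failed.
    modified = [i for i in range(len(blocks))
                if all(j in failed for j, g in enumerate(groups) if i in g)]
    print(modified)                          # -> [4]
    ```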

    Forwarding fault detection in wireless community networks

    Wireless community networks (WCNs) are especially vulnerable to forwarding failures because of their intrinsic characteristics: they use inexpensive hardware that can be easily accessed, are managed in a decentralized way, sometimes by non-expert administrators, and are open to everyone, making them prone to hardware failures, misconfigurations, and malicious attacks. To increase routing robustness in WCNs, we propose a mechanism to detect faulty routers so that the problem can be tackled. Forwarding fault detection can be described as a four-step process: first, the observed traffic is monitored and summarized; then, the traffic summaries are shared among peers so that each router's behavior can be evaluated by analyzing all the relevant summaries; finally, once the faulty nodes have been detected, a response mechanism is triggered to solve the issue. The contributions of this thesis focus on the first three steps of this process, providing solutions adapted to wireless community networks that can be deployed without modifying their current network stack. First, we study and characterize the error distribution of sketches, a traffic-summary function that is resilient to packet dropping, modification, and creation and provides better estimates than sampling. We define a random process describing the estimation for each sketch type, which allows us to give tighter bounds on sketch accuracy and to size the sketch more precisely for given requirements on estimation accuracy. Second, we propose KDet, a traffic-summary dissemination and detection protocol that, unlike previous solutions, is resilient to collusion and false accusation without needing to know a packet's path. Finally, we consider the case of nodes with unsynchronized clocks and propose a sketch-based traffic validation mechanism capable of discerning between faulty and non-faulty nodes even when the traffic summaries are misaligned, i.e., when they refer to slightly different intervals of time.
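    As a concrete example of the kind of traffic summary analyzed here, the sketch below implements a minimal Count-Min sketch with illustrative parameters (the thesis characterizes the error of specific sketch types in much more detail). Its linearity is what makes summaries from different observers easy to combine and compare.

    ```python
    import random

    # Minimal Count-Min sketch as a stand-in traffic summary; width, depth
    # and the hashing scheme are illustrative choices for the demo.
    class CountMin:
        def __init__(self, width=256, depth=4, seed=7):
            rng = random.Random(seed)
            self.width, self.depth = width, depth
            self.seeds = [rng.getrandbits(32) for _ in range(depth)]
            self.table = [[0] * width for _ in range(depth)]

        def _cell(self, row, key):
            return hash((self.seeds[row], key)) % self.width

        def add(self, key, count=1):
            for r in range(self.depth):
                self.table[r][self._cell(r, key)] += count

        def estimate(self, key):
            # Overestimates only; the error comes from collisions in each row.
            return min(self.table[r][self._cell(r, key)] for r in range(self.depth))

        def merge(self, other):
            # Sketches are linear, so summaries from two observation points
            # can be added (or subtracted to compare upstream vs. downstream).
            for r in range(self.depth):
                for c in range(self.width):
                    self.table[r][c] += other.table[r][c]

    cm = CountMin()
    for pkt in ["flowA"] * 500 + ["flowB"] * 20:
        cm.add(pkt)
    print(cm.estimate("flowA"), cm.estimate("flowB"))
    ```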