11 research outputs found

    Quickest Sequence Phase Detection

    A phase detection sequence is a length-n cyclic sequence, such that the location of any length-k contiguous subsequence can be determined from a noisy observation of that subsequence. In this paper, we derive bounds on the minimal possible k in the limit of n→∞, and describe some sequence constructions. We further consider multiple phase detection sequences, where the location of any length-k contiguous subsequence of each sequence can be determined simultaneously from a noisy mixture of those subsequences. We study the optimal trade-offs between the lengths of the sequences, and describe some sequence constructions. We compare these phase detection problems to their natural channel coding counterparts, and show a strict separation between the fundamental limits in the multiple sequence case. Both adversarial and probabilistic noise models are addressed. Comment: To appear in the IEEE Transactions on Information Theory.
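    In the noiseless case this problem has a classical answer: a binary de Bruijn sequence of order k is a cyclic sequence of length n = 2^k in which every length-k window occurs exactly once, so k = log2(n) symbols pinpoint the phase. A small illustrative sketch of that baseline (standard construction, not from the paper):

```python
def de_bruijn(alphabet_size, order):
    """Standard recursive (Lyndon-word) construction of a de Bruijn sequence."""
    a = [0] * alphabet_size * order
    seq = []

    def db(t, p):
        if t > order:
            if order % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, alphabet_size):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return seq

def locate(seq, window):
    """Find the unique cyclic position of a length-k window (noiseless case)."""
    k = len(window)
    doubled = seq + seq[:k - 1]          # unwrap the cycle
    for i in range(len(seq)):
        if doubled[i:i + k] == window:
            return i
    return None
```

    The paper's question is how much larger k must be when the window is only observed through noise.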

    Topics on Reliable and Secure Communication using Rank-Metric and Classical Linear Codes


    Segmented GRAND: Combining Sub-patterns in Near-ML Order

    The recently introduced maximum-likelihood (ML) decoding scheme called guessing random additive noise decoding (GRAND) has demonstrated a remarkably low time complexity in high signal-to-noise ratio (SNR) regimes. However, the complexity is not as low in low-SNR regimes and at low code rates. To mitigate this concern, we propose a scheme for a near-ML variant of GRAND called ordered reliability bits GRAND (ORBGRAND), which divides codewords into segments based on the properties of the underlying code, generates sub-patterns for each segment consistent with the syndrome (thus reducing the number of inconsistent error patterns generated), and combines them in a near-ML order using two-level integer partitions of logistic weight. The numerical evaluation demonstrates that the proposed scheme, called segmented ORBGRAND, significantly reduces the average number of queries at any SNR. Moreover, segmented ORBGRAND with abandonment also improves the error-correction performance.
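    The core GRAND idea can be shown in a few lines: guess putative noise patterns in decreasing order of likelihood (for a binary symmetric channel, increasing Hamming weight) and stop at the first guess whose removal yields a codeword. A minimal sketch of plain GRAND on the (7,4) Hamming code, chosen for illustration; it does not reproduce the segmented ORBGRAND schedule of the paper:

```python
import itertools
import numpy as np

# Parity-check matrix of the (7,4) Hamming code (illustrative choice).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def is_codeword(c):
    return not np.any(H.dot(c) % 2)

def grand_decode(y, max_weight=3):
    """Plain GRAND: test noise patterns in increasing Hamming weight order."""
    n = len(y)
    for w in range(max_weight + 1):
        for positions in itertools.combinations(range(n), w):
            e = np.zeros(n, dtype=int)
            e[list(positions)] = 1
            c = (y + e) % 2            # remove the guessed noise
            if is_codeword(c):
                return c
    return None                        # abandonment: query budget exhausted
```

    ORBGRAND replaces the plain Hamming-weight schedule with an ordering driven by per-bit reliabilities, and the segmented variant additionally restricts the guesses to patterns consistent with part of the syndrome.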

    On decoding algorithms for algebraic geometry codes beyond half the minimum distance

    This thesis deals with algebraic geometry (AG) codes and their decoding. These codes consist of vectors constructed by evaluating specific functions at points of an algebraic curve. The underlying algebraic structure of these codes has made it possible to design several decoding algorithms. A first one, for codes from plane curves, was proposed in 1989 by Justesen, Larsen, Jensen, Havemose and Hoholdt. It was then extended to arbitrary curves by Skorobatov and Vladut and is called the "basic algorithm" in the literature. A few years later, Pellikaan and, independently, Koetter gave a formulation without algebraic geometry, using only the language of codes. This new interpretation takes the name "Error Correcting Pairs" (ECP) algorithm and represents a breakthrough in coding theory, since it applies to every code having a certain structure that is described only in terms of component-wise products of codes. The decoding radius of this algorithm depends on the code to which it is applied. For Reed-Solomon codes, it reaches half the minimum distance, which is the threshold for the solution to be unique. For AG codes, the algorithm almost always manages to decode a number of errors equal to half the designed distance. However, the success of the algorithm is only guaranteed for a number of errors less than half the designed distance minus some multiple of the curve's genus. Several attempts were then made to remove this genus-proportional penalty. A first decisive result was that of Pellikaan, who proved the existence of an algorithm with a decoding radius equal to half the designed distance. Then, in 1993, Ehrhard obtained an effective procedure for constructing such an algorithm.
    In addition to algorithms for unique decoding, AG codes have algorithms correcting a number of errors greater than half the designed distance. Beyond this quantity, the uniqueness of the solution may no longer be guaranteed. One then uses a so-called "list decoding" algorithm, which returns the list of all possible solutions. This is the case of Sudan's algorithm for Reed-Solomon codes. Another approach consists in designing algorithms which return a single solution but may fail. This is the case of "power decoding". Sudan's algorithm and power decoding were first designed for Reed-Solomon codes, then extended to AG codes. We observe that these extensions do not have the same decoding radii: that of Sudan's algorithm is lower than that of power decoding, the difference being proportional to the genus of the curve.
    In this thesis we present two main results. First, we propose a new algorithm that we call "power error locating pairs" which, like the ECP algorithm, can be applied to any code with a certain structure described in terms of component-wise products. Compared to the ECP algorithm, this algorithm can correct errors beyond half the designed distance of the code. Applied to Reed-Solomon or to AG codes, it is equivalent to the power decoding algorithm. But it can also be applied to specific cyclic codes, for which it can be used to decode beyond half the Roos bound. Moreover, applied to AG codes, this algorithm disregards the underlying geometric structure, which opens up interesting applications in cryptanalysis. The second result aims to remove the penalty proportional to the genus in the decoding radius of Sudan's algorithm for AG codes. First, following Pellikaan's method, we prove that such an algorithm exists. Then, by combining and generalizing the works of Ehrhard and Sudan, we give an effective procedure to build this algorithm.
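    The half-the-minimum-distance unique-decoding radius for Reed-Solomon codes (the genus-0 case of the above) can be made concrete with the Berlekamp-Welch algorithm, which finds an error-locator E and a product polynomial Q with Q(x_i) = y_i E(x_i) by linear algebra, then recovers the message as Q/E. A small sketch over GF(13); the field, code parameters, and function names are illustrative choices, not taken from the thesis:

```python
P = 13  # small prime field, for illustration only

def solve_mod(A, b, p):
    """Gaussian elimination over GF(p) for a square, invertible system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] % p)
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], p - 2, p)
        M[col] = [x * inv % p for x in M[col]]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [(x - f * y) % p for x, y in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]

def berlekamp_welch(xs, ys, k, p=P):
    """Decode an [n, k] Reed-Solomon code up to (n - k) // 2 errors."""
    n = len(xs)
    e = (n - k) // 2
    # Unknowns: q_0..q_{e+k-1} and E_0..E_{e-1}, with E monic of degree e.
    A, b = [], []
    for x, y in zip(xs, ys):
        row = [pow(x, j, p) for j in range(e + k)]
        row += [(-y * pow(x, j, p)) % p for j in range(e)]
        A.append(row)
        b.append(y * pow(x, e, p) % p)
    sol = solve_mod(A, b, p)
    Q = sol[:e + k]
    E = sol[e + k:] + [1]              # monic error locator
    # Long division f = Q / E over GF(p); the quotient is the message poly.
    f = [0] * k
    for i in range(k - 1, -1, -1):
        c = Q[i + e] * pow(E[e], p - 2, p) % p
        f[i] = c
        for j in range(e + 1):
            Q[i + j] = (Q[i + j] - c * E[j]) % p
    return f                           # coefficients, lowest degree first
```

    For AG codes of positive genus the same linear-algebra strategy underlies the ECP algorithm, but the guaranteed radius drops by the genus-proportional term discussed above.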

    Contribution to the construction of fingerprinting and watermarking schemes to protect mobile agents and multimedia content

    The main characteristic of fingerprinting codes is the need for high error-correction capacity, since they are designed to withstand collusion attacks that damage many symbols of the codewords. Moreover, the use of fingerprinting schemes depends on the watermarking system that embeds the codeword into the content and on how well it honors the marking assumption. In this sense, even though fingerprinting codes were mainly conceived to protect multimedia content, using them in software protection systems is an option worth considering. This thesis studies how to use codes with iterative decoding algorithms, mainly turbo codes, to solve the fingerprinting problem. It first studies the effectiveness of current approaches based on concatenating traditional fingerprinting schemes with convolutional codes and turbo codes, and shows that this kind of construction ends up generating a high number of false positives. Although the thesis contains some proposals to improve these schemes, the direct use of turbo codes, without any concatenation with a fingerprinting code as inner code, has also been considered. It is shown that the performance of turbo codes with appropriate constituent codes makes them a valid alternative for environments with hundreds of users and 2 or 3 traitors. As constituent codes, we have chosen low-rate convolutional codes with maximum free distance. As for how to use fingerprinting codes with watermarking schemes, we have studied the option of using watermarking systems based on informed coding and informed embedding. It turns out that, due to the different encodings available for the same symbol, their applicability to embedding fingerprints is very limited. We therefore propose some modifications to these systems in order to adapt them properly to fingerprinting applications. Moreover, the behavior and impact of the YouTube service on a video produced as a collusion of 2 users has been studied, along with the optimal parameters for viable tracing of users who colluded and used YouTube to redistribute the resulting copy. Finally, we have studied how to apply fingerprinting schemes and software watermarking to address the problem of malicious hosts on mobile-agent platforms. In this regard, four alternatives are proposed to protect the agent, depending on whether one wants only to detect the attack or to avoid it in real time. Two of these proposals focus on the protection of intrusion detection systems based on mobile agents. Each of these solutions has different implications in terms of infrastructure and complexity.
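    The marking assumption mentioned in this abstract can be illustrated directly: colluders who compare their copies can only alter positions where their fingerprints differ, while positions where all copies agree are undetectable and must survive the attack unchanged. A toy sketch, with names and alphabet chosen for illustration rather than taken from the thesis:

```python
import random

def collude(codewords):
    """Produce a pirate word under the marking assumption: at each
    position the colluders may output any symbol one of them holds, so
    undetectable positions (all copies equal) keep the common symbol."""
    length = len(codewords[0])
    pirate = []
    for i in range(length):
        symbols = sorted({w[i] for w in codewords})
        pirate.append(random.choice(symbols))
    return pirate
```

    Tracing schemes exploit exactly this constraint: the surviving agreed-upon symbols tie the pirate copy back to the colluding subset.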

    Codificación para corrección de errores con aplicación en sistemas de transmisión y almacenamiento de información

    Thesis (DCI)--FCEFN-UNC, 2013. Presents a design technique for low-density parity-check (LDPC) codes and a new post-processing algorithm for reducing the error floor.
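    As a point of reference for LDPC decoding, a hard-decision bit-flipping decoder can be sketched in a few lines. The parity-check matrix below is a toy choice for illustration only, not a low-density design from the thesis:

```python
import numpy as np

# Toy parity-check matrix (illustrative; real LDPC matrices are large and sparse).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def bit_flip_decode(y, max_iters=10):
    """Gallager-style hard-decision bit flipping: repeatedly flip the bit
    that participates in the largest number of unsatisfied parity checks."""
    c = y.copy()
    for _ in range(max_iters):
        syndrome = H.dot(c) % 2
        if not syndrome.any():
            return c                   # all checks satisfied
        votes = syndrome.dot(H)        # per-bit count of failing checks
        c[np.argmax(votes)] ^= 1
    return c
```

    Error floors arise because a small set of problematic structures (e.g. trapping sets) resists this kind of iterative correction at high SNR, which is what post-processing stages target.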

    Spectral Estimation for Graph Signals Using Reed-Solomon Decoding

    Spectral estimation, coding theory and compressed sensing are three important sub-fields of signal processing and information theory. Although these fields developed fairly independently, several important connections between them have been identified. One notable connection between Reed-Solomon (RS) decoding, spectral estimation, and Prony's method of curve fitting was observed by Wolf in 1967. With the recent developments in the area of Graph Signal Processing (GSP), where the signals of interest have a high-dimensional and irregular structure, a natural and important question is whether these connections can be extended to spectral estimation for graph signals. Recently, Marques et al. have shown that a bandlimited graph signal that is k-sparse in the Graph Fourier Transform (GFT) domain can be reconstructed from 2k measurements obtained using a dynamic sampling strategy. Inspired by this work, we establish a connection between coding theory and GSP to propose a sparse recovery algorithm for graph signals using methods similar to the Berlekamp-Massey algorithm and Forney's algorithm for decoding RS codes. In other words, we develop an equivalent of RS decoding for graph signals. The time complexity of the recovery algorithm is O(k^2), which is independent of the number of nodes N in the graph. The proposed framework has applications in infrastructure networks such as communication networks and power grids, including maximizing the power efficiency of a multiple-access communication channel and anomaly detection in sensor networks.
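    The Wolf (1967) connection cited here can be illustrated in the classical time-frequency setting: a length-N signal that is k-sparse in frequency is annihilated by a degree-k filter whose coefficients follow from 2k consecutive samples via a small Toeplitz system, the same key-equation step that Berlekamp-Massey performs over a finite field. A numeric sketch of this Prony / annihilating-filter step (not the authors' graph-signal algorithm):

```python
import numpy as np

def annihilating_filter_freqs(samples, k, N):
    """Recover the k active frequency indices of an N-periodic signal with
    k-sparse spectrum from its first 2k time samples (Prony's method)."""
    x = np.asarray(samples, dtype=complex)
    # Annihilation: x[n] + sum_{m=1..k} h[m] x[n-m] = 0 for n = k..2k-1.
    A = np.array([[x[n - m] for m in range(1, k + 1)] for n in range(k, 2 * k)])
    b = -x[k:2 * k]
    h = np.concatenate(([1.0], np.linalg.solve(A, b)))
    # Roots of the filter polynomial are exp(2*pi*i*f/N) for active f.
    roots = np.roots(h)
    freqs = np.round(np.angle(roots) * N / (2 * np.pi)) % N
    return sorted(int(f) for f in freqs)
```

    In the RS-decoding analogy, the annihilating filter plays the role of the error-locator polynomial and its roots play the role of the error positions.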

    Part I:


    Biometric security on body sensor networks
