
    Error Propagation Mitigation in Sliding Window Decoding of Braided Convolutional Codes

    Full text link
    We investigate error propagation in sliding window decoding of braided convolutional codes (BCCs). Previous studies of BCCs have focused on iterative decoding thresholds, minimum distance properties, and bit error rate (BER) performance at small to moderate frame lengths. Here, we consider a sliding window decoder in the context of large frame lengths, or one that continuously outputs blocks in a streaming fashion. In this case, decoder error propagation, due to the feedback inherent in BCCs, can be a serious problem. In order to mitigate the effects of error propagation, we propose several schemes: a window extension algorithm, where the decoder window size can be extended adaptively; a resynchronization mechanism, where we reset the encoder to the initial state; and a retransmission strategy, where erroneously decoded blocks are retransmitted. In addition, we introduce a soft BER stopping rule to reduce computational complexity, and the tradeoff between performance and complexity is examined. Simulation results show that, using the proposed window extension algorithm, resynchronization mechanism, and retransmission strategy, the BER performance of BCCs can be improved by up to four orders of magnitude in the signal-to-noise ratio operating range of interest; in addition, the soft BER stopping rule can be employed to reduce computational complexity.
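    The interplay of the window extension algorithm and the soft BER stopping rule can be pictured as a small control loop: decode the current window, accept the oldest block once its estimated BER is low enough, and otherwise grow the window before retrying. The following Python sketch illustrates only that control flow under assumed interfaces; the stand-in decoder, the LLR-based BER estimate, and all thresholds are hypothetical, not the paper's actual decoder.

```python
import math
import random

def decode_window(window_blocks):
    # Stand-in for iterative BCC window decoding: emits one LLR per bit.
    # A real decoder would run message passing over the window here.
    return [[random.gauss(2.0, 1.0) for _ in block] for block in window_blocks]

def soft_ber_estimate(llrs):
    # Soft BER estimate from LLR magnitudes: P(bit error) = 1 / (1 + e^{|L|}).
    return sum(1.0 / (1.0 + math.exp(abs(l))) for l in llrs) / len(llrs)

def decode_stream(blocks, base_window=3, max_window=6, ber_target=1e-5):
    decoded = []
    for pos in range(len(blocks)):
        window = base_window
        while True:
            soft = decode_window(blocks[pos:pos + window])
            target = soft[0]  # the oldest block in the window is decided first
            if soft_ber_estimate(target) < ber_target:
                break  # soft BER stopping rule satisfied: accept the block
            if window >= max_window:
                # Extension exhausted: a real system would resynchronize
                # (reset the encoder state) or request a retransmission here.
                break
            window += 1  # adaptively extend the decoding window and retry
        decoded.append([1 if l < 0 else 0 for l in target])  # hard decisions
    return decoded
```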

    Codes Correcting and Simultaneously Detecting Solid Burst Errors

    Get PDF
    Digital images are a primary medium for information transfer, but because they can easily be edited, they can no longer be treated as an inherently reliable source of information. Copying part of an image and pasting it elsewhere in the same image, in order to add or hide content, is the easiest and most popular way of creating an image forgery; this is known as copy-move forgery. Digital image forensics is the field that deals with the authenticity of images, checking their integrity by detecting various forgeries. To hide the traces of a copy-move forgery, editing operations such as rotation, scaling, JPEG compression, and the addition of Gaussian noise (referred to as attacks) are applied to the copied region before pasting. To date, no single method detects all of these attacks. A novel approach is proposed to detect copy-move forgery under the above attacks by combining a block-based and a keypoint-based method. Keywords: digital image forensics, copy-move forgery, passive blind approach, keypoint-based method, block-based method
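    As a rough illustration of the keypoint-based half of such a hybrid detector, the sketch below matches an image's ORB descriptors against themselves and flags pairs of similar keypoints that lie far apart spatially, a common cue for copied-and-pasted regions. This is a generic OpenCV sketch, not the method proposed in the paper; the `detect_copy_move` helper and its thresholds are illustrative assumptions.

```python
import cv2
import numpy as np

def detect_copy_move(gray_img, min_shift=40, ratio=0.75):
    """Flag keypoint pairs that look alike but lie far apart (copy-move cue)."""
    orb = cv2.ORB_create(nfeatures=5000)
    kps, des = orb.detectAndCompute(gray_img, None)
    if des is None or len(kps) < 3:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    # Match descriptors against themselves; the best hit is the point itself.
    matches = matcher.knnMatch(des, des, k=3)
    pairs = []
    for m in matches:
        if len(m) < 3:
            continue
        best, second = m[1], m[2]  # skip m[0], the trivial self-match
        if best.distance < ratio * second.distance:  # Lowe-style ratio test
            p = np.array(kps[best.queryIdx].pt)
            q = np.array(kps[best.trainIdx].pt)
            if np.linalg.norm(p - q) > min_shift:  # ignore near-coincident points
                pairs.append((tuple(p), tuple(q)))
    return pairs

# Usage: pairs = detect_copy_move(cv2.imread("image.png", cv2.IMREAD_GRAYSCALE))
```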

    Blockwise Repeated Burst Error Correcting Linear Codes

    Get PDF
    This paper presents a lower and an upper bound on the number of parity check digits required for a linear code that corrects a single sub-block containing errors in the form of 2-repeated bursts of length b or less. An illustration of such codes is provided. Further, codes that correct m-repeated bursts of length b or less are also studied.
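    For concreteness, here is a small worked example of the error pattern in question, assuming the standard definition of a repeated burst (the nonzero entries are confined to disjoint runs, each beginning and ending with a nonzero component and spanning at most b consecutive positions). The specific vector is an illustrative assumption, not taken from the paper.

```latex
% A 2-repeated burst of length b = 3 or less in a sub-block of length n = 12:
\[
e = (\,0\;\underbrace{1\ 0\ 1}_{\text{burst 1, length }3}\;0\ 0\;
      \underbrace{1\ 1}_{\text{burst 2, length }2}\;0\ 0\ 0\ 0\,),
\qquad n = 12,\ b = 3.
\]
% The nonzero entries lie in two disjoint runs of at most b = 3 consecutive
% positions, so e is a 2-repeated burst of length 3 or less.
```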

    Coding theory, information theory and cryptology : proceedings of the EIDMA winter meeting, Veldhoven, December 19-21, 1994

    Get PDF

    The analysis of enumerative source codes and their use in Burrows‑Wheeler compression algorithms

    Get PDF
    In the late 20th century, the reliable and efficient transmission, reception and storage of information proved to be central to the most successful economies all over the world. The Internet, once a classified project accessible to a select few, is now part of the everyday lives of a large part of the human population, and as such the efficient storage of information is an important part of the information economy. The improvement of the information storage density of optical and electronic media has been remarkable, but the elimination of redundancy in stored data and the reliable reconstruction of the original data remain desired goals. The field of source coding is concerned with the compression of redundant data and its reliable decompression.

    The arithmetic source code, independently proposed by J. J. Rissanen and R. Pasco in 1976, revolutionized the field of source coding. Compression algorithms that use an arithmetic code to encode redundant data are typically more effective and computationally more efficient than compression algorithms that use earlier source codes such as extended Huffman codes. The arithmetic source code is also more flexible than earlier source codes, and is frequently used in adaptive compression algorithms. The arithmetic code remains the source code of choice, despite having been introduced more than 30 years ago. The problem of effectively encoding data from sources with known statistics (i.e. where the probability distribution of the source data is known) was solved with the introduction of the arithmetic code. The probability distribution of practical data is seldom available to the source encoder, however. The source coding of data from sources with unknown statistics is a more challenging problem, and remains an active research topic.

    Enumerative source codes were introduced by T. J. Lynch and L. D. Davisson in the 1960s. These lossless source codes have the remarkable property that they may be used to effectively encode source sequences from certain sources without requiring any prior knowledge of the source statistics. One drawback of these source codes is the computationally complex nature of their implementations. Several years after the introduction of enumerative source codes, J. G. Cleary and I. H. Witten proved that approximate enumerative source codes may be realized by using an arithmetic code. Approximate enumerative source codes are significantly less complex than the original enumerative source codes, but are less effective than the original codes. Researchers have become more interested in arithmetic source codes than enumerative source codes since the publication of the work by Cleary and Witten.

    This thesis concerns the original enumerative source codes and their use in Burrows-Wheeler compression algorithms. A novel implementation of the original enumerative source code is proposed, with a significantly lower computational complexity than the direct implementation. Several novel enumerative source codes are introduced, including optimal fixed-to-fixed-length source codes with manageable computational complexity. A generalization of the original enumerative source code, which accommodates more complex data sources, is also proposed. The generalized source code uses the Burrows-Wheeler transform, a low-complexity algorithm for converting the redundancy of sequences from complex data sources to a more accessible form, and effectively encodes the transformed sequences using the original enumerative source code. It is demonstrated and proved mathematically that this source code is universal (i.e. the code has an asymptotic normalized average redundancy of zero bits).

    Dissertation (MEng), University of Pretoria, 2010. Electrical, Electronic and Computer Engineering.
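    To make the enumerative idea concrete, the sketch below implements the classical fixed-composition scheme in the spirit of Lynch, Davisson and Cover: a binary sequence of length n with w ones is encoded as its rank in the lexicographic ordering of all such sequences, which requires only about log2 C(n, w) bits and no knowledge of the source statistics. This is a textbook illustration, not the thesis's proposed low-complexity implementation.

```python
from math import comb

def enumerative_encode(bits):
    """Rank of `bits` among all length-n sequences with the same number of ones."""
    n, remaining = len(bits), sum(bits)
    index = 0
    for j, b in enumerate(bits):
        if b == 1:
            # Every sequence with a 0 here (and the same prefix) ranks lower:
            # the remaining ones can occupy any of the n - j - 1 later slots.
            index += comb(n - j - 1, remaining)
            remaining -= 1
    return index  # in [0, comb(n, w) - 1]

def enumerative_decode(index, n, w):
    """Invert the ranking: rebuild the sequence bit by bit."""
    bits, remaining = [], w
    for j in range(n):
        count = comb(n - j - 1, remaining)  # sequences with a 0 at position j
        if index >= count:
            bits.append(1)
            index -= count
            remaining -= 1
        else:
            bits.append(0)
    return bits

seq = [1, 0, 1, 1, 0, 0, 0, 1]
idx = enumerative_encode(seq)
assert enumerative_decode(idx, len(seq), sum(seq)) == seq
```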

    Adaptive Communications for Next Generation Broadband Wireless Access Systems

    Get PDF
    One of the key aspects in the design and management of broadband wireless access networks is the efficient use of radio resources. From the operator's point of view, bandwidth is a scarce, valuable and expensive resource which must be exploited and managed as efficiently as possible while guaranteeing the quality of service (QoS) offered to the users. From the user's point of view, that quality of service must be comparable to the one provided by fixed networks, requiring low delay and low packet loss for each data flow between the network and the user. During the last few years many techniques have been developed to increase spectral efficiency and throughput; prominent among them are the use of multiple antennas at the transmitter and the receiver, exploiting spatial multiplexing to transmit several data streams simultaneously without increasing the bandwidth, and the joint optimization of the medium access control layer and the physical layer, using channel state information to manage the resources optimally.

    In this Ph.D. thesis, different adaptive link adaptation and radio resource management techniques for Beyond 3G (B3G) multicarrier wireless systems are developed and proposed, focusing on the SS-MC-MA and OFDM(A) (IEEE 802.16a/e/m standards) communication schemes. The studies assume that the transmitter has (partial) knowledge of the channel state, and cover both single antenna and multiple antenna links. For single antenna links, a joint resource allocation and scheduling strategy incorporating adaptive modulation and coding is investigated, integrating both functions in a single cross-layer entity. The proposed algorithm has low computational complexity, copes with real- and/or non-real-time requirements and constraints, and is suitable for multimedia transmission; subsequent improvements by the author further reduce the required signalling and allow high- and low-mobility users to be combined optimally on the same radio access, improving spectral efficiency further. For multiple antenna links, a new adaptive scheme is proposed that combines the selection of the optimal set of transmit antennas with the selection of the space-time (or space-frequency) block coding scheme, and its performance is investigated and compared with conventional structures; two optimization criteria are proposed for this spatial link adaptation, one minimizing the error rate for a fixed throughput and the other maximizing the rate for a fixed error rate. Finally, some recommendations are given on how to combine the two research lines, together with a survey of techniques proposed by other authors that partly combine radio resource management with multiple antenna transmission schemes.
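    A cross-layer scheduler of the kind described can be sketched as a greedy loop: for each subcarrier, pick the user whose urgency-weighted achievable rate is highest, then assign the modulation and coding scheme (MCS) that the channel supports. The Python sketch below is a generic illustration of that pattern; the MCS table, thresholds, and `urgency` weights are hypothetical assumptions, not the thesis's algorithm.

```python
import numpy as np

# Hypothetical MCS table: (minimum SNR in dB, bits per symbol).
MCS_TABLE = [(6.0, 1), (9.0, 2), (12.0, 4), (18.0, 6)]

def pick_mcs(snr_db):
    # Highest-rate MCS whose SNR threshold the channel meets (0 = no transmission).
    bits = 0
    for threshold, b in MCS_TABLE:
        if snr_db >= threshold:
            bits = b
    return bits

def greedy_schedule(snr_db, urgency):
    """Assign each subcarrier to the user maximizing urgency-weighted rate.

    snr_db:  (users x subcarriers) array of per-user channel quality.
    urgency: per-user QoS weight (e.g. grows with queueing delay).
    """
    n_users, n_sc = snr_db.shape
    assignment, rates = [], []
    for sc in range(n_sc):
        metric = [urgency[u] * pick_mcs(snr_db[u, sc]) for u in range(n_users)]
        best = int(np.argmax(metric))
        assignment.append(best)
        rates.append(pick_mcs(snr_db[best, sc]))
    return assignment, rates

snr = np.random.uniform(0, 25, size=(4, 16))  # 4 users, 16 subcarriers
alloc, bits = greedy_schedule(snr, urgency=[1.0, 1.5, 1.0, 2.0])
```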

    Low-complexity iterative receiver algorithms for multiple-input multiple-output underwater wireless communications

    Get PDF
    This dissertation proposes three low-complexity iterative receiver algorithms for multiple-input multiple-output (MIMO) underwater acoustic (UWA) communications. The first is a bidirectional soft-decision feedback Turbo equalizer (Bi-SDFE), which harvests the time-reverse diversity in severe multipath MIMO channels. The Bi-SDFE outperforms the original soft-decision feedback Turbo equalizer (SDFE) while keeping its total computational complexity similar to that of the SDFE. Second, the dissertation proposes an efficient direct-adaptation Turbo equalizer for MIMO UWA communications. Benefiting from the use of soft-decision reference symbols for parameter adaptation, as well as iterative processing inside the adaptive equalizer, the proposed algorithm is efficient in four respects: robust performance in tough channels, high spectral efficiency with short training overhead, time efficiency through fast convergence, and low complexity in hardware implementation. Third, a frequency-domain soft-decision block iterative equalizer combined with iterative channel estimation is proposed for uncoded single-carrier MIMO systems with high data efficiency. All three new algorithms are evaluated using data recorded in real-world ocean or pool experiments. Finally, the dissertation also compares several Turbo equalizers in single-input single-output (SISO) UWA channels; experimental results show that channel-estimation-based Turbo equalizers are robust in SISO underwater transmission under harsh channel conditions. --Abstract, page iv
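    The "soft-decision reference symbols" idea admits a compact illustration: instead of adapting the equalizer against hard decisions, use the expected symbol value implied by the decoder's log-likelihood ratios (for BPSK, E[x] = tanh(L/2)) as the reference in an LMS tap update. The snippet below is a minimal single-stream sketch of that principle, with hypothetical names and step size; it is not the dissertation's MIMO algorithm.

```python
import numpy as np

def soft_symbol(llr):
    # Expected BPSK symbol given the decoder's LLR: E[x] = tanh(L/2).
    return np.tanh(llr / 2.0)

def lms_soft_update(w, x, llr, mu=0.01):
    """One LMS step using a soft-decision reference symbol.

    w:   equalizer taps (complex); x: received sample window (same length);
    llr: decoder LLR for the symbol currently being equalized.
    """
    y = np.vdot(w, x)             # equalizer output, w^H x
    e = soft_symbol(llr) - y      # error against the soft reference symbol
    w = w + mu * np.conj(e) * x   # LMS gradient step on the taps
    return w, y

# Toy usage: adapt a 5-tap equalizer on one received window.
w = np.zeros(5, dtype=complex)
x = np.random.randn(5) + 1j * np.random.randn(5)
w, y = lms_soft_update(w, x, llr=3.2)
```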

    Great expectations: Is there evidence for predictive coding in auditory cortex?

    Get PDF
    Predictive coding is possibly one of the most influential, comprehensive, and controversial theories of neural function. Whilst proponents praise its explanatory potential, critics object that key tenets of the theory are untested or even untestable. The present article critically examines existing evidence for predictive coding in the auditory modality. Specifically, we identify five key assumptions of the theory and evaluate each in the light of animal, human and modelling studies of auditory pattern processing. For the first two assumptions - that neural responses are shaped by expectations and that these expectations are hierarchically organised - animal and human studies provide compelling evidence. The anticipatory, predictive nature of these expectations also enjoys empirical support, especially from studies on unexpected stimulus omission. However, for the existence of separate error and prediction neurons, a key assumption of the theory, evidence is lacking. More work exists on the proposed oscillatory signatures of predictive coding, and on the relation between attention and precision; however, results on these latter two assumptions are mixed or contradictory. Looking to the future, more collaboration between human and animal studies, aided by model-based analyses, will be needed to test specific assumptions and implementations of predictive coding and, as such, help determine whether this popular grand theory can fulfil its expectations.
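    The "separate error and prediction neurons" assumption has a standard computational reading, in the spirit of Rao and Ballard's predictive coding model: one population carries a top-down prediction, another carries the residual between the input and that prediction, and both the latent estimate and the generative weights are updated to shrink the residual. The sketch below is a generic textbook formulation offered only to make the assumption concrete; it is not drawn from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
n_input, n_latent = 16, 4
W = rng.normal(scale=0.1, size=(n_input, n_latent))  # generative weights

def infer(x, W, steps=50, lr=0.1):
    """Settle the latent 'prediction neurons' r for one input x."""
    r = np.zeros(n_latent)
    for _ in range(steps):
        e = x - W @ r           # 'error neurons': input minus top-down prediction
        r = r + lr * (W.T @ e)  # predictions updated to explain away the error
    return r, x - W @ r

def learn(X, W, epochs=100, lr=0.01):
    """Slow weight updates driven by the same prediction errors."""
    for _ in range(epochs):
        for x in X:
            r, e = infer(x, W)
            W = W + lr * np.outer(e, r)  # Hebbian-style error x prediction
    return W

X = rng.normal(size=(20, n_input))
W = learn(X, W)
# After learning, residuals shrink for familiar inputs: expected stimuli are
# 'explained away' by the prediction W @ r, while unexpected ones leave large e.
```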