94 research outputs found

    Bit flipping decoding for binary product codes

    Error control coding has been used to mitigate the impact of noise on the wireless channel. Today, wireless communication systems include Forward Error Correction (FEC) techniques in their design to reduce the amount of retransmitted data. When designing a coding scheme, three challenges must be addressed: the error correcting capability of the code, the decoding complexity of the code, and the delay introduced by the coding scheme. While it is easy to design coding schemes with a large error correcting capability, finding practical decoding algorithms for them is a challenge. Generally, increasing the length of a block code increases both its error correcting capability and its decoding complexity. Product codes have been identified as a means to increase the block length of simpler codes while keeping their decoding complexity low, and bit flipping decoding has been identified as a simple-to-implement decoding algorithm, although research has generally focused on improving bit flipping decoding for Low Density Parity Check codes. In this study we develop a new decoding algorithm for binary product codes based on syndrome checking and bit flipping, addressing the central challenge of coding systems: developing codes with a large error correcting capability yet low decoding complexity. Simulation results show that the proposed algorithm outperforms the conventional decoding algorithm proposed by P. Elias in BER and, more significantly, in WER performance, while offering comparable complexity in the Rayleigh fading channel.
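    As a rough illustration of the syndrome-checking-plus-bit-flipping idea, the sketch below implements generic Gallager-style bit flipping against an arbitrary parity-check matrix. It is not the thesis's exact algorithm, and the (7,4) Hamming code in the example is chosen only for brevity.

```python
import numpy as np

def bit_flip_decode(H, r, max_iters=20):
    """Generic syndrome-based bit-flipping decoder (hard decision).

    H : (m, n) binary parity-check matrix
    r : length-n binary received word
    Returns the decoded word and a success flag.
    """
    r = r.copy()
    for _ in range(max_iters):
        syndrome = H @ r % 2              # unsatisfied parity checks
        if not syndrome.any():            # all checks pass: codeword found
            return r, True
        # Count, for each bit, how many failing checks it participates in.
        fail_counts = syndrome @ H
        # Flip the bit(s) involved in the largest number of failures.
        r[fail_counts == fail_counts.max()] ^= 1
    return r, False

# Example: (7,4) Hamming code with a single injected bit error.
H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]])
received = np.zeros(7, dtype=int)         # all-zero codeword sent
received[2] ^= 1                          # inject one error
decoded, ok = bit_flip_decode(H, received)
print(ok, decoded)
```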

    Partitions of codes

    In this thesis we study coding theory, introducing the concept of perspective, a generalisation of the minimum distance of a code, which naturally leads to a partition of the code. We then introduce focused splittings, which are shown to generalise perfect codes. We investigate the existence of such objects and address questions such as the complexity of finding a focused splitting, which we show to be NP-complete. We analyse the symmetries of focused splittings, apply them to the problem of error correction, and construct an encoding method based on them. Finally, we test this construction for various classes of focused splittings.
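    For background, the minimum distance that perspective generalises can be computed for a small binary linear code by exhaustive enumeration, as in the sketch below. This is standard material (shown here for the (7,4) Hamming code), not the thesis's construction.

```python
from itertools import product
import numpy as np

def minimum_distance(G):
    """Minimum Hamming distance of a binary linear code with generator G.

    For a linear code this equals the minimum weight over all nonzero
    codewords, so we enumerate all 2^k - 1 nonzero messages.
    """
    k, n = G.shape
    best = n
    for msg in product([0, 1], repeat=k):
        if not any(msg):
            continue                      # skip the zero codeword
        weight = int((np.array(msg) @ G % 2).sum())
        best = min(best, weight)
    return best

# Example: generator matrix of the (7,4) Hamming code, which has d = 3.
G = np.array([[1, 0, 0, 0, 1, 1, 1],
              [0, 1, 0, 0, 1, 1, 0],
              [0, 0, 1, 0, 1, 0, 1],
              [0, 0, 0, 1, 0, 1, 1]])
print(minimum_distance(G))  # 3
```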

    A study of digital holographic filters generation. Phase 2: Digital data communication system, volume 1

    An empirical study of the performance of Viterbi decoders in bursty channels was carried out, and an improved algebraic decoder for nonsystematic codes was developed. The hybrid algorithm was simulated for the (2,1), k = 7 convolutional code on a computer using 20 channels with various error statistics, ranging from purely random-error to purely bursty channels. The hybrid system outperformed both the algebraic and the Viterbi decoders in every case except the 1% random-error channel, where the Viterbi decoder produced one fewer bit decoding error.
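    A two-state Gilbert-Elliott model is a common way to generate channels spanning the purely random to purely bursty range described above. The sketch below is a minimal, generic version with illustrative parameters, not the study's actual channel set.

```python
import random

def gilbert_elliott_errors(n_bits, p_gb=0.01, p_bg=0.2,
                           pe_good=0.0, pe_bad=0.5, seed=1):
    """Generate a burst-error pattern with a two-state Gilbert-Elliott model.

    p_gb / p_bg     : transition probabilities good->bad and bad->good
    pe_good / pe_bad: bit error probability in each state
    Returns a list of 0/1 error indicators (1 = bit flipped).
    """
    rng = random.Random(seed)
    state_bad = False
    errors = []
    for _ in range(n_bits):
        pe = pe_bad if state_bad else pe_good
        errors.append(1 if rng.random() < pe else 0)
        # Possibly switch state for the next bit.
        if state_bad:
            state_bad = rng.random() >= p_bg  # stay bad with prob 1 - p_bg
        else:
            state_bad = rng.random() < p_gb   # enter a burst with prob p_gb
    return errors

pattern = gilbert_elliott_errors(200)
print(sum(pattern), "errors in 200 bits")
```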

    Flow and heat transfer measurements inside a heated multiple rotating cavity with axial throughflow

    This thesis presents experimental measurements of heat transfer and flow velocity in a heated multiple-cavity test rig with axial throughflow. Of particular interest are the internal cylindrical cavities formed by adjacent discs and their interaction with a central axial throughflow of cooling air. Tests were carried out for a range of non-dimensional parameters representative of gas-turbine high pressure compressor internal air system flows (Re_Φ up to 5x10^6 and Re_z up to 2x10^5). One configuration of the test rig was tested in the course of the reported study (Build 3), and test data from a previous rig configuration (Build 2) were processed, analysed and compared with the Build 3 data. The most significant difference between the two builds was the size of the annular gap between the (non-rotating) shaft and the disc bores: Build 3 had a wider annular gap ratio, d_h/b = 0.164, while Build 2 featured a gap ratio of d_h/b = 0.092. Heat transfer data were obtained from thermocouples and a conduction analysis. The heat transfer results show differences between the versions of the rig, with the higher Nusselt number values in Build 3 attributed to the wider annular gap allowing more of the throughflow to penetrate into the cavity than in Build 2. An attempt is made to correlate the average disc Nusselt numbers, and this indicates the existence of different regimes. A two-component Laser Doppler Anemometry system was used on both rigs to measure cavity axial and tangential velocity components; optical access in Build 3 also allowed measurement of radial velocities. The axial and radial velocities inside the cavities are virtually zero. The size of the annular gap between disc bore and shaft has a significant effect on the radial distribution of tangential velocity. An analysis of the frequency spectrum obtained from the tangential velocity measurements shows evidence of periodicity in the flow, consistent with the current understanding of the flow structure in a heated rotating cavity with axial throughflow.
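    For reference, the non-dimensional parameters quoted above are commonly defined as in the sketch below. The exact conventions, and all numerical values shown, are illustrative assumptions rather than figures taken from the thesis.

```python
def rotational_reynolds(rho, omega, b, mu):
    """Rotational Reynolds number Re_phi = rho * Omega * b^2 / mu,
    with b the disc outer radius (a common convention in the
    rotating-cavity literature; the thesis's exact definition may differ)."""
    return rho * omega * b**2 / mu

def axial_reynolds(rho, w, d_h, mu):
    """Axial throughflow Reynolds number Re_z = rho * W * d_h / mu,
    with W the bulk axial velocity and d_h the hydraulic diameter
    of the annular gap between shaft and disc bore."""
    return rho * w * d_h / mu

# Illustrative values only (not from the thesis).
print(f"Re_phi = {rotational_reynolds(1.2, 500.0, 0.2, 1.8e-5):.2e}")
print(f"Re_z   = {axial_reynolds(1.2, 30.0, 0.03, 1.8e-5):.2e}")
```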

    Some new results on majority-logic codes for correction of random errors

    The main advantages of random error-correcting majority-logic codes, and of majority-logic decoding in general, are well known and two-fold. Firstly, they offer a partial solution to a classical coding theory problem, that of decoder complexity. Secondly, a majority-logic decoder inherently corrects many more random error patterns than the minimum distance of the code implies is possible. The solution to decoder complexity is only partial because there are circumstances under which a majority-logic decoder is too complex and expensive to implement. [Continues.]
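    As a minimal illustration of the majority-logic principle, the sketch below takes a set of orthogonal parity-check sums on one bit and flips the bit when a majority of the sums fail. The (3,1) repetition-code example is purely illustrative, not from the thesis.

```python
import numpy as np

def majority_logic_bit(r, checks):
    """One-step majority-logic estimate for a single bit.

    r      : received binary word (numpy array)
    checks : list of index sets, each an orthogonal parity-check sum on
             the target bit (every sum includes the bit, and no other
             position appears in more than one sum)
    Returns 1 if a majority of check sums fail, i.e. the bit should flip.
    """
    votes = [int(r[list(c)].sum() % 2) for c in checks]
    return int(sum(votes) > len(votes) / 2)

# Example: (3,1) repetition code; check sums {0,1} and {0,2} are
# orthogonal on bit 0.
r = np.array([1, 0, 0])      # all-zero codeword with an error in bit 0
flip = majority_logic_bit(r, [(0, 1), (0, 2)])
print("flip bit 0:", bool(flip))
```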

    UVUDF: UV Luminosity Functions at the Cosmic High Noon

    We present the rest-frame 1500 Å UV luminosity functions (LF) for star-forming galaxies during the cosmic high noon, the peak of the cosmic star formation rate at 1.5 < z < 3. We use deep NUV imaging data obtained as part of the Hubble Ultra-Violet Ultra Deep Field (UVUDF) program, along with existing deep optical and NIR coverage of the HUDF. We select F225W, F275W, and F336W dropout samples using the Lyman break technique, along with samples in the corresponding redshift ranges selected using photometric redshifts, and measure the rest-frame UV LF at z ~ 1.7, 2.2, and 3.0, respectively, using the modified maximum likelihood estimator. We perform simulations to quantify the survey and sample incompleteness for the UVUDF samples and correct the effective volume calculations for the LF. We select galaxies down to M_(UV) = -15.9, -16.3, -16.8 and fit faint-end slopes of α = -1.20^(+0.10)_(-0.13), -1.32^(+0.10)_(-0.14), and -1.39^(+0.08)_(-0.12) at 1.4 < z < 1.9, 1.8 < z < 2.6, and 2.4 < z < 3.6, respectively. We compare the star formation properties of z ~ 2 galaxies from these UV observations with results from Hα and UV+IR observations. We find a lack of high-SFR sources in the UV LF compared to the Hα and UV+IR LFs, likely because dusty star-forming galaxies are not properly accounted for by the generic IRX-β relation used to correct for dust. We compute a volume-averaged UV-to-Hα ratio by abundance matching the rest-frame UV LF and Hα LF, and find an increasing UV-to-Hα ratio toward low-mass galaxies (M_∗ ≲ 5 x 10^9 M_⊙). We conclude that this could be due to a larger contribution from starbursting galaxies at the low-mass end compared to the high-mass end.
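    UV luminosity functions of this kind are conventionally parameterized with a Schechter function, whose magnitude form is evaluated in the sketch below. Here α = -1.32 matches the quoted z ~ 2.2 faint-end slope, while M* and φ* are placeholder values not taken from the paper.

```python
import numpy as np

def schechter_mag(M, M_star, phi_star, alpha):
    """Schechter luminosity function in absolute-magnitude form:
    phi(M) = 0.4 ln10 * phi* * 10^(-0.4 (M - M*) (alpha + 1))
             * exp(-10^(-0.4 (M - M*)))
    with phi* in Mpc^-3 mag^-1."""
    x = 10 ** (-0.4 * (M - M_star))
    return 0.4 * np.log(10) * phi_star * x ** (alpha + 1) * np.exp(-x)

# Illustrative parameters only: alpha from the z ~ 2.2 fit above,
# M* and phi* are placeholders.
M = np.linspace(-22, -16, 7)
print(schechter_mag(M, M_star=-20.0, phi_star=2e-3, alpha=-1.32))
```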

    802.11 Payload Iterative decoding between multiple transmission attempts

    The Institute of Electrical and Electronics Engineers (IEEE) 802.11 standard specifies widely used technology for wireless local area networks (WLAN). The standard specifies high-performance physical and media access control (MAC) layers for a distributed network but lacks an effective hybrid automatic repeat request (HARQ) scheme. Currently, the standard specifies forward error correction (FEC), error detection (ED), and automatic repeat request (ARQ), but in case of decoding errors the previously transmitted information is not used when decoding the retransmitted packet; this is called Type 1 HARQ. Type 1 HARQ uses the received energy inefficiently, but its simple implementation makes it an attractive solution. Unfortunately, research applying more sophisticated HARQ schemes on top of IEEE 802.11 is limited. In this Master's Thesis, a novel HARQ technique is proposed, based on packet retransmissions that can be decoded in a turbo-like manner while keeping as much compatibility as possible with vanilla 802.11. The proposed technique is simulated with both the IEEE 802.11 code and the robust, efficient and smart communication in unpredictable environments (RESCUE) code. An additional interleaver is added before the convolutional encoder, interleaving either the whole frame or only the payload, to enable effective iterative decoding. For received frames, turbo-like iterations are performed between the initially transmitted packet copy and the retransmissions. Results are compared against the non-iterative combining method that maximizes the signal-to-noise ratio (SNR): maximum ratio combining (MRC). The main design goal for this technique is to maintain compatibility with the 802.11 standard while allowing efficient HARQ; other design goals are range extension, higher throughput, and better performance in terms of bit error rate (BER) and frame error rate (FER). The technique can be used for range extension in the low-SNR regime and may provide up to 4 dB gain at medium SNR compared to MRC. At high SNR, it can reduce the penalty from retransmission, allowing a higher average modulation and coding scheme (MCS). However, these gains come at the cost of the computational complexity of iterative decoding. The main limiting factors of the proposed technique are decoding errors in the header and the scrambler area, and resource-hungry processing. In the simulations, perfect synchronization and packet detection are assumed, but in reality, especially at low SNR, packet detection and synchronization would be challenging.
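    As a point of reference for the MRC baseline discussed above, the sketch below combines two noisy copies of a BPSK frame with inverse-noise-variance weights, which maximizes the SNR of the combined observation. It is a generic illustration, not the thesis's simulator.

```python
import numpy as np

def mrc_combine(copies, noise_vars):
    """Maximum ratio combining of repeated transmissions.

    copies     : list of received soft-value arrays (one per attempt)
    noise_vars : per-attempt noise variances
    Each copy is weighted by 1/sigma^2, which maximizes the SNR of the
    combined observation for equal-gain BPSK over AWGN.
    """
    weights = [1.0 / v for v in noise_vars]
    combined = sum(w * c for w, c in zip(weights, copies))
    return combined / sum(weights)

# Two noisy copies of the same BPSK frame; the retransmission is cleaner.
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 8)
tx = 1 - 2.0 * bits                       # BPSK mapping: 0 -> +1, 1 -> -1
rx1 = tx + rng.normal(0, 1.0, 8)          # first attempt, sigma^2 = 1.0
rx2 = tx + rng.normal(0, 0.5, 8)          # retransmission, sigma^2 = 0.25
soft = mrc_combine([rx1, rx2], [1.0, 0.25])
print((soft < 0).astype(int), "vs sent", bits)
```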