
    Near-capacity dirty-paper code design: a source-channel coding approach

    This paper examines near-capacity dirty-paper code designs based on source-channel coding. We first point out that the performance loss in signal-to-noise ratio (SNR) in our code designs can be broken into the sum of the packing loss from channel coding and a modulo loss, which is a function of the granular loss from source coding and the target dirty-paper coding rate (or SNR). We then examine practical designs that combine trellis-coded quantization (TCQ) with both systematic and nonsystematic irregular repeat-accumulate (IRA) codes. Like previous approaches, we exploit the extrinsic information transfer (EXIT) chart technique for capacity-approaching IRA code design; unlike previous approaches, we emphasize the role of strong source coding, using TCQ to achieve as much granular gain as possible. Instead of systematic doping, we employ two relatively shifted TCQ codebooks, where the shift is optimized (by tuning the EXIT charts) to facilitate the IRA code design. Our designs synergistically combine TCQ with IRA codes so that they work together as well as they do individually. By bringing together TCQ (the best quantizer from the source coding community) and EXIT chart-based IRA code designs (the best from the channel coding community), we are able to approach the theoretical limit of dirty-paper coding. For example, at 0.25 bit per symbol (b/s), our best code design (with 2048-state TCQ) performs only 0.630 dB away from the Shannon capacity.
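
    The capacity figure quoted above can be reproduced with a few lines of arithmetic: by Costa's result, dirty-paper capacity equals the interference-free AWGN capacity C = 0.5*log2(1 + SNR), so the minimum SNR for a target rate R is 2^(2R) - 1. A minimal Python sketch (the function names are ours) computes that limit and the gap of an operating point to it; per the abstract, the total gap decomposes into packing loss plus modulo loss, but that split is design-specific and not computed here.

```python
import math

def dpc_min_snr_db(rate_bits):
    """Minimum SNR (in dB) to support `rate_bits` per symbol:
    dirty-paper capacity equals the AWGN capacity 0.5*log2(1+SNR),
    so the limit is SNR = 2**(2R) - 1 (Costa's result)."""
    return 10.0 * math.log10(2.0 ** (2.0 * rate_bits) - 1.0)

def gap_to_capacity_db(operating_snr_db, rate_bits):
    """Total SNR gap of an operating point; per the paper this is the
    sum of the packing loss and the modulo loss."""
    return operating_snr_db - dpc_min_snr_db(rate_bits)

limit = dpc_min_snr_db(0.25)                 # about -3.83 dB at 0.25 b/s
print(f"limit at 0.25 b/s: {limit:.3f} dB")
print(f"gap of a design 0.630 dB above it: "
      f"{gap_to_capacity_db(limit + 0.630, 0.25):.3f} dB")
```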

    Nested turbo codes for the costa problem

    Driven by applications in data hiding, MIMO broadcast channel coding, precoding for interference cancellation, and transmitter cooperation in wireless networks, Costa coding has lately become a very active research area. In this paper, we first offer code design guidelines in terms of source-channel coding for algebraic binning. We then address practical code design based on nested lattice codes and propose nested turbo codes using turbo-like trellis-coded quantization (TCQ) for source coding and turbo trellis-coded modulation (TTCM) for channel coding. Compared to TCQ, turbo-like TCQ offers structural similarity between the source and channel coding components, leading to more efficient nesting with TTCM and better source coding performance. Due to the difference in effective dimensionality between turbo-like TCQ and TTCM, there is a performance tradeoff between these two components when they are nested together: the performance of turbo-like TCQ worsens as the TTCM code becomes stronger, and vice versa. Optimizing this tradeoff leads to a code design that outperforms existing TCQ/TCM and TCQ/TTCM constructions and exhibits gaps of 0.94, 1.42, and 2.65 dB to the Costa capacity at 2.0, 1.0, and 0.5 bits/sample, respectively.
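
    As a concrete illustration of the nested-lattice binning that such nested turbo constructions scale up, here is a minimal one-dimensional (scalar) sketch: a coarse lattice of step delta carries the message in its cosets, the encoder transmits the modulo-folded difference between the coset point and the MMSE-scaled interference, and the decoder folds the scaled channel output back onto the lattice. All parameters (delta, SNR, 1 bit/sample) are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def cmod(x, delta):
    """Centered modulo: fold x into [-delta/2, delta/2)."""
    return x - delta * np.round(x / delta)

# Illustrative parameters, not values from the paper.
delta = 4.0                      # coarse-lattice step
snr = 10.0                       # linear channel SNR
alpha = snr / (1.0 + snr)        # MMSE ("Costa") scaling
n_sym = 100000

m = rng.integers(0, 2, n_sym)            # 1 bit/sample message
v = m * (delta / 2.0)                    # coset representative per bit
s = rng.normal(0.0, 10.0, n_sym)         # interference, known at encoder
x = cmod(v - alpha * s, delta)           # transmit; power ~ delta**2 / 12
n = rng.normal(0.0, np.sqrt(np.mean(x ** 2) / snr), n_sym)
y = x + s + n                            # channel adds full interference

# Decoder: alpha*y == v + (alpha-1)*x + alpha*n  (mod delta), so the
# interference s vanishes and only "self-noise" remains.
e0 = np.abs(cmod(alpha * y, delta))
e1 = np.abs(cmod(alpha * y - delta / 2.0, delta))
m_hat = (e1 < e0).astype(int)
print("bit error rate:", np.mean(m_hat != m))
```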

    Digital watermark technology in security applications

    With the rising emphasis on security and the growing number of fraud-related crimes around the world, authorities are looking for new technologies to tighten the security of identity documents. Among modern electronic technologies, digital watermarking has unique advantages for enhancing document authenticity. At the current stage of development, however, digital watermarking technologies are not as mature as competing technologies for supporting identity authentication systems. This work presents performance improvements to two classes of digital watermarking techniques and investigates the issue of watermark synchronisation.

    Optimal performance can be obtained if the spreading sequences are designed to be orthogonal to the cover vector. In this thesis, two classes of orthogonalisation methods that generate binary sequences quasi-orthogonal to the cover vector are presented. The first, "Sorting and Cancelling", generates sequences with a high level of orthogonality to the cover vector. The second, the Hadamard-matrix-based "Hadamard Matrix Search", realises overlapped embedding, so that watermarking capacity and image fidelity are improved compared with using short watermark sequences. The results are compared with traditional pseudo-randomly generated binary sequences, and the advantages of both classes of orthogonalisation methods are significant.

    Another watermarking method introduced in the thesis is based on the writing-on-dirty-paper theory. It is presented with biorthogonal codes, which give the best robustness. The advantages and trade-offs of using biorthogonal codes with this watermark coding method are analysed comprehensively, and comparisons are made between the orthogonal and non-orthogonal codes used in this method. It is found that fidelity and robustness are contradictory requirements that cannot be optimised simultaneously. Comparisons between all proposed methods focus on three major performance criteria: fidelity, capacity and robustness. From two different viewpoints, the conclusions are not the same. From a fidelity-centric viewpoint, the dirty-paper coding method using biorthogonal codes has a very strong advantage in preserving image fidelity, and its capacity performance is also significantly better; from the power-ratio point of view, however, the orthogonalisation methods show a significant advantage in capacity and robustness. The conclusions are contradictory, but together they summarise the performance produced by different design considerations.

    Synchronisation of the watermark is first provided by high-contrast frames around the watermarked image. Edge detection filters detect the high-contrast borders of the captured image; scanning the pixels from the border towards the centre, the locations of detected edges are stored. An optimal linear regression algorithm is then used to estimate the watermarked image frame, the estimated regression slope giving the rotation angle of the frame. Scaling is corrected by re-sampling the upright image to its original size. A theoretically studied method that can synchronise the captured image to sub-pixel accuracy is also presented: using invariant transforms and the symmetric phase-only matched filter, the captured image can be corrected accurately to its original geometry. The method uses repeating watermarks to form an array in the spatial domain of the watermarked image; with two filtering processes, the locations of the array elements reveal the rotation, translation and scaling applied to the image.
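
    The thesis's "Sorting and Cancelling" algorithm is not reproduced in this abstract, but its goal, a binary (+/-1) sequence whose inner product with the cover vector is nearly zero, can be illustrated with a simple greedy stand-in: visit the cover coefficients in order of decreasing magnitude and pick each sign to pull the running inner product back toward zero. A hypothetical sketch:

```python
import numpy as np

def quasi_orthogonal_signs(cover):
    """Greedy +/-1 sequence with a small inner product against `cover`:
    visit coefficients from largest magnitude to smallest and choose
    each sign to pull the running inner product back toward zero.
    A hypothetical stand-in, not the thesis's exact algorithm."""
    order = np.argsort(-np.abs(cover))
    signs = np.empty_like(cover)
    acc = 0.0
    for i in order:
        s = -1.0 if acc * cover[i] > 0 else 1.0
        signs[i] = s
        acc += s * cover[i]
    return signs

rng = np.random.default_rng(1)
c = rng.normal(0.0, 1.0, 1024)                  # stand-in cover vector
w = quasi_orthogonal_signs(c)
print("correlation with cover:", float(w @ c))  # close to zero
print("random-sign baseline:  ", float(rng.choice([-1.0, 1.0], 1024) @ c))
```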

    Estimating Watermarking Capacity in Gray Scale Images Based on Image Complexity

    Capacity is one of the most important parameters in image watermarking. Different works have addressed this subject under different assumptions about the image and the communication channel, yet there is no generally agreed way to estimate watermarking capacity. In this paper, we suggest a method to find the capacity of images based on their complexity. We propose a new method to estimate image complexity based on the concept of Region of Interest (ROI). Our experiments on 2000 images showed that the proposed measure agrees best with watermarking capacity in comparison with other complexity measures. In addition, we propose a new method to calculate capacity using the proposed image complexity measure. Our capacity estimation method shows better robustness and image quality in comparison with recent works in this field.
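
    The abstract does not specify the ROI-based complexity measure, so the sketch below uses a generic stand-in (mean gradient magnitude) and a hypothetical monotone map from complexity to embeddable bits, just to make the pipeline concrete; the constant k and both functions are illustrative, not the paper's method.

```python
import numpy as np

def gradient_complexity(img):
    """Generic complexity proxy: mean absolute gradient of a grayscale
    image. A hypothetical stand-in for the paper's ROI-based measure,
    which the abstract does not specify."""
    img = img.astype(float)
    return float(np.mean(np.abs(np.diff(img, axis=1)))
                 + np.mean(np.abs(np.diff(img, axis=0))))

def capacity_estimate_bits(img, k=0.01):
    """Hypothetical monotone map from complexity to embeddable bits:
    busier images hide more. `k` is an illustrative constant, not a
    value from the paper."""
    return k * gradient_complexity(img) * img.size

rng = np.random.default_rng(2)
flat = np.full((64, 64), 128, dtype=np.uint8)           # low complexity
busy = rng.integers(0, 256, (64, 64), dtype=np.uint8)   # high complexity
print("flat image capacity:", capacity_estimate_bits(flat))
print("busy image capacity:", capacity_estimate_bits(busy))
```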

    Source-channel coding for robust image transmission and for dirty-paper coding

    In this dissertation, we studied two seemingly unrelated, but conceptually related, problems in terms of source-channel coding: 1) wireless image transmission and 2) Costa ("dirty-paper") code design. In the first part of the dissertation, we consider progressive image transmission over a wireless system employing space-time coded OFDM. The space-time coded OFDM system, based on a newly built broadband MIMO fading model, is theoretically evaluated by assuming perfect channel state information (CSI) at the receiver for coherent detection. An adaptive modulation scheme is then proposed to pick the constellation size that offers the best reconstructed image quality for each average signal-to-noise ratio (SNR). A more practical scenario is also considered without the assumption of perfect CSI: we employ low-complexity decision-feedback decoding for differentially space-time coded OFDM systems to exploit transmitter diversity. For joint source-channel coding (JSCC), we adopt a product channel code structure that is proven to provide powerful error protection and bursty-error correction. To further improve system performance, we also apply powerful iterative (turbo) coding techniques and propose iterative decoding of differentially space-time coded multiple descriptions of images. The second part of the dissertation deals with practical dirty-paper code designs. We first invoke an information-theoretic interpretation of algebraic binning and motivate code design guidelines in terms of source-channel coding. Two dirty-paper code designs are then proposed. The first is a nested turbo construction based on soft-output trellis-coded quantization (SOTCQ) for source coding and turbo trellis-coded modulation (TTCM) for channel coding; a novel procedure is devised to balance the dimensionalities of the equivalent lattice codes corresponding to SOTCQ and TTCM. The second dirty-paper code design employs TCQ and IRA codes for near-capacity performance, synergistically combining TCQ with IRA codes so that they work together as well as they do individually. Our TCQ/IRA design approaches the dirty-paper capacity limit in the low-rate regime (e.g., < 1.0 bit/sample), while our nested SOTCQ/TTCM scheme provides the best performance so far at medium-to-high rates (e.g., >= 1.0 bit/sample). The two proposed practical code designs are thus complementary to each other.
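
    The adaptive modulation step can be sketched as follows. The dissertation picks the constellation by measured reconstructed image quality; as a stand-in, the sketch below uses the standard M-QAM BER approximation BER = 0.2*exp(-1.5*SNR/(M-1)) and selects the largest constellation meeting a target BER at each average SNR. The target value and the constellation set are illustrative assumptions.

```python
import math

def qam_ber_approx(snr_linear, m):
    """Standard M-QAM BER approximation 0.2*exp(-1.5*SNR/(M-1)),
    used here as a stand-in performance model (an assumption; the
    dissertation selects by reconstructed image quality instead)."""
    return 0.2 * math.exp(-1.5 * snr_linear / (m - 1))

def pick_constellation(snr_db, target_ber=1e-3):
    """Largest constellation meeting the target BER at this average
    SNR; falls back to 4-QAM when even that misses the target."""
    snr = 10.0 ** (snr_db / 10.0)
    best = 4
    for m in (4, 16, 64, 256):
        if qam_ber_approx(snr, m) <= target_ber:
            best = m
    return best

for snr_db in (10, 15, 20, 25, 30):
    print(f"{snr_db} dB -> {pick_constellation(snr_db)}-QAM")
```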

    Coding with side information

    Source coding and channel coding are two important problems in communications. Although side information exists in everyday scenarios, its effect is not taken into account in the conventional setups. In this thesis, we focus on the practical design of two interesting coding problems with side information: Wyner-Ziv coding (WZC; source coding with side information at the decoder) and Gel'fand-Pinsker coding (GPC; channel coding with side information at the encoder). For WZC, we split the design problem into the two cases where the distortion of the reconstructed source is zero and where it is not. We review that the first case, commonly called Slepian-Wolf coding (SWC), can be implemented using conventional channel coding. Then, we detail the SWC design using low-density parity-check (LDPC) codes. To facilitate SWC design, we justify a necessary requirement that the SWC performance should be independent of the input source. We show that a sufficient condition for this requirement is that the hypothetical channel between the source and the side information satisfies a symmetry condition dubbed dual symmetry. Furthermore, under this dual symmetry condition, the SWC design problem can simply be treated as an LDPC code design over the hypothetical channel. When the distortion of the reconstructed source is non-zero, we propose a practical WZC paradigm called Slepian-Wolf coded quantization (SWCQ), which combines SWC and nested lattice quantization. We point out an interesting analogy between SWCQ and entropy-coded quantization in classic source coding. Furthermore, a practical scheme of SWCQ using 1-D nested lattice quantization and LDPC codes is implemented. For GPC, since the actual design procedure relies on the more precise setting of the problem, we investigate the design of GPC in the form of a digital watermarking problem, as digital watermarking is the precise dual of WZC. We then introduce an enhanced version of the well-known spread-spectrum watermarking technique. Two applications related to digital watermarking are presented.
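
    The mechanics of syndrome-based Slepian-Wolf coding are easy to show at toy scale. The thesis uses LDPC codes; the sketch below swaps in a (7,4) Hamming code so that coset-leader decoding fits in a few lines: the encoder sends only the 3-bit syndrome of a 7-bit block, and the decoder recovers the block exactly whenever it differs from the side information in at most one position.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code; column i is the
# binary expansion of i+1, so a single-bit difference is located by
# its syndrome. A tiny stand-in for the LDPC codes used in the thesis.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)

def syndrome(v):
    return (H @ v) % 2

def sw_encode(x):
    """Slepian-Wolf encoder: send only the 3-bit syndrome of the
    7-bit source block (compression from 7 bits to 3)."""
    return syndrome(x)

def sw_decode(s, y):
    """Pick the block with syndrome s closest to the side info y.
    H @ (x ^ y) is the syndrome of the difference pattern, which
    locates the (at most one) position where x and y disagree."""
    e_syn = (s + syndrome(y)) % 2
    x_hat = y.copy()
    if e_syn.any():
        pos = int("".join(str(b) for b in e_syn[::-1]), 2) - 1
        x_hat[pos] ^= 1
    return x_hat

rng = np.random.default_rng(3)
x = rng.integers(0, 2, 7).astype(np.uint8)   # source block
y = x.copy()
y[rng.integers(0, 7)] ^= 1                   # side info: one bit differs
assert (sw_decode(sw_encode(x), y) == x).all()
print("recovered", x, "from 3-bit syndrome plus side information")
```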

    An application of wiretap channels with side information

    Information hiding requires keeping secret not only the information itself but also the fact that communication is taking place. In general, the information can be protected by cryptographic methods, but the very transmission of encrypted data may attract an attacker's attention and thus provoke a cryptographic attack in the first place. In practice, covert communication is therefore usually realised by hiding the information in a medium such as an image or a publication. This dissertation studies the information-hiding problem from the viewpoint of information theory. It follows the code-partitioning approach, which has already been used successfully to solve many classical problems, such as determining achievable regions for the broadcast channel, the wiretap channel, and other channels with side information. Here, code partitioning is applied to derive achievable regions for two new channel models: the Gaussian wiretap channel with side information and the wiretap channel II with side information. For the first model, we investigate how side information at the transmitter can be exploited for efficient coding; for the second, how side information about the secret data available to an attacker affects the system design.

    For the Gaussian wiretap channel, so-called leakage functions are defined that characterise the channel and the type of side information more precisely. For channels with different characteristics, three operating modes are proposed under which covert communication with asymptotically perfect secrecy can be maintained, and achievable regions for the Gaussian wiretap channel with side information are derived from these modes. This channel model has a direct application to information hiding in images: a key-based implementation of the model can be shown to be equivalent to a simple extension of the well-known quantisation-based digital watermarking scheme. A promising experimental study shows that a significant amount of binary data (1 bit per pixel) can be hidden in a grayscale image with high security and only slight degradation of image quality.

    To analyse the wiretap channel II with side information, the extended concept of the inverse relative dimension/length profile is employed. We study how the attacker's side information about the secret data affects the system design, and derive characteristics of good binary codes for this channel. Furthermore, for the wiretap channel II with and without side information, we define a class of finite-length codes that offers the greatest possible robustness against an attack under a constraint on the permitted data rate. Based on this coding strategy, we then derive achievable regions for the channel with side information.
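
    The quantisation-based watermarking scheme that the key-based model extends can be sketched with plain quantization index modulation (QIM): each pixel is quantized onto one of two interleaved lattices according to the message bit, which yields the 1 bit per pixel payload mentioned above. The step size and noise level below are illustrative assumptions.

```python
import numpy as np

DELTA = 8.0   # quantizer step: illustrative fidelity/robustness trade-off

def qim_embed(pixels, bits, delta=DELTA):
    """Embed one bit per pixel: quantize onto the even lattice
    (multiples of delta) for bit 0, onto the shifted lattice
    (multiples of delta plus delta/2) for bit 1."""
    offset = bits * (delta / 2.0)
    return delta * np.round((pixels - offset) / delta) + offset

def qim_extract(pixels, delta=DELTA):
    """Decode each pixel to whichever lattice is nearer."""
    r = pixels - delta * np.floor(pixels / delta)   # residue in [0, delta)
    return ((r > delta / 4) & (r < 3 * delta / 4)).astype(int)

rng = np.random.default_rng(4)
img = rng.integers(0, 256, 1000).astype(float)      # stand-in pixels
msg = rng.integers(0, 2, 1000)
wm = qim_embed(img, msg)
noisy = wm + rng.normal(0.0, 0.5, 1000)             # mild valumetric noise
print("bit error rate:", np.mean(qim_extract(noisy) != msg))
print("mean embedding distortion:", np.mean(np.abs(wm - img)))
```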

    Dirty paper coding schemes for MIMO broadcast channels

    In this paper we propose several dirty-paper coding schemes for the broadcast channel when both the transmitter and the receivers are equipped with multiple antennas. These coding schemes rely on channel state information at the transmitter and are inspired by information-theoretic concepts. The proposed end-to-end algorithms allow us to evaluate the performance of the broadcast channel in terms of bit error rates rather than in terms of coding rates, as is usually the case in the corresponding literature. Different inner coding schemes, such as ZF-DPC and MMSE-DPC, and different outer coding schemes, such as THS, SCS and TCQ, are compared and discussed. We also consider the DPC idea as a way of implementing a multiple-access scheme, and in this respect compare it with the well-known TDMA scheme. Some of our conclusions are quite surprising compared with what pure information-theoretic considerations would suggest.
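
    A minimal sketch of the ZF-DPC inner scheme: an LQ decomposition H = LQ triangularizes the channel so that user k only sees interference from users j < k, which is known at the transmitter and can be presubtracted; a Tomlinson-Harashima-style modulo stands in here for a full dirty-paper (e.g., TCQ-based) outer code. A real-valued channel and 4-PAM symbols are illustrative simplifications.

```python
import numpy as np

rng = np.random.default_rng(5)
K = 3          # users (one receive antenna each) = transmit antennas
DELTA = 8.0    # modulo region of the THP-style inner code

def cmod(x, delta=DELTA):
    """Centered modulo: fold x into [-delta/2, delta/2)."""
    return x - delta * np.round(x / delta)

# Real-valued channel for simplicity; a complex channel works the same
# way per real dimension.
H = rng.normal(size=(K, K))

# ZF-DPC triangularization: H = L Q with L lower triangular and Q
# orthonormal, obtained from the QR decomposition of H^T.
Qt, Rt = np.linalg.qr(H.T)
L, Q = Rt.T, Qt.T

s = rng.choice([-3.0, -1.0, 1.0, 3.0], size=K)   # 4-PAM symbol per user
u = np.zeros(K)
for k in range(K):
    # After triangularization user k only suffers interference from
    # users j < k, known at the transmitter; presubtract it modulo
    # DELTA (Tomlinson-Harashima) in place of a full dirty-paper code.
    interf = L[k, :k] @ u[:k] / L[k, k]
    u[k] = cmod(s[k] - interf)
x = Q.T @ u                                      # precode: H @ x == L @ u

y = H @ x + 0.05 * rng.normal(size=K)            # broadcast channel
s_hat = cmod(y / np.diag(L))                     # per-user scaling + modulo
print("decoded:", np.round(s_hat, 2), " sent:", s)
```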