119 research outputs found

    An Efficient Light-weight LSB steganography with Deep learning Steganalysis

    Steganography, the secure transmission of a secret message by means of data-hiding techniques in digital images, remains an active research area. After assessing the state of the art, we found that most existing solutions are unpromising and ineffective against machine learning-based steganalysis. In this paper, a lightweight steganography scheme is presented that embeds a graphical key and obfuscates the data through encryption. With industrial applicability in mind, we demonstrate the effectiveness of the proposed scheme primarily against deep learning-based steganalysis. The proposed algorithm, comprising two schemes, withstands not only statistical pattern recognizers but also machine learning steganalysis based on features extracted with the well-known pre-trained deep learning network Xception. We provide a detailed protocol of the algorithm for different scenarios, together with implementation details, and evaluate a range of performance metrics under both statistical and machine learning analysis. The results compare favourably with the state of the art: statistical steganalysis achieved only 2.55% accuracy, and machine learning steganalysis correctly classified at most 49.93~50% of instances, i.e. no better than chance for a binary classifier. Comment: Accepted pape
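
    A minimal sketch of the generic LSB embedding idea underlying schemes of this kind (not the paper's actual algorithm, which additionally uses a graphical key to select embedding positions and encrypts the payload); the helper names and the sequential bit placement are illustrative assumptions.

```python
import numpy as np

def lsb_embed(cover: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide `payload` in the least significant bits of a uint8 cover image.

    Illustrative only: bits are written sequentially for clarity, whereas a
    key-driven scheme would derive the embedding positions from a secret key.
    """
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = cover.flatten().copy()
    if bits.size > flat.size:
        raise ValueError("payload too large for cover image")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite LSBs
    return flat.reshape(cover.shape)

def lsb_extract(stego: np.ndarray, n_bytes: int) -> bytes:
    """Recover `n_bytes` previously written by lsb_embed."""
    bits = stego.flatten()[: n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

# usage sketch
cover = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
stego = lsb_embed(cover, b"secret")
assert lsb_extract(stego, 6) == b"secret"
```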

    Novel irregular LDPC codes and their application to iterative detection of MIMO systems

    Low-density parity-check (LDPC) codes are among the best-performing error correction codes currently known. For irregular LDPC codes, degree distributions have been found which produce codes with optimum performance in the infinite block length case, but significant performance degradation is seen at more practical short block lengths. A major focus in the search for practical LDPC codes is therefore a construction method which minimises this loss in performance as block lengths become short. In this work, a novel irregular LDPC code is proposed which makes use of the sum-product algorithm (SPA) decoder at the design stage in order to make the best choice of edge placement with respect to iterative decoding performance in the presence of noise. This method, a modification of the progressive edge growth (PEG) algorithm for edge placement in parity-check matrix (PCM) construction, is named the DOPEG algorithm. The DOPEG design algorithm is highly flexible in that the decoder optimisation stage may be applied to any modification or extension of the original PEG algorithm with relative ease. To illustrate this fact, the decoder optimisation step was applied to the IPEG modification of the PEG algorithm, which produces codes with comparatively excellent performance; this extension of the DOPEG is called the DOIPEG. A spatially multiplexed, coded, iteratively detected and decoded multiple-input multiple-output (MIMO) system is then considered. The MIMO system under investigation is developed through theory, and a number of results are presented which illustrate its performance characteristics. The novel DOPEG code is tested for the MIMO system under consideration and a significant performance gain is achieved.
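
    For context, a minimal sketch of the basic PEG edge-placement strategy that the DOPEG builds on: each new edge of a variable node is attached to a check node not yet reachable from it in the current Tanner graph, so that the new edge closes no short cycle, with a fallback when every check is already reachable. The decoder-optimisation stage of DOPEG is not modelled here; the function names and the simplified fallback are assumptions.

```python
from collections import deque
import random

def peg_construct(n_vars, n_checks, var_degrees):
    """Basic progressive edge growth (PEG): attach each edge of a variable node to a
    check node not yet reachable from it in the current Tanner graph, so the new edge
    closes no cycle (or only a long one); ties broken by current check degree."""
    var_adj = [set() for _ in range(n_vars)]    # variable node -> attached checks
    chk_adj = [set() for _ in range(n_checks)]  # check node -> attached variables

    def reachable_checks(v):
        """BFS over the bipartite graph from variable v; returns reachable checks."""
        seen_v, seen_c, frontier = {v}, set(), deque([("v", v)])
        while frontier:
            kind, node = frontier.popleft()
            neighbours = var_adj[node] if kind == "v" else chk_adj[node]
            for nb in neighbours:
                if kind == "v" and nb not in seen_c:
                    seen_c.add(nb); frontier.append(("c", nb))
                elif kind == "c" and nb not in seen_v:
                    seen_v.add(nb); frontier.append(("v", nb))
        return seen_c

    for v in range(n_vars):
        for _ in range(var_degrees[v]):
            reached = reachable_checks(v)
            candidates = [c for c in range(n_checks)
                          if c not in reached and c not in var_adj[v]]
            if not candidates:                  # everything reachable: simplified fallback
                candidates = [c for c in range(n_checks) if c not in var_adj[v]]
            c_star = min(candidates, key=lambda c: (len(chk_adj[c]), random.random()))
            var_adj[v].add(c_star)
            chk_adj[c_star].add(v)
    return var_adj, chk_adj

# usage sketch: a length-12 code with column weight 3 and 6 checks
# var_adj, chk_adj = peg_construct(n_vars=12, n_checks=6, var_degrees=[3] * 12)
```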

    Short-length Low-density Parity-check Codes: Construction and Decoding Algorithms

    Error control coding is an essential part of modern communications systems. LDPC codes have been demonstrated to offer performance near the fundamental limits of channels corrupted by random noise. Optimal maximum likelihood decoding of LDPC codes is too complex to be practically useful even at short block lengths, so a graph-based message passing decoder known as the belief propagation algorithm is used instead. In fact, on graphs without closed paths (cycles), iterative message passing decoding is known to be optimal and may converge in a single iteration, although identifying the message update schedule which allows single-iteration convergence is not trivial. At finite block lengths, however, graphs without cycles have poor minimum distance properties and perform poorly even under optimal decoding. LDPC codes with large block length have been demonstrated to offer performance close to that predicted for codes of infinite length, as the cycles present in the graph are quite long. In this thesis, LDPC codes of shorter length are considered, as they offer advantages in terms of latency and complexity at the cost of performance degradation from the increased number of short cycles in the graph. For these shorter LDPC codes, three problems are considered. First, the improved construction of structured and unstructured LDPC code graphs of short length, with a view to reducing the harmful effects of the cycles on error rate performance, based on knowledge of the decoding process; structured code graphs are particularly interesting as they allow benefits in encoding and decoding complexity and speed. Second, the design and construction of LDPC codes for the block fading channel, a particularly challenging scenario from the point of view of error control code design; both established and novel classes of codes for this channel are considered. Finally, the decoding of LDPC codes by the belief propagation algorithm, in particular the scheduling of messages passed in the iterative decoder: a knowledge-aided approach is developed based on message reliabilities and residuals to allow fast convergence and significant improvements in error rate performance.
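
    A rough sketch of a residual-ordered ("informed dynamic scheduling") belief propagation decoder of the kind such knowledge-aided scheduling builds on: instead of flooding all messages every iteration, the single check-to-variable message whose value would change the most is updated next. This is an illustrative simplification, not the thesis's algorithm; the min-sum check update and the neighbourhood re-scoring rule are assumptions.

```python
import heapq
import numpy as np

def minsum_check_msg(llrs_in):
    """Min-sum check-to-variable message from the other variables' incoming LLRs."""
    if len(llrs_in) == 0:
        return 0.0
    return np.prod(np.sign(llrs_in)) * np.min(np.abs(llrs_in))

def residual_bp_decode(H, channel_llrs, max_updates=2000):
    """Residual-ordered message passing sketch: repeatedly update the single
    check-to-variable message with the largest residual (expected change)."""
    m, n = H.shape
    edges = [(i, j) for i in range(m) for j in range(n) if H[i, j]]
    c2v = {e: 0.0 for e in edges}

    def v2c(j, i):
        # variable-to-check message: channel LLR plus all other incoming check messages
        return channel_llrs[j] + sum(c2v[(k, j)] for k in range(m) if H[k, j] and k != i)

    def fresh_c2v(i, j):
        others = np.array([v2c(jj, i) for jj in range(n) if H[i, jj] and jj != j])
        return minsum_check_msg(others)

    heap = [(-abs(fresh_c2v(i, j) - c2v[(i, j)]), (i, j)) for (i, j) in edges]
    heapq.heapify(heap)

    for _ in range(max_updates):
        if not heap:
            break
        _, (i, j) = heapq.heappop(heap)
        updated = fresh_c2v(i, j)
        if abs(updated - c2v[(i, j)]) < 1e-9:
            continue                                  # stale queue entry, nothing to do
        c2v[(i, j)] = updated
        # messages leaving the other checks attached to variable j are now affected
        for k in range(m):
            if H[k, j] and k != i:
                for jj in range(n):
                    if H[k, jj] and jj != j:
                        heapq.heappush(
                            heap, (-abs(fresh_c2v(k, jj) - c2v[(k, jj)]), (k, jj)))

    posterior = np.array([channel_llrs[j] + sum(c2v[(i, j)] for i in range(m) if H[i, j])
                          for j in range(n)])
    return (posterior < 0).astype(int)                # hard decision on each bit
```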

    Hybrid Advanced Optimization Methods with Evolutionary Computation Techniques in Energy Forecasting

    More accurate and precise energy demand forecasts are required when energy decisions are made in a competitive environment. Particularly in the Big Data era, forecasting models are based on complex combinations of functions, and energy data are complicated, exhibiting seasonality, cyclicity, fluctuation, and dynamic nonlinearity. When such models cannot capture the characteristics and patterns of the data, the result is over-reliance on informal judgment and higher costs. The hybridization of optimization methods with superior evolutionary algorithms can provide important improvements through good parameter determination in the optimization process, which is of great assistance to the actions taken by energy decision-makers. This book aimed to attract researchers with an interest in these areas. Specifically, it sought contributions on the development of hybrid optimization methods (e.g., quadratic programming techniques, chaotic mapping, fuzzy inference theory, quantum computing, etc.) combined with advanced algorithms (e.g., genetic algorithms, ant colony optimization, particle swarm optimization, etc.) that overcome the inherent drawbacks of traditional optimization approaches, and on the application of these hybrid approaches to significantly improve forecasting accuracy.
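
    As a concrete illustration of the hybridization idea (a toy example, not drawn from the book): a small particle swarm search tuning the smoothing parameters of Holt's linear exponential smoothing against in-sample error. The function names, parameter ranges, and PSO coefficients below are assumptions.

```python
import numpy as np

def holt_forecast(series, alpha, beta):
    """Holt's linear exponential smoothing; returns one-step-ahead fitted values."""
    level, trend = series[0], series[1] - series[0]
    fitted = []
    for y in series[1:]:
        fitted.append(level + trend)
        new_level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        level = new_level
    return np.array(fitted)

def pso_tune(series, n_particles=20, n_iter=50, rng=np.random.default_rng(0)):
    """Toy particle swarm search for (alpha, beta) minimising in-sample MSE."""
    pos = rng.uniform(0.01, 0.99, size=(n_particles, 2))
    vel = np.zeros_like(pos)

    def cost(p):
        return np.mean((series[1:] - holt_forecast(series, *p)) ** 2)

    pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
    gbest = pbest[np.argmin(pbest_cost)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.01, 0.99)
        costs = np.array([cost(p) for p in pos])
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
        gbest = pbest[np.argmin(pbest_cost)].copy()
    return gbest, pbest_cost.min()

# usage sketch: params, mse = pso_tune(np.asarray(hourly_load, dtype=float))
```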

    Novel LDPC coding and decoding strategies: design, analysis, and algorithms

    In this digital era, modern communication systems play an essential part in nearly every aspect of life, with examples ranging from mobile networks and satellite communications to the Internet and data transfer. Unfortunately, all practical communication systems are noisy, so we can either improve the physical characteristics of the channel or adopt a systematic solution, i.e. error control coding. The history of error control coding dates back to 1948, when Claude Shannon published his celebrated work "A Mathematical Theory of Communication", which built a framework for channel coding, source coding and information theory. For the first time, we saw evidence for the existence of channel codes which enable reliable communication as long as the information rate of the code does not surpass the so-called channel capacity. Nevertheless, in the decades that followed, no codes were shown to closely approach this theoretical bound until the arrival of turbo codes and the renaissance of LDPC codes. As a strong contender to turbo codes, LDPC codes offer parallel implementation of decoding algorithms and, more crucially, graphical construction of codes. However, they also have drawbacks, e.g. significant performance degradation due to the presence of short cycles, or very high decoding latency. In this thesis, we focus on the practical realisation of finite-length LDPC codes and devise algorithms to tackle these issues. Firstly, rate-compatible (RC) LDPC codes with short/moderate block lengths are investigated on the basis of optimising the graphical structure of the Tanner graph (TG), in order to achieve a variety of code rates (0.1 < R < 0.9) using only a single encoder-decoder pair. As is widely recognised in the literature, the presence of short cycles considerably reduces the overall performance of LDPC codes, which significantly limits their application in communication systems. To reduce the impact of short cycles effectively for different code rates, algorithms for counting short cycles and a graph-related metric called Extrinsic Message Degree (EMD) are applied in the development of the proposed puncturing and extension techniques. A complete set of simulations is carried out to demonstrate that the proposed RC designs largely minimise the performance loss caused by puncturing or extension. Secondly, at the decoding end, we study novel decoding strategies which compensate for the negative effect of short cycles by reweighting part of the extrinsic messages exchanged between the nodes of a TG. The proposed reweighted belief propagation (BP) algorithms aim to implement efficient decoding, i.e. accurate signal reconstruction and low decoding latency, for LDPC codes via various design methods. A variable factor appearance probability belief propagation (VFAP-BP) algorithm is proposed, along with an improved version called the locally-optimized reweighted BP (LOW-BP) algorithm, both of which can significantly enhance decoding performance for regular and irregular LDPC codes. More importantly, the optimisation of reweighting parameters takes place only in an offline stage, so no additional computational complexity is incurred during the real-time decoding process. Lastly, two iterative detection and decoding (IDD) receivers are presented for multiple-input multiple-output (MIMO) systems operating in a spatial multiplexing configuration. QR decomposition (QRD)-type IDD receivers utilise the proposed multiple-feedback (MF)-QRD or variable-M (VM)-QRD detection algorithm with a standard BP decoding algorithm, while knowledge-aided (KA)-type receivers are equipped with a simple soft parallel interference cancellation (PIC) detector and the proposed reweighted BP decoders. In the uncoded scenario, the proposed MF-QRD and VM-QRD algorithms are shown to approach optimal performance, yet require reduced computational complexity. In the LDPC-coded scenario, simulation results illustrate that the proposed QRD-type IDD receivers offer near-optimal performance after a small number of detection/decoding iterations, and the proposed KA-type IDD receivers significantly outperform receivers using alternative decoding algorithms while requiring similar decoding complexity.
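
    One of the structural metrics mentioned above, the number of short cycles, is easy to sketch: in a Tanner graph, two check nodes that share two or more variable nodes close a length-4 cycle, so the 4-cycle count can be read directly off the row overlaps of the parity-check matrix. A minimal sketch (not the thesis's cycle-counting algorithm, which also handles longer cycles):

```python
import numpy as np
from itertools import combinations

def count_4cycles(H):
    """Count length-4 cycles in the Tanner graph of parity-check matrix H.

    Each pair of check nodes sharing s >= 2 variable nodes contributes
    C(s, 2) cycles of length 4.  A quick structural metric of the kind used
    when comparing candidate punctured or extended rate-compatible matrices.
    """
    H = np.asarray(H, dtype=np.int64)
    overlaps = H @ H.T            # (i, k) entry: variables shared by checks i and k
    total = 0
    for i, k in combinations(range(H.shape[0]), 2):
        s = overlaps[i, k]
        total += s * (s - 1) // 2
    return total
```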

    Anonymous Tokens with Public Metadata and Applications to Private Contact Tracing

    Anonymous single-use tokens have seen recent applications in private Internet browsing and anonymous statistics collection. We develop new schemes that include public metadata, such as expiration dates, in tokens. This inclusion enables the planned mass revocation of tokens without distributing new keys, which for natural instantiations can give 77% and 90% amortized traffic savings compared to Privacy Pass (Davidson et al., 2018) and DIT: De-Identified Authenticated Telemetry at Scale (Huang et al., 2021), respectively. By transforming the public key, we are able to append public metadata to several existing protocols essentially without increasing computation or communication. Additional contributions include expanded definitions, a more complete framework for anonymous single-use tokens, and a description of how anonymous tokens can improve privacy in DP-3T-like digital contact tracing applications. We also extend the protocol to create efficient and conceptually simple tokens with both public and private metadata, and tokens with public metadata and public verifiability from pairings.
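
    The "transform the public key" idea can be sketched roughly as deriving a metadata-bound key pair from the long-term keys and a hash of the metadata, so verifiers can obtain the public key for any metadata value without new key distribution. The toy group, the exact derivation sk_md = sk * H(md), and the function names below are illustrative assumptions, not the paper's concrete protocol.

```python
import hashlib
import secrets

# Toy group for illustration only: the multiplicative group mod the prime 2**255 - 19.
# A real scheme (e.g. Privacy Pass-style VOPRF tokens) uses a prime-order
# elliptic-curve group; this sketch only shows the key-transformation idea.
P = 2**255 - 19
ORDER = P - 1          # exponents may always be reduced mod the group order
G = 5

def h2i(data: bytes) -> int:
    """Hash arbitrary bytes (e.g. an expiration date) to a nonzero scalar."""
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % ORDER or 1

def keygen():
    sk = secrets.randbelow(ORDER - 1) + 1
    return sk, pow(G, sk, P)                        # long-term secret and public key

def sk_for_metadata(sk: int, md: bytes) -> int:
    """Metadata-bound secret key sk_md = sk * H(md); tokens carrying this
    metadata value are issued under the derived key."""
    return (sk * h2i(md)) % ORDER

def pk_for_metadata(pk: int, md: bytes) -> int:
    """Anyone can derive the matching public key from pk alone -- revoking a
    whole batch of tokens then amounts to verifiers rejecting an expired
    metadata value, with no new key distribution."""
    return pow(pk, h2i(md), P)

def issue(sk: int, md: bytes, blinded_element: int) -> int:
    """Issuer raises the client's blinded element to the derived secret key."""
    return pow(blinded_element, sk_for_metadata(sk, md), P)

# sanity check: issuing on the generator matches the derived public key
sk, pk = keygen()
assert issue(sk, b"2021-12-31", G) == pk_for_metadata(pk, b"2021-12-31")
```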

    Spherical and Hyperbolic Toric Topology-Based Codes On Graph Embedding for Ising MRF Models: Classical and Quantum Topology Machine Learning

    The paper introduces the application of information geometry to describe the ground states of Ising models by utilizing parity-check matrices of cyclic and quasi-cyclic codes on toric and spherical topologies. The approach establishes a connection between machine learning and error-correcting coding, and has implications for the development of new embedding methods based on trapping sets. Statistical physics and number geometry are applied to optimize error-correcting codes, leading to these embedding and sparse factorization methods. The paper establishes a direct connection between DNN architecture and error-correcting coding by demonstrating how state-of-the-art architectures (ChordMixer, Mega, Mega-chunk, CDIL, ...) from the long-range arena can be equivalent to block and convolutional LDPC codes (Cage-graph, Repeat Accumulate). QC codes correspond to certain types of chemical elements, with the carbon element being represented by the mixed automorphism Shu-Lin-Fossorier QC-LDPC code. The connections between Belief Propagation and the Permanent, Bethe-Permanent, Nishimori Temperature, and Bethe-Hessian Matrix are elaborated upon in detail. The Quantum Approximate Optimization Algorithm (QAOA) used in the Sherrington-Kirkpatrick Ising model can be seen as analogous to the back-propagation loss function landscape in training DNNs. This similarity gives rise to a problem comparable to that of trapping-set (TS) pseudo-codewords encountered in belief propagation decoding. Additionally, the layer depth in QAOA correlates with the number of belief propagation decoding iterations in the Wiberg decoding tree. Overall, this work has the potential to advance multiple fields, from information theory, DNN architecture design (sparse and structured prior graph topology), and efficient hardware design for quantum and classical DPU/TPU (graph, quantized and shift-register architectures) to materials science and beyond. Comment: 71 pages, 42 Figures, 1 Table, 1 Appendix. arXiv admin note: text overlap with arXiv:2109.08184 by other author
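
    One small piece of the code/Ising correspondence can be made concrete: interpreting each parity check as a multi-spin interaction gives a Hamiltonian whose ground states are exactly the codewords. A minimal sketch (the paper's toric and spherical constructions and trapping-set embeddings go far beyond this):

```python
import numpy as np

def code_ising_energy(H, spins):
    """Energy of a spin configuration under E(s) = -sum_checks prod_{j in check} s_j,
    with spins s_j in {-1, +1}.

    With s_j = (-1)^{x_j}, each product equals +1 exactly when the parity check on
    bits x is satisfied, so the codewords of H are the ground states.
    """
    H = np.asarray(H)
    energy = 0.0
    for row in H:
        energy -= np.prod(spins[row.astype(bool)])
    return energy

# example: a parity-check matrix of the [7,4] Hamming code
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
codeword = np.zeros(7, dtype=int)        # the all-zero codeword
spins = 1 - 2 * codeword                 # x = 0 maps to s = +1
print(code_ising_energy(H, spins))       # -3: all checks satisfied (a ground state)
```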

    Cognitively-motivated geometric methods of pattern discovery and models of similarity in music

    This thesis is concerned with cognitively-motivated representations of musical structure. Three problems are addressed, each related in its focus on music as an object of perception and in the application of geometrical methods of knowledge representation. The problem of pattern discovery in discrete representations of polyphonic music is considered first, and a heuristic is proposed which seeks to assist musicological analysis by identifying, from a large number of potential patterns, those that may be salient in perception. This work is based on geometric principles that are far removed from plausible psychological models of pattern induction, but the method is motivated by psychological evidence for the importance of invariance and repetition in perception. The second and third problems explicitly adopt a cognitive theory of representation, namely the conceptual space framework developed by Gärdenfors (2000). Within this framework, concepts are represented geometrically along perceptually grounded quality dimensions, with distance in the space corresponding to similarity. The second problem concerns the prediction of melodic similarity, and the theory of conceptual spaces is investigated in the novel context of point set representations of melodic structure, employing the Earth Mover's Distance metric (Rubner 2000). This builds on the work of Typke (2007) on the application of Earth Mover's Distance to melodic similarity. Evaluation is performed with respect to published psychological data (Müllensiefen 2004) and the MIREX 2005 symbolic melodic similarity evaluation. The third problem concerns the conceptual representation of metrical structure, informed by the psychological theory of metre developed by London (2004). A symbolic formalisation of this theory is developed, alongside two geometrical models of metrical-rhythmic structure, which are evaluated within a genre classification task.
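
    A rough illustration of point-set melodic similarity with an Earth Mover's Distance metric, in the spirit of (though not reproducing) the approach described above: each melody is a set of (onset, pitch) points weighted by duration, and the EMD is solved as a small transportation LP. The representation, the normalisation to equal total weight, and the function names are assumptions.

```python
import numpy as np
from scipy.optimize import linprog
from scipy.spatial.distance import cdist

def melody_emd(notes_a, notes_b):
    """Earth Mover's Distance between melodies given as (onset, pitch, duration)
    triples.  Points are (onset, pitch); durations act as weights, normalised to
    sum to 1 so the problem becomes a balanced transportation LP.  A faithful
    reimplementation would also handle unequal total weights and
    transposition/tempo invariance."""
    a, b = np.asarray(notes_a, float), np.asarray(notes_b, float)
    pts_a, w = a[:, :2], a[:, 2] / a[:, 2].sum()
    pts_b, u = b[:, :2], b[:, 2] / b[:, 2].sum()
    dist = cdist(pts_a, pts_b)                       # ground distances d_ij
    n, m = dist.shape

    # flow variables f_ij, flattened row-major; both marginals are fixed
    A_eq, b_eq = [], []
    for i in range(n):                               # sum_j f_ij = w_i
        row = np.zeros(n * m); row[i * m:(i + 1) * m] = 1
        A_eq.append(row); b_eq.append(w[i])
    for j in range(m):                               # sum_i f_ij = u_j
        col = np.zeros(n * m); col[j::m] = 1
        A_eq.append(col); b_eq.append(u[j])

    res = linprog(dist.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None), method="highs")
    return res.fun                                   # minimum transport cost = EMD

# toy usage: the same three-note figure, transposed up a tone
m1 = [(0.0, 60, 1.0), (1.0, 62, 1.0), (2.0, 64, 2.0)]
m2 = [(0.0, 62, 1.0), (1.0, 64, 1.0), (2.0, 66, 2.0)]
print(melody_emd(m1, m2))                            # ~2.0: each note moves 2 semitones
```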