
    Transform coding of pictorial data

    By using transform coding, image transmission rates as low as 0.5 bit/pel can be achieved. Generally, the bit rate reduction is achieved by allocating fewer bits to low-energy, high-order coefficients. However, to ensure reasonably good picture quality, a large number of bits has to be allocated to the high-energy dc coefficients, both for fine quantization and for good channel error immunity. A technique has been developed that, in some cases, allows the dc coefficients to be estimated at the receiver, thus eliminating a major source of difficulty with respect to channel errors.
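
    A minimal sketch of the bit-allocation idea described above, assuming an 8x8 block, an orthonormal 2-D DCT, and an illustrative bit budget (none of these parameters come from the paper): the dc coefficient gets a fine quantizer while high-order coefficients receive few or no bits.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix, so C @ block @ C.T is the 2-D transform."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)) * np.sqrt(2 / n)
    C[0, :] /= np.sqrt(2)
    return C

def transform_code_block(block, bits):
    """Quantize a block's transform coefficients with a per-coefficient bit budget."""
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T                      # forward 2-D DCT
    steps = 2.0 ** (8 - bits.clip(1, 8))          # coarser steps where few bits are spent
    quantized = np.round(coeffs / steps) * steps  # uniform quantization
    quantized[bits == 0] = 0.0                    # coefficients with no budget are dropped
    return C.T @ quantized @ C                    # reconstructed block

# Illustrative allocation: many bits for the dc term, fewer for high-order terms.
u, v = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
bits = np.clip(8 - (u + v), 0, 8)
block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(float)
print(np.abs(block - transform_code_block(block, bits)).mean())  # mean reconstruction error
```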

    Decoding and constructions of codes in rank and Hamming metric

    As coding theory plays an important role in data transmission, decoding algorithms for new families of error correction codes are of great interest. This dissertation is dedicated to decoding algorithms for new families of maximum rank distance (MRD) codes, including additive generalized twisted Gabidulin (AGTG) codes and Trombetti-Zhou (TZ) codes, a decoding algorithm for Gabidulin codes beyond half the minimum distance, and encoding and decoding algorithms for some new optimal rank metric codes with restrictions. We propose an interpolation-based decoding algorithm for AGTG codes in which the decoding problem is reduced to solving a projective polynomial equation of the form q(x) = x^(q^u + 1) + bx + a = 0 for a, b ∈ F_(q^m). We investigate the zeros of q(x) when gcd(u, m) = 1 and propose a deterministic algorithm for solving a linearized polynomial equation that is closely connected to the zeros of q(x). An efficient polynomial-time decoding algorithm is proposed for TZ codes: the interpolation-based decoding approach transforms the decoding problem of TZ codes into the problem of solving a quadratic polynomial equation. Two new communication models are defined, and using these models we manage to decode Gabidulin codes beyond half the minimum distance by one unit. The models also allow us to improve the complexity of decoding GTG and AGTG codes. Besides working on MRD codes, we also work on restricted optimal rank metric codes, including symmetric, alternating and Hermitian rank metric codes; both encoding and decoding algorithms are proposed for these optimal families. In all the decoding algorithms presented in this thesis, the properties of the Dickson matrix and the BM algorithm play crucial roles. We also touch on two problems in the Hamming metric. For the first, some cryptographic properties of the Welch permutation polynomial are investigated, and we use these properties to determine the weight distribution of a binary linear code with few weights. For the second, we introduce two new subfamilies of maximum weight spectrum codes with respect to their weight distribution and investigate their properties.
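
    To make the projective polynomial equation concrete, here is a brute-force sketch over the tiny field F_(2^3) (so q = 2, m = 3, and u = 2 with gcd(u, m) = 1). It only illustrates what a zero of q(x) is; the interpolation-based decoder in the thesis works over large fields and does not enumerate. The irreducible polynomial and the sample coefficients a, b are arbitrary choices for illustration.

```python
IRRED = 0b1011  # x^3 + x + 1, irreducible over F_2; elements of F_{2^3} are 3-bit ints

def gf_mul(a, b, m=3, irred=IRRED):
    """Multiply in F_{2^m}: carry-less multiplication reduced modulo `irred`."""
    r = 0
    for _ in range(m):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << m):
            a ^= irred
    return r

def gf_pow(x, e, m=3):
    r = 1
    for _ in range(e):
        r = gf_mul(r, x, m)
    return r

def projective_zeros(a, b, u=2, m=3):
    """Brute-force the zeros of q(x) = x^(q^u + 1) + b*x + a over F_{2^m} (q = 2).

    In characteristic 2, field addition is XOR."""
    e = 2 ** u + 1
    return [x for x in range(2 ** m) if gf_pow(x, e, m) ^ gf_mul(b, x, m) ^ a == 0]

print(projective_zeros(a=0b011, b=0b101))  # sample coefficients, chosen arbitrarily
```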

    An Exact Quantum Polynomial-Time Algorithm for Simon's Problem

    We investigate the power of quantum computers when they are required to return an answer that is guaranteed to be correct after a time that is upper-bounded by a polynomial in the worst case. We show that a natural generalization of Simon's problem can be solved in this way, whereas previous algorithms required quantum polynomial time in the expected sense only, without upper bounds on the worst-case running time. This is achieved by generalizing both Simon's and Grover's algorithms and combining them in a novel way. It follows that there is a decision problem that can be solved in exact quantum polynomial time, which would require expected exponential time on any classical bounded-error probabilistic computer if the data is supplied as a black box. Comment: 12 pages, LaTeX2e, no figures. To appear in Proceedings of the Fifth Israeli Symposium on Theory of Computing and Systems (ISTCS'97).
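
    For contrast with the quantum algorithm, the sketch below shows the classical black-box view of Simon's problem that the lower bound refers to: recovering the hidden period s, where f(x) = f(x XOR s), by collision search, which in the worst case needs exponentially many oracle queries. The toy oracle and the value of s are illustrative assumptions, not part of the abstract.

```python
def find_simon_secret(f, n):
    """Classical collision search for the hidden s such that f(x) == f(x ^ s)."""
    seen = {}
    for x in range(2 ** n):
        y = f(x)
        if y in seen:
            return seen[y] ^ x        # two preimages of the same value differ by s
        seen[y] = x
    return 0                          # no collision: f is injective, so s = 0

# Toy oracle on n = 3 bits with hidden period s = 0b101.
s = 0b101
table = {}
def oracle(x):
    key = min(x, x ^ s)               # the pair {x, x ^ s} shares one output value
    return table.setdefault(key, len(table))

print(bin(find_simon_secret(oracle, 3)))  # -> 0b101
```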

    The role of Walsh structure and ordinal linkage in the optimisation of pseudo-Boolean functions under monotonicity invariance.

    Optimisation heuristics rely on implicit or explicit assumptions about the structure of the black-box fitness function they optimise. A review of the literature shows that understanding of structure and linkage is helpful to the design and analysis of heuristics. The aim of this thesis is to investigate the role that problem structure plays in heuristic optimisation. Many heuristics use ordinal operators, which are those that are invariant under monotonic transformations of the fitness function. In this thesis we develop a classification of pseudo-Boolean functions based on rank-invariance. This approach classifies functions that are monotonic transformations of one another as equivalent, and so partitions an infinite set of functions into a finite set of classes. Reasoning about heuristics composed of ordinal operators is, by construction, invariant over these classes. We perform a complete analysis of 2-bit and 3-bit pseudo-Boolean functions. We use Walsh analysis to define concepts of necessary, unnecessary, and conditionally necessary interactions, and of Walsh families. This helps to make precise some existing ideas in the literature, such as benign interactions. Many algorithms are invariant under the classes we define, which allows us to examine the difficulty of pseudo-Boolean functions in terms of function classes. We analyse a range of ordinal selection operators for an estimation of distribution algorithm (EDA). Using a concept of directed ordinal linkage, we define precedence networks and precedence profiles to represent key algorithmic steps and their interdependency in terms of problem structure. The precedence profiles provide a measure of problem difficulty that relates problem structure to the algorithmic steps required for optimisation. This work develops insight into the relationship between function structure and problem difficulty for optimisation, which may be used to direct the development of novel algorithms. Concepts of structure are also used to construct easy and hard problems for a hill-climber.
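
    As an illustration of the Walsh analysis the thesis builds on, the sketch below computes the Walsh coefficients of a toy 2-bit pseudo-Boolean function; a nonzero coefficient on a set of bit positions signals an interaction between them. The example function and the standard parity basis are assumptions for illustration, not taken from the thesis.

```python
def walsh_coefficients(f, n):
    """Walsh coefficients of f: {0,1}^n -> R over the parity basis:
    alpha_S = (1/2^n) * sum_x f(x) * (-1)^popcount(S & x)."""
    coeffs = {}
    for S in range(2 ** n):
        total = 0.0
        for x in range(2 ** n):
            sign = -1.0 if bin(S & x).count("1") % 2 else 1.0
            total += sign * f(x)
        coeffs[S] = total / 2 ** n
    return coeffs

# Toy 2-bit function: f(x) = 3*x0 + x0*x1 (bit 0 is the least-significant bit).
g = lambda x: 3 * (x & 1) + (x & 1) * ((x >> 1) & 1)
for S, alpha in walsh_coefficients(g, 2).items():
    print(f"S={S:02b}  alpha={alpha:+.2f}")   # S=11 is nonzero: x0 and x1 interact
```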

    Preference Learning

    This report documents the program and the outcomes of Dagstuhl Seminar 14101 “Preference Learning”. Preferences have recently received considerable attention in disciplines such as machine learning, knowledge discovery, information retrieval, statistics, social choice theory, multiple criteria decision making, decision under risk and uncertainty, operations research, and others. The motivation for this seminar was to showcase recent progress in these different areas with the goal of working towards a common basis of understanding, which should help to facilitate future synergies.

    Efficient Certified Resolution Proof Checking

    We present a novel propositional proof tracing format that eliminates complex processing, thus enabling efficient (formal) proof checking. The benefits of this format are demonstrated by implementing a proof checker in C, which outperforms a state-of-the-art checker by two orders of magnitude. We then formalize the theory underlying propositional proof checking in Coq, and extract a correct-by-construction proof checker for our format from the formalization. An empirical evaluation using 280 unsatisfiable instances from the 2015 and 2016 SAT competitions shows that this certified checker usually performs comparably to a state-of-the-art non-certified proof checker. Using this format, we formally verify the recent 200 TB proof of the Boolean Pythagorean Triples conjecture.
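
    The sketch below illustrates the hint-guided flavour of clause checking: to verify a learned clause, assume its negation and replay a given list of clauses, each of which must become unit or produce a conflict, so the checker never has to search. The trace format in the paper carries more bookkeeping than this toy; the formula and hints here are made-up examples.

```python
def check_clause_with_hints(formula, clause, hints):
    """Verify `clause` against `formula` (dict: clause id -> list of DIMACS literals)
    by assuming its negation and replaying the hint clauses via unit propagation."""
    assignment = {-lit for lit in clause}         # assume every literal of the clause false
    for cid in hints:
        pending = [lit for lit in formula[cid] if -lit not in assignment]
        if not pending:
            return True                           # hint clause falsified: conflict reached
        if len(pending) == 1:
            assignment.add(pending[0])            # hint clause is unit: propagate
        else:
            return False                          # bad hint: neither unit nor falsified
    return False                                  # hints ran out before a conflict

# (x1 v x2), (!x1 v x2), (!x2 v x3), (!x2 v !x3); the clause (x2) follows by propagation.
cnf = {1: [1, 2], 2: [-1, 2], 3: [-2, 3], 4: [-2, -3]}
print(check_clause_with_hints(cnf, [2], hints=[1, 2]))  # -> True
```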