
    Approximation of L\"owdin Orthogonalization to a Spectrally Efficient Orthogonal Overlapping PPM Design for UWB Impulse Radio

    In this paper we consider the design of spectrally efficient time-limited pulses for ultrawideband (UWB) systems using an overlapping pulse position modulation scheme. For this we investigate an orthogonalization method developed in 1950 by Per-Olov Löwdin. Our objective is to obtain a set of N orthogonal (Löwdin) pulses, which remain time-limited and spectrally efficient for UWB systems, from a set of N equidistant translates of a time-limited, spectrally optimal UWB pulse. We derive an approximate Löwdin orthogonalization (ALO) by using circulant approximations of the Gram matrix to obtain a practical filter implementation. We show that the centered ALO and Löwdin pulses converge pointwise to the same Nyquist pulse as N tends to infinity. The set of translates of the Nyquist pulse forms an orthonormal basis for the shift-invariant space generated by the initial spectrally optimal pulse. The ALO transform provides a closed-form approximation of the Löwdin transform, which can be implemented in an analog fashion without the need for analog-to-digital conversion. Furthermore, we investigate the interplay between the optimization and the orthogonalization procedures using methods from the theory of shift-invariant spaces. Finally, we develop a connection between our results and wavelet and frame theory.
    Comment: 33 pages, 11 figures. Accepted for publication 9 Sep 201
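
    Löwdin's symmetric orthogonalization maps a set of pulses with Gram matrix S to the set obtained by applying S^{-1/2}; the ALO replaces S by a circulant approximation so that S^{-1/2} becomes diagonal in the DFT domain. The following is a minimal NumPy sketch under those assumptions (sampled translates stacked in an (N, L) array, circulant built from the first column of the Gram matrix); the function names are illustrative, and the paper's precise centered-ALO construction and analog filter realization are not reproduced here.

```python
import numpy as np

def lowdin_pulses(translates):
    """Exact Löwdin (symmetric) orthogonalization of N sampled pulse translates.

    translates : array of shape (N, L), one sampled translate per row.
    Returns S^{-1/2} @ translates, where S is the Gram matrix of the rows.
    """
    S = translates @ translates.T                # Gram matrix, shape (N, N)
    w, V = np.linalg.eigh(S)                     # S is symmetric positive definite
    S_inv_sqrt = (V / np.sqrt(w)) @ V.T          # V diag(w^{-1/2}) V^T
    return S_inv_sqrt @ translates

def alo_pulses(translates):
    """Approximate Löwdin orthogonalization (ALO) sketch: the Gram matrix is
    replaced by the circulant matrix built from its first column, so S^{-1/2}
    is diagonalized by the DFT and can be applied with FFTs alone."""
    S = translates @ translates.T
    lam = np.fft.fft(S[:, 0]).real               # circulant eigenvalues (assumed > 0)
    spectrum = np.fft.fft(translates, axis=0)    # mix along the translate index
    return np.fft.ifft(spectrum / np.sqrt(lam)[:, None], axis=0).real
```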

    'The frozen accident' as an evolutionary adaptation: A rate distortion theory perspective on the dynamics and symmetries of genetic coding mechanisms

    We survey some interpretations and related issues concerning the 'frozen accident' hypothesis due to F. Crick and how it can be explained in terms of several natural mechanisms involving error correction codes, spin glasses, symmetry breaking and the characteristic robustness of genetic networks. The approach to most of these questions involves using elements of Shannon's rate distortion theory, incorporating a semantic system which is meaningful for the relevant alphabets and vocabulary implemented in transmission of the genetic code. We apply the fundamental homology between information source uncertainty and the free energy density of a thermodynamical system with respect to transcriptional regulators and the communication channels of sequence/structure in proteins. This leads to the suggestion that the frozen accident may have been a type of evolutionary adaptation.
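
    For readers unfamiliar with rate distortion theory, the rate-distortion function referred to above can be computed numerically with the standard Blahut-Arimoto iteration. The sketch below is a generic illustration of that algorithm, not anything taken from the paper; the distributions, distortion matrix, and trade-off parameter are hypothetical.

```python
import numpy as np

def blahut_arimoto_rd(p_x, dist, beta, iters=500):
    """Blahut-Arimoto iteration for one point on the rate-distortion curve.

    p_x  : (n,) source distribution
    dist : (n, m) distortion matrix d(x, x_hat)
    beta : trade-off parameter (larger beta -> lower distortion, higher rate)
    Returns (rate_in_bits, expected_distortion).
    """
    n, m = dist.shape
    q = np.full(m, 1.0 / m)                      # reproduction marginal q(x_hat)
    A = np.exp(-beta * dist)
    for _ in range(iters):
        Q = q * A                                # unnormalized q(x_hat | x)
        Q /= Q.sum(axis=1, keepdims=True)
        q = p_x @ Q                              # update the marginal
    D = float(np.sum(p_x[:, None] * Q * dist))
    R = float(np.sum(p_x[:, None] * Q * np.log2(Q / q)))
    return R, D

# Uniform binary source with Hamming distortion: R(D) = 1 - H_b(D).
p = np.array([0.5, 0.5])
d = np.array([[0.0, 1.0], [1.0, 0.0]])
print(blahut_arimoto_rd(p, d, beta=3.0))
```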

    Geometrically uniform hyperbolic codes

    In this paper we generalize the concept of geometrically uniform codes, formerly employed in Euclidean spaces, to hyperbolic spaces. We also show a characterization of generalized coset codes through the concept of G-linear codes.
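
    For context, here is a restatement of the standard (Forney) definition of geometric uniformity, written for a general metric space M so that it covers both the Euclidean and the hyperbolic setting; the paper's precise hyperbolic formulation and the G-linear characterization are not reproduced here.

```latex
% A signal set S \subseteq M is geometrically uniform if its symmetry group
% acts transitively on it, i.e.
\forall\, s, s' \in S \;\; \exists\, u \in \mathrm{Isom}(M) :
\qquad u(s) = s' \quad \text{and} \quad u(S) = S .
```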

    Submicron Systems Architecture Project: Semiannual Technical Report

    No abstract available

    Unreliable and resource-constrained decoding

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (p. 185-213).
    Traditional information theory and communication theory assume that decoders are noiseless and operate without transient or permanent faults. Decoders are also traditionally assumed to be unconstrained in physical resources like material, memory, and energy. This thesis studies how constraining reliability and resources in the decoder limits the performance of communication systems. Five communication problems are investigated. Broadly speaking, these are communication using decoders that are wiring-cost-limited, that are memory-limited, that are noisy, that fail catastrophically, and that simultaneously harvest information and energy. For each of these problems, fundamental trade-offs between communication system performance and reliability or resource consumption are established.
    For decoding repetition codes using consensus decoding circuits, the optimal trade-off between decoding speed and quadratic wiring cost is defined and established. Designing optimal circuits is shown to be NP-complete, but is carried out for small circuit sizes. The natural relaxation of the integer circuit design problem is shown to be a reverse convex program. Random circuit topologies are also investigated.
    Uncoded transmission is investigated when a population of heterogeneous sources must be categorized due to decoder memory constraints. Quantizers that are optimal for mean Bayes risk error, a novel fidelity criterion, are designed. Human decision making in segregated populations is also studied with this framework. The ratio between the costs of false alarms and missed detections is also shown to fundamentally affect the essential nature of discrimination.
    The effect of noise on iterative message-passing decoders for low-density parity-check (LDPC) codes is studied. Concentration of decoding performance around its average is shown to hold. Density evolution equations for noisy decoders are derived. Decoding thresholds degrade smoothly as decoder noise increases, and in certain cases, arbitrarily small final error probability is achievable despite decoder noisiness. Precise information storage capacity results for reliable memory systems constructed from unreliable components are also provided.
    Limits to communicating over systems that fail at random times are established. Communication with arbitrarily small probability of error is not possible, but schemes that optimize the transmission volume communicated at fixed maximum message error probabilities are determined. System state feedback is shown not to improve performance.
    For optimal communication with decoders that simultaneously harvest information and energy, a coding theorem is proven that establishes the fundamental trade-off between the rates at which energy and reliable information can be transmitted over a single line. The capacity-power function is computed for several channels; it is non-increasing and concave.
    by Lav R. Varshney. Ph.D.
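
    The noisy-decoder analysis builds on the standard density evolution recursion for regular LDPC ensembles. Below is a minimal sketch of that standard recursion for the binary erasure channel with a noiseless message-passing decoder; the thesis's contribution, extending this kind of recursion to decoders whose internal computations are themselves noisy, is not reproduced here.

```python
def bec_density_evolution(eps, dv=3, dc=6, iters=5000):
    """Standard density evolution for a (dv, dc)-regular LDPC ensemble on the
    binary erasure channel (noiseless decoder):
        x_{l+1} = eps * (1 - (1 - x_l)**(dc - 1)) ** (dv - 1)
    where x_l is the erasure probability of variable-to-check messages."""
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
    return x

def bec_threshold(dv=3, dc=6, tol=1e-4):
    """Bisection for the largest channel erasure rate at which the message
    erasure probability is driven to (near) zero; approximately 0.429 for
    the (3,6)-regular ensemble."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bec_density_evolution(mid, dv, dc) < 1e-6:
            lo = mid
        else:
            hi = mid
    return lo

print(bec_threshold())
```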

    DIGITAL WATERMARKING FOR COMPACT DISCS AND THEIR EFFECT ON THE ERROR CORRECTION SYSTEM

    A new technique, based on current compact disc technology, to image the transparent surface of a compact disc, or additionally the reflective information layer, has been designed, implemented and evaluated. This technique (the image capture technique) has been tested and successfully applied to the detection of mechanically introduced compact disc watermarks and biometrical information with a resolution of 1.6 µm x 14 µm. Software has been written which, when used with the image capture technique, recognises a compact disc based on its error distribution. The software detects digital watermarks which cause either laser signal distortions or decoding error events. Watermarks serve as secure media identifiers.
    The complete channel coding of a Compact Disc Audio system, including EFM modulation, error correction and interleaving, has been implemented in software. The performance of the error correction system of the compact disc has been assessed using this simulation model. An embedded data channel holding watermark data has been investigated. The covert channel is implemented by means of the error-correction ability of the Compact Disc system and was realised by the aforementioned techniques, such as engraving the reflective layer or the polysubstrate layer. Computer simulations show that watermarking schemes composed of regularly distributed single errors impose a minimum effect on the error correction system. Error rates increase by a factor of ten if regular single-symbol errors per frame are introduced; all other patterns further increase the overall error rates. Results show that background signal noise has to be reduced by a factor of 60% to account for the additional burden of this optimal watermark pattern.
    Two decoding strategies, usually employed in modern CD decoders, have been examined. The simulations take emulated bursty background noise, as it appears in user-handled discs, into account. Variations in output error rates, depending on the decoder and the type of background noise, became apparent. At low error rates (r < 0.003) the output symbol error rate for a bursty background differs by 20% depending on the decoder. The difference between a typical burst error distribution caused by user-handling and a non-burst error distribution has been found to be approximately 1% with the higher-performing decoder. Simulation results show that the drop in error-correction rates due to the presence of a watermark pattern depends quantitatively on the characteristic type of the background noise. A four times smaller change to the overall error rate was observed when adding a regular watermark pattern to a characteristic background noise, as caused by user-handling, compared to a non-bursty background.
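
    As a rough illustration of why regularly spaced single-symbol errors are the least damaging watermark pattern, here is a toy Monte Carlo sketch. It assumes a simplified per-frame decoder that corrects up to two symbol errors in a 32-symbol frame (loosely modelling the C1 RS(32,28) stage of the CD system); it is not the thesis's full CIRC simulation, and the rates and burst model are hypothetical.

```python
import random

FRAME_SYMBOLS = 32   # symbols per C1 frame; the CD's C1 code is a shortened RS(32, 28)
T_CORRECT = 2        # an RS(32, 28) decoder can correct at most two symbol errors

def frame_uncorrectable(n_errors, t=T_CORRECT):
    """Toy model: a frame is lost when it contains more than t symbol errors."""
    return n_errors > t

def failure_rate(frames=100_000, background=0.003, watermark="regular"):
    """Monte Carlo estimate of the uncorrectable-frame rate for random
    background symbol errors plus a watermark pattern: either one extra
    symbol error in every frame ("regular") or, with probability 1/4, a
    burst of four extra errors ("burst") -- the same average error load."""
    lost = 0
    for _ in range(frames):
        errors = sum(random.random() < background for _ in range(FRAME_SYMBOLS))
        if watermark == "regular":
            errors += 1
        elif watermark == "burst" and random.random() < 0.25:
            errors += 4
        lost += frame_uncorrectable(errors)
    return lost / frames

# Regularly spaced single errors load each frame lightly; bursts of the same
# average weight push frames past the correction limit far more often.
print("regular:", failure_rate(watermark="regular"))
print("burst:  ", failure_rate(watermark="burst"))
```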

    A linear framework for character skinning

    Character animation is the process of modelling and rendering a mobile character in a virtual world. It has numerous applications, both off-line, such as virtual actors in films, and real-time, such as in games and other virtual environments. There are a number of algorithms for determining the appearance of an animated character, with different trade-offs between quality, ease of control, and computational cost. We introduce a new method, animation space, which provides a good balance between the ease of use of very simple schemes and the quality of more complex schemes, together with excellent performance. It can also be integrated into a range of existing computer graphics algorithms.
    Animation space is described by a simple and elegant linear equation. Apart from making it fast and easy to implement, linearity facilitates mathematical analysis. We derive two metrics on the space of vertices (the “animation space”), which indicate the mean and maximum distances between two points on an animated character. We demonstrate the value of these metrics by applying them to the problems of parametrisation, level-of-detail (LOD) and frustum culling. These metrics provide information about the entire range of poses of an animated character, so they are able to produce better results than considering only a single pose of the character, as is commonly done.
    In order to compute parametrisations, it is necessary to segment the mesh into charts. We apply an existing algorithm based on greedy merging, but use a metric better suited to the problem than the one suggested by the original authors. To combine the parametrisations with level-of-detail, we require the charts to have straight edges. We explored a heuristic approach to straightening the edges produced by the automatic algorithm, but found that manual segmentation produced better results. Animation space is nevertheless beneficial in flattening the segmented charts; we use least squares conformal maps (LSCM), with the Euclidean distance metric replaced by one of our animation-space metrics. The resulting parametrisations have significantly less overall stretch than those computed based on a single pose.
    Similarly, we adapt appearance preserving simplification (APS), a progressive mesh-based LOD algorithm, to apply to animated characters by replacing the Euclidean metric with an animation-space metric. When using the memoryless form of APS (in which local rather than global error is considered), the use of animation space for computations reduces the geometric errors introduced by LOD decomposition, compared to simplification based on a single pose. User tests, in which users compared video clips of the two, demonstrated a statistically significant preference for the animation-space simplifications, indicating that the visual quality is better as well. While other methods exist to take multiple poses into account, they are based on a sampling of the pose space, and the computational cost scales with the number of samples used. In contrast, our method is analytic and uses samples only to gather statistics.
    The quality of LOD approximations is improved further by introducing a novel approach to LOD, influence simplification, in which we remove the influences of bones on vertices and adjust the remaining influences to approximate the original vertex as closely as possible. Once again, we use an animation-space metric to determine the approximation error. By combining influence simplification with the progressive mesh structure, we can obtain further improvements in quality: for some models and at some detail levels, the error is reduced by an order of magnitude relative to a pure progressive mesh. User tests showed that for some models this significantly improves quality, while for others it makes no significant difference.
    Animation space is a generalisation of skeletal subspace deformation (SSD), a popular method for real-time character animation. This means that there is a large existing base of models that can immediately benefit from the modified algorithms mentioned above. Furthermore, animation space almost entirely eliminates the well-known shortcomings of SSD (the so-called “candy-wrapper” and “collapsing elbow” effects). We show that, given a set of sample poses, we can fit an animation-space model to these poses by solving a linear least-squares problem.
    Finally, we demonstrate that animation space is suitable for real-time rendering, by implementing it, along with level-of-detail rendering, on a PC with a commodity video card. We show that although the extra degrees of freedom make the straightforward approach infeasible for complex models, it is still possible to obtain high performance; in fact, animation space requires fewer basic operations to transform a vertex position than SSD. We also consider two methods of lighting LOD-simplified models using the original normals: tangent-space normal maps, an existing method that is fast to render but does not capture dynamic structures such as wrinkles; and tangent maps, a novel approach that encodes animation-space tangent vectors into textures, and which captures dynamic structures. We compare the methods both for performance and quality, and find that tangent-space normal maps are at least an order of magnitude faster, while user tests failed to show any perceived difference in quality between them.
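
    Animation space generalises skeletal subspace deformation (SSD, also known as linear blend skinning). The following is a minimal sketch of the standard SSD vertex transform that animation space extends; the weights and bone matrices are illustrative values chosen here, not data from the thesis, and the generalisation itself is not reproduced.

```python
import numpy as np

def ssd_transform(rest_position, bone_matrices, weights):
    """Standard skeletal subspace deformation (linear blend skinning):
    the deformed vertex is a weighted combination of the vertex transformed
    rigidly by each influencing bone, v' = sum_i w_i * B_i * v.

    rest_position : (3,) vertex position in the rest pose
    bone_matrices : (n_bones, 4, 4) current bone transforms (rest -> posed)
    weights       : (n_bones,) blend weights, summing to 1
    """
    v = np.append(rest_position, 1.0)                       # homogeneous coordinates
    blended = sum(w * (B @ v) for w, B in zip(weights, bone_matrices))
    return blended[:3]

# Example: a vertex influenced equally by two bones, one of which rotates
# 90 degrees about the z-axis. Averaging the rigid transforms shortens the
# result, the root of the classic "candy-wrapper" collapse that animation
# space mitigates.
rot_z90 = np.array([[0.0, -1.0, 0.0, 0.0],
                    [1.0,  0.0, 0.0, 0.0],
                    [0.0,  0.0, 1.0, 0.0],
                    [0.0,  0.0, 0.0, 1.0]])
identity = np.eye(4)
print(ssd_transform(np.array([1.0, 0.0, 0.0]),
                    np.array([identity, rot_z90]),
                    np.array([0.5, 0.5])))
```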