
    On the Origin of Polar Coding

    Polar coding was conceived originally as a technique for boosting the cutoff rate of sequential decoding, along the lines of earlier schemes of Pinsker and Massey. The key idea in boosting the cutoff rate is to take a vector channel (either given or artificially built), split it into multiple correlated subchannels, and employ a separate sequential decoder on each subchannel. Polar coding was originally designed to be a low-complexity recursive channel combining and splitting operation of this type, to be used as the inner code in a concatenated scheme with outer convolutional coding and sequential decoding. However, the polar inner code turned out to be so effective that no outer code was actually needed to achieve the original aim of boosting the cutoff rate to channel capacity. This paper explains the cutoff rate considerations that motivated the development of polar coding. © 2015 IEEE
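    The combining-and-splitting idea is easiest to see on a binary erasure channel, where one polarization step maps erasure probability z to 2z - z^2 (the combined, worse subchannel) and z^2 (the split, better subchannel). The sketch below is a standard illustration of that recursion under the erasure-channel assumption, not the paper's historical construction; the function name and parameters are ours.

    ```python
    # Illustration: channel polarization for a binary erasure channel (BEC).
    # One combining/splitting step turns two copies of a BEC with erasure
    # probability z into a "worse" channel (2z - z^2) and a "better"
    # channel (z^2). Recursing n times yields 2**n synthetic subchannels.

    def polarize(z: float, levels: int) -> list[float]:
        """Return erasure probabilities of the 2**levels synthetic channels."""
        channels = [z]
        for _ in range(levels):
            nxt = []
            for c in channels:
                nxt.append(2 * c - c * c)  # combined ("minus") channel
                nxt.append(c * c)          # split ("plus") channel
            channels = nxt
        return channels

    if __name__ == "__main__":
        chans = polarize(0.5, 10)  # BEC(0.5) has capacity 0.5
        good = sum(c < 1e-3 for c in chans) / len(chans)
        print(f"fraction of near-perfect subchannels: {good:.3f}")
        # The fraction of near-perfect subchannels approaches the capacity
        # as the recursion deepens, which is the polarization effect.
    ```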

    The C-value enigma and timing of the Cambrian explosion

    The Cambrian explosion is a grand challenge to science today and calls for multidisciplinary study. The event is generally believed to be the result of genetic innovations, environmental factors and ecological interactions, although the nature and timing of metazoan origins remain contested. The crux of the matter is that an overall roadmap of evolution is missing, one that would reveal the transition in biological complexity and allow the critical role of the Cambrian explosion to be evaluated in the full evolutionary context. Here we calculate the time of the Cambrian explosion with an innovative and accurate "C-value clock"; our result (560 million years ago) fits the fossil record well. We argue that the intrinsic dynamics of genome evolution determined the Cambrian explosion. We have found a general formula for evaluating the genome size of different species, by which major questions of the C-value enigma can be addressed and the evolution of genome size can be illustrated. The Cambrian explosion is essentially a major transition in biological complexity, corresponding to a turning point in genome size evolution. The observed maximum prokaryotic complexity is a relic of the Cambrian explosion, constrained by the maximum information storage capability in the observed universe. Our results open a new prospect for studying metazoan origins and molecular evolution. Comment: 46 pages, 10 figures

    Three phases in the evolution of the standard genetic code: how translation could get started

    A primordial genetic code is proposed, having only four codons assigned: GGC meaning glycine, GAC meaning aspartate/glutamate, GCC meaning alanine-like and GUC meaning valine-like. Pathways of ambiguity reduction enlarged the codon repertoire with CUC meaning leucine, AUC meaning isoleucine, ACC meaning threonine-like and GAG meaning glutamate. The introduction of UNN anticodons, in a subsequent episode of code evolution in which nonsense elimination was the leading theme, superposed a family box structure on the original mirror structure. Finally, growth rate was the leading theme during the remaining repertoire expansion, explaining the ordered phylogenetic pattern of aminoacyl-tRNA synthetases. The special role of natural aptamers in the process is highlighted, and the error robustness characteristics of the code are shown to have evolved by way of a stepwise, restricted enlargement of the tRNA repertoire, instead of by an exhaustive selection process testing myriads of codes.
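    For concreteness, the staged assignments named above can be written down as data. The sketch below merely transcribes the abstract's assignments (the "-like" labels mark still-ambiguous, pre-modern meanings); the function and example string are our own illustration, not the paper's material.

    ```python
    # Staged codon assignments, transcribed from the abstract.
    # Phase 1: the proposed primordial four-codon code.
    primordial = {
        "GGC": "glycine",
        "GAC": "aspartate/glutamate",  # still ambiguous at this stage
        "GCC": "alanine-like",
        "GUC": "valine-like",
    }

    # Phase 2: ambiguity reduction enlarges the repertoire.
    ambiguity_reduction = {
        "CUC": "leucine",
        "AUC": "isoleucine",
        "ACC": "threonine-like",
        "GAG": "glutamate",
    }

    code = primordial | ambiguity_reduction  # state after the first two phases

    def translate(rna: str, table: dict[str, str]) -> list[str]:
        """Read an RNA string codon by codon; unassigned codons read as '?'."""
        return [table.get(rna[i:i + 3], "?") for i in range(0, len(rna) - 2, 3)]

    print(translate("GGCGACGAGCUC", code))
    # ['glycine', 'aspartate/glutamate', 'glutamate', 'leucine']
    ```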

    Scaling Exponent and Moderate Deviations Asymptotics of Polar Codes for the AWGN Channel

    This paper investigates polar codes for the additive white Gaussian noise (AWGN) channel. The scaling exponent $\mu$ of polar codes for a memoryless channel $q_{Y|X}$ with capacity $I(q_{Y|X})$ characterizes the closest gap between the capacity and non-asymptotic achievable rates in the following way: For a fixed $\varepsilon \in (0, 1)$, the gap between the capacity $I(q_{Y|X})$ and the maximum non-asymptotic rate $R_n^*$ achieved by a length-$n$ polar code with average error probability $\varepsilon$ scales as $n^{-1/\mu}$, i.e., $I(q_{Y|X}) - R_n^* = \Theta(n^{-1/\mu})$. It is well known that the scaling exponent $\mu$ for any binary-input memoryless channel (BMC) with $I(q_{Y|X}) \in (0,1)$ is bounded above by $4.714$, which was shown by an explicit construction of polar codes. Our main result shows that $4.714$ remains a valid upper bound on the scaling exponent for the AWGN channel. Our proof technique involves the following two ideas: (i) the capacity of the AWGN channel can be achieved within a gap of $O(n^{-1/\mu}\sqrt{\log n})$ by using an input alphabet consisting of $n$ constellations and restricting the input distribution to be uniform; (ii) the capacity of a multiple access channel (MAC) with an input alphabet consisting of $n$ constellations can be achieved within a gap of $O(n^{-1/\mu}\log n)$ by using a superposition of $\log n$ binary-input polar codes. In addition, we investigate the performance of polar codes in the moderate deviations regime where both the gap to capacity and the error probability vanish as $n$ grows. An explicit construction of polar codes is proposed to obey a certain tradeoff between the gap to capacity and the decay rate of the error probability for the AWGN channel. Comment: 24 pages
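    As a rough numerical reading of the bound (ignoring the constants hidden by the $\Theta(\cdot)$ notation), treating the gap as exactly $n^{-1/\mu}$ with $\mu = 4.714$ shows how quickly the block length must grow as the gap to capacity shrinks. This back-of-the-envelope sketch is ours, not taken from the paper.

    ```python
    # Back-of-the-envelope scaling: if gap ~ n**(-1/mu), then n ~ gap**(-mu).
    # With mu = 4.714, halving the gap to capacity multiplies the required
    # block length by 2**4.714 ~ 26.3. Theta-constants are ignored throughout.

    MU = 4.714  # upper bound on the scaling exponent

    def blocklength_for_gap(gap: float, mu: float = MU) -> float:
        """Block length n such that n**(-1/mu) equals the target gap."""
        return gap ** (-mu)

    for gap in (0.1, 0.05, 0.01):
        print(f"gap {gap:>5}: n ~ {blocklength_for_gap(gap):,.0f}")
    ```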

    Observations of meteoric material and implications for aerosol nucleation in the winter Arctic lower stratosphere derived from in situ particle measurements

    Number concentrations of total and non-volatile aerosol particles with size diameters >0.01 μm as well as particle size distributions (0.4–23 μm diameter) were measured in situ in the Arctic lower stratosphere (10–20.5 km altitude). The measurements were obtained during the campaigns European Polar Stratospheric Cloud and Lee Wave Experiment (EUPLEX) and Envisat-Arctic-Validation (EAV). The campaigns were based in Kiruna, Sweden, and took place from January to March 2003. Measurements were conducted onboard the Russian high-altitude research aircraft Geophysica using the low-pressure Condensation Nucleus Counter COPAS (COndensation PArticle Counter System) and a modified FSSP 300 (Forward Scattering Spectrometer Probe). Around 18–20 km altitude, typical total particle number concentrations n_t range from 10 to 20 cm⁻³ (ambient conditions). Correlations with the trace gases nitrous oxide (N₂O) and trichlorofluoromethane (CFC-11) are discussed. Inside the polar vortex the total number of particles >0.01 μm increases with potential temperature while N₂O decreases, which indicates a source of particles in the polar stratosphere or mesosphere above. A separate channel of the COPAS instrument measures the fraction of aerosol particles that are non-volatile at 250°C. Inside the polar vortex a much higher fraction of particles contained non-volatile residues than outside the vortex (~67% inside, ~24% outside). This is most likely due to a strongly increased fraction of meteoric material in the particles, which is transported downward from the mesosphere inside the polar vortex. The high fraction of non-volatile residual particles therefore gives experimental evidence for downward transport of mesospheric air inside the polar vortex. It is also shown that the fraction of non-volatile residual particles serves directly as a suitable experimental vortex tracer. Nanometer-sized meteoric smoke particles may also serve as nuclei for the condensation of gaseous sulfuric acid and water in the polar vortex, and these additional particles may be responsible for the increase in the observed particle concentration at low N₂O. The number concentrations of particles >0.4 μm measured with the FSSP decrease markedly inside the polar vortex with increasing potential temperature, also a consequence of subsidence of air from higher altitudes inside the vortex. A further focus of the analysis was the particle measurements in the lowermost stratosphere. For the total particle density, relatively high number concentrations of several hundred particles per cm³ were observed at altitudes below ~14 km in several flights. To investigate the origin of these high number concentrations we conducted air mass trajectory calculations and compared the particle measurements with other trace gas observations. The high number concentrations of total particles in the lowermost stratosphere are probably caused by transport of originally tropospheric air from lower latitudes and are potentially influenced by recent particle nucleation.

    The Jackprot Simulation Couples Mutation Rate with Natural Selection to Illustrate How Protein Evolution Is Not Random

    Protein evolution is not a random process. Views which attribute randomness to molecular change, a deleterious nature to single-gene mutations, or insufficient geological time or population size for molecular improvements to occur, or which invoke "design creationism" to account for complexity in molecular structures and biological processes, are unfounded. Scientific evidence suggests that natural selection tinkers with molecular improvements by retaining adaptive peptide sequences. We used slot-machine probabilities and ion channels to show biological directionality in molecular change. Because ion channels reside in the lipid bilayer of cell membranes, their residue locations must be in balance with the membrane's hydrophobic/hydrophilic nature; a selective "pore" for ion passage is located within the hydrophobic region. We contrasted the random generation of DNA sequence for KcsA, a bacterial two-transmembrane-domain (2TM) potassium channel from Streptomyces lividans, with a scenario under selection, the "jackprot," which predicted much faster evolution than by chance. We wrote a computer program in JAVA APPLET version 1.0 and designed an online interface, The Jackprot Simulation http://faculty.rwu.edu/cbai/JackprotSimulation.htm, to model a numerical interaction between mutation rate and natural selection during a scenario of polypeptide evolution. Winning the "jackprot," or highest-fitness complete-peptide sequence, required cumulative smaller "wins" (rewarded by selection) at the first, second, and third positions in each of the 161 KcsA codons ("jackdons" that led to "jackacids" that led to the "jackprot"). The "jackprot" is a didactic tool to demonstrate how mutation rate coupled with natural selection suffices to explain the evolution of specialized proteins, such as the complex six-transmembrane (6TM) domain potassium, sodium, or calcium channels. Ancestral DNA sequences coding for 2TM-like proteins underwent nucleotide "edition" and gene duplications to generate the 6TMs. Ion channels are essential to the physiology of neurons, ganglia, and brains, and were crucial to the evolutionary advent of consciousness. The Jackprot Simulation illustrates in a computer model that evolution is not and cannot be a random process as conceived by design creationists.
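    The slot-machine logic described here is essentially cumulative selection: positions that already match the adaptive sequence are retained while the rest keep mutating. Below is a minimal, hypothetical sketch of that dynamic, with a made-up short target in place of the 161-codon KcsA gene and uniform random draws over the four bases; it mimics the mechanism, not the actual Jackprot Simulation code.

    ```python
    import random

    # Cumulative selection vs. pure chance on a nucleotide target.
    # Hypothetical toy target; the real simulation uses the 161-codon KcsA gene.
    TARGET = "GGCATTCTGGGA"
    BASES = "ACGT"

    def rounds_with_selection(target: str, rng: random.Random) -> int:
        """Mutate unmatched positions each round; selection locks in correct bases."""
        locked = [False] * len(target)
        rounds = 0
        while not all(locked):
            rounds += 1
            for i, want in enumerate(target):
                if not locked[i] and rng.choice(BASES) == want:
                    locked[i] = True  # selection retains the adaptive base
        return rounds

    rng = random.Random(42)
    print("rounds with selection:", rounds_with_selection(TARGET, rng))
    # Pure chance would need ~4**len(TARGET) (~1.7e7 here) full draws on
    # average; cumulative selection finds the target in a handful of rounds.
    ```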

    ProtocadherinX/Y, a Candidate Gene-Pair for Schizophrenia and Schizoaffective Disorder: A DHPLC Investigation of Genomic Sequence

    Protocadherin X and Protocadherin Y (PCDHX and PCDHY) are cell-surface adhesion molecules expressed predominantly in the brain. The PCDHX/Y gene-pair was generated by an X-Y translocation approximately 3 million years ago (MYA) that gave rise to the Homo sapiens-specific region of Xq21.3 and Yp11.2 homology. Genes within this region are expected to code for sexually dimorphic human characteristics, including, for example, cerebral asymmetry, a dimension of variation that has been suggested to be relevant to psychosis. We examined the genomic sequence of PCDHX and PCDHY, in coding and adjacent intronic regions, for differences in patients with schizophrenic or schizoaffective psychosis using denaturing high-performance liquid chromatography (DHPLC). Three coding variants were detected in PCDHX and two in PCDHY. However, neither the coding variants nor the intronic polymorphisms could be related to psychosis within families. Low sequence variation suggests selective pressure against sequence change in modern humans, in contrast to the structural chromosomal and sequence changes, including fixed X-Y differences, that occurred in this region earlier in hominid evolution. Our findings exclude sequence variation in PCDHX/Y as relevant to the aetiology of psychosis. However, we note the unusual status of this region with respect to X-inactivation. Further investigation of the epigenetic control of PCDHX/Y in relation to psychosis is warranted.

    The Alanine World Model for the Development of the Amino Acid Repertoire in Protein Biosynthesis

    A central question in the evolution of the modern translation machinery is the origin and chemical etiology of the amino acids prescribed by the genetic code. The RNA World hypothesis postulates that templated protein synthesis emerged in the transition from the RNA World to the Protein World. The sequence of these events and the principles behind the recruitment of amino acids into this process remain elusive. Here we describe a model for this process, following the scheme previously proposed by Hartman and Smith, which suggests a gradual expansion of the coding space in the order GC–GCA–GCAU. We point out a correlation of this scheme with the hierarchy of protein folding. The model follows the sequence of steps in the process of amino acid recruitment and fits well with the co-evolution and coenzyme handle theories. While the starting set (GC-phase) was responsible for the nucleotide biosynthesis processes, in the second phase alanine-based amino acids (GCA-phase) were recruited from the core metabolism, thereby providing a standard secondary structure, the α-helix. In the final phase (GCAU-phase), amino acids were appended to the already existing architecture, enabling tertiary folds and membrane interactions. The whole scheme strongly indicates that the choice of the alanine core was made in the GCA-phase, while glycine and proline remain rudiments from the GC-phase. We suggest that the Protein World should rather be considered the Alanine World, as it predominantly relies on alanine as the core chemical scaffold.
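    The GC–GCA–GCAU scheme has a simple combinatorial reading: each added base enlarges the codon space from 2³ = 8 to 3³ = 27 to 4³ = 64 possible triplets. The sketch below enumerates those spaces; it is our illustration of the scheme's arithmetic, not code from the paper.

    ```python
    from itertools import product

    # Codon spaces available in each phase of the GC-GCA-GCAU expansion scheme.
    PHASES = {
        "GC-phase": "GC",      # 2**3 = 8 possible codons
        "GCA-phase": "GCA",    # 3**3 = 27
        "GCAU-phase": "GCAU",  # 4**3 = 64 (the modern codon space)
    }

    for name, alphabet in PHASES.items():
        codons = ["".join(c) for c in product(alphabet, repeat=3)]
        print(f"{name}: {len(codons)} codons, e.g. {codons[:4]}")
    ```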

    Lossy Compression with Privacy Constraints: Optimality of Polar Codes

    A lossy source coding problem with a privacy constraint is studied in which two correlated discrete sources $X$ and $Y$ are compressed into a reconstruction $\hat{X}$ with some prescribed distortion $D$. In addition, a privacy constraint is specified as the equivocation between the lossy reconstruction $\hat{X}$ and $Y$. This models the situation where a certain amount of source information from one user is provided as utility (given by the fidelity of its reconstruction) to another user or the public, while some other correlated part of the source information $Y$ must be kept private. In this work, we show that polar codes are able, possibly with the aid of time sharing, to achieve any point in the optimal rate-distortion-equivocation region identified by Yamamoto, thus providing a constructive scheme that obtains the optimal tradeoff between utility and privacy in this framework. Comment: Submitted for publication
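    The role of time sharing mentioned above is the standard one: running one achievable operating point for a fraction λ of the block and another for the remaining 1-λ yields any convex combination of their rate, distortion, and equivocation. A tiny sketch of that arithmetic, with made-up numeric points rather than values from the paper:

    ```python
    # Time sharing between two achievable (rate, distortion, equivocation)
    # operating points achieves their convex combinations.
    # The numeric points below are invented for illustration only.

    def time_share(p1, p2, lam):
        """Operate at p1 for a fraction lam of the block, at p2 for the rest."""
        return tuple(lam * a + (1 - lam) * b for a, b in zip(p1, p2))

    point_a = (0.80, 0.05, 0.30)  # (rate, distortion, equivocation)
    point_b = (0.40, 0.20, 0.70)

    for lam in (0.0, 0.25, 0.5, 0.75, 1.0):
        r, d, e = time_share(point_a, point_b, lam)
        print(f"lambda={lam:.2f}: rate={r:.3f}, distortion={d:.4f}, equivocation={e:.3f}")
    ```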