
    Fast and accurate algorithm for the computation of complex linear canonical transforms

    A fast and accurate algorithm is developed for the numerical computation of the family of complex linear canonical transforms (CLCTs), which represent the input-output relationship of complex quadratic-phase systems. Allowing the linear canonical transform parameters to be complex numbers makes it possible to represent paraxial optical systems that involve complex parameters. These include lossy systems such as Gaussian apertures, Gaussian ducts, or complex graded-index media, as well as lossless thin lenses and sections of free space and any arbitrary combinations of them. Complex-ordered fractional Fourier transforms (CFRTs) are a special case of CLCTs, and therefore a fast and accurate algorithm to compute CFRTs is included as a special case of the presented algorithm. The algorithm is based on decomposition of an arbitrary CLCT matrix into real and complex chirp multiplications and Fourier transforms. The samples of the output are obtained from the samples of the input in ∼N log N time, where N is the number of input samples. A space-bandwidth product tracking formalism is developed to ensure that the number of samples is information-theoretically sufficient to reconstruct the continuous transform, but not unnecessarily redundant. © 2010 Optical Society of America
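The chirp-FFT-chirp idea underlying such algorithms can be illustrated with a minimal numerical sketch. This is not the paper's algorithm (which decomposes an arbitrary CLCT matrix and tracks the space-bandwidth product); it is only the textbook single-FFT factorization of an LCT with matrix [[a, b], [c, d]], ad − bc = 1 and b ≠ 0, on centred grids of spacing 1/√N, with function names of our own choosing:

```python
import numpy as np

def lct_chirp_fft_chirp(f, a, b, c, d):
    """Sketch: evaluate the linear canonical transform with matrix
    [[a, b], [c, d]] (ad - bc = 1, b != 0) via the decomposition
        F(u) = sqrt(1/(i b)) e^{i pi (d/b) u^2}
               * Integral e^{-2 pi i u v / b} e^{i pi (a/b) v^2} f(v) dv,
    discretised so the kernel reduces to one centred N-point DFT.
    N must be even here."""
    N = len(f)
    n = np.arange(N) - N // 2
    dv = 1.0 / np.sqrt(N)                 # input sample spacing
    du = b / (N * dv)                     # output spacing making the
    v, u = n * dv, n * du                 # kernel an exact DFT
    g = np.exp(1j * np.pi * (a / b) * v**2) * f            # input chirp
    G = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(g)))   # centred DFT
    pref = np.sqrt(1.0 / (1j * b)) * dv                    # integral weight
    return pref * np.exp(1j * np.pi * (d / b) * u**2) * G, u
```

Each step is a chirp multiplication or an FFT, so the total cost is O(N log N). For (a, b, c, d) = (0, 1, −1, 0) this reduces to the ordinary Fourier transform up to a constant phase, which gives a quick sanity check on a Gaussian (its own Fourier transform).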

    Influence of Gauss and Gauss-Lobatto quadrature rules on the accuracy of a quadrilateral finite element method in the time domain

    In this paper, we examine the influence of numerical integration on finite element methods using quadrilateral or hexahedral meshes in the time domain. We pay special attention to the use of Gauss-Lobatto points to perform mass lumping for any element order. We provide some theoretical results through several error estimates, which are completed by various numerical experiments.
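The mass-lumping mechanism mentioned above can be sketched in a few lines. This is a generic 1D illustration with our own function names, not the paper's finite element code: when the Gauss-Lobatto-Legendre (GLL) points serve both as the nodes of the Lagrange basis and as the quadrature points, l_i(x_q) = δ_iq, so the quadrature mass matrix is diagonal for any element order p.

```python
import numpy as np
from numpy.polynomial import legendre as leg

def gll_nodes_weights(p):
    # Gauss-Lobatto-Legendre rule on [-1, 1] for order p (p + 1 points):
    # the endpoints plus the roots of P_p'(x), with weights
    # w_i = 2 / (p (p + 1) P_p(x_i)^2).  Exact up to degree 2p - 1.
    Pp = leg.Legendre.basis(p)
    x = np.sort(np.concatenate(([-1.0], Pp.deriv().roots().real, [1.0])))
    w = 2.0 / (p * (p + 1) * Pp(x) ** 2)
    return x, w

def mass_matrix_gll(p):
    # M_ij = sum_q w_q l_i(x_q) l_j(x_q).  With the nodal Lagrange basis
    # evaluated at its own nodes, l_i(x_q) = delta_iq, so M = diag(w):
    # GLL integration lumps the mass matrix with no extra approximation step.
    x, w = gll_nodes_weights(p)
    L = np.eye(p + 1)                 # l_i(x_q) on the shared node set
    return L.T @ np.diag(w) @ L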

    Analysis and Error Performances of Convolutional Doubly Orthogonal Codes with Non-Binary Alphabets

    ABSTRACT Recently, the self-orthogonal codes due to Massey were adapted to the realm of modern decoding techniques. Specifically, the self-orthogonal characteristics of this set of codes were extended to doubly orthogonal conditions in order to accommodate modern iterative decoding algorithms, giving rise to the convolutional doubly orthogonal (CDO) codes.
In addition to the belief propagation (BP) algorithm, the CDO codes also lend themselves to iterative threshold decoding, developed from the threshold decoding algorithm introduced by Massey, which offers a lower-complexity alternative to BP decoding. The convolutional doubly orthogonal codes fall into two subgroups: non-recursive CDO codes, built on shift-register structures without feedback, and recursive CDO (RCDO) codes, constructed from shift registers with feedback connections from the outputs. The non-recursive CDO codes demonstrate competitive error performance under iterative threshold decoding in the moderate Eb/N0 region, providing another set of low-density parity-check convolutional (LDPCC) codes with outstanding error performance. The recursive CDO codes, on the other hand, achieve exceptional error performance under BP decoding, with waterfall behaviour close to the Shannon limit. Additionally, in the study of LDPC codes, the use of finite fields GF(q) with q>2 as code alphabets has been shown to improve error performance under the BP algorithm, giving rise to the q-ary LDPC codes. Inspired by the success of GF(q) alphabets in LDPC codes, we focus our attention on CDO codes whose alphabets are generalized to finite fields; in particular, we investigate the effects of this generalization on the error performance of the CDO codes and its underlying causes. In this thesis, both the recursive and non-recursive CDO codes are extended to the finite fields GF(q) with q>2, referred to as q-ary CDO codes. Their error performance is examined through simulations using both the iterative threshold decoding and the BP decoding algorithms.
While the threshold decoding algorithm suffers some performance loss compared to the BP algorithm, it greatly reduces decoding complexity, mainly owing to the fast convergence of the messages. The q-ary CDO codes demonstrate superior error performance compared to their binary counterparts under both iterative threshold decoding and BP decoding, most pronounced in the high Eb/N0 region; these improvements, however, come at the cost of increased decoding complexity, evaluated through the number of operations required in the decoding process. To facilitate the implementation of q-ary CDO codes, we examine the effect of quantized message alphabets in the decoding process on the error performance of the codes. The decoding process is shown to require a finer quantization than in the binary case.
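As a concrete reminder of the binary starting point that such work generalizes, here is a minimal sketch of a rate-1/2 systematic convolutional self-orthogonal code with one-step majority-logic (threshold) decoding in the spirit of Massey. The tap set {0, 1, 4, 6} has pairwise-distinct differences, which is what makes the syndrome checks on each information bit orthogonal (each other bit touches at most one check). This is an illustrative toy of our own, not the thesis's CDO, RCDO, or q-ary construction:

```python
import numpy as np

TAPS = [0, 1, 4, 6]   # pairwise-distinct differences -> self-orthogonal

def encode(u):
    # systematic rate-1/2 encoder: parity p_t = XOR of u[t - j], j in TAPS
    N = len(u)
    p = np.zeros(N, dtype=int)
    for j in TAPS:
        p[j:] ^= u[:N - j] if j else u
    return p

def majority_decode(u_hat, p_hat):
    # one-step threshold decoding: recompute parity from the received info
    # bits, form syndromes, and flip bit i when a majority of its
    # orthogonal checks {s[i + j] : j in TAPS} fire
    N = len(u_hat)
    s = p_hat ^ encode(u_hat)          # syndrome sequence
    u_dec = u_hat.copy()
    for i in range(N):
        checks = [s[i + j] for j in TAPS if i + j < N]
        if 2 * sum(checks) > len(TAPS):
            u_dec[i] ^= 1
    return u_dec
```

With a single information-bit error, all four orthogonal checks on that bit fire while every other bit sees at most one, so the majority rule corrects it without iteration.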

    Partial Key Exposure in Ring-LWE-Based Cryptosystems: Attacks and Resilience

    We initiate the study of partial key exposure in ring-LWE-based cryptosystems. Specifically, we
    - Introduce the search and decision Leaky-RLWE assumptions (Leaky-SRLWE, Leaky-DRLWE), to formalize the hardness of search/decision RLWE under leakage of some fraction of coordinates of the NTT transform of the RLWE secret and/or error.
    - Present and implement an efficient key exposure attack that, given a certain 1/4-fraction of the coordinates of the NTT transform of the RLWE secret, along with RLWE instances, recovers the full RLWE secret for standard parameter settings.
    - Present a search-to-decision reduction for Leaky-RLWE for certain types of key exposure.
    - Analyze the security of NewHope key exchange under partial key exposure of a 1/8-fraction of the secrets and error. We show that, assuming that Leaky-DRLWE is hard for these parameters, the shared key v (which is then hashed using a random oracle) is computationally indistinguishable from a random variable with average min-entropy 238, conditioned on transcript and leakage, whereas without leakage the min-entropy is 256.
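For orientation, the NTT referred to above is an exact Fourier transform over Z_q, so each leaked NTT coordinate of the secret amounts to one Z_q-linear equation in its coefficients. A naive negacyclic NTT and its inverse can be written in a few lines; the parameters below (q = 17, n = 8, ψ = 3 a primitive 2n-th root of unity mod q) are toy values chosen for readability, not a real RLWE parameter set:

```python
Q, N, PSI = 17, 8, 3   # toy modulus, dimension, primitive 2N-th root mod Q

def ntt(s):
    # negacyclic NTT over Z_Q: shat_j = sum_i s_i * PSI^(i*(2j+1)) mod Q;
    # each output coordinate is a linear functional of the secret s
    return [sum(s[i] * pow(PSI, i * (2 * j + 1), Q) for i in range(N)) % Q
            for j in range(N)]

def intt(shat):
    # inverse: s_i = N^{-1} * sum_j shat_j * PSI^{-i*(2j+1)} mod Q
    inv_n, inv_psi = pow(N, Q - 2, Q), pow(PSI, Q - 2, Q)
    return [inv_n * sum(shat[j] * pow(inv_psi, i * (2 * j + 1), Q)
                        for j in range(N)) % Q
            for i in range(N)]
```

Knowing a subset of the ntt(s) values therefore pins the secret to an affine subspace of Z_Q^N, which is the structure the partial-exposure attacks exploit.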

    Extending Velocity Channel Analysis for Studying Turbulence Anisotropies

    We extend the velocity channel analysis (VCA), introduced by Lazarian & Pogosyan, of the intensity fluctuations in the velocity slices of position-position-velocity (PPV) spectroscopic data from Doppler broadened lines to study statistical anisotropy of the underlying velocity and density that arises in a turbulent medium from the presence of magnetic field. In particular, we study analytically how the anisotropy of the intensity correlation in the channel maps changes with the thickness of velocity channels. In agreement with the earlier VCA studies we find that the anisotropy in the thick channels reflects the anisotropy of the density field, while the relative contribution of density and velocity fluctuations to the thin velocity channels depends on the density spectral slope. We show that the anisotropies arising from Alfvén, slow and fast magnetohydrodynamical modes are different; in particular, the anisotropy in PPV created by fast modes is opposite to that created by Alfvén and slow modes, and this can be used to separate their contributions. We successfully compare our results with the recent numerical study of the PPV anisotropies measured with synthetic observations. We also extend our study to the medium with self-absorption as well as to the case of absorption lines. In addition, we demonstrate how the studies of anisotropy can be performed using interferometers. Comment: 36 pages, 16 figures, accepted to MNRAS, minor changes to match the accepted version
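The basic measurement discussed here can be mimicked on synthetic data: integrate a PPV cube over a velocity slab of chosen thickness to form a channel map, then compare the map's correlation along and across an axis. The sketch below uses a crude lag-correlation statistic of our own, not the paper's analytical formalism:

```python
import numpy as np

def channel_map(ppv, v0, width):
    # intensity in one velocity channel: integrate the PPV cube
    # (axes: y, x, velocity) over the slab [v0, v0 + width)
    return ppv[:, :, v0:v0 + width].sum(axis=2)

def lag_correlation(img, lag):
    # normalized autocorrelation at `lag` pixels along each image axis,
    # computed via the Wiener-Khinchin theorem
    f = img - img.mean()
    ac = np.fft.ifft2(np.abs(np.fft.fft2(f)) ** 2).real
    return ac[lag, 0] / ac[0, 0], ac[0, lag] / ac[0, 0]   # (along y, along x)
```

A map that is smoother along one direction (as in a magnetized turbulent medium) shows a larger correlation at fixed lag along that direction, which is the anisotropy signature the analysis quantifies.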

    Field theories for stochastic processes

    This thesis is a collection of collaborative research work which uses field-theoretic techniques to approach three different areas of stochastic dynamics: branching processes, first-passage times of processes that are subject to both white and coloured noise, and numerical and analytical aspects of first-passage times in fractional Brownian motion. Chapter 1 (joint work with Rosalba Garcia Millan, Johannes Pausch, and Gunnar Pruessner, appeared in Phys. Rev. E 98 (6):062107) contains an analysis of non-spatial branching processes with arbitrary offspring distribution. Here our focus lies on the statistics of the number of particles in the system at any given time. We calculate a host of observables using Doi-Peliti field theory and find that close to criticality these observables no longer depend on the details of the offspring distribution and are thus universal. In Chapter 2 (joint work with Ignacio Bordeu, Saoirse Amarteifio, Rosalba Garcia Millan, Nanxin Wei, and Gunnar Pruessner, appeared in Sci. Rep. 9:15590) we study the number of sites visited by a branching random walk on general graphs. To do so, we introduce a field-theoretic tracing mechanism which keeps track of all already visited sites. We find the scaling laws of the moments of the distribution near the critical point. Chapter 3 (joint work with Gunnar Pruessner and Guillaume Salbreux, submitted, arXiv:2006.00116) provides an analysis of the first-passage time problem for stochastic processes subject to white and coloured noise. By way of a perturbation theory, I give a systematic and controlled expansion of the moment generating function of first-passage times. In Chapter 4, we revisit the tracing mechanism introduced earlier and use it to characterise three different extreme-value observables: first-passage times, running maxima, and mean volume explored. By formulating these in field-theoretic language, we are able to derive new results for a class of non-Markovian stochastic processes.
    Chapters 5 and 6 are concerned with the first-passage time distribution of fractional Brownian motion. Chapter 5 (joint work with Kay Wiese, appeared in Phys. Rev. E 101 (4):043312) introduces a new algorithm to sample it efficiently. Chapter 6 (joint work with Maxence Arutkin and Kay Wiese, submitted, arXiv:1908.10801) gives a field-theoretically obtained perturbative result for the first-passage time distribution in the presence of linear and non-linear drift.
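The central object of the first chapter (particle-number statistics of a non-spatial branching process with an arbitrary offspring distribution) can be illustrated with a plain Monte Carlo toy; this is a generic Galton-Watson simulation of our own, not the Doi-Peliti field-theory calculation itself:

```python
import random

def galton_watson(n_gen, offspring, seed=0):
    # evolve a branching process for n_gen generations from one particle;
    # `offspring(rng)` samples the number of children of a single particle
    rng = random.Random(seed)
    pop, history = 1, [1]
    for _ in range(n_gen):
        pop = sum(offspring(rng) for _ in range(pop))
        history.append(pop)
    return history
```

At criticality (mean offspring number one, e.g. 0 or 2 children with probability 1/2 each), the mean population stays constant while the survival probability decays, which is the regime in which observables become independent of the details of the offspring distribution.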
