    Information Transmission using the Nonlinear Fourier Transform, Part I: Mathematical Tools

    The nonlinear Fourier transform (NFT), a powerful tool in soliton theory and exactly solvable models, is a method for solving integrable partial differential equations governing wave propagation in certain nonlinear media. The NFT decorrelates signal degrees of freedom in such models, in much the same way that the Fourier transform does for linear systems. In this three-part series of papers, this observation is exploited for data transmission over integrable channels such as optical fibers, where pulse propagation is governed by the nonlinear Schrödinger equation. In this transmission scheme, which can be viewed as a nonlinear analogue of orthogonal frequency-division multiplexing commonly used in linear channels, information is encoded in the nonlinear frequencies and their spectral amplitudes. Unlike most other fiber-optic transmission schemes, this technique deals with both dispersion and nonlinearity directly and unconditionally, without the need for dispersion or nonlinearity compensation methods. This first paper explains the mathematical tools that underlie the method.
    Comment: This version contains minor updates of IEEE Transactions on Information Theory, vol. 60, no. 7, pp. 4312--4328, July 201
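    The nonlinear spectrum mentioned above is obtained from a scattering problem associated with the nonlinear Schrödinger equation (the Zakharov-Shabat system). As a hedged, illustrative sketch only, and not the paper's own implementation, the scattering coefficients a(λ) and b(λ) of a sampled signal can be estimated with a standard piecewise-constant transfer-matrix scheme; the function name `nft_ab` and the discretization choices below are assumptions of this sketch:

```python
import numpy as np

def nft_ab(q, h, lam):
    """Estimate the Zakharov-Shabat scattering coefficients a(lam), b(lam)
    for a signal q sampled on a uniform grid of spacing h, centered at 0,
    using a piecewise-constant transfer-matrix (layer-peeling) scheme."""
    N = len(q)
    x0, x1 = -N * h / 2, N * h / 2
    # Initial condition: the Jost solution behaving as [e^{-i lam x}, 0] at x -> -inf.
    v = np.array([np.exp(-1j * lam * x0), 0.0], dtype=complex)
    for qn in q:
        # On each interval q is held constant, so the exact propagator is
        # exp(h P) with P = [[-i lam, q], [-conj(q), i lam]]; since
        # P^2 = -(lam^2 + |q|^2) I, the exponential has a closed form.
        k = np.sqrt(lam**2 + abs(qn) ** 2 + 0j)
        P = np.array([[-1j * lam, qn], [-np.conj(qn), 1j * lam]])
        if abs(k) < 1e-12:
            T = np.eye(2) + h * P           # small-k limit of the closed form
        else:
            T = np.cos(h * k) * np.eye(2) + (np.sin(h * k) / k) * P
        v = T @ v
    a = v[0] * np.exp(1j * lam * x1)
    b = v[1] * np.exp(-1j * lam * x1)
    return a, b
```

Two quick sanity checks: the zero signal gives a(λ) = 1, and for real λ each transfer matrix is unitary, so |a|² + |b|² = 1.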

    Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)

    The implicit objective of the biennial "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST) is to foster collaboration between international scientific teams by disseminating ideas through both specific oral/poster presentations and free discussions. For its second edition, the iTWIST workshop took place in the medieval and picturesque town of Namur in Belgium, from Wednesday August 27th till Friday August 29th, 2014. The workshop was conveniently located in "The Arsenal" building, within walking distance of both hotels and the town center. iTWIST'14 gathered about 70 international participants and featured 9 invited talks, 10 oral presentations, and 14 posters on the following themes, all related to the theory, application and generalization of the "sparsity paradigm": Sparsity-driven data sensing and processing; Union of low-dimensional subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph sensing/processing; Blind inverse problems and dictionary learning; Sparsity and computational neuroscience; Information theory, geometry and randomness; Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?; Sparse machine learning and inference.
    Comment: 69 pages, 24 extended abstracts, iTWIST'14 website: http://sites.google.com/site/itwist1

    Embedding classical dynamics in a quantum computer

    We develop a framework for simulating measure-preserving, ergodic dynamical systems on a quantum computer. Our approach provides a new operator-theoretic representation of classical dynamics by combining ergodic theory with quantum information science. The resulting quantum embedding of classical dynamics (QECD) enables efficient simulation of spaces of classical observables with exponentially large dimension using a quadratic number of quantum gates. The QECD framework is based on a quantum feature map for representing classical states by density operators on a reproducing kernel Hilbert space H, and an embedding of classical observables into self-adjoint operators on H. In this scheme, quantum states and observables evolve unitarily under the lifted action of Koopman evolution operators of the classical system. Moreover, by virtue of the reproducing property of H, the quantum system is pointwise-consistent with the underlying classical dynamics. To achieve an exponential quantum computational advantage, we project the state of the quantum system to a density matrix on a 2^n-dimensional tensor product Hilbert space associated with n qubits. By employing discrete Fourier-Walsh transforms, the evolution operator of the finite-dimensional quantum system is factorized into tensor product form, enabling implementation through a quantum circuit of size O(n). Furthermore, the circuit features a state preparation stage, also of size O(n), and a quantum Fourier transform stage of size O(n^2), which makes predictions of observables possible by measurement in the standard computational basis. We prove theoretical convergence results for these predictions as n → ∞. We present simulated quantum circuit experiments in Qiskit Aer, as well as actual experiments on the IBM Quantum System One.
    Comment: 42 pages, 9 figure
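    A toy illustration of why the tensor-product factorization above yields small circuits (this is a simplified sketch under an assumption of this note, not the paper's construction: a diagonal evolution operator whose phase is a sum of single-bit Walsh terms; the function names are illustrative): such a 2^n-dimensional diagonal unitary splits exactly into a Kronecker product of n one-qubit phase gates, i.e. O(n) gates instead of 2^n diagonal entries.

```python
import numpy as np
from functools import reduce

def diagonal_unitary_factorized(c):
    """If the phase is a sum of single-bit (degree-1 Walsh) terms,
    phi(s) = sum_j c[j] * s_j over bitstrings s in {0,1}^n, the diagonal
    unitary diag(exp(i phi(s))) is a Kronecker product of n phase gates."""
    gates = [np.diag([1.0, np.exp(1j * cj)]) for cj in c]
    return reduce(np.kron, gates)          # qubit 0 is the most significant

def diagonal_unitary_dense(c):
    """Reference construction: build the full 2^n-dimensional diagonal directly."""
    n = len(c)
    phases = []
    for s in range(2 ** n):
        bits = [(s >> (n - 1 - j)) & 1 for j in range(n)]
        phases.append(np.exp(1j * sum(cj * bj for cj, bj in zip(c, bits))))
    return np.diag(phases)
```

The paper's factorization is more general (it applies Fourier-Walsh analysis to the generator of the Koopman evolution), but the degree-1 case shows the mechanism in full.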

    Decomposition of Gaussian processes, and factorization of positive definite kernels

    We establish a duality for two factorization questions, one for general positive definite (p.d.) kernels K, and the other for Gaussian processes, say V. The latter notion, for Gaussian processes, is stated via Itô integration. Our approach to factorization for p.d. kernels is intuitively motivated by matrix factorizations, but in infinite dimensions, subtle measure-theoretic issues must be addressed. Consider a given p.d. kernel K, presented as a covariance kernel for a Gaussian process V. We then give an explicit duality for these two seemingly different notions of factorization, for the p.d. kernel K versus the Gaussian process V. Our result is in the form of an explicit correspondence: the analytic data which determine the variety of factorizations for K are exactly the same as those which yield factorizations for V. Examples and applications are included: point processes, sampling schemes, constructive discretization, graph Laplacians, and boundary-value problems.
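    The matrix-factorization intuition the abstract mentions can be made concrete in a minimal finite-dimensional sketch (all names below are illustrative, and the infinite-dimensional measure-theoretic subtleties disappear here): a factorization K = L Lᵀ of a p.d. covariance matrix corresponds to a representation V = Lξ of the Gaussian process, with ξ a vector of i.i.d. standard normals, so that Cov(V) = L E[ξξᵀ] Lᵀ = K.

```python
import numpy as np

# Finite-dimensional analogue of the kernel/process duality:
# factoring the p.d. kernel matrix K as L @ L.T yields a representation
# V = L @ xi of the Gaussian process with covariance K.
t = np.linspace(0.02, 1.0, 50)
K = np.minimum.outer(t, t)           # Brownian-motion covariance min(s, t)
L = np.linalg.cholesky(K)            # one factorization of K

rng = np.random.default_rng(0)
xi = rng.standard_normal(len(t))     # i.i.d. N(0, 1) "innovations"
V = L @ xi                           # one sample path with Cov(V) = K
```

Any other factor M with M Mᵀ = K (e.g. from an eigendecomposition) gives an equally valid representation M ξ of the same process, which is the finite-dimensional shadow of the correspondence in the paper.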

    Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions

    Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed, either explicitly or implicitly, to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, robustness, and/or speed. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the k dominant components of the singular value decomposition of an m × n matrix. (i) For a dense input matrix, randomized algorithms require O(mn log(k)) floating-point operations (flops) in contrast to O(mnk) for classical algorithms. (ii) For a sparse input matrix, the flop count matches classical Krylov subspace methods, but the randomized approach is more robust and can easily be reorganized to exploit multiprocessor architectures. (iii) For a matrix that is too large to fit in fast memory, the randomized techniques require only a constant number of passes over the data, as opposed to O(k) passes for classical algorithms. In fact, it is sometimes possible to perform matrix approximation with a single pass over the data.
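    The two-stage scheme described above, random sampling to find a subspace, then deterministic factorization of the compressed matrix, can be sketched in a few lines of NumPy. This is a hedged prototype: the function name, parameter names, and the oversampling default are choices of this sketch, not the paper's reference implementation.

```python
import numpy as np

def randomized_svd(A, k, oversample=10, seed=None):
    """Prototype randomized SVD: a random test matrix samples the range of A,
    A is compressed to that subspace, and the small compressed matrix is
    factorized deterministically to give a rank-k approximation."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # Stage A: randomized range finder. Oversampling beyond k improves the
    # probability that the sampled subspace captures the dominant action of A.
    Omega = rng.standard_normal((n, k + oversample))
    Q, _ = np.linalg.qr(A @ Omega)       # orthonormal basis for the sample range
    # Stage B: deterministic factorization of the small matrix B = Q^T A.
    B = Q.T @ A
    Uh, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Uh
    return U[:, :k], s[:k], Vt[:k]
```

When A has exact rank k, the sampled subspace contains the range of A with probability one, and the reconstruction (U * s) @ Vt recovers A to machine precision; for matrices with slowly decaying spectra, the survey's power-iteration refinement improves accuracy.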