Information Transmission using the Nonlinear Fourier Transform, Part I: Mathematical Tools
The nonlinear Fourier transform (NFT), a powerful tool in soliton theory and
exactly solvable models, is a method for solving integrable partial
differential equations governing wave propagation in certain nonlinear media.
The NFT decorrelates signal degrees-of-freedom in such models, in much the same
way that the Fourier transform does for linear systems. In this three-part
series of papers, this observation is exploited for data transmission over
integrable channels such as optical fibers, where pulse propagation is governed
by the nonlinear Schr\"odinger equation. In this transmission scheme, which can
be viewed as a nonlinear analogue of orthogonal frequency-division multiplexing
commonly used in linear channels, information is encoded in the nonlinear
frequencies and their spectral amplitudes. Unlike most other fiber-optic
transmission schemes, this technique deals with both dispersion and
nonlinearity directly and unconditionally without the need for dispersion or
nonlinearity compensation methods. This first paper explains the mathematical
tools that underlie the method.
Comment: This version contains minor updates of IEEE Transactions on Information Theory, vol. 60, no. 7, pp. 4312--4328, July 2014.
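As a concrete illustration of the forward transform this series builds on, here is a minimal numerical sketch (not the paper's algorithm; the function name, test signal, and grid are illustrative choices). In one common convention, the nonlinear Fourier spectrum of a pulse q(t) is read off from the Zakharov-Shabat scattering problem, discretized below with a piecewise-constant transfer-matrix scheme. For the Satsuma-Yajima pulse q(t) = sech(t), the signal is reflectionless and carries a single soliton eigenvalue at lambda = i/2, where the scattering coefficient a(lambda) vanishes.

```python
import numpy as np
from scipy.linalg import expm

def zs_scattering_a(q, t, lam):
    """Scattering coefficient a(lambda) of the Zakharov-Shabat problem
    dv/dt = [[-i*lam, q], [-conj(q), i*lam]] v (focusing NLS convention),
    computed with a piecewise-constant transfer-matrix scheme."""
    dt = t[1] - t[0]
    # Jost boundary condition v -> (e^{-i lam t}, 0) as t -> -infinity
    v = np.array([np.exp(-1j * lam * t[0]), 0.0], dtype=complex)
    for qk in q:
        M = np.array([[-1j * lam, qk], [-np.conj(qk), 1j * lam]])
        v = expm(M * dt) @ v                    # exact step for frozen q
    return v[0] * np.exp(1j * lam * (t[-1] + dt))

# Satsuma-Yajima test signal q(t) = sech(t): reflectionless, with one
# discrete eigenvalue (soliton) at lam = i/2.
t = np.linspace(-15, 15, 2000)
q = 1.0 / np.cosh(t)
a_cont = zs_scattering_a(q, t, 0.5)    # real axis: |a| should be close to 1
a_disc = zs_scattering_a(q, t, 0.5j)   # a nearly vanishes at the eigenvalue
```

Scanning lam over the upper half-plane for zeros of a(lambda) is how the discrete (solitonic) part of the nonlinear spectrum is located in practice.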
Multi-way Graph Signal Processing on Tensors: Integrative analysis of irregular geometries
Graph signal processing (GSP) is an important methodology for studying data
residing on irregular structures. As acquired data is increasingly taking the
form of multi-way tensors, new signal processing tools are needed to maximally
utilize the multi-way structure within the data. In this paper, we review
modern signal processing frameworks generalizing GSP to multi-way data,
starting from graph signals coupled to familiar regular axes such as time in
sensor networks, and then extending to general graphs across all tensor modes.
This widely applicable paradigm motivates reformulating and improving upon
classical problems and approaches to creatively address the challenges in
tensor-based data. We synthesize common themes arising from current efforts to
combine GSP with tensor analysis and highlight future directions in extending
GSP to the multi-way paradigm.
Comment: In review for IEEE Signal Processing Magazine.
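A minimal sketch of the idea of coupling graph structure to every tensor mode (the helper names `multiway_gft` and `path_laplacian` are illustrative, not from the paper): apply the graph Fourier transform of each mode's Laplacian along the corresponding axis of the tensor, with a path graph standing in for a regular time-like axis.

```python
import numpy as np

def path_laplacian(n):
    """Combinatorial Laplacian of a path graph (a regular, time-like axis)."""
    A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    return np.diag(A.sum(axis=1)) - A

def multiway_gft(X, laplacians):
    """Mode-wise graph Fourier transform: project each tensor mode onto
    the eigenbasis of that mode's (symmetric) graph Laplacian."""
    for k, L in enumerate(laplacians):
        _, U = np.linalg.eigh(L)          # orthonormal GFT basis for mode k
        # mode-k product with U.T, restoring the axis order afterwards
        X = np.moveaxis(np.tensordot(U.T, X, axes=(1, k)), 0, k)
    return X

# A 3-way tensor whose modes each carry (here, toy path-graph) structure.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 5, 6))
Ls = [path_laplacian(m) for m in X.shape]
Xhat = multiway_gft(X, Ls)
```

Because each mode's basis is orthonormal, the transform preserves the tensor's energy, the multi-way analogue of Parseval's relation for the classical GFT.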
Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)
The implicit objective of the biennial "international Traveling Workshop on
Interactions between Sparse models and Technology" (iTWIST) is to foster
collaboration between international scientific teams by disseminating ideas
through both specific oral/poster presentations and free discussions. For its
second edition, the iTWIST workshop took place in the medieval and picturesque
town of Namur in Belgium, from Wednesday, August 27th to Friday, August 29th,
2014. The workshop was conveniently located in "The Arsenal" building, within
walking distance of both hotels and the town center. iTWIST'14 gathered about
70 international participants and featured 9 invited talks, 10 oral
presentations, and 14 posters on the following themes, all related to the
theory, application, and generalization of the "sparsity paradigm":
Sparsity-driven data sensing and processing; Union of low dimensional
subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph
sensing/processing; Blind inverse problems and dictionary learning; Sparsity
and computational neuroscience; Information theory, geometry and randomness;
Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?;
Sparse machine learning and inference.
Comment: 69 pages, 24 extended abstracts, iTWIST'14 website:
http://sites.google.com/site/itwist1
Embedding classical dynamics in a quantum computer
We develop a framework for simulating measure-preserving, ergodic dynamical
systems on a quantum computer. Our approach provides a new operator-theoretic
representation of classical dynamics by combining ergodic theory with quantum
information science. The resulting quantum embedding of classical dynamics
(QECD) enables efficient simulation of spaces of classical observables with
exponentially large dimension using a quadratic number of quantum gates. The
QECD framework is based on a quantum feature map for representing classical
states by density operators on a reproducing kernel Hilbert space $\mathcal{H}$, and an embedding of classical observables into self-adjoint operators on
$\mathcal{H}$. In this scheme, quantum states and observables evolve unitarily
under the lifted action of Koopman evolution operators of the classical system.
Moreover, by virtue of the reproducing property of $\mathcal{H}$, the quantum
system is pointwise-consistent with the underlying classical dynamics. To
achieve an exponential quantum computational advantage, we project the state of
the quantum system to a density matrix on a $2^n$-dimensional tensor product
Hilbert space associated with $n$ qubits. By employing discrete Fourier-Walsh
transforms, the evolution operator of the finite-dimensional quantum system is
factorized into tensor product form, enabling implementation through a quantum
circuit of size $O(n)$. Furthermore, the circuit features a state preparation
stage, also of size $O(n)$, and a quantum Fourier transform stage of size
$O(n^2)$, which makes predictions of observables possible by measurement in the
standard computational basis. We prove theoretical convergence results for
these predictions as $n \to \infty$. We present simulated quantum circuit
experiments in Qiskit Aer, as well as actual experiments on the IBM Quantum
System One.
Comment: 42 pages, 9 figures.
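The Fourier-Walsh factorization behind the circuit construction can be illustrated classically. The sketch below (plain numpy, not the paper's implementation; the helper `z_string` is a hypothetical name) checks that a diagonal phase evolution exp(i f) splits into a product of commuting Pauli-Z-string rotations whose angles are the Walsh coefficients of f; truncating that expansion to low-order strings is what keeps such circuits small.

```python
import numpy as np

def z_string(S, n):
    """Diagonal of the Pauli string with Z on the qubits in S, I elsewhere."""
    d = np.ones(1)
    for j in range(n):
        d = np.kron(d, np.array([1.0, -1.0]) if j in S else np.ones(2))
    return d

n = 3
rng = np.random.default_rng(1)
f = rng.standard_normal(2 ** n)             # phase function on a 2^n grid

# Walsh matrix: row idx is the Walsh function w_S, S read off idx's bits.
subsets = [{j for j in range(n) if (idx >> j) & 1} for idx in range(2 ** n)]
W = np.array([z_string(S, n) for S in subsets])
c = W @ f / 2 ** n                          # Fourier-Walsh coefficients of f

# exp(i f) as a product of commuting Z-string rotations exp(i c_S Z_S),
# each implementable as a small sub-circuit on the qubits in S.
U = np.ones(2 ** n, dtype=complex)
for cS, S in zip(c, subsets):
    U = U * np.exp(1j * cS * z_string(S, n))
```

The factors commute because they are all diagonal, so the product order is immaterial; in a circuit, each factor costs a number of gates set by |S|, which is why sparse or low-order Walsh spectra translate into short circuits.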
Decomposition of Gaussian processes, and factorization of positive definite kernels
We establish a duality between two factorization questions, one for general
positive definite (p.d.) kernels $K$, and the other for Gaussian processes, say
$V$. The latter notion, for Gaussian processes, is stated via Ito-integration.
Our approach to factorization for p.d. kernels is intuitively motivated by
matrix factorizations, but in infinite dimensions, subtle measure-theoretic
issues must be addressed. Consider a given p.d. kernel $K$, presented as a
covariance kernel for a Gaussian process $V$. We then give an explicit duality
for these two seemingly different notions of factorization, for the p.d. kernel
$K$ vs. for the Gaussian process $V$. Our result is in the form of an explicit
correspondence: the analytic data which determine the variety of
factorizations for $K$ are exactly the same as those which yield factorizations
for $V$. Examples and applications are included: point processes, sampling
schemes, constructive discretization, graph Laplacians, and boundary-value
problems.
Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions
Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed—either explicitly or
implicitly—to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, robustness, and/or speed. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the k dominant components of the singular value decomposition of an m × n matrix. (i) For a dense input matrix, randomized algorithms require O(mn log(k))
floating-point operations (flops) in contrast to O(mnk) for classical algorithms. (ii) For a sparse input matrix, the flop count matches classical Krylov subspace methods, but the randomized approach is more robust and can easily be reorganized to exploit multiprocessor architectures. (iii) For a matrix that is too large to fit in fast memory, the randomized techniques require only a constant number of passes over the data, as opposed to O(k) passes for classical algorithms. In fact, it is sometimes possible to perform matrix approximation with a single pass over the data
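The two-stage recipe described above (randomized sampling to find a subspace capturing most of the matrix's action, then a small deterministic factorization of the compressed matrix) can be sketched as follows; the function name, oversampling p, and power-iteration count q are illustrative defaults, not prescriptions from the paper.

```python
import numpy as np

def randomized_svd(A, k, p=10, q=1):
    """Prototype two-stage randomized SVD.
    Stage 1: random sampling identifies a subspace capturing most of
    the action of A. Stage 2: a small deterministic SVD of the
    compressed matrix yields the low-rank factorization."""
    rng = np.random.default_rng(0)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + p))   # random test matrix
    Y = A @ Omega                             # sample the range of A
    for _ in range(q):                        # power iterations sharpen the
        Y = A @ (A.T @ Y)                     # subspace when decay is slow
    Q, _ = np.linalg.qr(Y)                    # orthonormal basis: A ~ Q Q^T A
    B = Q.T @ A                               # compress to the subspace
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]     # k dominant SVD components

# A rank-5 matrix: the randomized factorization recovers it essentially
# to machine precision from k + p = 15 random samples.
rng = np.random.default_rng(1)
A = rng.standard_normal((300, 5)) @ rng.standard_normal((5, 200))
U, s, Vt = randomized_svd(A, k=5)
err = np.linalg.norm(A - U * s @ Vt) / np.linalg.norm(A)
```

Note that A is touched only in matrix-matrix products, which is what makes the scheme friendly to sparse matrices, multiprocessor architectures, and pass-limited (out-of-core) settings.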