29 research outputs found

    Repeated filtering in consecutive fractional Fourier domains

    Ankara: Department of Electrical and Electronics Engineering and the Institute of Engineering and Science of Bilkent University, 1997. Thesis (Ph.D.), Bilkent University, 1997. Includes bibliographical references (leaves 96-105).
    In the first part of this thesis, relationships between the fractional Fourier transformation and Fourier optical systems are analyzed to further elucidate the importance of this transformation in optics. In the second part, the concept of repeated filtering is considered and interpreted in two different ways. In the first interpretation, the linear transformation between input and output is constrained to have the form of repeated filtering in consecutive domains; applications of this constrained linear transformation to signal synthesis (beam shaping) and signal restoration are discussed. In the second interpretation, general linear systems are synthesized with repeated filtering in consecutive domains, and the synthesis of some important linear systems in signal processing and of optical interconnection architectures is considered for illustrative purposes. In all of the examples, when our repeated filtering method is compared with single-domain filtering methods, significant improvements in performance are obtained with only modest increases in optical or digital implementation cost. Similarly, when the proposed method is compared with general linear systems, it is seen that acceptable performance may be possible with significant computational savings in implementation cost.
    Erden, M. Fatih. Ph.D.
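    The repeated-filtering idea can be sketched numerically. The sketch below is not from the thesis: it uses a discrete fractional Fourier transform defined as a fractional power of the unitary DFT matrix via its Schur form (one of several possible discretizations), and chains a multiplicative filter in each of several consecutive fractional domains. All function names and parameters are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import schur

def dft_matrix(n):
    # Unitary DFT matrix.
    k = np.arange(n)
    return np.exp(-2j * np.pi * np.outer(k, k) / n) / np.sqrt(n)

def frac_dft(n, a):
    # F^a via the complex Schur form: F is normal, so T is (numerically)
    # diagonal and the fractional power acts on the eigenvalues alone.
    T, Z = schur(dft_matrix(n), output='complex')
    lam = np.diag(T) ** a          # principal-branch fractional eigenvalues
    return (Z * lam) @ Z.conj().T  # still unitary, since |lam| = 1

def filter_in_domain(x, h, a):
    # Multiplicative filter h applied in the a-th fractional Fourier domain.
    Fa = frac_dft(len(x), a)
    return frac_dft(len(x), -a) @ (h * (Fa @ x))

def repeated_filter(x, filters, orders):
    # Chain of single-domain filters in consecutive fractional domains.
    for h, a in zip(filters, orders):
        x = filter_in_domain(x, h, a)
    return x
```

    Because each stage is unitary apart from the pointwise mask, an all-ones filter in every domain reduces the chain to the identity, which makes the construction easy to sanity-check.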

    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives

    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. A particular emphasis is on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and their physically meaningful interpretations which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks have the ability to perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated over a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and in the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone separate texts, or indeed as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions. Comment: 232 pages.
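    The tensor train (TT) decomposition emphasized above can be illustrated with a minimal TT-SVD sketch: sequential SVDs peel off one mode at a time, leaving a chain of third-order cores. This is a generic textbook construction, not code from the monograph; the names and rank choices are illustrative.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    # TT-SVD: sequential truncated SVDs produce third-order cores
    # of shape (r_{k-1}, n_k, r_k), with boundary ranks r_0 = r_d = 1.
    shape = tensor.shape
    d = len(shape)
    cores, r = [], 1
    mat = tensor.reshape(r * shape[0], -1)
    for k in range(d - 1):
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        rk = min(max_rank, S.size)
        cores.append(U[:, :rk].reshape(r, shape[k], rk))
        mat = (S[:rk, None] * Vt[:rk]).reshape(rk * shape[k + 1], -1)
        r = rk
    cores.append(mat.reshape(r, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    # Contract the chain of cores back into a full tensor.
    res = cores[0]
    for core in cores[1:]:
        res = np.tensordot(res, core, axes=([-1], [0]))
    return res.reshape([c.shape[1] for c in cores])
```

    With `max_rank` large enough the decomposition is exact; truncating the ranks gives the low-rank compression that underlies the distributed computations described in the abstract.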

    Wavelets and Subband Coding

    First published in 1995, Wavelets and Subband Coding offered a unified view of the exciting field of wavelets and their discrete-time cousins, filter banks, also known as subband coding. The book developed the theory in both continuous and discrete time, and presented important applications. During the past decade, it filled a useful need in explaining a new view of signal processing based on flexible time-frequency analysis and its applications. Since 2007, the authors retain the copyright and allow open access to the book.
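    As a taste of the subband-coding viewpoint treated in the book, a minimal two-channel orthonormal (Haar) analysis/synthesis filter bank with perfect reconstruction can be sketched as follows. This is a generic illustration, not material taken from the text.

```python
import numpy as np

def haar_analysis(x):
    # One level of a two-channel orthonormal (Haar) filter bank:
    # lowpass averages and highpass details, each downsampled by two.
    x = np.asarray(x, dtype=float)
    avg = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    det = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return avg, det

def haar_synthesis(avg, det):
    # Synthesis bank: upsample and recombine; perfect reconstruction.
    x = np.empty(2 * len(avg))
    x[0::2] = (avg + det) / np.sqrt(2.0)
    x[1::2] = (avg - det) / np.sqrt(2.0)
    return x
```

    Because the bank is orthonormal, the subband coefficients also preserve signal energy, which is the discrete-time analogue of the Parseval relations developed for wavelets in continuous time.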

    Glosarium Matematika

    273 p.; 24 cm

    Introduction to frames

    This survey gives an introduction to redundant signal representations called frames. These representations have recently emerged as yet another powerful tool in the signal processing toolbox and have become popular through use in numerous applications. Our aim is to familiarize a general audience with the area, while at the same time giving a snapshot of the current state of the art.
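    A small concrete example of such a redundant representation (a standard one, not taken from the survey): the three-vector "Mercedes-Benz" frame in R^2 is a tight frame with frame bound 3/2, so any vector is recovered from its three redundant frame coefficients by a simple rescaling.

```python
import numpy as np

# Three unit vectors at 120-degree spacing: the Mercedes-Benz frame for R^2.
angles = np.pi / 2 + 2 * np.pi * np.arange(3) / 3
F = np.stack([np.cos(angles), np.sin(angles)])  # 2 x 3, columns are frame vectors

# Frame operator S = sum of outer products of the frame vectors;
# for this tight frame S = (3/2) I, so inversion is a rescaling.
S = F @ F.T

x = np.array([0.7, -1.2])
coeffs = F.T @ x                  # three redundant coefficients of a 2-vector
x_rec = (2.0 / 3.0) * (F @ coeffs)  # perfect reconstruction despite redundancy
```

    The redundancy (three coefficients for a two-dimensional vector) is exactly what makes frames robust to noise and erasures, which is one of the motivations the survey discusses.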

    Unitary Algorithm for Nonseparable Linear Canonical Transforms Applied to Iterative Phase Retrieval

    Phase retrieval is an important tool with broad applications in optics. The Gerchberg-Saxton algorithm has been a workhorse in this area for many years. The algorithm extracts phase information from intensities captured in two planes related by a Fourier transform. The ability to capture the two intensities in domains other than the image and Fourier planes adds flexibility; various authors have extended the algorithm to extract phase from intensities captured in two planes related by other optical transforms, e.g., by free-space propagation or a fractional Fourier transform. These generalizations are relatively simple once a unitary discrete transform is available to propagate back and forth between the two measurement planes. In the absence of such a unitary transform, errors accumulate quickly as the algorithm propagates back and forth between the two planes. Unitary transforms are available for many separable systems, but there has been limited work reported on nonseparable systems other than the gyrator transform. In this letter, we simulate a nonseparable system in a unitary way by choosing an advantageous sampling rate related to the system parameters. We demonstrate a simulation of phase retrieval from intensities in the image domain and a second domain related to the image domain by a nonseparable linear canonical transform. This work may permit the use of nonseparable systems in many design problems.
    Science Foundation Ireland; Insight Research Centre
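    For context, the classical Gerchberg-Saxton iteration that the letter generalizes can be sketched with ordinary FFTs, alternately imposing the measured magnitudes in the image and Fourier planes. This simplified sketch uses the Fourier-transform special case, not the nonseparable linear canonical transform of the paper, and its names and parameters are illustrative.

```python
import numpy as np

def gerchberg_saxton(mag_img, mag_fft, n_iter=100, seed=0):
    # Alternate between the image and Fourier planes, keeping the current
    # phase estimate while replacing each magnitude with the measured one.
    rng = np.random.default_rng(seed)
    g = mag_img * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, mag_img.shape))
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        G = mag_fft * np.exp(1j * np.angle(G))  # impose Fourier-plane magnitude
        g = np.fft.ifft2(G)
        g = mag_img * np.exp(1j * np.angle(g))  # impose image-plane magnitude
    return g
```

    The unitarity of the FFT is what keeps this loop well-behaved; as the abstract notes, replacing the FFT with a non-unitary discretization of a more general transform lets errors accumulate, which is the problem the letter addresses.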
