
    Dimension Reduction Using Quantum Wavelet Transform on a High-Performance Reconfigurable Computer

    The high resolution of multidimensional space-time measurements and the enormity of data readout counts in applications such as particle tracking in high-energy physics (HEP) are becoming a major challenge. In this work, we propose combining dimension-reduction techniques with quantum information processing for application in domains that generate large volumes of data, such as HEP. More specifically, we propose using the quantum wavelet transform (QWT) to reduce the dimensionality of high-spatial-resolution data. The quantum wavelet transform takes advantage of the principles of quantum mechanics to achieve reductions in computation time while processing exponentially larger amounts of information. We develop simpler and more optimized emulation architectures than those previously reported to perform the quantum wavelet transform on high-resolution data. We also implement the inverse quantum wavelet transform (IQWT) to accurately reconstruct the data without any losses. The algorithms are prototyped on an FPGA-based quantum emulator that supports double-precision floating-point computations. Experimental work has been performed using high-resolution image data on a state-of-the-art multinode high-performance reconfigurable computer. The experimental results show that the proposed concepts represent a feasible approach to reducing the dimensionality of high-spatial-resolution data generated by applications such as particle tracking in high-energy physics.
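
To illustrate the kind of transform/inverse pair the abstract describes, the following is a minimal classical sketch of a single-level Haar wavelet decomposition and its exact inverse on image data. It is only the classical analogue of the QWT/IQWT idea (reduced-size approximation plus lossless reconstruction); it is not the paper's FPGA-based quantum emulation, and the array shapes are illustrative assumptions.

```python
import numpy as np

def haar2d_forward(img):
    """One decomposition level: returns (LL, LH, HL, HH) quarter-size bands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 2.0        # low-pass approximation (dimension-reduced view)
    lh = (a - b + c - d) / 2.0
    hl = (a + b - c - d) / 2.0
    hh = (a - b - c + d) / 2.0
    return ll, lh, hl, hh

def haar2d_inverse(ll, lh, hl, hh):
    """Exact inverse: rebuilds the original image without loss."""
    a = (ll + lh + hl + hh) / 2.0
    b = (ll - lh + hl - hh) / 2.0
    c = (ll + lh - hl - hh) / 2.0
    d = (ll - lh - hl + hh) / 2.0
    out = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
    out[0::2, 0::2] = a; out[0::2, 1::2] = b
    out[1::2, 0::2] = c; out[1::2, 1::2] = d
    return out

img = np.random.rand(8, 8)                               # stand-in for high-resolution data
ll, lh, hl, hh = haar2d_forward(img)                     # LL holds the reduced image
assert np.allclose(haar2d_inverse(ll, lh, hl, hh), img)  # lossless round trip
```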

    Towards Complete Emulation of Quantum Algorithms using High-Performance Reconfigurable Computing

    Quantum computing is a promising technology that can potentially demonstrate supremacy over classical computing in solving specific classically intractable problems. However, in its current nascent stage, quantum computing faces major challenges. Two of the main challenges are quantum state decoherence and the low scalability of current quantum devices. Decoherence is a process in which the state of the quantum computer is destroyed by interaction with the environment. Decoherence places constraints on the realistic applicability of quantum algorithms, as real-life applications usually require complex equivalent quantum circuits to be realized. For example, encoding classical data on quantum computers for solving I/O- and data-intensive applications generally requires complex quantum circuits that violate decoherence constraints. In addition, current quantum devices are of intermediate scale, having low quantum bit (qubit) counts and often producing inaccurate or noisy measurements. Consequently, benchmarking of existing quantum algorithms and the investigation of new applications are heavily dependent on classical simulations that use costly, resource-intensive computing platforms. Hardware-based emulation has been alternatively proposed as a more cost-effective and power-efficient approach. Hardware-based emulation methods can take advantage of hardware parallelism and acceleration to produce results at higher throughput and with lower power requirements.

    This work proposes a hardware-based emulation methodology for quantum algorithms using cost-effective Field Programmable Gate Array (FPGA) technology. The proposed methodology consists of the three components required for complete emulation of quantum algorithms: the first component models classical-to-quantum (C2Q) data encoding, the second emulates the behavior of quantum algorithms, and the third models the process of measuring the quantum state and extracting classical information, i.e., quantum-to-classical (Q2C) data decoding. The proposed emulation methodology is used to investigate and optimize methods for C2Q/Q2C data encoding/decoding, as well as several important quantum algorithms such as the Quantum Fourier Transform (QFT), the Quantum Haar Transform (QHT), and Quantum Grover's Search (QGS). This work delivers contributions in terms of reducing the complexity of quantum circuits, extending and optimizing quantum algorithms, and developing new quantum applications. For example, decoherence-optimized circuits for C2Q/Q2C data encoding/decoding are proposed and evaluated using the proposed emulation methodology. Multi-level decomposable forms of optimized QHT circuits are presented and used to demonstrate dimension reduction of high-resolution data. Additionally, a novel extension to the QGS algorithm is proposed to enable search for dynamically changing multi-patterns in unordered data. Finally, a novel quantum application is presented that combines QHT and dynamic multi-pattern QGS to perform pattern recognition using dimension reduction on high-resolution spatio-spectral data. For higher emulation performance and scalability of the framework, hardware design techniques and hardware architectural optimizations are investigated and proposed. The emulation architectures are designed and implemented on a high-performance reconfigurable computer (HPRC). For reference and comparison, implementations of the proposed quantum circuits are also performed on a state-of-the-art quantum computer.
Experimental results show that the proposed hardware architectures enable emulation of quantum algorithms with higher scalability, accuracy, and throughput than existing hardware-based emulators. As a case study, quantum image processing using multispectral images is considered for the experimental evaluations. The analysis and results of this work demonstrate that quantum computers and methodologies based on quantum algorithms will be highly useful in realistic data-intensive domains such as remote-sensing hyperspectral imagery and high-energy physics (HEP).
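
The C2Q/Q2C components described above can be sketched conceptually in a few lines: classical data is normalised into state amplitudes (C2Q), and measurement is modelled as sampling basis states with probability equal to the squared amplitude (Q2C). This is only a hedged conceptual illustration in plain NumPy, not the FPGA emulation architecture of the thesis, and the variable names and shot count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def c2q_amplitude_encode(data):
    """Map a length-2^n classical vector to normalised state amplitudes."""
    amps = np.asarray(data, dtype=float)
    norm = np.linalg.norm(amps)
    return amps / norm, norm              # keep the norm to undo the scaling later

def q2c_measure(amplitudes, shots=100_000):
    """Model Q2C decoding: estimate |amplitude|^2 from repeated measurements."""
    probs = amplitudes ** 2
    counts = rng.multinomial(shots, probs)
    return counts / shots                 # estimated probabilities (noisy for few shots)

data = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0])   # 2^3 values -> 3 qubits
state, norm = c2q_amplitude_encode(data)
est_probs = q2c_measure(state)
recovered = np.sqrt(est_probs) * norm     # approximate reconstruction (signs not recoverable)
print(np.round(recovered, 2))
```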

    Light field image processing: an overview

    Light field imaging has emerged as a technology that allows capturing richer visual information from our world. As opposed to traditional photography, which captures a 2D projection of the light in the scene by integrating over the angular domain, light fields collect radiance from rays in all directions, demultiplexing the angular information lost in conventional photography. On the one hand, this higher-dimensional representation of visual data offers powerful capabilities for scene understanding and substantially improves the performance of traditional computer vision problems such as depth sensing, post-capture refocusing, segmentation, video stabilization, material classification, etc. On the other hand, the high dimensionality of light fields also brings up new challenges in terms of data capture, data compression, content editing, and display. Taking these two elements together, research in light field image processing has become increasingly popular in the computer vision, computer graphics, and signal processing communities. In this paper, we present a comprehensive overview and discussion of research in this field over the past 20 years. We focus on all aspects of light field image processing, including basic light field representation and theory, acquisition, super-resolution, depth estimation, compression, editing, processing algorithms for light field display, and computer vision applications of light field data.
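
The post-capture refocusing capability mentioned above comes directly from the extra angular dimensions: each angular view is shifted in proportion to its offset from the centre view and the views are averaged. The sketch below is a minimal shift-and-add illustration on a synthetic 4D light field; integer shifts via np.roll and the array layout are simplifying assumptions, not the survey's formulation.

```python
import numpy as np

def refocus(lightfield, slope):
    """Average all views after shifting each by slope * (angular offset)."""
    U, V, H, W = lightfield.shape
    u0, v0 = (U - 1) / 2.0, (V - 1) / 2.0          # centre view
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(slope * (u - u0)))
            dx = int(round(slope * (v - v0)))
            out += np.roll(lightfield[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (U * V)

lf = np.random.rand(5, 5, 64, 64)   # synthetic light field: 5x5 angular, 64x64 spatial
refocused = refocus(lf, slope=1.0)  # a different |slope| focuses at a different depth
```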

    Hyperspectral Remote Sensing Data Analysis and Future Challenges


    Design and simulation of compressive snapshot multispectral imaging system

    Compressive snapshot spectral imaging combines compressive sensing and snapshot spectral imaging (SSI) to restore the image of the scene in both the spatial and spectral domains using only a small number of sampling measurements of the captured image under the sparsity assumption. SSI is often realised through a coded aperture mask together with a single dispersive element as the main spatial modulator to implement compressive sampling. As one of the representative frameworks in this field, the Coded Aperture Snapshot Spectral Imager (CASSI) has provided a low-cost, compact prototype platform for compressive snapshot spectral imaging over the recent decade. Active research in the field includes advanced de-compressive recovery algorithms as well as the employment of more sophisticated optical hardware for the design of more robust SSI systems. This research addresses the latter direction and focuses on how the CASSI framework can be further developed for various applications such as magnetic resonance imaging for medical diagnosis, enhancement of radar imaging systems, facial expression detection and recognition, and digital signal processing with sparse structure in terms of image denoising, image super-resolution and image classification. This thesis presents a summary of the research conducted over the past four years on the basic properties of the CASSI system, which led to the development of the spectrally tuneable SSI design proposed during the course of the PhD study. This new design utilises a dual-prism assembly to embed the capability of wavelength tuning without physically changing its optical elements. This Dual-Prism CASSI (DP-CASSI) adapts to dynamic environments far better than the CASSI types of imagers published in the open domain, which only function for a fixed set of wavelengths. This work has been accepted for journal publication. Another contribution of this research has been the enhancement of the Single-Prism CASSI (SP-CASSI) architecture to produce a snapshot system with less aberration and better image quality than that published in the open domain. Moreover, the thesis also provides information about the optical design of four different types of CASSI, with in-depth analysis of their optical system constructions, optical evaluations of system structure and their dispersive capabilities as the background of this research. A more detailed description of the proposed DP-CASSI is then given with respect to its design and performance evaluation, particularly its dispersion characteristics and the effects of system resolutions. System verifications were conducted through ray-tracing simulation in three-dimensional visualisation environments, and the spectral characteristics of the targets were compared with those of the ground truth. The spectral tuning of the proposed DP-CASSI is achieved by adjusting the air-gap displacement of the dual-prism assembly. Typical spectral shifts of about 5 nm at 450 nm and 10 nm at 650 nm wavelength have been achieved in the present design when the air gap of the dual-prism is changed from 3.44 mm to 5.04 mm. The thesis summarises the optical designs, the performance, and the pros and cons of the DP-CASSI system.
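
The coded-aperture-plus-disperser sampling described above follows the standard single-disperser CASSI forward model: each spectral band of the scene cube is modulated by the same coded aperture, sheared by a wavelength-dependent dispersion offset, and summed onto a single 2D snapshot detector. The NumPy sketch below is a rough illustration under assumed toy shapes, a random binary mask, and a one-pixel-per-band shear; it is not the DP-CASSI design or its actual dispersion parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def cassi_forward(cube, mask):
    """cube: (H, W, L) spectral datacube; mask: (H, W) coded aperture."""
    H, W, L = cube.shape
    detector = np.zeros((H, W + L - 1))             # widened by the dispersion shear
    for band in range(L):
        coded = cube[:, :, band] * mask             # coded-aperture modulation
        detector[:, band:band + W] += coded         # shear: shift band by `band` pixels
    return detector

cube = rng.random((32, 32, 8))                      # toy 8-band scene
mask = (rng.random((32, 32)) > 0.5).astype(float)   # random binary coded aperture
snapshot = cassi_forward(cube, mask)                # single compressive measurement
print(snapshot.shape)                               # (32, 39): H x (W + L - 1)
```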