    Software-based Approximate Computation Of Signal Processing Tasks

    This thesis introduces a new dimension in performance scaling of signal processing systems by proposing software frameworks that achieve increased processing throughput when producing approximate results. The first contribution of this work is a new theory for accelerated computation of multimedia processing based on the concept of tight packing (Chapter 2). Use of this theory accelerates small-dynamic-range linear signal processing tasks (such as convolution and transform decomposition) that map integers to integers, without incurring any accuracy loss. The concept of tight packing is combined with incremental computation that processes inputs in a bitplane-by-bitplane manner (Chapter 3), thereby leading to substantial throughput/distortion scalability within filtering, transform-decomposition and motion-estimation tasks. This framework also provides for region-of-interest computation and has inherent robustness to arbitrary termination of processing, imposed, for example, by a task scheduler. Finally, the concept of packed processing is extended to floating-point (lossy) matrix computations, with particular focus on the general matrix multiplication (GEMM) routine of BLAS-3 (Chapters 4 and 5). This routine is a fundamental building block for several linear algebra and digital signal processing systems, such as face recognition and neural-network training for metadata-based retrieval systems. In order to compete with the best-performing software designs for GEMM, an implementation using single instruction, multiple data (SIMD) instructions is presented and analyzed. The proposed approach demonstrates substantial performance scaling in practice; specifically, it is shown to achieve up to twice the processing throughput of the best designs for GEMM when producing approximate results (on the same hardware). In summary, the proposed approximate computation of signal processing tasks can be selectively disabled, reverting to conventional full-precision/lower-throughput processing when deemed necessary. Importantly, the proposed software designs run on off-the-shelf computer hardware and provide for on-demand reconfiguration, depending on the input data and the precision specification (from full precision to noisy computation). Thus, the proposed approximate computation framework allows for backward compatibility and can be offered as an add-on service, creating significant competitive advantages for application developers. It can be used in mobile or high-performance computing systems when the precision of computation is not of critical importance (error-tolerant systems), or when the input data is intrinsically noisy.
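
    As a rough illustration of the packing idea, the sketch below (in Python, with hypothetical names and a two-operand layout chosen purely for exposition; the thesis's actual framework is more general) packs two small non-negative integers into one machine word so that a single multiplication computes both products exactly:

        # Valid only while each product stays below 2**K, so that no carries
        # cross the K-bit boundary between the two packed fields.
        K = 20
        MASK = (1 << K) - 1

        def pack2(a, b):
            return a | (b << K)            # a occupies bits [0, K), b bits [K, 2K)

        def unpack2(w):
            return w & MASK, (w >> K) & MASK

        a, b, c = 37, 91, 15
        lo, hi = unpack2(pack2(a, b) * c)  # one hardware multiply, two exact results
        assert (lo, hi) == (a * c, b * c)

    Chapters 4 and 5 extend this packing idea to floating-point GEMM, where the packing becomes lossy and the computation approximate.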

    The Fifth NASA Symposium on VLSI Design

    The fifth annual NASA Symposium on VLSI Design comprised 13 sessions, including Radiation Effects, Architectures, Mixed Signal, Design Techniques, Fault Testing, Synthesis, Signal Processing, and other Featured Presentations. The symposium provides insights into developments in VLSI and digital systems that can be used to increase data-system performance. The presentations share insights into next-generation advances that will serve as a basis for future VLSI design.

    The ghost in the radiation: robust encodings of the black hole interior

    We reconsider the black hole firewall puzzle, emphasizing that quantum error correction, computational complexity, and pseudorandomness are crucial concepts for understanding the black hole interior. We assume that the Hawking radiation emitted by an old black hole is pseudorandom, meaning that it cannot be distinguished from a perfectly thermal state by any efficient quantum computation acting on the radiation alone. We then infer the existence of a subspace of the radiation system which we interpret as an encoding of the black hole interior. This encoded interior is entangled with the late outgoing Hawking quanta emitted by the old black hole, and is inaccessible to computationally bounded observers who are outside the black hole. Specifically, efficient operations acting on the radiation (those with quantum computational complexity polynomial in the entropy of the remaining black hole) commute with a complete set of logical operators acting on the encoded interior, up to corrections which are exponentially small in the entropy. Thus, under our pseudorandomness assumption, the black hole interior is well protected from exterior observers as long as the remaining black hole is macroscopic. On the other hand, if the radiation is not pseudorandom, an exterior observer may be able to create a firewall by applying a polynomial-time quantum computation to the radiation.
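
    In symbols, the assumption and the claimed consequence can be sketched as follows (a schematic LaTeX rendering; the distinguisher \mathcal{D}, the constant c > 0, and the notation are illustrative rather than taken from the paper):

        % Pseudorandomness: no efficient measurement D distinguishes the
        % radiation rho_R from a thermal state sigma_th of the same size,
        % with S the entropy of the remaining black hole
        \left| \Pr[\mathcal{D}(\rho_R) = 1] - \Pr[\mathcal{D}(\sigma_{\mathrm{th}}) = 1] \right| \le \mathrm{negl}(S)
        % Consequence: any radiation-only unitary U of complexity poly(S)
        % nearly commutes with the logical operators \bar{L} of the interior
        \big\| \, [U, \bar{L}] \, \big\| \le e^{-cS}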

    Readout and Control Beyond a Few Qubits: Scaling-up Solid State Quantum Systems

    Quantum entanglement and superposition, in addition to revealing interesting physics in their own right, can be harnessed as computational resources in a machine, enabling a range of algorithms for classically intractable problems. In recent years, experiments with small numbers of qubits have been demonstrated in a range of solid-state systems, but this is far from the numbers required to realise a useful quantum computer. In addition to the qubits themselves, quantum operation requires a host of classical electronics for control and readout, and the current techniques used in few-qubit systems are not scalable. This thesis presents a series of techniques for control and readout of solid-state qubits, working towards scalability by integrating classical control with the quantum technology. Two techniques for reducing the footprint associated with readout of gallium arsenide spin qubits are demonstrated. Gate electrodes, used to define the quantum dot, are also shown to be sensitive state detectors. These gate-sensors, and the more conventional quantum point contacts (QPCs), are then multiplexed in the frequency domain, where three-channel qubit readout and ten-channel QPC readout are demonstrated. Two types of superconducting devices are also explored. The loss in superconducting coplanar waveguide resonators is measured, and a suppression of coupling to the parasitic electromagnetic environment is demonstrated. The thesis also details software for the simulation of Josephson-junction-based circuits, including features beyond those available in commercial products. Finally, an architecture for managing control of a scalable machine is proposed, in which classical components are distributed throughout a cryostat and cryogenic switches route control pulses to the appropriate qubits. A simple implementation of the architecture is demonstrated that incorporates a double quantum dot, a gallium arsenide switch matrix, frequency-multiplexed readout, and cryogenic classical computation.
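
    To make the frequency-domain multiplexing concrete, here is a minimal digital sketch in Python (the carrier frequencies, sample rate, and channel responses are invented for illustration; the actual readout uses analogue modulation on a shared line rather than this toy model):

        import numpy as np

        fs = 1e6                          # sample rate (Hz), assumed
        t = np.arange(0, 1e-3, 1 / fs)
        carriers = [100e3, 150e3, 210e3]  # one tone per readout channel
        states = [0.2, 0.9, 0.5]          # per-channel sensor response (arbitrary)

        # All channels share one physical line as a sum of modulated carriers.
        line = sum(a * np.cos(2 * np.pi * f * t) for a, f in zip(states, carriers))

        def demodulate(signal, f):
            # Mix the target carrier down to DC and average; the factor of 2
            # restores the amplitude, and the other carriers average out.
            return 2 * np.mean(signal * np.cos(2 * np.pi * f * t))

        recovered = [demodulate(line, f) for f in carriers]  # approximately `states`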

    Towards practical linear optical quantum computing

    Quantum computing promises a new paradigm of computation where information is processed in a way that has no classical analogue. There are a number of physical platforms conducive to quantum computation, each with its own advantages and challenges. Single photons, manipulated using integrated linear optics, constitute a promising platform for universal quantum computation. Their low decoherence rates make them particularly favourable; however, the inability to perform deterministic two-qubit gates and the issue of photon loss are challenges that need to be overcome. In this thesis we explore the construction of a linear optical quantum computer based on the cluster state model. We identify the necessary stages: state preparation, cluster state construction and implementation of quantum error-correcting codes, and address the challenges that arise in each of these stages. For the state preparation, we propose a series of linear optical circuits for the generation of small entangled states, assessing their performance under different scenarios. For the cluster state construction, we introduce a ballistic scheme which not only consumes an order of magnitude fewer resources than previously proposed schemes, but also benefits from a natural loss tolerance. Based on this scheme, we propose a full architectural blueprint with fixed physical depth. We investigate the resource efficiency of this architecture and propose a new multiplexing scheme which optimises the use of resources. Finally, we study the integration of quantum error-correcting codes in the proposed linear optical scheme and suggest three ways in which it can be made fault-tolerant.
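
    For context on why the resource savings matter, the following toy Monte Carlo in Python models the non-ballistic baseline that such schemes improve upon: growing an n-photon chain by repeat-until-success fusion, with an assumed success probability p and a simplified failure model in which each failure only costs one resource state (all numbers are illustrative, and this is not the thesis's scheme):

        import random

        def resources_for_chain(n, p=0.5, trials=10_000):
            # p = 0.5 corresponds to a standard unboosted fusion gate.
            total = 0
            for _ in range(trials):
                used, links = 1, 0        # one seed resource, no links yet
                while links < n - 1:
                    used += 1             # each attempt consumes a resource state
                    if random.random() < p:
                        links += 1        # fusion succeeded, the chain grows
                total += used
            return total / trials

        print(resources_for_chain(8))     # about 1 + 2*(n - 1) = 15 for p = 0.5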

    Low-power adaptive control scheme using switching activity measurement method for reconfigurable analog-to-digital converters

    Power consumption is a critical issue for portable devices. The ever-increasing demand for multimode wireless applications and the growing concern for power-aware green technology make dynamically reconfigurable hardware an attractive solution to the power issue, owing to its flexibility, reusability, and adaptability. During the last decade, reconfigurable analog-to-digital converters (ReADCs) have been used to support multimode wireless applications. With the ability to adaptively scale power consumption according to different operation modes, reconfigurable devices utilise the power supply efficiently. This can prolong battery life and reduce unnecessary heat emission to the environment. However, current adaptive mechanisms for ReADCs rely upon external control signals generated using digital signal processors (DSPs) in the baseband. This thesis aims to provide a single-chip solution for real-time, low-power ReADC implementations that can adaptively change the converter resolution according to signal variations without the need for baseband processing. Specifically, the thesis focuses on the analysis, design and implementation of a low-power digital controller unit for ReADCs. Two important reconfigurability issues are investigated: i) the detection mechanism for an adaptive implementation, and ii) the power and area overheads introduced by the adaptive control modules. The thesis outlines four main achievements that address these issues. The first is the development of the switching activity measurement (SWAM) method to detect different signal components based upon observation of the output of an ADC. The second is a proposed adaptive algorithm for ReADCs to dynamically adjust the resolution depending upon variations in the input signal. The third is an ASIC implementation of the adaptive control module for ReADCs; the module achieves low reconfiguration overheads in terms of area and power compared with the main analog part of a ReADC. The fourth is the development of a low-power noise detection module using a conventional ADC for signal improvement. Taken together, the findings demonstrate the potential of using the switching activity information of an ADC to adaptively control the circuits and simultaneously expand the functionality of the ADC in electronic systems.
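
    A minimal sketch of the detection idea follows (in Python, with windowing, thresholds, and names that are hypothetical rather than the thesis's actual SWAM algorithm): count how many output bits toggle between consecutive ADC samples and scale the resolution accordingly.

        import numpy as np

        def switching_activity(samples, bits):
            # Average Hamming distance between consecutive output codes.
            codes = np.asarray(samples, dtype=np.int64) & ((1 << bits) - 1)
            toggles = np.bitwise_xor(codes[1:], codes[:-1])
            return np.mean([bin(int(t)).count("1") for t in toggles])

        def select_resolution(activity, full_bits=12, low_bits=6, threshold=2.0):
            # Quiet input -> fewer bits -> lower converter power;
            # busy input -> full precision.
            return full_bits if activity > threshold else low_bits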

    Reconciliation for Satellite-Based Quantum Key Distribution

    This thesis reports on reconciliation schemes based on Low-Density Parity-Check (LDPC) codes in Quantum Key Distribution (QKD) protocols. It focuses in particular on the trade-off between the complexity of such reconciliation schemes and the QKD key growth, a trade-off that is critical to QKD system deployments. A key outcome of the thesis is the design of optimised schemes that maximise the QKD key growth based on finite-size keys for a range of QKD protocols. Beyond this design, the four other main contributions of the thesis are summarised as follows. First, I show that standardised short-length LDPC codes can be used for a special Discrete Variable QKD (DV-QKD) protocol, and I highlight the trade-off between the secret key throughput and the communication latency in space-based implementations. Second, I compare the decoding time and secret key rate of typical LDPC-based rate-adaptive and non-adaptive schemes under different channel conditions, and show that the design of mother codes for the rate-adaptive schemes is critical but remains an open question. Third, I demonstrate a novel design strategy that uses customised LDPC codes to minimise the probability of the reconciliation process becoming the bottleneck of the overall DV-QKD system, whilst achieving a target QKD rate (in bits per second) under a target ceiling on the failure probability. Fourth, in the context of Continuous Variable QKD (CV-QKD), I construct an in-depth optimisation analysis taking both the security and the reconciliation complexity into account. This last contribution yields a reconciliation scheme delivering the highest secret key rate for a given processor speed, which allows for an optimal solution to CV-QKD reconciliation.
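
    To see the complexity/key-growth trade-off in its simplest form, the Python sketch below evaluates the standard asymptotic BB84-style key fraction r = 1 - h(e) - f*h(e), where e is the quantum bit error rate, h is the binary entropy, and f >= 1 is the reconciliation efficiency of the LDPC code (this textbook formula is illustrative only; the thesis works with finite-size keys):

        from math import log2

        def binary_entropy(e):
            return -e * log2(e) - (1 - e) * log2(1 - e)

        def key_fraction(e, f):
            # Lower-complexity codes typically have larger f, which leaks
            # more information during reconciliation and shrinks the key.
            return max(0.0, 1.0 - (1.0 + f) * binary_entropy(e))

        for f in (1.0, 1.1, 1.2):          # f = 1.0 is the Shannon limit
            print(f, key_fraction(0.03, f))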