
    A survey of the state of the art and focused research in range systems, task 2

    Contract-generated publications are compiled which describe the research activities for the reporting period. Study topics include: equivalent configurations of systolic arrays; least squares estimation algorithms with systolic array architectures; modeling and equalization of nonlinear bandlimited satellite channels; and least squares estimation and Kalman filtering by systolic arrays.

    A 2D DWT architecture suitable for the Embedded Zerotree Wavelet Algorithm

    Digital imaging has had an enormous impact on industrial applications such as the Internet and video-phone systems, and demand continues to grow; internet application users in particular are increasing at a near-exponential rate. The sharp increase in applications using digital images has placed great emphasis on the fields of image coding, storage, processing and communications. A digital image requires a large amount of data, which causes many problems when storing, transmitting or processing the image; reducing the amount of data needed to represent an image is the main objective of image coding, and new techniques are continuously developed to increase efficiency. The JPEG image coding standard has enjoyed widespread acceptance, and the industry continues to explore its various implementation issues. However, recent research indicates that multiresolution-based image coding is a far superior alternative. A recent development in the field is the use of the Embedded Zerotree Wavelet (EZW) technique to achieve image compression. One of the aims of this thesis is to explain how this technique is superior to other current coding standards. It will be seen that an essential part of this method of image coding is the use of multiresolution analysis, a subband system whereby the subbands are logarithmically spaced in frequency and represent an octave-band decomposition. The block structure that implements this function is termed the two-dimensional Discrete Wavelet Transform (2D-DWT). The 2D DWT can be realised by several architectures, and these are analysed in order to choose the architecture best suited to the EZW coder. Finally, this architecture is implemented and verified using the Synopsys Behavioural Compiler, and recommendations are made based on experimental findings.
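
    For concreteness, the following is a minimal NumPy sketch of one level of the 2D-DWT described above. The Haar analysis pair is used purely for brevity; the thesis compares hardware architectures rather than a particular filter, so the Haar choice and the even image dimensions are assumptions here.

```python
import numpy as np

def dwt2_haar(image):
    """One level of a 2D DWT using the Haar analysis pair (illustrative choice).

    Rows are lowpass/highpass filtered and downsampled, then columns,
    yielding the four subbands LL, LH, HL, HH of an octave-band split.
    Assumes even image dimensions.
    """
    img = np.asarray(image, dtype=float)
    lo = (img[:, 0::2] + img[:, 1::2]) / np.sqrt(2)  # rows: lowpass + decimate
    hi = (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2)  # rows: highpass + decimate
    ll = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)    # columns on each branch
    lh = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)
    hl = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)
    hh = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)
    return ll, lh, hl, hh

# Recursing on the LL band produces the logarithmically spaced
# octave-band decomposition that the EZW coder consumes.
ll, lh, hl, hh = dwt2_haar(np.random.rand(256, 256))
print(ll.shape)  # (128, 128)
```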

    VHDL design and simulation for embedded zerotree wavelet quantisation

    This thesis discusses a highly effective still-image compression algorithm: the Embedded Zerotree Wavelets (EZW) coding technique. The technique is simple but achieves remarkable results: the image is wavelet-transformed, symbolically coded and successively quantised, so that compression and transmission/storage savings can be achieved by exploiting the zerotree structure. The algorithm was first proposed by Jerome M. Shapiro in 1993; however, to minimise memory usage and speed up the EZW processor, a depth-first search is used to traverse the image rather than the breadth-first search initially discussed in Shapiro's paper (Shapiro, 1993). The project's primary objective is to simulate the EZW algorithm, from a basic building block of an 8-by-8 matrix up to a well-known 256-by-256 reference image such as Lenna, so that the algorithm's performance can be measured, for instance by calculating its peak signal-to-noise ratio. The software environment used for the simulation is a VHSIC Hardware Description Language (VHDL) tool, the PC-based Peak VHDL. This leads to the second phase of the project: the secondary objective is to test the algorithm at the hardware level, such as on an FPGA for rapid prototyping, if project time permits.
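
    The depth-first idea that replaces Shapiro's breadth-first scan can be sketched in a few lines. Below is a hedged Python sketch assuming the standard parent-child mapping (i, j) -> (2i, 2j) in a power-of-two Mallat layout; the coarsest-band special case and the thesis's actual VHDL structure are omitted.

```python
def children(i, j, rows, cols):
    """Standard parent-child mapping in a Mallat-layout wavelet decomposition:
    coefficient (i, j) has four children at the next finer scale.
    Assumes power-of-two dimensions; the coarsest-band special case is omitted."""
    r, c = 2 * i, 2 * j
    if r >= rows or c >= cols:
        return []  # finest scale reached
    return [(r, c), (r, c + 1), (r + 1, c), (r + 1, c + 1)]

def is_zerotree_root(coeff, i, j, threshold):
    """True if (i, j) and all its descendants are insignificant at `threshold`.

    The depth-first recursion keeps only the current root-to-leaf path in
    memory, which is the memory advantage over a breadth-first scan that
    must queue entire levels of the tree."""
    if abs(coeff[i, j]) >= threshold:
        return False
    rows, cols = coeff.shape
    return all(is_zerotree_root(coeff, r, c, threshold)
               for r, c in children(i, j, rows, cols))
```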

    Efficient DSP and Circuit Architectures for Massive MIMO: State-of-the-Art and Future Directions

    Massive MIMO is a compelling wireless access concept that relies on the use of an excess number of base-station antennas relative to the number of active terminals. This technology is a main component of 5G New Radio (NR) and addresses all important requirements of future wireless standards: a great capacity increase, the support of many simultaneous users, and improvement in energy efficiency. Massive MIMO requires the simultaneous processing of signals from many antenna chains, and computational operations on large matrices. The complexity of the digital processing has in the past been viewed as a fundamental obstacle to the feasibility of Massive MIMO. Recent advances in system-algorithm-hardware co-design have led to extremely energy-efficient implementations. These exploit opportunities in deeply-scaled silicon technologies and perform partly distributed processing to cope with the bottlenecks encountered in the interconnection of many signals. For example, prototype ASIC implementations have demonstrated zero-forcing precoding in real time at a power consumption of 55 mW (20 MHz bandwidth, 128 antennas, multiplexing of 8 terminals). Coarse and even error-prone digital processing in the antenna paths permits a reduction of consumption by a factor of 2 to 5. This article summarizes the fundamental technical contributions to efficient digital signal processing for Massive MIMO. The opportunities and constraints of operating on low-complexity RF and analog hardware chains are clarified, and it is illustrated how terminals can benefit from improved energy efficiency. The status of technology and real-life prototypes is discussed, and open challenges and directions for future research are suggested. (Comment: submitted to IEEE Transactions on Signal Processing.)
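
    As a point of reference for the zero-forcing figure quoted above, here is a minimal NumPy sketch of the ZF precoder at the same scale (128 antennas, 8 terminals). The symbol model and the power normalization are assumptions for illustration, not details from the article.

```python
import numpy as np

def zf_precode(H, s):
    """Zero-forcing precoding for the Massive MIMO downlink.

    H : (K, M) channel matrix, K terminals, M base-station antennas, M >> K
    s : (K,) symbols intended for the K terminals
    Returns an (M,) antenna vector x such that H @ x is proportional to s,
    i.e. inter-user interference is nulled.
    """
    # Pseudo-inverse precoder W = H^H (H H^H)^{-1}: only a K x K inverse
    # is needed, which is what makes real-time operation at M = 128 feasible.
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
    x = W @ s
    return x / np.linalg.norm(x)  # unit transmit power

# Toy run at the scale quoted above: 128 antennas serving 8 terminals.
rng = np.random.default_rng(0)
H = (rng.standard_normal((8, 128)) + 1j * rng.standard_normal((8, 128))) / np.sqrt(2)
s = np.exp(2j * np.pi * rng.random(8))   # unit-modulus symbols (assumed)
y = H @ zf_precode(H, s)                 # equals s up to one common scale factor
```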

    Data compression techniques applied to high resolution high frame rate video technology

    An investigation is presented of video data compression applied to microgravity space experiments using High Resolution High Frame Rate Video Technology (HHVT). An extensive survey of methods of video data compression described in the open literature was conducted, examining compression methods employing digital computing. The results of the survey are presented, including a description of each method and an assessment of image degradation and video data parameters. An assessment is made of present and near-term future technology for implementing video data compression in high-speed imaging systems, and the results are discussed and summarized. The results of a study of a baseline HHVT video system, and approaches for implementing video data compression, are presented. Case studies of three microgravity experiments are presented, and specific compression techniques and implementations are recommended.

    Wavelet transform methods for identifying onset of SEMG activity

    Quantifying improvements in motor control is predicated on the accurate identification of the onset of surface electromyographic (sEMG) activity. Applying methods from wavelet theory developed in the past decade to digitized signals, a robust algorithm has been designed for use with sEMG collected during reaching tasks executed with the less-affected arm of stroke patients. The method applies both discretized Continuous Wavelet Transforms (CWT) and Discrete Wavelet Transforms (DWT), for event detection and no-lag filtering respectively, with input parameters extracted from the assessed signals. The onset times found in the sEMG signals using the wavelet method were compared with physiological instants of motion onset determined from video data, and robustness was evaluated by considering how the detected onset time responds to variations of the input parameter values. The wavelet method found physiologically relevant onset times in all signals, averaging 147 ms prior to motion onset, compared to predicted onset latencies of 90-110 ms. Latency exhibited slight dependence on subject, but on no other variables.
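
    The abstract does not give the detector's parameters, so the following is only a minimal single-scale sketch of the idea: threshold the magnitude of discretized-CWT coefficients against baseline statistics from a quiescent window. The Ricker mother wavelet, the window length and the mean + k*std rule are all assumptions.

```python
import numpy as np

def ricker(width, n):
    """Ricker (Mexican-hat) wavelet sampled at n points (assumed mother wavelet)."""
    t = np.arange(n) - (n - 1) / 2.0
    a = t / width
    return (1.0 - a**2) * np.exp(-a**2 / 2.0)

def semg_onset(signal, fs, width=8, n_baseline=200, k=5.0):
    """Return the first time (s) at which the single-scale CWT magnitude
    exceeds a baseline-derived threshold (mean + k*std over an initial
    quiescent window), or None if it never does. Parameter values are
    illustrative, not taken from the paper."""
    w = ricker(width, 10 * width + 1)
    coeffs = np.abs(np.convolve(signal, w, mode="same"))  # one CWT scale
    baseline = coeffs[:n_baseline]
    above = np.nonzero(coeffs > baseline.mean() + k * baseline.std())[0]
    return above[0] / fs if above.size else None
```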

    Applying matrix decomposition techniques to edge detection operators

    In this paper, decomposition techniques are applied to the derivative operators used for image edge detection. It is shown that applying decomposition techniques to common edge detectors can result in substantial savings in computing time: for a 25x25 Laplacian of Gaussian mask, a six-fold reduction in arithmetic operations is achieved. We also show that these techniques are advantageous for hardware realization of the filters. The memory required for a direct distributed-arithmetic realization of a 2-D (nxn)-th order FIR filter is O(2^((n+1)^2)), while the worst case for the decomposed filter is O(n x 2^n).
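
    The savings come from expressing the 2-D mask as a short sum of separable 1-D filters; the classic tool for this is the SVD, sketched below. That the Laplacian of Gaussian is exactly rank 2 is a known property; the specific mask size and scale are illustrative, and the paper's own decomposition procedure may differ.

```python
import numpy as np

def log_kernel(size, sigma):
    """Laplacian of Gaussian mask of the given size (e.g. 25) and scale sigma."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    r2 = (x**2 + y**2) / (2.0 * sigma**2)
    return (r2 - 1.0) * np.exp(-r2) / (np.pi * sigma**4)

def separable_terms(kernel, rank):
    """Decompose a 2-D mask into `rank` outer products of 1-D filters via SVD.

    Convolving with each (u_k, v_k) pair costs O(n) multiplies per pixel
    instead of O(n^2) for the full mask, which is where the savings arise.
    """
    U, S, Vt = np.linalg.svd(kernel)
    return [(np.sqrt(S[k]) * U[:, k], np.sqrt(S[k]) * Vt[k, :])
            for k in range(rank)]

K = log_kernel(25, sigma=3.0)
terms = separable_terms(K, rank=2)   # the LoG mask is rank 2 up to rounding
residual = K - sum(np.outer(u, v) for u, v in terms)
print(np.max(np.abs(residual)))      # ~0: two separable passes reproduce the mask
```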

    Modular decomposition techniques for stored-logic digital filters

    Digital filtering is an important signal processing technique whose theory is now well established. At present, however, there are no well-defined and systematic methods available for realising digital filters in hardware. This project aims to develop such methods, general and technology-independent, and adopts a systems and sub-systems design philosophy. The realisation problem is approached in a new way using concepts from finite-automata theory, implementing complete digital filter sections as stored-logic units. Two methods are introduced and developed. [Continues.]
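
    As a rough illustration of the stored-logic idea in the finite-automata spirit described (the thesis's own methods are not detailed in this excerpt), a short filter section over a small input alphabet can be realised as a pure lookup table: the packed input word is the automaton's state, and the output is precomputed for every state. The word sizes and coefficients below are illustrative only.

```python
import numpy as np

# Stored-logic FIR section in miniature: enumerate every possible input word
# and precompute the output, so the running filter needs no multipliers.
BITS, TAPS = 2, 3
COEFFS = [0.5, 1.0, 0.25]   # COEFFS[0] multiplies the newest sample

table = np.empty((2**BITS) ** TAPS)
for word in range(table.size):
    samples = [(word >> (BITS * k)) & (2**BITS - 1) for k in range(TAPS)]
    table[word] = sum(c * s for c, s in zip(COEFFS, samples))

def filter_step(state, sample):
    """Shift the new sample into the packed state word (the automaton's state
    transition) and read the precomputed output (its output function)."""
    state = ((state << BITS) | sample) & (2**(BITS * TAPS) - 1)
    return state, table[state]

state, out = 0, []
for s in [1, 3, 2, 0, 1]:
    state, y = filter_step(state, s)
    out.append(y)
```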

    Stochastic Analysis of the LMS Algorithm for System Identification with Subspace Inputs

    This paper studies the behavior of the low-rank LMS adaptive algorithm for the general case in which the input transformation may not capture the exact input subspace. It is shown that the Independence Theory and the independent additive noise model are not applicable to this case. A new theoretical model for the weight mean and fluctuation behaviors is developed which incorporates the correlation between successive data vectors (as opposed to the Independence Theory model). The new theory is applied to a network echo cancellation scheme which uses partial-Haar input vector transformations. Comparison of the new model predictions with Monte Carlo simulations shows good-to-excellent agreement, certainly much better than that of the Independence Theory based model available in the literature.
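
    A minimal sketch of the low-rank (transform-domain) LMS recursion studied here is given below, with a generic input transformation T; the construction of the partial-Haar matrix and the echo-path model are omitted. Note that in echo cancellation successive regressors share all but one sample (shift structure), which is exactly the correlation between data vectors that the paper's model incorporates and the Independence Theory ignores.

```python
import numpy as np

def low_rank_lms(x_vectors, d, T, mu):
    """LMS adaptation on transform-domain inputs z = T x (low-rank LMS).

    x_vectors : iterable of length-N regressor vectors
    d         : desired-signal samples
    T         : (r, N) input transformation with r << N, e.g. a partial-Haar
                matrix built from selected rows of the Haar matrix (assumed)
    mu        : step size
    """
    w = np.zeros(T.shape[0])
    errors = []
    for x, dn in zip(x_vectors, d):
        z = T @ x              # reduced-rank input vector
        e = dn - w @ z         # a priori error
        w = w + mu * e * z     # standard LMS update, run in the subspace
        errors.append(e)
    return w, np.array(errors)
```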