
    WAVELET-DCT BASED IMAGE CODER FOR VIDEO CODING APPLICATIONS

    This project concerns the implementation of a Wavelet-DCT intra-frame coder for video coding applications. Wavelet-DCT is a novel algorithm that uses the Forward Discrete Wavelet Transform (DWT) to compute the DCT. The algorithm has been shown to give better compression performance for difference images than the conventional DCT, because it allows insignificant DWT coefficients to be discarded (i.e. the DWT coefficients are thresholded) while the DCT is being computed. In video coder applications, Wavelet-DCT is therefore capable of achieving greater compression. This project is a feasibility study of the performance of Wavelet-DCT in video coder applications. A SIMULINK model of a conventional intra-frame coder was developed and tested, achieving very significant data bit reduction. The conventional DCT block was then replaced with a Wavelet-DCT block. In the study, an experiment is conducted on a difference image with the conventional intra-frame coder on one hand, and on the same difference image with the Wavelet-DCT based intra-frame coder on the other. A thresholding algorithm is used to remove some of the insignificant DWT coefficients of the difference image. The main objective is to achieve better compression of difference images within video coding applications. The experimental results support the claim that implementing Wavelet-DCT in the intra-frame coder of a video coding application can improve the system's performance, giving a greater compression ratio at the same Mean Squared Error.
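
    To make the idea concrete, the following is a minimal, hedged sketch of the general principle (thresholding in the DWT domain before obtaining DCT coefficients of a difference block). The actual Wavelet-DCT algorithm computes the DCT directly from the thresholded DWT coefficients; the explicit inverse transform, Haar wavelet, block size and threshold value below are illustrative assumptions, not the coder described above.

```python
# Sketch only: threshold a difference block in the Haar-DWT domain, then take its 2-D DCT.
import numpy as np

def haar2(x):
    """Single-level 2-D orthonormal Haar DWT."""
    def step(v):  # 1-D analysis along the last axis
        a = (v[..., 0::2] + v[..., 1::2]) / np.sqrt(2)
        d = (v[..., 0::2] - v[..., 1::2]) / np.sqrt(2)
        return np.concatenate([a, d], axis=-1)
    return step(step(x).T).T              # rows, then columns

def ihaar2(c):
    def istep(v):
        n = v.shape[-1] // 2
        a, d = v[..., :n], v[..., n:]
        out = np.empty_like(v)
        out[..., 0::2] = (a + d) / np.sqrt(2)
        out[..., 1::2] = (a - d) / np.sqrt(2)
        return out
    return istep(istep(c.T).T)

def dct2(x):
    """2-D orthonormal DCT-II via the DCT matrix."""
    n = x.shape[0]
    k, m = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    C = np.sqrt(2 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    C[0, :] /= np.sqrt(2)
    return C @ x @ C.T

block = np.random.randn(8, 8)              # stand-in for an 8x8 difference-image block
coeffs = haar2(block)
coeffs[np.abs(coeffs) < 0.5] = 0.0         # discard insignificant DWT coefficients
dct_of_thresholded = dct2(ihaar2(coeffs))  # DCT of the thresholded block
```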

    An innovative two-stage data compression scheme using adaptive block merging technique

    Test data volume has increased enormously owing to the rising on-chip complexity of integrated circuits, which further increases test data transportation time and the required tester memory. Non-correlated test bits also aggravate the test power problem. This paper presents a two-stage block-merging based test data minimization scheme that reduces the test bits, test time and test power. The test data is partitioned into blocks of fixed size, which are compressed using a two-stage encoding technique. In stage one, successive compatible blocks are merged and a single representative block is retained. In stage two, the retained pattern block is further encoded based on which of ten different sub-cases holds between the two sub-blocks formed by splitting it into halves. Non-compatible blocks are also split into two sub-blocks, which the scheme attempts to encode using fewer bits. A decompression architecture to retrieve the original test data is presented. Simulation results obtained for different ISCAS'89 benchmark circuits reflect the scheme's effectiveness in achieving better compression.
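
    As an illustration of the stage-one idea (merging successive compatible blocks into one representative block), here is a small hedged sketch. The block width, the compatibility rule for don't-care bits ('X') and the run-length bookkeeping are illustrative assumptions, not the exact scheme evaluated in the paper.

```python
# Toy stage-one merging of test-data blocks containing don't-care bits ('X').

def compatible(a: str, b: str) -> bool:
    """Two blocks are compatible if they agree wherever both specify a care bit."""
    return all(x == y or x == 'X' or y == 'X' for x, y in zip(a, b))

def merge(a: str, b: str) -> str:
    """Representative block: keep the specified bit where either block has one."""
    return ''.join(y if x == 'X' else x for x, y in zip(a, b))

def stage_one(blocks):
    """Merge runs of successive compatible blocks, keeping one representative per run."""
    out = []                                   # list of (representative_block, run_length)
    for blk in blocks:
        if out and compatible(out[-1][0], blk):
            rep, run = out[-1]
            out[-1] = (merge(rep, blk), run + 1)
        else:
            out.append((blk, 1))
    return out

test_data = ["10XX", "1011", "X0X1", "0110", "011X"]   # toy 4-bit blocks
print(stage_one(test_data))                            # [('1011', 3), ('0110', 2)]
```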

    Underwater radio frequency image sensor using progressive image compression and region of interest

    The increasing demand for underwater robotic intervention systems around the world, in several application domains, requires more versatile and inexpensive systems. By using a wireless communication system, supervised semi-autonomous robots have freedom of movement; however, the limited and varying bandwidth of underwater radio frequency (RF) channels is a major obstacle for the operator to get camera feedback and supervise the intervention. This paper proposes the use of progressive (embedded) image compression and region of interest (ROI) coding for the design of an underwater image sensor to be installed in an autonomous underwater vehicle, especially when the available bandwidth is constrained, allowing a more agile data exchange between the vehicle and a human operator supervising the underwater intervention. The operator can dynamically decide the size, quality, frame rate, or resolution of the received images so that the available bandwidth is utilized to its fullest potential with the required minimum latency. The paper first describes the system, which uses a camera, an embedded Linux system, and an RF emitter installed in an OpenROV housing cylinder. The RF receiver is connected to a computer on the user side, which controls the camera monitoring parameters, including the compression inputs such as the region of interest (ROI), image size, and frame rate. The paper focuses on the compression subsystem and does not attempt to improve the communications physical media for better underwater RF links. Instead, it proposes a unified system that uses well-integrated modules (compression and transmission) to provide the scientific community with a higher-level protocol for image compression and transmission in sub-sea robotic interventions.
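
    The trade-off the operator is making can be illustrated with a back-of-the-envelope helper: given the RF channel bandwidth and the chosen image settings, estimate the frame rate the link can sustain. The function below is an illustrative assumption (not the paper's protocol or rate model); the parameter names and the linear ROI rate model are made up for the example.

```python
# Rough estimate of sustainable frame rate for a given link and image configuration.

def sustainable_fps(bandwidth_bps: float, width: int, height: int,
                    bits_per_pixel: float, roi_fraction: float,
                    roi_quality_boost: float = 4.0) -> float:
    """bits_per_pixel is the compressed rate outside the ROI; the ROI (a fraction
    roi_fraction of the frame area) is sent at roi_quality_boost times that rate."""
    pixels = width * height
    bits_per_frame = pixels * bits_per_pixel * (
        (1 - roi_fraction) + roi_fraction * roi_quality_boost)
    return bandwidth_bps / bits_per_frame

# e.g. a 100 kbit/s underwater RF link, 320x240 frames, 0.1 bpp background,
# a 10% ROI sent at 4x quality:
print(round(sustainable_fps(100_000, 320, 240, 0.1, 0.10), 2))   # ~10 fps
```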

    Test Stimuli Segmentation and Coding Method

    Test vector coding and data transmission are key technologies in the design-for-test of digital integrated circuits (ICs). Existing parallel input methods for test stimuli can reduce test application time; however, they need to occupy multiple input ports. Thus, a novel method of test stimuli coding and data transmission is proposed to reduce both the test application time of the test vectors and the number of input ports required for the parallel input of test stimuli. The method is based on segmentation of the test stimuli. First, the test stimuli are evenly segmented into eight-bit-wide segments. Second, the eight-bit data of each segment are encoded into five-bit data according to the compatibility between the test data of each segment. The eight-bit test stimuli input can then be completed in one or two clock cycles of the automatic test equipment (ATE) using five input ports of the chip. A corresponding decoding circuit is added inside the circuit netlist to realize the rapid input of the test stimuli. Lastly, experiments were conducted on the ISCAS'89 benchmark circuits, and the results of this coding method were compared with those of the serial input method. The results show that the proposed encoding method saves an average of 37% of the parallel input data width and 81.7% of the test stimuli input time. The proposed method can also reduce the test application time and the cost of IC test. The findings of this study can provide guidance for improving the scan testing methodology of digital ICs.
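
    The abstract does not spell out the exact eight-to-five-bit mapping, so the following is only a toy sketch of one way such a code could behave, consistent with the "one or two ATE cycles over five ports" description: segments compatible with one of sixteen reference patterns are sent as a one-bit flag plus a four-bit index (one five-bit word), and all other segments are sent raw over two five-bit words. The dictionary-based rule, the reference patterns and the don't-care handling are assumptions for illustration.

```python
# Toy sketch of a 5-bit-per-cycle segment code (illustrative, not the paper's scheme).

def compatible(seg: str, ref: str) -> bool:
    """Bits agree wherever the segment specifies a care bit ('X' = don't care)."""
    return all(s in ('X', r) for s, r in zip(seg, ref))

def encode_segment(seg: str, dictionary: list[str]) -> list[str]:
    for idx, ref in enumerate(dictionary):
        if compatible(seg, ref):
            return ['1' + format(idx, '04b')]        # one 5-bit word (one ATE cycle)
    filled = seg.replace('X', '0')
    return ['0' + filled[:4], '0' + filled[4:]]      # two 5-bit words (two ATE cycles)

dictionary = [format(i, '08b') for i in range(16)]   # 16 reference patterns (assumed)
print(encode_segment('0000X1X0', dictionary))        # ['10100']
```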

    Algorithms for compression of high dynamic range images and video

    Recent advances in sensor and display technologies have brought about High Dynamic Range (HDR) imaging capability. Modern multiple-exposure HDR sensors can achieve a dynamic range of 100-120 dB, and LED and OLED display devices have contrast ratios of 10^5:1 to 10^6:1. Despite these advances, image/video compression algorithms and the associated hardware are still based on Standard Dynamic Range (SDR) technology, i.e. they operate within an effective dynamic range of up to 70 dB for 8-bit gamma-corrected images. Furthermore, the existing infrastructure for content distribution is also designed for SDR, which creates interoperability problems with true HDR capture and display equipment. Current solutions to this problem include tone mapping the HDR content to fit SDR; however, this approach leads to image-quality problems when strong dynamic range compression is applied. Even though some HDR-only solutions have been proposed in the literature, they are not interoperable with the current SDR infrastructure and are thus typically used in closed systems. Given these observations, a research gap was identified: the need for efficient algorithms for the compression of still images and video that can store the full dynamic range and colour gamut of HDR images while remaining backward compatible with the existing SDR infrastructure. To preserve the usability of the SDR content, it is vital that any such algorithm accommodate different tone mapping operators, including those that are spatially non-uniform. In the course of the research presented in this thesis, a novel two-layer CODEC architecture is introduced for both HDR image and video coding. Furthermore, a universal and computationally efficient approximation of the tone mapping operator is developed and presented. It is shown that the use of perceptually uniform colourspaces for the internal representation of pixel data improves the compression efficiency of the algorithms. The proposed novel approaches to the compression of tone mapping operator metadata are shown to improve compression performance for low-bitrate video content. Multiple compression algorithms are designed, implemented and compared, and quality-complexity trade-offs are identified. Finally, practical aspects of implementing the developed algorithms are explored by automating the design-space exploration flow and integrating the high-level systems design framework with domain-specific tools for the synthesis and simulation of multiprocessor systems. Directions for further work are also presented.
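
    The backward-compatible two-layer idea can be sketched very simply: the base layer is the tone-mapped 8-bit SDR frame that any legacy decoder can use, and the enhancement layer carries the residual needed to rebuild the HDR frame from an (approximate) inverse of the tone mapping operator. The toy log-curve operator, colourspace and lack of entropy coding below are simplifying assumptions, not the thesis' actual CODEC.

```python
# Minimal two-layer HDR/SDR sketch (simplified; illustration only).
import numpy as np

def tone_map(hdr, peak=1e4):
    """Toy global TMO (log curve) standing in for an arbitrary operator."""
    return np.clip(np.log1p(hdr) / np.log1p(peak) * 255.0, 0, 255).astype(np.uint8)

def inverse_tone_map(sdr, peak=1e4):
    """Decoder-side approximation of the inverse TMO."""
    return np.expm1(sdr.astype(np.float64) / 255.0 * np.log1p(peak))

hdr = np.random.rand(4, 4) * 1e4                  # linear-light HDR luminance
base = tone_map(hdr)                              # layer 1: SDR, legacy-decodable
residual = hdr - inverse_tone_map(base)           # layer 2: HDR enhancement residual
hdr_reconstructed = inverse_tone_map(base) + residual
assert np.allclose(hdr_reconstructed, hdr)        # lossless in this toy case
```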

    Orthogonal procrustes analysis for dictionary learning in sparse linear representation

    In the sparse representation model, the design of overcomplete dictionaries plays a key role in its effectiveness and applicability in different domains. Recent research has produced several dictionary learning approaches, and it has been proven that dictionaries learnt from data examples significantly outperform structured ones, e.g. wavelet transforms. In this context, learning consists in adapting the dictionary atoms to a set of training signals in order to promote a sparse representation that minimizes the reconstruction error. Finding the best-fitting dictionary remains a very difficult task, and the question is still open. A well-established heuristic for tackling this problem is an iterative alternating scheme, adopted for instance in the well-known K-SVD algorithm. Essentially, it consists in repeating two stages: the former promotes sparse coding of the training set and the latter adapts the dictionary to reduce the error. In this paper we present R-SVD, a new method that, while maintaining the alternating scheme, adopts Orthogonal Procrustes analysis to update the dictionary atoms, suitably arranged into groups. Comparative experiments on synthetic data prove the effectiveness of R-SVD with respect to well-known dictionary learning algorithms such as K-SVD, ILS-DLA and the online method OSDL. Moreover, experiments on natural data such as ECG compression, EEG sparse representation, and image modeling confirm R-SVD's robustness and wide applicability.
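
    The core of the dictionary-update stage is the classical Orthogonal Procrustes solution: the best orthogonal transform Q minimizing ||E - Q Dg Xg||_F is obtained from the SVD of Dg Xg E^T. The sketch below shows that group update in isolation, assuming precomputed sparse codes; the sparse-coding step, the atom-grouping strategy and other details of R-SVD are omitted or simplified.

```python
# Orthogonal Procrustes update of one group of dictionary atoms (sketch).
import numpy as np

def procrustes_group_update(Y, D, X, group):
    """Rotate the atoms in `group` so that Q @ D[:, group] best explains the part of Y
    not covered by the other atoms, with Q orthogonal (Procrustes solution).

    Y : (n, N) training signals,  D : (n, K) dictionary,  X : (K, N) sparse codes.
    """
    others = [k for k in range(D.shape[1]) if k not in group]
    E = Y - D[:, others] @ X[others, :]            # residual w.r.t. the other atoms
    Dg, Xg = D[:, group], X[group, :]
    # min_Q ||E - Q Dg Xg||_F  s.t. Q^T Q = I   =>   Q = V U^T,  U S V^T = svd(Dg Xg E^T)
    U, _, Vt = np.linalg.svd(Dg @ Xg @ E.T)
    Q = Vt.T @ U.T
    D_new = D.copy()
    D_new[:, group] = Q @ Dg                       # rotation preserves atom norms
    return D_new

# Toy usage: 20-dimensional signals, 30 atoms, update atoms {0, 1, 2} as one group.
rng = np.random.default_rng(0)
Y = rng.standard_normal((20, 100))
D = rng.standard_normal((20, 30)); D /= np.linalg.norm(D, axis=0)
X = rng.standard_normal((30, 100)) * (rng.random((30, 100)) < 0.1)   # sparse codes
D = procrustes_group_update(Y, D, X, group=[0, 1, 2])
```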

    Learning parametric dictionaries for graph signals

    In sparse signal representation, the choice of a dictionary often involves a tradeoff between two desirable properties -- the ability to adapt to specific signal data and a fast implementation of the dictionary. To sparsely represent signals residing on weighted graphs, an additional design challenge is to incorporate the intrinsic geometric structure of the irregular data domain into the atoms of the dictionary. In this work, we propose a parametric dictionary learning algorithm to design data-adapted, structured dictionaries that sparsely represent graph signals. In particular, we model graph signals as combinations of overlapping local patterns. We impose the constraint that each dictionary is a concatenation of subdictionaries, with each subdictionary being a polynomial of the graph Laplacian matrix, representing a single pattern translated to different areas of the graph. The learning algorithm adapts the patterns to a training set of graph signals. Experimental results on both synthetic and real datasets demonstrate that the dictionaries learned by the proposed algorithm are competitive with and often better than unstructured dictionaries learned by state-of-the-art numerical learning algorithms in terms of sparse approximation of graph signals. In contrast to the unstructured dictionaries, however, the dictionaries learned by the proposed algorithm feature localized atoms and can be implemented in a computationally efficient manner in signal processing tasks such as compression, denoising, and classification
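
    The structured dictionary form described above, a concatenation of subdictionaries that are each a low-order polynomial of the graph Laplacian, can be written down directly. In the sketch below the polynomial coefficients are random placeholders (in the paper they are learned from training signals), and the small path graph is only an example.

```python
# Building a polynomial graph dictionary D = [g_1(L), ..., g_S(L)] (sketch).
import numpy as np

def polynomial_subdictionary(L, alpha):
    """g(L) = sum_k alpha[k] * L^k : one localized pattern translated over the graph."""
    D = np.zeros_like(L)
    P = np.eye(L.shape[0])
    for a in alpha:
        D += a * P
        P = P @ L
    return D

# Toy graph: path graph on 6 nodes.
A = np.diag(np.ones(5), 1) + np.diag(np.ones(5), -1)     # adjacency matrix
L = np.diag(A.sum(axis=1)) - A                            # combinatorial Laplacian
rng = np.random.default_rng(0)
K, S = 3, 4                                               # polynomial degree, number of patterns
subdicts = [polynomial_subdictionary(L, rng.standard_normal(K + 1)) for _ in range(S)]
D = np.hstack(subdicts)                                   # (6, 6*S) structured dictionary
```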

    Data compression techniques applied to high resolution high frame rate video technology

    An investigation is presented of video data compression applied to microgravity space experiments using High Resolution High Frame Rate Video Technology (HHVT). An extensive survey of video data compression methods described in the open literature was conducted, examining compression methods that employ digital computing. The results of the survey are presented; they include a description of each method and an assessment of image degradation and video data parameters. An assessment is also made of present and near-term future technology for implementing video data compression in high-speed imaging systems, and its results are discussed and summarized. The results of a study of a baseline HHVT video system, and of approaches for implementing video data compression, are presented. Case studies of three microgravity experiments are presented, and specific compression techniques and implementations are recommended.

    Parallelism and the software-hardware interface in embedded systems

    This thesis by publications addresses issues in the architecture and microarchitecture of next-generation, high-performance streaming Systems-on-Chip by quantifying the most important forms of parallelism in current and emerging embedded system workloads. The work consists of three major research tracks, relating to data-level parallelism, thread-level parallelism and the software-hardware interface, which together reflect the research interests of the author as they have been formed over the last nine years. Published works confirm that parallelism at the data level is widely accepted as the most important performance lever for the efficient execution of embedded media and telecom applications and has been exploited via a number of approaches, the most efficient being vector/SIMD architectures. A further, complementary and substantial form of parallelism exists at the thread level, but this has not been researched to the same extent in the context of embedded workloads. For the efficient execution of such applications, exploitation of both forms of parallelism is of paramount importance. This calls for a new architectural approach to the software-hardware interface, as its rigidity, manifested in all desktop-based and the majority of embedded CPUs, directly affects the performance of vectorized, threaded code. The author advocates a holistic, mature approach in which parallelism is extracted via automatic means while, at the same time, the traditionally rigid hardware-software interface is optimized to match the temporal and spatial behaviour of the embedded workload. This ultimate goal calls for the precise study of these forms of parallelism for a number of applications executing on theoretical models such as instruction set simulators and parallel RAM machines, as well as the development of highly parametric microarchitectural frameworks to encapsulate that functionality.

    Dynamic Partial Reconfiguration for Dependable Systems

    Moore’s law has served as a goal and motivation for consumer electronics manufacturers over the last decades. The resulting increase in the processing power of consumer electronics devices has been achieved mainly through cost reduction and technology shrinking. However, shrinking physical geometries primarily affects the dependability of electronic devices, making them more sensitive to soft errors such as Single Event Transients (SET) or Single Event Upsets (SEU) and to hard (permanent) faults, e.g. due to aging effects. Accordingly, safety-critical systems often rely on the adoption of old technology nodes, even if they introduce longer design times with respect to consumer electronics. In fact, functional safety requirements are increasingly pushing industry to develop innovative methodologies for designing highly dependable systems with the required diagnostic coverage. On the other hand, the adoption of commercial off-the-shelf (COTS) devices began to be considered for safety-related systems due to real-time requirements, the need to implement computationally hungry algorithms, and lower design costs. In this field, the FPGA market share has constantly increased thanks to FPGAs' flexibility and low non-recurring engineering costs, making them suitable for a set of safety-critical applications with low production volumes. The work presented in this thesis tries to address new dependability issues in modern reconfigurable systems, exploiting their special features, namely Dynamic Partial Reconfiguration, to take proper counteractions with low impact on performance.