
    Dual-image-based reversible data hiding scheme with integrity verification using exploiting modification direction

    Abstract: In this paper, a novel dual-image-based reversible data hiding scheme using exploiting modification direction (EMD) is proposed. This scheme embeds two base-5 secret digits into each pixel pair of the cover image simultaneously according to the EMD matrix to generate two stego-pixel pairs. By shifting these stego-pixel pairs to the appropriate locations in some cases, two meaningful shadows are produced. The secret data can be extracted accurately, and the cover image can be reconstructed completely in the data extraction and the image reconstruction procedure, respectively. Experimental results show that our scheme outperforms the comparative methods in terms of image quality and embedding ratio. Pixel-value differencing (PVD) histogram analysis reveals that our scheme…
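    For background, here is a minimal sketch of the basic EMD step this scheme builds on: a pixel pair carries one base-5 digit through the extraction function f(p1, p2) = (p1 + 2*p2) mod 5, changing at most one pixel by 1. The dual-image generation and shadow-shifting logic of this particular paper is not reproduced, and boundary handling at gray levels 0 and 255 is omitted.

```python
# Sketch of classic EMD embedding/extraction for one pixel pair (n = 2).
# Overflow/underflow handling at gray levels 0 and 255 is omitted.

def emd_embed(p1, p2, d):
    """Embed one base-5 digit d (0..4) by changing at most one pixel by +/-1."""
    f = (p1 + 2 * p2) % 5
    s = (d - f) % 5        # required shift of the extraction value
    if s == 1:
        p1 += 1            # raises f by 1
    elif s == 2:
        p2 += 1            # raises f by 2
    elif s == 3:
        p2 -= 1            # raises f by 3 (mod 5)
    elif s == 4:
        p1 -= 1            # raises f by 4 (mod 5)
    return p1, p2          # s == 0: the pair already encodes d

def emd_extract(p1, p2):
    """Recover the hidden base-5 digit from a stego pixel pair."""
    return (p1 + 2 * p2) % 5

assert emd_extract(*emd_embed(120, 87, 3)) == 3
```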

    Error Correction and Concealment of Block-Based, Motion-Compensated Temporal Prediction, Transform Coded Video

    Error Correction and Concealment of Block-Based, Motion-Compensated Temporal Prediction, Transform Coded Video. David L. Robie. 133 pages. Directed by Dr. Russell M. Mersereau.
    The use of the Internet and wireless networks to bring multimedia to the consumer continues to expand. The transmission of these products is always subject to corruption by errors such as bit errors or lost and ill-timed packets; however, in many cases, such as real-time video transmission, automatic repeat request (ARQ) is not practical. Receivers must therefore be capable of recovering from corrupted data. Errors can be mitigated using forward error correction in the encoder or error concealment techniques in the decoder. This thesis investigates the use of forward error correction (FEC) techniques in the encoder and error concealment in the decoder in block-based, motion-compensated, temporal prediction, transform codecs. It shows improvement over standard FEC applications and improvements in error concealment relative to the Moving Picture Experts Group (MPEG) standard. To this end, this dissertation describes the following contributions and proofs of concept in the area of error concealment and correction in block-based video transmission:
    - A temporal error concealment algorithm that uses motion-compensated macroblocks from previous frames (sketched after this abstract).
    - A spatial error concealment algorithm that uses the Hough transform to detect edges in both foreground and background colors and applies directional interpolation or directional filtering for improved edge reproduction.
    - A codec that uses data hiding to transmit error correction information.
    - An enhanced codec that builds on the previous one by improving performance in the error-free environment while maintaining excellent error recovery capabilities.
    - A method for allocating Reed-Solomon (R-S) packet-based forward error correction that decreases distortion (measured by PSNR) at the receiver compared to standard FEC techniques.
    - An evaluation, under a constant-bit-rate constraint, of the tradeoff between traditional R-S FEC and alternate forward concealment information (FCI).
    Each of these developments is compared and contrasted with state-of-the-art techniques and shows improvements by widely accepted metrics. The dissertation concludes with a discussion of future work.
    Ph.D. Committee Chair: Mersereau, Russell; Committee Members: Altunbasak, Yucel; Fekri, Faramarz; Lanterman, Aaron; Zhou, Haomi
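    As an illustration of the first contribution's general idea, here is a minimal sketch of motion-compensated temporal concealment: a lost block is replaced by the candidate block from the previous frame that best matches the block's surviving neighbors. The block size, the boundary-matching cost, and the candidate motion vectors are illustrative assumptions, not the thesis's exact algorithm.

```python
# Hypothetical sketch: conceal a lost B x B block at (x, y) in cur_frame by
# copying the best motion-compensated candidate from prev_frame. Candidates
# would typically be the motion vectors of neighboring macroblocks.
import numpy as np

def conceal_block(prev_frame, cur_frame, x, y, B=16, candidates=((0, 0),)):
    best, best_cost = None, np.inf
    for dx, dy in candidates:
        ref = prev_frame[y + dy : y + dy + B, x + dx : x + dx + B]
        # Boundary-matching cost: compare the candidate's top row with the
        # row just above the lost block, which is assumed intact (y >= 1).
        cost = np.abs(ref[0].astype(int)
                      - cur_frame[y - 1, x : x + B].astype(int)).sum()
        if cost < best_cost:
            best, best_cost = ref, cost
    cur_frame[y : y + B, x : x + B] = best
    return cur_frame
```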

    Dynamically parallel CAMSHIFT: GPU accelerated object tracking in digital video

    The CAMSHIFT algorithm is widely used for tracking dynamically sized and positioned objects in real-time applications. Although it has been studied extensively on sequential CPUs, research on massively parallel graphics processing unit (GPU) platforms is quite limited. In this work, we designed and implemented two parallel algorithms for CAMSHIFT using CUDA. The first design performs calculations on the GPU but requires iterative data transfers back to the host CPU for condition checking, which bottlenecks the entire program. In the second design, we propose an enhanced parallel reduction-based CAMSHIFT that uses dynamic parallelism to reduce the overhead of data transfers between the CPU and GPU. Test results for a 400 by 400 search window show that the second design is up to five times faster than the first design and nine times faster than a pure CPU implementation. We also investigate the deployment of dynamic parallelism for multiple-object tracking using CAMSHIFT.
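    For reference, a minimal sequential CAMSHIFT loop using OpenCV's built-in cv2.CamShift is sketched below; this is the kind of CPU baseline the two CUDA designs parallelize, not the authors' GPU code. The input file name and initial search window are placeholders.

```python
# Sequential CAMSHIFT baseline with OpenCV (hue histogram back-projection).
import cv2

cap = cv2.VideoCapture("input.mp4")            # hypothetical input video
ok, frame = cap.read()
window = (150, 150, 400, 400)                  # (x, y, w, h) initial window
x, y, w, h = window
hsv_roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    # CamShift runs mean-shift iterations, then adapts window size/angle;
    # this per-frame condition checking is what forced the first CUDA
    # design's costly CPU round-trips.
    rot_box, window = cv2.CamShift(backproj, window, criteria)
```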

    CONVERGENCE IMPROVEMENT OF ITERATIVE DECODERS

    Iterative decoding techniques shook up the field of error correction and communications in general. Their remarkable trade-off between complexity and performance offered much more freedom in code design and made highly complex codes, until recently considered undecodable, part of almost any communication system. Nevertheless, iterative decoding is a sub-optimal decoding method, and as such it has attracted huge research interest. But the iterative decoder still hides many of its secrets, as it has not yet been possible to fully describe its behaviour and its cost function. This work presents the convergence problem of iterative decoding from various angles and explores methods for reducing any sub-optimalities in its operation. The decoding algorithms for both LDPC and turbo codes were investigated, and aspects that contribute to convergence problems were identified. A new algorithm was proposed, capable of providing considerable coding gain in any iterative scheme. Moreover, it was shown that for some codes the proposed algorithm is sufficient to eliminate any sub-optimality and perform maximum-likelihood decoding. Its performance and efficiency were compared with those of other convergence improvement schemes. Various conditions that can be considered critical to the outcome of the iterative decoder were also investigated, and the decoding algorithm of LDPC codes was followed analytically to verify the experimental results.
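    To make the convergence notion concrete, here is a toy sketch of the simplest iterative decoder (Gallager-style bit flipping for an LDPC code): iteration stops when all parity checks are satisfied or an iteration cap is reached. The parity-check matrix and received vector are made-up examples, not codes from this work.

```python
# Toy bit-flipping LDPC decoder: flip the bits involved in the most
# unsatisfied parity checks until the syndrome is zero or we give up.
import numpy as np

def bit_flip_decode(H, r, max_iter=50):
    c = r.copy()
    for _ in range(max_iter):
        syndrome = (H @ c) % 2
        if not syndrome.any():       # all checks satisfied: converged
            return c, True
        unsat = syndrome @ H         # per-bit count of failing checks
        c[unsat == unsat.max()] ^= 1
    return c, False                  # did not converge within the cap

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1]])
r = np.array([1, 0, 1, 1, 0, 1])
print(bit_flip_decode(H, r))         # converges to a valid codeword here
```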

    Information security and assurance : proceedings of the international conference, ISA 2012, Shanghai, China, April 2012


    Recent Advances in Signal Processing

    Signal processing is a critical task in the majority of new technological inventions and challenges across a variety of applications in both science and engineering. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian, and have always favored closed-form tractability over real-world accuracy. These constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be categorized into five areas depending on the application at hand: image processing, speech processing, communication systems, time-series analysis, and educational packages, in that order. The book has the advantage of providing a collection of applications that are completely independent and self-contained; the interested reader can therefore choose any chapter and skip to another without losing continuity.

    Case Studies on Optimizing Algorithms for GPU Architectures

    Modern GPUs are complex, massively multi-threaded, and high-performance. Programmers naturally gravitate toward exploiting this performance to achieve faster results. To do so successfully, however, programmers must first understand and then master a new set of skills: writing parallel code, using different types of parallelism, adapting to GPU architectural features, and understanding issues that limit performance. To ease this learning process and help GPU programmers become productive more quickly, this dissertation introduces three data access skeletons (DASks) – Block, Column, and Row – and two block access skeletons (BASks) – Block-by-Block and Warp-by-Warp. Each “skeleton” provides a high-performance implementation framework that partitions data arrays into data blocks and then iterates over those blocks. The programmer must still write “body” methods on individual data blocks to solve their specific problem. These skeletons provide efficient machine-dependent data access patterns for use on GPUs. DASks group n data elements into m fixed-size data blocks. These m data blocks are then partitioned across p thread blocks using a 1D or 2D layout pattern. The fixed-size data blocks are parameterized using three C++ template parameters – nWork, WarpSize, and nWarps. Generic programming techniques use these three parameters to enable performance experiments on three different types of parallelism: instruction-level parallelism (ILP), data-level parallelism (DLP), and thread-level parallelism (TLP). These different DASks and BASks are introduced using a simple memory I/O (Copy) case study. A nearest neighbor search case study motivated the development of DASks and BASks but does not itself use these skeletons. Three additional case studies – Reduce/Scan, Histogram, and Radix Sort – demonstrate DASks and BASks in action on parallel primitives and provide further valuable performance lessons. Doctor of Philosophy.
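    A small sketch of the host-side partitioning arithmetic implied by the abstract may help: n elements are grouped into fixed-size data blocks of nWork * WarpSize * nWarps elements, and the m blocks are dealt out across p thread blocks in a 1D layout. The parameter names follow the abstract, but this arithmetic is an illustration inferred from it, not the dissertation's C++/CUDA templates.

```python
# Illustrative DASk-style partitioning: n elements -> m fixed-size data
# blocks -> round-robin assignment to p thread blocks (1D layout).

def partition(n, nWork=4, WarpSize=32, nWarps=8, p=120):
    block_elems = nWork * WarpSize * nWarps      # elements per data block
    m = (n + block_elems - 1) // block_elems     # number of data blocks
    # Thread block b iterates over data blocks b, b + p, b + 2p, ...
    owner = {blk: blk % p for blk in range(m)}
    return block_elems, m, owner

elems, m, owner = partition(n=1_000_000)
print(elems, m)                                  # 1024 elements, 977 blocks
```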

    HexArray: A Novel Self-Reconfigurable Hardware System

    Evolvable hardware (EHW) is a powerful autonomous system for adapting and finding solutions within a changing environment. EHW consists of two main components: a reconfigurable hardware core and an evolutionary algorithm. The majority of prior research focuses on improving either the reconfigurable hardware or the evolutionary algorithm in place, but not both. Thus, current implementations suffer from being application oriented and from slow reconfiguration times, low efficiency, and limited routing flexibility. In this work, a novel evolvable hardware platform is proposed that combines a novel reconfigurable hardware core with a novel evolutionary algorithm. The proposed reconfigurable hardware core is a systolic array called HexArray. HexArray is constructed from processing elements with a redesigned architecture, called HexCells, which provide routing flexibility and support for hybrid reconfiguration schemes. The improved evolutionary algorithm is a genome-aware genetic algorithm (GAGA) that accelerates evolution. Guided by a fitness function, the GAGA utilizes context-aware genetic operators to evolve solutions. The operators are genome-aware constrained (GAC) selection, genome-aware mutation (GAM), and genome-aware crossover (GAX). The GAC selection operator improves parallelism and reduces redundant evaluations. The GAM operator restricts mutation to the part of the genome that affects the selected output. The GAX operator cascades, interleaves, or parallel-recombines genomes at the cell level to generate better genomes. These operators improve evolution without preventing the algorithm from exploring all areas of the solution space. The system was implemented on an SoC that includes programmable logic (i.e., a field-programmable gate array) to realize the HexArray and a processing system to execute the GAGA. A computationally intensive application that evolves adaptive filters for image processing was chosen as a case study and used to conduct a set of experiments demonstrating the robustness of the developed system. Through an iterative process using the genetic operators and a fitness function, the EHW system configures and adapts itself to evolve fitter solutions. In a relatively short time (e.g., seconds), HexArray is able to evolve autonomously to the desired filter. By exploiting the routing flexibility of the HexArray architecture, the EHW has a simple yet effective mechanism to detect and tolerate faulty cells, which improves system reliability. Finally, a mechanism that accelerates the evolution process by hiding the reconfiguration time in an “evolve-while-reconfigure” process is presented. In this process, the GAGA utilizes the array's routing flexibility to bypass cells that are being configured and evaluates several genomes in parallel.
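    As a loose illustration of the genome-aware mutation idea (restricting mutation to the part of the genome that affects the selected output), here is a hypothetical sketch; the bit-vector genome and the influence map are placeholders, not the HexCell configuration format.

```python
# Hypothetical GAM-style operator: mutate only loci known to influence the
# selected output (e.g., cells on that output's routing path).
import random

def genome_aware_mutation(genome, influences, output, rate=0.05):
    """genome: list of 0/1 genes; influences: output id -> gene indices."""
    child = genome[:]
    for i in influences[output]:
        if random.random() < rate:
            child[i] ^= 1            # flip only genes tied to this output
    return child
```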

    Simplified vector-thread architectures for flexible and efficient data-parallel accelerators

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (p. 165-170).
    This thesis explores a new approach to building data-parallel accelerators that is based on simplifying the instruction set, microarchitecture, and programming methodology for a vector-thread architecture. The thesis begins by categorizing regular and irregular data-level parallelism (DLP) before presenting several architectural design patterns for data-parallel accelerators, including the multiple-instruction multiple-data (MIMD) pattern, the vector single-instruction multiple-data (vector-SIMD) pattern, the single-instruction multiple-thread (SIMT) pattern, and the vector-thread (VT) pattern. Our recently proposed VT pattern includes many control threads that each manage their own array of microthreads. The control thread uses vector memory instructions to efficiently move data and vector fetch instructions to broadcast scalar instructions to all microthreads. These vector mechanisms are complemented by the ability of each microthread to direct its own control flow. In this thesis, I introduce various techniques for building simplified instances of the VT pattern. I propose unifying the VT control-thread and microthread scalar instruction sets to simplify the microarchitecture and programming methodology. I propose a new single-lane VT microarchitecture based on minimal changes to the vector-SIMD pattern. Single-lane cores are simpler to implement than multi-lane cores and can achieve similar energy efficiency. This new microarchitecture uses control processor embedding to mitigate the area overhead of single-lane cores, and uses vector fragments to handle both regular and irregular DLP more efficiently than previous VT architectures. I also propose an explicitly data-parallel VT programming methodology based on a slightly modified scalar compiler. This methodology is easier to use than assembly programming, yet simpler to implement than an automatically vectorizing compiler. To evaluate these ideas, we have begun implementing the Maven data-parallel accelerator. This thesis compares a simplified Maven VT core to MIMD, vector-SIMD, and SIMT cores. We have implemented these cores with an ASIC methodology, and I use the resulting gate-level models to evaluate the area, performance, and energy of several compiled microbenchmarks. This work is the first detailed quantitative comparison of the VT pattern to other patterns. My results suggest that future data-parallel accelerators based on simplified VT architectures should be able to combine the energy efficiency of vector-SIMD accelerators with the flexibility of MIMD accelerators.
    By Christopher Francis Batten. Ph.D.
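    As a toy illustration of the regular/irregular DLP categorization above: a vector-SIMD machine maps the first loop below directly, while the second needs per-element control flow, which the VT pattern supports by letting each microthread direct its own branches. This contrast is purely illustrative, not an example from the thesis.

```python
# Regular vs. irregular data-level parallelism in miniature.
import numpy as np

x = np.arange(16)

# Regular DLP: the same operation on every element, no data-dependent branch.
y = 2 * x + 1

# Irregular DLP: the work per element depends on the data itself.
def step(v):
    return v // 2 if v % 2 == 0 else 3 * v + 1   # branch per element

z = np.array([step(int(v)) for v in x])
```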