
    VHDL modeling and synthesis of the JPEG-XR inverse transform

    This work presents a pipelined VHDL implementation of the inverse lapped biorthogonal transform used in the decompression process of the soon-to-be-released JPEG-XR still-image standard. The inverse transform involves integer-only calculations using lifting operations and Kronecker products. Divisions and multiplications by small integer coefficients are implemented with a shift-and-add technique, yielding a multiplier-less design with 736 adder instances. When targeted to an Altera Stratix II FPGA with a 50 MHz system clock, the design completes the inverse transform of an 8400 x 6600 pixel image in less than 70 ms.
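The shift-and-add technique mentioned above replaces each constant multiplication or power-of-two division with shifts and additions, so no hardware multiplier is needed. A minimal sketch in Python (the coefficient values here are illustrative, not taken from the JPEG-XR specification):

```python
def mul_const_3(x: int) -> int:
    # 3*x = 2*x + x: one shift and one add instead of a multiplier
    return (x << 1) + x

def mul_const_13(x: int) -> int:
    # 13 = 0b1101, so 13*x = 8*x + 4*x + x: two shifts and two adds
    return (x << 3) + (x << 2) + x

def div_pow2(x: int, k: int) -> int:
    # division by 2**k becomes an arithmetic right shift
    return x >> k
```

In hardware each such expression synthesizes to a small adder tree, which is why the whole transform reduces to a fixed count of addition instances.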

    Modeling and synthesis of the HD photo compression algorithm

    The primary goal of this thesis is to implement the HD Photo encoding algorithm in hardware using Verilog HDL. The HD Photo algorithm is relatively new, offers several advantages over other continuous-tone still-image compression algorithms, and is currently under review by the JPEG committee to become the next JPEG standard, JPEG XR. HD Photo was chosen because it has a computationally light domain-change transform, achieves high compression ratios, and offers several other improvements, such as support for a wide variety of pixel formats. HD Photo's image path is similar to that of baseline JPEG but differs in a few key areas: instead of a discrete cosine transform, HD Photo leverages a lapped biorthogonal transform, and it adds adaptive coefficient prediction and scanning stages that help furnish high compression ratios at low implementation cost. In this thesis, the HD Photo compression algorithm is implemented in Verilog HDL, and three key stages are further synthesized with Altera's Quartus II design suite targeting a Stratix III FPGA. Several images are used to compare quality and speed between HD Photo and the current JPEG standard, using the HD Photo plug-in for Adobe's Photoshop CS3. The compression ratio is about 2x that of the current baseline JPEG standard, so an image of the same quality can be stored in half the space. Performance metrics derived from the Quartus II synthesis results are approximately 108,866 / 270,400 ALUTs (40%), a 10 ns clock cycle (100 MHz), and a power estimate of 1924.81 mW.
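The adaptive scanning stage mentioned above reorders coefficient positions so that positions which frequently hold nonzero values are visited first. A simplified sketch of one common realization of such adaptation (this is an assumption for illustration, not the normative JPEG XR procedure):

```python
def update_scan(scan, counts, coeffs):
    """After coding one block, adapt the scan order: positions that
    frequently hold nonzero coefficients bubble toward the front."""
    for i, pos in enumerate(scan):
        if coeffs[pos] != 0:
            counts[pos] += 1
            # promote this position one slot if it is now "hotter"
            # than the position scanned just before it
            if i > 0 and counts[pos] > counts[scan[i - 1]]:
                scan[i - 1], scan[i] = scan[i], scan[i - 1]
    return scan
```

Because the order adapts one swap at a time, the hardware cost is a small compare-and-swap per coefficient rather than a full sort.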

    A Research on Learned Image/Video Restoration and Compression for Solving Real-World Degradation

    The adage "A picture is worth a thousand words" attests to the effectiveness of images and video in delivering information. The Internet becomes wonderful when we can share image/video media with people worldwide in this digital era, and it would be even more incredible if that media could show precisely what we see in real life with our own eyes. Unfortunately, due to natural causes (e.g., shooting devices and environments) or artificial causes (e.g., image/video compression sacrificing information to achieve better transmission), image/video media is not always of the visual quality humans expect to see (the ground truth), reducing the user experience. The loss in an image compared with its ground truth is called degradation, and the act of undoing degradation is called restoration. Although many advanced techniques have been proposed to restore degraded images and videos, real-world degradation remains unsolved. Hence, this thesis dives into and solves specific types of real-world degradation: (1) artificial degradation in image/video compression and (2) natural degradation in smartphone photo scanning.
    Regarding (1), we leverage deep learning techniques to solve compression degradation and recover the other information lost by our effort to reduce compression complexity. Concretely, we sacrifice numerous pixels through down-sampling and discard color information. This creates a new challenge: compensating for the information massively lost through down-sampling, color removal, and compression. Adopting advanced computer-vision techniques, we propose a specific deep neural network, named the restoration-reconstruction deep neural network (RR-DnCNN), to solve super-resolution under compression degradation. Furthermore, we introduce a scheme to compensate for color information with Color Learning and to enhance image quality with Deep Motion Compensation for P-frame coding.
    As a result, our works outperform the standard codec and previous works in the field.
    Regarding (2), one solution is to train a supervised deep neural network on many digital images and their smartphone-scanned versions. However, this requires high labor cost, leading to limited training data. Previous works create training pairs by simulating degradation with low-level image processing techniques; their synthetic images are then paired with perfectly scanned photos in latent space. Even so, real-world degradation in smartphone photo scanning remains unsolved, since it is more complicated due to lens defocus, low-cost cameras, and loss of detail through printing. Moreover, local structural misalignment still occurs in the data because of distorted shapes captured in a 3-D world, reducing restoration performance and the reliability of quantitative evaluation. To address these problems, we propose the semi-supervised Deep Photo Scan (DPScan). First, we present a way to produce real-world degradation and provide the DIV2K-SCAN dataset for smartphone-scanned photo restoration; Local Alignment is also proposed to reduce the minor misalignment remaining in the data. Second, we simulate many different variants of real-world degradation using low-level image transformations to gain generalization over smartphone-scanned image properties, then train a degradation network to learn how to degrade unscanned images as if a smartphone had scanned them. Finally, we propose a semi-supervised learning scheme that allows our restoration network to be trained on both scanned and unscanned images, diversifying training image content. As a result, the proposed DPScan quantitatively and qualitatively outperforms its baseline architecture, state-of-the-art academic research, and industrial products in the field. Doctoral dissertation, Doctor of Engineering (博士(工学)), Hosei University.
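The degradation pipeline described for (1) discards pixels by down-sampling and removes color before coding; the network must then recover both. A minimal pure-Python sketch of these two losses (the helpers and the 2x2 averaging filter are illustrative assumptions, not the exact filters used by RR-DnCNN):

```python
def rgb_to_gray(pixel):
    # Rec. 601 luma weights: the color information that the
    # Color Learning stage would later need to restore
    r, g, b = pixel
    return 0.299 * r + 0.587 * g + 0.114 * b

def downsample_2x(img):
    # Halve each dimension by averaging 2x2 blocks,
    # discarding three quarters of the pixels before coding
    h, w = len(img), len(img[0])
    out = []
    for y in range(0, h - 1, 2):
        row = []
        for x in range(0, w - 1, 2):
            block = [img[y][x], img[y][x + 1],
                     img[y + 1][x], img[y + 1][x + 1]]
            row.append(sum(block) / 4)
        out.append(row)
    return out
```

The compressed stream then carries only the small grayscale image; super-resolution and colorization must invert both steps at the decoder.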

    High-performance hardware accelerators for image processing in space applications

    Mars is a hard place to reach. While there have been many notable successes in getting probes to the Red Planet, the historical record is full of bad news, and the success rate for actually landing on the Martian surface is even worse, roughly 30%. This low success rate is mainly due to the characteristics of the Mars environment. Strong winds frequently blow in the Martian atmosphere; this phenomenon tends to modify the lander's descent trajectory, diverting it from the target one. Moreover, the Martian surface is not an easy place to perform a safe landing: it is pitted with many closely spaced craters and huge stones, and characterized by huge mountains and hills (e.g., Olympus Mons is 648 km in diameter and 27 km tall). For these reasons, a mission failure due to landing in a large crater, on big stones, or on a steeply sloped part of the surface is highly probable. In recent years, all space agencies have increased their research efforts to enhance the success rate of Mars missions. In particular, the two hottest research topics are active debris removal and guided landing on Mars. The former aims at finding new methods to remove space debris using unmanned spacecraft. These must be able to autonomously detect a piece of debris, analyse it to extract its characteristics in terms of weight, speed, and dimensions, and eventually rendezvous with it. To perform these tasks the spacecraft must have strong vision capabilities: it must be able to take pictures and process them with very complex image processing algorithms to detect, track, and analyse the debris. The latter aims at increasing the precision of the landing point (i.e., shrinking the landing ellipse) on Mars. Future space missions will increasingly adopt video-based navigation systems to assist the entry, descent and landing (EDL) phase of space modules (e.g., spacecraft), enhancing the precision of automatic EDL navigation systems.
    For instance, recent space exploration missions, e.g., Spirit, Opportunity, and Curiosity, used an EDL procedure aimed at following a fixed, precomputed descent trajectory to reach a precise landing point. This approach guarantees a maximum landing-point precision of about 20 km. Comparing this figure with the characteristics of the Mars environment makes it clear why the mission failure probability remains very high. A very challenging problem is designing an autonomous guided EDL system able to further reduce the landing ellipse, guaranteeing avoidance of dangerous areas of the Martian surface (e.g., large craters or big stones) that could lead to mission failure. Autonomous behaviour is mandatory, since a manually driven approach is not feasible given the distance between Earth and Mars. Since this distance varies from roughly 56 to 100 million km due to orbital eccentricity, even a signal travelling at the speed of light needs between about 3 and 6 minutes one way, so a command round trip can exceed the overall duration of the EDL phase. In both applications, the algorithms must guarantee self-adaptability to the environmental conditions: since the harsh conditions of Mars (and of space in general) are difficult to predict at design time, these algorithms must automatically tune their internal parameters to the current conditions. Moreover, real-time performance is another key factor. Since a software implementation of these computationally intensive tasks cannot reach the required performance, the algorithms must be accelerated in hardware. For these reasons, this thesis presents my research work on advanced image processing algorithms for space applications and the associated hardware accelerators. My research activity has focused on both the algorithms and their hardware implementations.
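Given the Earth-Mars distances quoted above, the signal delay that rules out manual EDL control can be checked directly:

```python
C = 299_792_458  # speed of light, m/s

def one_way_minutes(distance_km: float) -> float:
    # time for a signal to cover the Earth-Mars distance once
    return distance_km * 1_000 / C / 60

best = one_way_minutes(56e6)     # closest approach: ~3.1 minutes
worst = one_way_minutes(100e6)   # quoted maximum: ~5.6 minutes
round_trip_worst = 2 * worst     # command + acknowledgement: ~11 minutes
```

Since the EDL phase itself lasts only a few minutes, any ground-in-the-loop control would arrive far too late, which is why the system must be fully autonomous.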
    Concerning the first aspect, I mainly focused my research effort on integrating self-adaptability features into existing algorithms; concerning the second, I studied and validated a methodology to efficiently develop, verify, and validate hardware components aimed at accelerating video-based applications. This approach allowed me to develop and test high-performance hardware accelerators that strongly outperform the current state-of-the-art implementations. The thesis is organized in four main chapters. Chapter 2 starts with a brief introduction to the history of digital image processing; its main content is a description of space missions in which digital image processing plays a key role. A major effort has been spent on the missions on which my research activity has a substantial impact: for these missions, the chapter deeply analyzes and evaluates the state-of-the-art approaches and algorithms. Chapter 3 analyzes and compares the two technologies used to implement high-performance hardware accelerators, i.e., Application-Specific Integrated Circuits (ASICs) and Field Programmable Gate Arrays (FPGAs). This information helps the reader understand the main reasons behind the decision of space agencies to exploit FPGAs instead of ASICs for high-performance hardware accelerators in space missions, even though FPGAs are more sensitive to Single Event Upsets (SEUs, i.e., transient errors induced in hardware components by alpha particles and solar radiation in space). Moreover, this chapter deeply describes the three available space-grade FPGA technologies (i.e., one-time programmable, flash-based, and SRAM-based) and the main fault-mitigation techniques against SEUs that are mandatory for employing space-grade FPGAs in actual missions. Chapter 4 describes one of the main contributions of my research work: a library of high-performance hardware accelerators for image processing in space applications.
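Among the SEU fault-mitigation techniques referred to above, a widely used one (named here as an assumption; the chapter may cover others such as scrubbing or error-correcting codes) is triple modular redundancy (TMR): each logic block is replicated three times and a majority voter masks a single upset. A minimal behavioural sketch, in Python rather than an HDL, purely for illustration:

```python
def majority_vote(a: int, b: int, c: int) -> int:
    # Bitwise 2-of-3 majority: any single upset replica is outvoted
    return (a & b) | (a & c) | (b & c)

# One replica suffers a single-bit flip; the voter still yields
# the value agreed on by the two healthy replicas
golden = 0b1010
upset = golden ^ 0b0100
assert majority_vote(golden, upset, golden) == golden
```

The cost is roughly 3x the logic area plus the voters, which is the price accepted for flying SRAM-based FPGAs despite their SEU sensitivity.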
    The basic idea behind this library is to offer designers a set of validated hardware components able to strongly speed up the basic image processing operations commonly used in an image processing chain. In other words, these components can be used directly as elementary building blocks to easily create a complex image processing system, without wasting time in the debug and validation phase. The library groups the proposed hardware accelerators into IP-core families; the components in a family share the same provided functionality and input/output interface. This harmonization of the I/O interface makes it possible to substitute components of the same family inside a complex image processing system without modifying the system's communication infrastructure. Besides the analysis of the internal architecture of the proposed components, another important aspect of this chapter is the methodology used to develop, verify, and validate the proposed high-performance image processing hardware accelerators. This methodology involves different programming and hardware description languages in order to support the designer from algorithm modelling up to hardware implementation and validation. Chapter 5 presents the proposed complex image processing systems. In particular, it exploits a set of actual case studies, drawn from the most recent space-agency needs, to show how the hardware accelerator components can be assembled to build a complex image processing system. In addition to the hardware accelerators contained in the library, the described complex systems embed innovative ad-hoc hardware components and software routines able to provide high-performance, self-adaptable image processing functionalities.
    To prove the benefits of the proposed methodology, each case study concludes with a comparison against the current state-of-the-art implementations, highlighting the benefits in terms of performance and self-adaptability to the environmental conditions.