
    Development of Some Spatial-domain Preprocessing and Post-processing Algorithms for Better 2-D Up-scaling

    Image super-resolution has attracted great interest in recent years and is widely used in applications such as video streaming, multimedia, internet technologies, consumer electronics, and the display and printing industries. Image super-resolution is the process of increasing the resolution of a given image without losing its integrity; its most common application is to provide a better visual effect after resizing a digital image for display or printing. One way of improving image resolution is through 2-D interpolation. An up-scaled image should retain all the image details with minimal blurring for good visual quality. Many efficient 2-D interpolation schemes in the literature preserve image details well in the up-scaled images, particularly in regions with edges and fine detail. Nevertheless, even these schemes blur the up-scaled images, because high-frequency (HF) content degrades during up-sampling. Hence, there is scope to further improve their performance through efficient but simple spatial-domain pre-processing, post-processing and composite algorithms that effectively restore the HF content of up-scaled images for various online and off-line applications. The efficient and widely used Lanczos-3 interpolation is taken as the baseline for improvement through the proposed algorithms. The pre-processing algorithms developed in this thesis are summarized below; the term pre-processing refers to processing the low-resolution input image prior to image up-scaling.
The pre-processing algorithms proposed in this thesis are: the Laplacian-of-Laplacian based global pre-processing (LLGP) scheme; hybrid global pre-processing (HGP); iterative Laplacian-of-Laplacian based global pre-processing (ILLGP); unsharp-masking based pre-processing (UMP); iterative unsharp masking (IUM); and an error-based up-sampling (EU) scheme. LLGP, HGP and ILLGP are spatial-domain pre-processing algorithms based on 4th-, 6th- and 8th-order derivatives, respectively, to alleviate non-uniform blurring in up-scaled images. They obtain the high-frequency (HF) extracts of an image by employing higher-order derivatives and perform precise sharpening on the low-resolution image to alleviate blurring in its 2-D up-sampled counterpart. In the unsharp-masking based pre-processing (UMP) scheme, a blurred version of the low-resolution image is subtracted from the original to extract the HF content; a weighted version of this HF extract is superimposed on the original image to produce a sharpened image prior to up-scaling, countering blurring effectively. IUM uses many iterations to generate an unsharp mask containing very-high-frequency (VHF) components; the VHF extract results from decomposing the signal into sub-bands using an analysis filter bank. Since the degradation of the VHF components is greatest, restoring them yields much better restoration performance. EU is a pre-processing scheme in which the HF degradation caused by up-scaling is extracted as a prediction error containing the lost HF components; superimposing this error on the low-resolution image prior to up-sampling considerably reduces blurring in the up-scaled image. The post-processing algorithms developed in this thesis are summarized in the following.
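The unsharp-masking idea described above (extract HF by subtracting a blurred copy, then add a weighted HF extract back before up-scaling) can be sketched as follows. This is a minimal illustration, not the thesis's exact scheme: the box blur, the `weight` value and the [0, 1] intensity range are assumptions.

```python
import numpy as np

def box_blur(img, k=3):
    """Crude low-pass filter; stands in for any blur kernel."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def unsharp_mask_preprocess(img, weight=0.6):
    """Sharpen the low-resolution image before up-scaling:
    HF extract = original - blurred; add a weighted copy back."""
    hf = img - box_blur(img)
    return np.clip(img + weight * hf, 0.0, 1.0)
```

A flat region has no HF content and passes through unchanged, while edges receive extra contrast that compensates for the HF loss of the subsequent interpolation.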
The term post-processing refers to processing the high-resolution up-scaled image. The post-processing algorithms proposed in this thesis are: local adaptive Laplacian (LAL); fuzzy weighted Laplacian (FWL); and a Legendre functional-link artificial neural network (LFLANN). LAL is a non-fuzzy, locally adaptive scheme: regions of an up-scaled image with high variance are sharpened more than regions with moderate or low variance by employing a local adaptive Laplacian kernel. The weights of the LAL kernel vary with the normalized local variance so as to give high-variance regions a greater degree of HF enhancement than low-variance ones, effectively countering the non-uniform blurring. The FWL post-processing scheme, with a higher degree of non-linearity, is proposed to further improve on LAL: being a fuzzy mapping scheme, FWL is highly non-linear and resolves the blurring problem more effectively than the linear mapping employed by LAL. An LFLANN-based post-processing scheme is also proposed to minimize a cost function and thereby reduce blurring in a 2-D up-scaled image. Legendre polynomials are used for functional expansion of the input pattern vector and provide a high degree of non-linearity, so multiple layers can be replaced by a single-layer LFLANN architecture that reduces the cost function effectively for better restoration performance. The single-layer architecture lowers the computational complexity, making the scheme suitable for real-time applications. There is scope to further improve the stand-alone pre-processing and post-processing schemes by combining them into composite schemes. Two spatial-domain composite schemes, CS-I and CS-II, are proposed to tackle non-uniform blurring in up-scaled images. CS-I combines the global iterative Laplacian (GIL) pre-processing scheme with the LAL post-processing scheme.
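The LAL idea (weight a Laplacian HF boost by normalized local variance, so detailed regions are sharpened more than flat ones) can be sketched as below. It is a rough sketch under stated assumptions: the 3x3 neighbourhood, `strength` value and variance normalization are illustrative choices, not the thesis's kernel.

```python
import numpy as np

def local_adaptive_laplacian(img, strength=0.8, eps=1e-8):
    """Sharpen an up-scaled image in proportion to local variance:
    high-variance (edge/detail) regions get more HF boost than flat ones."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    # 4-neighbour Laplacian: the HF extract of the up-scaled image
    lap = 4 * img - p[:-2, 1:-1] - p[2:, 1:-1] - p[1:-1, :-2] - p[1:-1, 2:]
    # local variance over the same 3x3 neighbourhood
    nbrs = np.stack([p[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3)])
    var = nbrs.var(axis=0)
    weight = strength * var / (var.max() + eps)   # normalized local variance
    return np.clip(img + weight * lap, 0.0, 1.0)
```

Because the per-pixel weight is zero in constant regions, the scheme avoids amplifying noise in smooth areas while still boosting HF content where blurring is most visible.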
Another, highly non-linear composite scheme, CS-II, combines the ILLGP scheme with the fuzzy weighted Laplacian post-processing scheme for better performance than the stand-alone schemes. Finally, it is observed that the proposed ILLGP, IUM, FWL, LFLANN and CS-II algorithms are the best in their respective categories at reducing blurring in up-scaled images.

    Scalable Interactive Volume Rendering Using Off-the-shelf Components

    This paper describes an application of a second-generation implementation of the Sepia architecture (Sepia-2) to interactive volumetric visualization of large rectilinear scalar fields. By employing pipelined associative blending operators in a sort-last configuration, a demonstration system with 8 rendering computers sustains 24 to 28 frames per second while interactively rendering large data volumes (1024x256x256 voxels and 512x512x512 voxels). We believe interactive performance at these frame rates and data sizes is unprecedented. We also believe these results can be extended to other types of structured and unstructured grids and a variety of GL rendering techniques, including surface rendering and shadow mapping. We show how to extend our single-stage crossbar demonstration system to multi-stage networks in order to support much larger data sizes and higher image resolutions. This requires solving a dynamic mapping problem for a class of blending operators that includes the Porter-Duff compositing operators.
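The associativity that makes pipelined sort-last blending possible is a property of the Porter-Duff operators themselves. A minimal sketch of the classic "over" operator on premultiplied-alpha pixels (the pixel representation is an assumption; the paper covers a broader operator class):

```python
def over(front, back):
    """Porter-Duff 'over' on premultiplied-alpha (r, g, b, a) pixels.
    result = F + (1 - alpha_F) * B, applied to every channel including alpha."""
    return tuple(f + (1.0 - front[3]) * b for f, b in zip(front, back))
```

Because `over` is associative, partial composites produced by different rendering nodes can be merged in any bracketing, which is exactly what lets a multi-stage compositing network replace a single crossbar.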

    Design of Novel Algorithm and Architecture for Gaussian Based Color Image Enhancement System for Real Time Applications

    This paper presents a new algorithm for a Gaussian-based color image enhancement system, together with an architecture suitable for FPGA/ASIC implementation. The enhancement is achieved by first convolving the original image with a Gaussian kernel, since the Gaussian distribution is a point-spread function that smooths the image. Logarithm-domain processing and gain/offset corrections are then employed to enhance the image and translate pixels into the display range of 0 to 255. The proposed algorithm not only provides better dynamic-range compression and color rendition but also achieves color constancy in an image. The design exploits a high degree of pipelining and parallel processing to achieve real-time performance. It has been realized in RTL-compliant Verilog and fits into a single FPGA with a gate-count utilization of 321,804. Implemented on a Xilinx Virtex-II Pro XC2VP40-7FF1148 FPGA device, it is capable of processing high-resolution color motion pictures of up to 1600x1200 pixels at a real-time video rate of 116 frames per second, showing that the proposed design works not only for still images but also for high-resolution video sequences. Comment: 15 pages, 15 figures
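The pipeline the abstract outlines (Gaussian surround estimate, log-domain ratio, gain/offset correction into 0-255) resembles single-scale retinex and can be sketched in software as below. This is a floating-point illustration under assumed parameters (`sigma`, min-max gain/offset), not the paper's fixed-point hardware design.

```python
import numpy as np

def gaussian_blur(img, sigma=2.0):
    """Separable Gaussian convolution with edge padding."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    g /= g.sum()
    conv = lambda v: np.convolve(np.pad(v, radius, mode="edge"), g, "valid")
    tmp = np.apply_along_axis(conv, 1, img)   # rows
    return np.apply_along_axis(conv, 0, tmp)  # columns

def retinex_enhance(img, sigma=2.0, eps=1e-6):
    """Log-domain ratio of each pixel to its Gaussian-smoothed surround,
    then a gain/offset correction into the 0..255 display range."""
    r = np.log(img + eps) - np.log(gaussian_blur(img, sigma) + eps)
    r = (r - r.min()) / (r.max() - r.min() + eps)  # gain/offset correction
    return (255 * r).astype(np.uint8)
```

The hardware version pipelines the convolution, logarithm and correction stages per pixel, which is what sustains the reported 116 fps at 1600x1200.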

    SARA: Self-Aware Resource Allocation for Heterogeneous MPSoCs

    In modern heterogeneous MPSoCs, the management of shared memory resources is crucial in delivering end-to-end QoS. Previous frameworks have focused either on singular QoS targets or on the allocation of partitionable resources among CPU applications at relatively slow timescales. However, heterogeneous MPSoCs typically require an instant response from the memory system, where most resources cannot be partitioned. Moreover, the health of different cores in a heterogeneous MPSoC is often measured against diverse performance objectives. In this work, we propose a Self-Aware Resource Allocation (SARA) framework for heterogeneous MPSoCs. Priority-based adaptation allows cores to pursue different performance targets and to self-monitor their own intrinsic health; in response, the system allocates non-partitionable resources based on priorities. The proposed framework meets a diverse range of QoS demands from heterogeneous cores. Comment: Accepted by the 55th annual Design Automation Conference 2018 (DAC'18).
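The priority-driven allocation the abstract describes might look roughly like the toy arbiter below: each core self-reports measured performance against its own target, and slots of a non-partitionable resource (here, hypothetical memory-request slots) are granted in proportion to normalized shortfall. Everything here, including the proportional rule, is an illustrative assumption rather than SARA's actual mechanism.

```python
def allocate_slots(cores, total_slots):
    """Toy priority-based grant of a non-partitionable resource.
    cores: {name: (measured_perf, target_perf)}; priority is the
    core's normalized shortfall against its own QoS target."""
    shortfall = {name: max(0.0, (tgt - meas) / tgt)
                 for name, (meas, tgt) in cores.items()}
    total = sum(shortfall.values())
    if total == 0.0:  # every core healthy: share evenly
        return {name: total_slots // len(cores) for name in cores}
    # unhealthy cores get slots in proportion to how far below target they are
    return {name: int(total_slots * s / total) for name, s in shortfall.items()}
```

The key point the sketch captures is that priorities are derived from per-core self-monitoring rather than from a single global metric.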

    Semantic multimedia remote display for mobile thin clients

    Current remote display technologies for mobile thin clients convert practically all types of graphical content into sequences of images rendered by the client. Consequently, important information concerning the content semantics is lost. The present paper goes beyond this bottleneck by developing a semantic multimedia remote display. The principle consists of representing the graphical content as a real-time interactive multimedia scene graph. The underlying architecture features novel components for scene-graph creation and management, as well as for user-interactivity handling. The experimental setup considers the Linux X window system and BiFS/LASeR multimedia scene technologies on the server and client sides, respectively. The implemented solution was benchmarked against currently deployed solutions (VNC and Microsoft RDP) on text-editing and WWW-browsing applications. The quantitative assessments demonstrate: (1) visual quality expressed by seven objective metrics, e.g., PSNR values between 30 and 42 dB or SSIM values larger than 0.9999; (2) downlink bandwidth gain factors ranging from 2 to 60; (3) real-time user event management expressed by network round-trip-time reduction by factors of 4-6 and by uplink bandwidth gain factors from 3 to 10; (4) feasible CPU activity, larger than in the RDP case but reduced by a factor of 1.5 with respect to VNC-HEXTILE.

    Graphics processing unit accelerating compressed sensing photoacoustic computed tomography with total variation

    Photoacoustic computed tomography with compressed sensing (CS-PACT) is a commonly used imaging strategy for sparse-sampling PACT. However, it is very time-consuming because of the iterative process involved in image reconstruction. In this paper, we present a graphics processing unit (GPU)-based parallel computation framework for total-variation-based CS-PACT and adapt it to a custom-made PACT system. Specifically, five compute-intensive operators are extracted from the iterative algorithm and redesigned for parallel execution on a GPU. We achieved image reconstruction 24-31 times faster than the CPU implementation. We performed in vivo experiments on human hands to verify the feasibility of the developed method.
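To give a flavour of the kind of per-pixel operator such a framework parallelizes, here is a sketch of an anisotropic total-variation term and one smoothed-TV descent step. This is a generic illustration of TV regularization, not one of the paper's five operators; the smoothing constant and boundary handling are assumptions.

```python
import numpy as np

def total_variation(img):
    """Anisotropic TV: sum of absolute forward differences."""
    return np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum()

def tv_smooth_step(img, step=0.1, eps=1e-8):
    """One descent step on a smoothed TV term. Every output pixel depends
    only on a small neighbourhood, which is why such operators map
    naturally onto one GPU thread per pixel."""
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    norm = np.sqrt(gx ** 2 + gy ** 2 + eps)
    nx, ny = gx / norm, gy / norm
    # negative subgradient of TV: divergence of the normalized gradient field
    div = (nx - np.roll(nx, 1, axis=1)) + (ny - np.roll(ny, 1, axis=0))
    return img + step * div
```

In a full CS reconstruction, this regularization step alternates with a data-fidelity update against the measured photoacoustic signals; the iteration count is what makes CPU execution slow and GPU parallelism worthwhile.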