
    Design of a High-Speed Architecture for Stabilization of Video Captured Under Non-Uniform Lighting Conditions

    Video captured under shaky conditions exhibits unwanted vibrations. A robust algorithm that stabilizes the video by compensating for vibrations induced by the physical setting of the camera is presented in this dissertation. A very high performance hardware architecture on Field Programmable Gate Array (FPGA) technology is also developed to implement the stabilization system. Stabilization of video sequences captured under non-uniform lighting conditions begins with a nonlinear enhancement process, which improves the visibility of scenes captured by physical sensing devices with limited dynamic range. This physical limitation causes saturated regions of the image to shadow out the rest of the scene; it is therefore desirable to recover a more uniform scene that eliminates the shadows to some extent. Stabilization of video requires the estimation of global motion parameters: once reliable background motion is obtained, the video can be spatially transformed to the reference sequence, eliminating the unintended motion of the camera. A reflectance-illuminance model for video enhancement is used in this research work to improve the visibility and quality of the scene, and fast color space conversion keeps the computational complexity to a minimum. The basic video stabilization model is formulated and configured for hardware implementation. The model involves evaluation of reliable features for tracking, motion estimation, and an affine transformation that maps the display coordinates of the stabilized sequence. Multiplications, divisions, and exponentiations are replaced by simple arithmetic and logic operations using improved log-domain computations in the hardware modules. On Xilinx's Virtex II 2V8000-5 FPGA platform, the prototype system consumes 59% of the logic slices, 30% of the flip-flops, 34% of the lookup tables, 35% of the embedded RAMs, and two ZBT frame buffers. The system renders 180.9 million pixels per second (mpps) and consumes approximately 30.6 watts of power at 1.5 volts. With a 1024×1024 frame, this throughput is equivalent to 172 frames per second (fps). Future work will optimize the performance-resource trade-off to meet application-specific needs and extend the model to extraction and tracking of moving objects, since the model inherently encapsulates the attributes of spatial distortion and motion prediction to reduce complexity. With these parameters to narrow the processing range, a minimum of 20 fps is achievable on desktop computers with Intel Core 2 Duo or Quad Core CPUs and 2 GB of DDR2 memory, without dedicated hardware.
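    The log-domain substitution described above can be illustrated with a minimal floating-point sketch. This is not the dissertation's hardware design: the FPGA modules presumably use fixed-point log/antilog approximations, whereas `math.log`/`math.exp` here are stand-ins chosen purely to show how multiply, divide, and exponentiation reduce to add, subtract, and a single multiply in the log domain.

```python
import math

def log_domain_multiply(a, b):
    """Multiply two positive values via addition in the log domain."""
    return math.exp(math.log(a) + math.log(b))

def log_domain_divide(a, b):
    """Divide two positive values via subtraction in the log domain."""
    return math.exp(math.log(a) - math.log(b))

def log_domain_power(a, p):
    """Compute a**p via a single multiplication in the log domain."""
    return math.exp(p * math.log(a))
```

In hardware, the two transcendental conversions are typically table lookups, so each arithmetic operation costs only adders and a small amount of memory.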

    Perceptually-Driven Video Coding with the Daala Video Codec

    The Daala project is a royalty-free video codec that attempts to compete with the best patent-encumbered codecs. Part of our strategy is to replace core tools of traditional video codecs with alternative approaches, many of them designed to take perceptual aspects into account, rather than optimizing for simple metrics like PSNR. This paper documents some of our experiences with these tools: which ones worked and which did not. We evaluate which tools are easy to integrate into a more traditional codec design, and show results in the context of the codec being developed by the Alliance for Open Media.
    Comment: 19 pages, Proceedings of SPIE Workshop on Applications of Digital Image Processing (ADIP), 201

    Rate-distortion optimized geometrical image processing

    Since geometrical features, like edges, represent some of the most important perceptual information in an image, efficient exploitation of such geometrical information is a key ingredient of many image processing tasks, including compression, denoising, and feature extraction. The challenge for the image processing community is therefore to design efficient geometrical schemes that can capture the intrinsic geometrical structure of natural images. This thesis focuses on developing computationally efficient tree-based algorithms for attaining the optimal rate-distortion (R-D) behavior for certain simple classes of geometrical images, such as piecewise polynomial images with polynomial boundaries. A good approximation of this class allows one to develop good approximation and compression schemes for images with strong geometrical features and, as experimental results show, also for real-life images. We first investigate both one-dimensional (1-D) and two-dimensional (2-D) piecewise polynomial signals. For the 1-D case, our scheme is based on a binary-tree segmentation of the signal. The scheme approximates the signal segments using polynomial models and utilizes an R-D optimal bit-allocation strategy among the different signal segments. It further encodes similar neighbors jointly and is called the prune-join algorithm. This allows it to achieve the correct exponentially decaying R-D behavior, D(R) ~ 2^(-cR), thus improving over classical wavelet schemes. We also show that the computational complexity of the scheme is O(N log N). We then extend this scheme to the 2-D case using a quadtree, which also achieves an exponentially decaying R-D behavior for the piecewise polynomial image model, with a low computational cost of O(N log N). Again, the key is an R-D optimized prune-and-join strategy. We further analyze the R-D performance of the proposed tree algorithms for piecewise smooth signals. We show that the proposed algorithms achieve the oracle-like polynomially decaying asymptotic R-D behavior in both the 1-D and 2-D scenarios. Theoretical as well as numerical results show that the proposed schemes outperform wavelet-based coders in the 2-D case. We then consider two interesting image processing problems, namely denoising and stereo image compression, in the framework of tree-structured segmentation. For the denoising problem, we present a tree-based algorithm which performs denoising by compressing the noisy image, achieving improved visual quality by capturing geometrical features, like edges, more precisely than wavelet-based schemes. We then develop a novel rate-distortion optimized disparity-based coding scheme for stereo images. The main novelty of the proposed algorithm is that it performs joint coding of the disparity information and the residual image to achieve better R-D performance than standard block-based stereo image coders.
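    The pruning half of the 1-D prune-join idea can be sketched in a few lines. This is a simplified illustration, not the thesis's algorithm: the join step and the exact bit-allocation strategy are omitted, and the rate of a segment is crudely modeled as a fixed header cost plus a fixed number of bits per polynomial coefficient. Each node compares its own Lagrangian cost D + λR against the cost of splitting, and keeps the cheaper option.

```python
import numpy as np

def rd_prune(signal, lo, hi, lam, degree=1, header_bits=2, coef_bits=8):
    """Binary-tree segmentation with Lagrangian R-D pruning (a sketch).

    Fits a polynomial model to signal[lo:hi], then recursively compares
    the leaf cost D + lam*R against the cost of splitting at the midpoint.
    Returns (cost, list_of_segments) where segments are (lo, hi) pairs.
    """
    x = np.arange(lo, hi)
    seg = signal[lo:hi]
    coeffs = np.polyfit(x, seg, min(degree, len(seg) - 1))
    dist = float(np.sum((np.polyval(coeffs, x) - seg) ** 2))
    leaf_cost = dist + lam * (header_bits + coef_bits * len(coeffs))
    if hi - lo <= 2 * (degree + 1):          # too small to split further
        return leaf_cost, [(lo, hi)]
    mid = (lo + hi) // 2
    cl, segl = rd_prune(signal, lo, mid, lam, degree, header_bits, coef_bits)
    cr, segr = rd_prune(signal, mid, hi, lam, degree, header_bits, coef_bits)
    split_cost = cl + cr + lam * header_bits  # one extra split flag
    if leaf_cost <= split_cost:               # prune: the parent model wins
        return leaf_cost, [(lo, hi)]
    return split_cost, segl + segr
```

On a piecewise-linear signal with a single breakpoint at the midpoint, the tree splits once at the breakpoint and then prunes everywhere else, recovering exactly two segments.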

    Editable View Optimized Tone Mapping For Viewing High Dynamic Range Panoramas On Head Mounted Display

    Head mounted displays are characterized by relatively low resolution and low dynamic range. These limitations significantly reduce the visual quality of photo-realistic captures on such displays. This thesis presents an interactive, view-optimized tone mapping technique for viewing high dynamic range panoramas as large as 16384 by 8192 pixels on head mounted displays. The technique generates a separate file storing precomputed, view-adjusted mapping function parameters; we define this representation as ToneTexture. The use of view-adjusted tone mapping expands the perceived color space available to the end user, yielding an improved visual appearance for both high dynamic range and low dynamic range panoramas on such displays. Moreover, an interface for manipulating the ToneTexture lets users adjust the mapping function to change the color emphasis. We present comparisons of the results produced by the ToneTexture technique against the widely used Reinhard and Filmic tone mapping operators, both objectively via mathematical quality assessment metrics and subjectively through a user study. Demonstration systems are available for desktop and head mounted displays such as Oculus Rift and GearVR.
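    As a point of reference, the global Reinhard operator used as one of the comparison baselines can be sketched in a few lines. The key value and epsilon below are conventional illustrative defaults; the view-optimized ToneTexture parameters themselves are precomputed per view and are not reproduced here.

```python
import numpy as np

def reinhard_tonemap(luminance, key=0.18, eps=1e-6):
    """Global Reinhard operator: L_d = L_m / (1 + L_m).

    Scales the input by the key value relative to the log-average
    luminance, then compresses it into [0, 1).
    """
    log_avg = np.exp(np.mean(np.log(luminance + eps)))
    lm = key * luminance / log_avg
    return lm / (1.0 + lm)
```

The operator is monotone and maps any positive luminance range into [0, 1), which is what makes it a convenient baseline for low dynamic range displays.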

    Automated Complexity-Sensitive Image Fusion

    To construct a complete representation of a scene with environmental obstacles such as fog, smoke, darkness, or textural homogeneity, multisensor video streams captured in different modalities are considered. A computational method for automatically fusing multimodal image streams into a highly informative and unified stream is proposed. The method consists of the following steps:
    1. Image registration is performed to align video frames in the visible band over time, adapting to the nonplanarity of the scene by automatically subdividing the image domain into regions approximating planar patches.
    2. Wavelet coefficients are computed for each of the input frames in each modality.
    3. Corresponding regions and points are compared using spatial and temporal information across various scales.
    4. Decision rules based on the results of multimodal image analysis are used to combine the wavelet coefficients from different modalities.
    5. The combined wavelet coefficients are inverted to produce an output frame containing useful information gathered from the available modalities.
    Experiments show that the proposed system is capable of producing fused output containing the characteristics of color visible-spectrum imagery while adding information exclusive to infrared imagery, with attractive visual and informational properties.
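    The wavelet-domain portion of the pipeline above can be sketched under simplifying assumptions: a single-level 2-D Haar transform stands in for the paper's multiscale wavelets, and a max-magnitude rule on detail coefficients stands in for its multimodal decision rules. The inputs are assumed to be two already-registered single-channel frames of even dimensions.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform: returns (LL, LH, HL, HH) subbands."""
    a = (img[0::2] + img[1::2]) / 2.0      # row averages
    d = (img[0::2] - img[1::2]) / 2.0      # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d (perfect reconstruction)."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d = np.empty_like(a)
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    img = np.empty((a.shape[0] * 2, a.shape[1]))
    img[0::2], img[1::2] = a + d, a - d
    return img

def fuse(img_a, img_b):
    """Fuse two registered frames in the wavelet domain.

    Averages the coarse LL band and, in each detail band, keeps the
    coefficient with the larger magnitude (a simple stand-in for the
    paper's decision rules), then inverts the transform.
    """
    A, B = haar2d(img_a), haar2d(img_b)
    fused = [(A[0] + B[0]) / 2.0]
    for ca, cb in zip(A[1:], B[1:]):
        fused.append(np.where(np.abs(ca) >= np.abs(cb), ca, cb))
    return ihaar2d(*fused)
```

Because the transform has perfect reconstruction, fusing a frame with itself returns the frame unchanged, which is a convenient sanity check for any fusion rule plugged into this skeleton.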

    Lossy Depth Image Compression using Greedy Rate-Distortion Slope Optimization
