
    Steered mixture-of-experts for light field images and video : representation and coding

    Research in light field (LF) processing has increased heavily over the last decade, largely driven by the desire to achieve the same level of immersion and navigational freedom for camera-captured scenes as is currently available for CGI content. Standardization organizations such as MPEG and JPEG continue to follow conventional coding paradigms in which viewpoints are discretely represented on 2-D regular grids, which are then further decorrelated through hybrid DPCM/transform techniques. However, these 2-D regular grids are less suited for high-dimensional data such as LFs. We propose a novel coding framework for higher-dimensional image modalities, called Steered Mixture-of-Experts (SMoE). Coherent areas in the higher-dimensional space are represented by single higher-dimensional entities, called kernels. These kernels hold spatially localized information about the light rays arriving at a certain region from any angle. The global model thus consists of a set of kernels that define a continuous approximation of the underlying plenoptic function. We introduce the theory of SMoE and illustrate its application to 2-D images, 4-D LF images, and 5-D LF video. We also propose an efficient coding strategy to convert the model parameters into a bitstream. Even without provisions for high-frequency information, the proposed method performs comparably to the state of the art at low-to-mid bitrates with respect to subjective visual quality of 4-D LF images. For 5-D LF video, we observe superior decorrelation and coding performance, with coding gains of a factor of 4x in bitrate at the same quality. At least equally important, our method inherently offers functionality desirable for LF rendering that is lacking in other state-of-the-art techniques: (1) full zero-delay random access, (2) light-weight pixel-parallel view reconstruction, and (3) intrinsic view interpolation and super-resolution.
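
    The core SMoE idea can be sketched compactly: each kernel contributes a Gaussian gating window plus a simple (here linear) expert, and the reconstructed amplitude at any coordinate is the gate-weighted sum of the experts. The Python sketch below illustrates that principle for a 2-D image under our own naming (kernel_means, expert_slopes, and so on); it is not the authors' codec, and it omits parameter training and the bitstream coding stage.

    import numpy as np

    def smoe_reconstruct(coords, kernel_means, kernel_covs,
                         expert_offsets, expert_slopes, weights):
        """coords: (N, 2) pixel positions; K kernels with 2-D Gaussian windows."""
        K = kernel_means.shape[0]
        gates = np.empty((coords.shape[0], K))
        experts = np.empty((coords.shape[0], K))
        for k in range(K):
            d = coords - kernel_means[k]                              # offsets to kernel center
            prec = np.linalg.inv(kernel_covs[k])
            maha = np.einsum('ni,ij,nj->n', d, prec, d)               # Mahalanobis distances
            norm = np.sqrt(np.linalg.det(2 * np.pi * kernel_covs[k]))
            gates[:, k] = weights[k] * np.exp(-0.5 * maha) / norm     # Gaussian gating window
            experts[:, k] = expert_offsets[k] + d @ expert_slopes[k]  # linear expert
        gates /= gates.sum(axis=1, keepdims=True)                     # soft (softmax-like) gating
        return (gates * experts).sum(axis=1)                          # gate-weighted sum of experts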

    Compression of spectral meteorological imagery

    Data compression is essential for current low-earth-orbit spectral sensors with global coverage, e.g., meteorological sensors. Such sensors routinely produce in excess of 30 Gb of data per orbit (over 4 Mb/s for about 110 minutes) while typically being limited to less than 10 Gb of downlink capacity per orbit (15 minutes at 10 Mb/s). Astro-Space Division develops spaceborne compression systems with compression ratios ranging from as little as three-to-one to as much as twenty-to-one for high-fidelity reconstructions. Current hardware production and development at Astro-Space Division focuses on discrete cosine transform (DCT) systems implemented with the GE PFFT chip, a 32x32 2-D DCT engine. Spectral relations in the data are exploited through block mean extraction followed by an orthonormal transformation. The transformation produces blocks with spatial correlation that are suitable for further compression with any block-oriented spatial compression system, e.g., Astro-Space Division's Laplacian modeler and analytic encoder of DCT coefficients.
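
    As a rough illustration of the block-oriented spatial stage described above, the Python sketch below removes each block's mean and applies a 32x32 2-D DCT, matching the block size of the GE PFFT chip. It is only a sketch under our own naming: the spectral decorrelation stage, quantization, and entropy coding are not shown.

    import numpy as np
    from scipy.fft import dctn

    def block_dct32(band, block=32):
        """band: 2-D array of one spectral band whose sides are multiples of `block`."""
        h, w = band.shape
        coeffs = np.empty_like(band, dtype=float)
        means = np.empty((h // block, w // block))
        for i in range(0, h, block):
            for j in range(0, w, block):
                tile = band[i:i+block, j:j+block].astype(float)
                m = tile.mean()
                means[i // block, j // block] = m                     # block mean, handled separately
                coeffs[i:i+block, j:j+block] = dctn(tile - m, norm='ortho')  # 32x32 2-D DCT
        return means, coeffs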

    Optical-logic-array processor using shadowgrams. II. Optical parallel digital image processing

    Published in the Journal of the Optical Society of America A; full text at http://dx.doi.org/10.1364/JOSAA.2.001237

    A field programmable gate array based modular motion control platform

    Expectations of motion control systems have been rising day by day. As systems become more complex, conventional motion control systems cannot meet all specifications with optimized results, which creates the necessity of fundamental changes in the infrastructure of the system. Field programmable gate array (FPGA) technology enables reconfiguration of the digital hardware, thus removing the need for infrastructural changes when minor modifications to the hardware are required, even after the system is deployed. An FPGA-based hardware system shrinks the size of the hardware and hence its cost. FPGAs also provide better power ratings as well as a more reliable system with improved performance. As a trade-off, development is more difficult than for software-based systems, which also lengthens the research and development time of the overall system. In this paper, a level of abstraction is introduced in order to reduce the advanced hardware description language (HDL) knowledge required to implement motion control systems entirely on an FPGA. The intellectual property library consists of synthesizable hardware modules implemented specifically for motion control purposes. Other parts of a motion control system, such as the user interface and trajectory generation, are implemented as software functions in order to preserve the modularity of the system. There are also several external hardware designs for interfacing with and driving various types of actuators.
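
    Since the abstract assigns trajectory generation to the software side (the motion control IP cores remain in HDL on the FPGA), the Python sketch below shows the kind of software function meant: a symmetric trapezoidal velocity profile yielding position setpoints. All names are illustrative, not the paper's.

    def trapezoidal_profile(distance, v_max, a_max, dt):
        """Yield position setpoints covering `distance` under velocity/acceleration limits."""
        t_acc = v_max / a_max
        d_acc = 0.5 * a_max * t_acc ** 2
        if 2 * d_acc > distance:                    # triangular profile: v_max never reached
            t_acc = (distance / a_max) ** 0.5
            v_max = a_max * t_acc
            d_acc = 0.5 * distance
        t_flat = (distance - 2 * d_acc) / v_max
        t_total = 2 * t_acc + t_flat
        t = 0.0
        while t <= t_total:
            if t < t_acc:                           # accelerate
                s = 0.5 * a_max * t ** 2
            elif t < t_acc + t_flat:                # cruise at v_max
                s = d_acc + v_max * (t - t_acc)
            else:                                   # decelerate
                td = t_total - t
                s = distance - 0.5 * a_max * td ** 2
            yield s
            t += dt
        yield distance                              # final setpoint exactly at the target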

    Data compression techniques applied to high resolution high frame rate video technology

    An investigation is presented of video data compression applied to microgravity space experiments using High Resolution High Frame Rate Video Technology (HHVT). An extensive survey was conducted of video data compression methods described in the open literature, focusing on methods that employ digital computing. The results of the survey are presented; they include a description of each method and an assessment of image degradation and video data parameters. An assessment is also made of present and near-term future technology for implementing video data compression in a high-speed imaging system, and its results are discussed and summarized. The results of a study of a baseline HHVT video system, and approaches for implementing video data compression, are presented. Case studies of three microgravity experiments are presented, and specific compression techniques and implementations are recommended.

    3-D Mesh geometry compression with set partitioning in the spectral domain

    This paper describes the development of a highly efficient progressive 3-D mesh geometry coder based on the region-adaptive transform of the spectral mesh compression method. A hierarchical set partitioning technique, originally proposed for the efficient compression of wavelet transform coefficients in high-performance wavelet-based image coding methods, is proposed for the efficient compression of the coefficients of this transform. Experiments confirm that the proposed coder employing such a region-adaptive transform achieves a compression performance rarely matched by other state-of-the-art 3-D mesh geometry compression algorithms. A new, high-performance fixed spectral basis method is also proposed for reducing the computational complexity of the transform. Many-to-one mappings are employed to relate the coded irregular mesh region to a regular mesh whose basis is used. To prevent the loss of compression performance due to the low-pass nature of such mappings, transitions are made from transform-based coding to spatial coding on a per-region basis at high coding rates. Experimental results show the performance advantage of the newly proposed fixed spectral basis method over the original fixed spectral basis method in the literature, which employs one-to-one mappings. This work was supported in part by the Scientific and Technological Research Council of Turkey and conducted under Project 106E064.
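
    For readers unfamiliar with spectral mesh compression, the Python sketch below shows the basic projection that such transforms build on: vertex coordinates are expressed in the eigenbasis of the mesh's combinatorial Laplacian, and a low-pass approximation keeps only the lowest-frequency coefficients. It is a generic illustration, not the paper's region-adaptive transform, and the set-partitioning coefficient coder is not shown.

    import numpy as np

    def _laplacian(edges, n):
        """Combinatorial (graph) Laplacian of a mesh with n vertices."""
        L = np.zeros((n, n))
        for i, j in edges:
            L[i, j] = L[j, i] = -1.0
            L[i, i] += 1.0
            L[j, j] += 1.0
        return L

    def spectral_coefficients(vertices, edges):
        """Project (N, 3) vertex coordinates onto the Laplacian eigenbasis."""
        _, basis = np.linalg.eigh(_laplacian(edges, len(vertices)))  # eigenvalues ascending
        return basis, basis.T @ vertices            # per-axis spectral coefficients

    def low_pass_reconstruct(basis, coeffs, keep):
        """Rebuild geometry from the `keep` lowest-frequency coefficients."""
        truncated = np.zeros_like(coeffs)
        truncated[:keep] = coeffs[:keep]            # drop high-frequency detail
        return basis @ truncated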

    The Space and Earth Science Data Compression Workshop

    This document is the proceedings of the Space and Earth Science Data Compression Workshop, held on March 27, 1992, at the Snowbird Conference Center in Snowbird, Utah. The workshop was held in conjunction with the 1992 Data Compression Conference (DCC '92), which took place at the same location on March 24-26, 1992. The workshop explored opportunities for data compression to enhance the collection and analysis of space and Earth science data. It consisted of eleven papers presented in four sessions. These papers describe research that is integrated into, or has the potential of being integrated into, a particular space and/or Earth science data information system. Presenters were encouraged to take into account the scientists' data requirements and the constraints imposed by the data collection, transmission, distribution, and archival system.

    Enhancement of Digital Photo Frame Capabilities With Dedicated Hardware

    Photo frames have come a long way from the typical ones that needed a photo to be printed and stuck onto them. In today's digital era we have a new concept, the digital photo frame, a modern take on the conventional photo frame. A digital photo frame is essentially a picture frame that displays photos without the need to print them. Frames are available in a variety of sizes and configurations: a typical frame ranges from 7 inches to 20 inches, and key-chain-sized frames are also available. These frames support a variety of formats such as .jpeg, .tiff, and .bmp. Most frames provide an option to play the photos as a slideshow, in sequential or random order, with an adjustable time interval. Photos can be loaded in several ways: directly from the camera's memory card, from memory devices such as USB drives, SD cards, and MMC cards, or, increasingly, over Bluetooth. Another increasingly popular option is for users to pull photos directly from Internet sites such as Flickr and Picasa, or from their e-mail. These frames also generally come with built-in speakers and remote controls. Our initial objective was to decide which features could be added to the digital photo frame that we design. For this purpose we conducted simulation exercises in MATLAB to prove their feasibility. The simulation work was divided into two parts: the first performed compression and decompression, and the second dealt with the various enhancements that could be added to the frame. For compression and decompression we adopted the JPEG standard (Joint Photographic Experts Group), an ISO/ITU standard for compressing still images. The JPEG format is very popular due to its variable compression range; its limitations include the fact that it is lossy and not well suited to displaying text. Common file extensions include *.jpg, *.jff, *.m-jpeg, and *.mpeg. The enhancement features we tested for feasibility include the mean filter, median filter, image sharpening, negative image extraction, logarithmic transformations, power-law (gamma) correction, contrast stretching, grey-level slicing, bit-plane slicing, and Laplace filtering. We then proceeded to the hardware implementation of these features. Owing to design complexity and lack of time, we implemented only a handful of them: the compression and decompression algorithm, plus two enhancement features, the Laplace filter and the median filter. For our implementation we used a Virtex-2 FPGA board.
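
    The two enhancement features chosen for hardware, median filtering and Laplace filtering, can be prototyped in a few lines. The Python sketch below (the thesis used MATLAB and then HDL, so this is only an illustrative stand-in with our own function names) shows a 3x3 median filter for noise removal and Laplacian-based sharpening.

    import numpy as np
    from scipy.ndimage import median_filter, convolve

    def denoise_median(img, size=3):
        """3x3 median filter: removes salt-and-pepper noise while keeping edges."""
        return median_filter(img, size=size)

    def sharpen_laplace(img, strength=1.0):
        """Subtract the Laplacian response from the image to emphasize edges."""
        lap_kernel = np.array([[0,  1, 0],
                               [1, -4, 1],
                               [0,  1, 0]], dtype=float)
        lap = convolve(img.astype(float), lap_kernel, mode='nearest')
        return np.clip(img - strength * lap, 0, 255)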

    A Fully Scalable Video Coder with Inter-Scale Wavelet Prediction and Morphological Coding

    In this paper, a new fully scalable, wavelet-based video coding architecture is proposed in which motion-compensated temporally filtered subbands of spatially scaled versions of a video sequence can be used as a base layer for inter-scale predictions. These predictions take place between data at the same resolution level, without the need for interpolation. The prediction residuals are further transformed by spatial wavelet decompositions. The resulting multi-scale spatio-temporal wavelet subbands are coded with an embedded morphological dilation technique and context-based arithmetic coding. Dyadic spatio-temporal scalability and progressive SNR scalability are achieved. Multiple-adaptation decoding can be implemented easily without knowledge of a predefined set of operating points. The proposed coding system compensates for some of the typical drawbacks of current wavelet-based scalable video coding architectures and shows interesting visual results even when compared with the single-operating-point video coding standard AVC/H.264.
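
    The residual-coding step described above can be sketched as follows: the difference between an enhancement-layer subband and its base-layer prediction (both at the same resolution, so no interpolation is needed) is decomposed with a spatial 2-D wavelet transform. The Python sketch below uses the PyWavelets package and our own function names; the morphological-dilation and context-based arithmetic coding of the subbands, and the temporal filtering itself, are not shown.

    import pywt

    def code_residual(enh_subband, base_prediction, wavelet='bior2.2', levels=2):
        """Spatial wavelet decomposition of the inter-scale prediction residual."""
        residual = enh_subband - base_prediction          # same-resolution prediction
        return pywt.wavedec2(residual, wavelet, level=levels)

    def decode_residual(coeffs, base_prediction, wavelet='bior2.2'):
        """Invert the spatial transform and add back the base-layer prediction."""
        residual = pywt.waverec2(coeffs, wavelet)
        h, w = base_prediction.shape
        return base_prediction + residual[:h, :w]         # crop any reconstruction padding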