
    Fast Color Space Transformations Using Minimax Approximations

    Full text link
    Color space transformations are frequently used in image processing, graphics, and visualization applications. In many cases, these transformations are complex nonlinear functions, which prohibits their use in time-critical applications. In this paper, we present a new approach called Minimax Approximations for Color-space Transformations (MACT). We demonstrate MACT on three commonly used color space transformations. Extensive experiments on a large and diverse image set and comparisons with well-known multidimensional lookup table interpolation methods show that MACT achieves an excellent balance among four criteria: ease of implementation, memory usage, accuracy, and computational speed.
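
    As a rough illustration of the idea (not the paper's MACT procedure), the sketch below replaces an expensive nonlinearity with a low-degree polynomial fitted over a bounded input range; the cube root from the XYZ-to-CIELAB conversion is used as the example, and a NumPy Chebyshev fit stands in for a true minimax (Remez) fit.

```python
import numpy as np

# Expensive nonlinearity from the XYZ -> CIELAB transform (valid above the
# usual linear-segment threshold).  A true minimax fit minimises the maximum
# error; a Chebyshev least-squares fit is a close, easy-to-compute proxy.
def f(t):
    return np.cbrt(t)

lo, hi = 0.008856, 1.0                       # input range of interest
xs = np.linspace(lo, hi, 4096)
poly = np.polynomial.chebyshev.Chebyshev.fit(xs, f(xs), deg=5, domain=[lo, hi])

# Fast replacement: evaluate a degree-5 polynomial instead of cbrt().
print("max abs error:", np.max(np.abs(poly(xs) - f(xs))))
```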

    FastLLVE: Real-Time Low-Light Video Enhancement with Intensity-Aware Lookup Table

    Full text link
    Low-Light Video Enhancement (LLVE) has received considerable attention in recent years. One of the critical requirements of LLVE is inter-frame brightness consistency, which is essential for maintaining the temporal coherence of the enhanced video. However, most existing single-image-based methods fail to address this issue, resulting in a flickering effect that degrades the overall quality after enhancement. Moreover, 3D Convolutional Neural Network (CNN)-based methods, which are designed for video and maintain inter-frame consistency, are computationally expensive, making them impractical for real-time applications. To address these issues, we propose an efficient pipeline named FastLLVE that leverages the Look-Up-Table (LUT) technique to maintain inter-frame brightness consistency effectively. Specifically, we design a learnable Intensity-Aware LUT (IA-LUT) module for adaptive enhancement, which addresses the low-dynamic problem in low-light scenarios. This enables FastLLVE to perform low-latency and low-complexity enhancement operations while maintaining high-quality results. Experimental results on benchmark datasets demonstrate that our method achieves State-Of-The-Art (SOTA) performance in terms of both image quality and inter-frame brightness consistency. More importantly, our FastLLVE can process 1080p videos at 50+ Frames Per Second (FPS), which is 2× faster than SOTA CNN-based methods in inference time, making it a promising solution for real-time applications. The code is available at https://github.com/Wenhao-Li-777/FastLLVE. Comment: 11 pages, 9 figures, and 6 tables. Accepted by ACMMM 202
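
    The mechanism that makes LUT-based enhancement cheap enough for real time can be sketched as follows. This is not the paper's learnable IA-LUT; it simply applies a fixed 1D intensity curve to each frame with linear interpolation, which is the basic per-pixel lookup step.

```python
import numpy as np

def apply_lut(frame, lut):
    """Map an 8-bit frame through a 1D LUT with linear interpolation."""
    grid = np.linspace(0.0, 1.0, lut.size)          # LUT sample positions
    x = frame.astype(np.float32) / 255.0
    return (np.interp(x, grid, lut) * 255.0).astype(np.uint8)

# Hypothetical gamma-like brightening curve; FastLLVE instead learns an
# intensity-aware LUT, but the per-frame application step looks like this.
lut = np.linspace(0.0, 1.0, 33) ** 0.45
video = np.random.randint(0, 256, size=(8, 1080, 1920, 3), dtype=np.uint8)
enhanced = np.stack([apply_lut(f, lut) for f in video])
```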

    Implementing an ICC printer profile visualization software

    Get PDF
    Device color gamut plays a crucial role in ICC-based color management systems. Accurately visualizing a device's gamut boundary is important in the analysis of color conversion and gamut mapping. ICC profiles contain all the information needed to better understand the capabilities of the device. This thesis project has implemented a printer profile visualization software. The project uses the A2B1 tag in a printer profile as the gamut data source, then renders the gamut of the device the profile represents in CIELAB space with a convex hull algorithm. The gamut can be viewed interactively from any viewpoint. The software also obtains the gamut data set using a CMM with different rendering intents to do color conversion from a specified printer profile to a generic CIELAB profile (short for A2B conversion) or from a generic CIELAB profile to a specified printer profile and back to the generic CIELAB profile (short for B2A2B). The gamut can be rendered as points, wireframe, or solid surface. Two-dimensional a*b* and L*C* gamut slice analysis tools were also developed. The 2D gamut slice algorithm is based on dividing the gamut into small sections according to lightness and hue angle. The point with maximum chroma in each section can be used to present an a*b* gamut slice on a constant-lightness plane or an L*C* gamut slice on a constant-hue-angle plane. Gamut models from two or more device profiles can be viewed in the same window. Through such comparisons, we can better understand device reproduction capabilities and proofing problems. This thesis also explains printer profiles in detail and examines which gamut data source is best for gamut visualization. At the same time, some gamut boundary descriptor algorithms are discussed. A convex hull algorithm and a device-space-to-CIELAB-space mapping algorithm were chosen to render the 3D gamut in this thesis project. Finally, an experiment was developed to validate the gamut data generated by the software. The experiment used the same method as the profile visualization software to obtain the gamut data set from Photoshop 6.0. The results of the experiment showed that the data set derived from the visualization software was consistent with that from Photoshop 6.0.
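
    The convex-hull step described above is straightforward to reproduce; the sketch below uses SciPy on hypothetical CIELAB samples (in the thesis these come from the profile's A2B1 table via a CMM conversion).

```python
import numpy as np
from scipy.spatial import ConvexHull

# Hypothetical stand-in for the CIELAB samples obtained from a printer
# profile's A2B1 table; a real workflow would convert a device-space grid
# through the profile with a CMM.
lab = np.random.uniform([0, -80, -80], [100, 80, 80], size=(5000, 3))

hull = ConvexHull(lab)                    # gamut boundary as a triangle mesh
print("boundary triangles:", len(hull.simplices))
print("approximate gamut volume:", hull.volume)

# A 2D a*b* slice at constant lightness can be derived by binning the points
# by lightness and hue angle and keeping the maximum-chroma point per bin.
```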

    A Large Panel Two-CCD Camera Coordinate System with an Alternate-Eight-Matrix Look-Up Table Algorithm

    Get PDF
    In this study, a novel positioning model for a double-CCD camera calibration system with an Alternate-Eight-Matrix (AEM) Look-Up-Table (LUT) was proposed. Two CCD cameras were fixed on either side of a large-scale screen to overcome Field Of View (FOV) problems. The first to fourth AEM LUTs were used to compute the corresponding positions of intermediate blocks on the screen captured by the right-side camera. In these AEM LUTs for the right-side camera, the coordinate mapping data of the target in a specific space were stored in two matrices, while the gray-level threshold values of different positions were stored in the others. Similarly, the fifth to eighth AEM LUTs were used to compute the corresponding positions of intermediate blocks on the screen captured by the left-side camera. Experimental results showed that the problems of dead angles and non-uniform light fields were solved. In addition, rapid and precise positioning results can be obtained by the proposed method.
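
    A minimal sketch of the lookup step implied by the AEM scheme is given below. The resolution, matrix contents, and function names are hypothetical placeholders, and the two threshold matrices per camera are collapsed into one for brevity; the point is simply that positioning reduces to a threshold test plus a table read.

```python
import numpy as np

H, W = 480, 640        # hypothetical CCD resolution

def make_lut():
    """Per-camera LUT: two matrices map a pixel to screen coordinates and a
    third stores a local gray-level threshold (the AEM scheme keeps two
    threshold matrices per camera; one is used here for brevity)."""
    return {"x": np.zeros((H, W)), "y": np.zeros((H, W)),
            "thresh": np.full((H, W), 128)}

right_lut, left_lut = make_lut(), make_lut()

def locate(row, col, gray, camera="right"):
    """Screen coordinate of a bright target seen by one of the two cameras."""
    lut = right_lut if camera == "right" else left_lut
    if gray < lut["thresh"][row, col]:   # below the stored local threshold
        return None                      # treat as background / no target
    return lut["x"][row, col], lut["y"][row, col]
```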

    Increasing the Performance of the Canadian Hydrological Model using Lookup Tables

    Get PDF
    The climate of cold regions is fragile and could be easily threatened by human activities. Hydrological processes play an important role in the climate of cold regions, and using computational models to simulate cold-region hydrological processes helps people understand past hydrological events and predict future ones. With the need for more accurate simulation results, more complex computational models are often required. However, the complexity of models is often limited by available computational resources. Therefore, improving the computational efficiency of model simulations is an urgent task for hydrological researchers and software developers. The Canadian Hydrological Model (CHM) is a modular software package that is used to simulate cold-region hydrological processes. CHM uses an efficient surface discretization, unstructured triangular meshes, to reduce the number of discretization elements, which in turn decreases the complexity of cold-region hydrological models. CHM also employs parallelization to make models more efficient. By profiling the performance of CHM, we find that there are some computationally intensive functions inside CHM that are evaluated repeatedly. Lookup tables (LUTs) followed by optional interpolation or Taylor series approximation are common optimizations to replace such direct function evaluations. These optimizations can decrease the complexity of cold-region hydrological models further. The Function Comparator (FunC) is a C++ library that can automatically create one-dimensional LUTs for continuous univariate functions on uniformly spaced grids. In this thesis, we use FunC to implement LUTs for two computationally intensive and repeatedly called functions in CHM, achieving an improvement of around 20% in the running time of CHM on two cold-region hydrological simulations. In the first step, we identify two computationally intensive and repeatedly called functions by profiling the performance of CHM, determine the error tolerances and the ranges of inputs for their LUT implementations, and use FunC to implement linear interpolation LUTs for both functions in CHM. In the second step, we run CHM with and without the LUT implementations on a cold-region hydrological simulation with a small domain. We verify that CHM with the LUT implementations produces correct output and show that there is around an 18% improvement in the performance of CHM. In the third step, we run the same CHM with and without the LUT implementations on a cold-region hydrological simulation with a large domain. We again verify that CHM with the LUT implementations produces correct output and show that there is around a 21% improvement in the performance of CHM.
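
    FunC generates such tables in C++; the Python sketch below shows the same idea (a uniform grid plus linear interpolation, with the table size chosen against an error tolerance), using a hypothetical function in place of the two CHM routines.

```python
import numpy as np

class LinearInterpLUT:
    """Uniform-grid lookup table with linear interpolation, the same idea
    FunC automates in C++ for continuous univariate functions."""

    def __init__(self, fn, lo, hi, n):
        self.lo, self.step = lo, (hi - lo) / (n - 1)
        self.table = fn(np.linspace(lo, hi, n))

    def __call__(self, x):
        x = np.asarray(x, dtype=float)
        i = (x - self.lo) / self.step                     # fractional index
        j = np.clip(np.floor(i).astype(int), 0, len(self.table) - 2)
        frac = i - j
        return (1 - frac) * self.table[j] + frac * self.table[j + 1]

# Hypothetical expensive function; in CHM the grid size would be chosen so
# that the worst-case error stays within the stated tolerance.
expensive = lambda x: np.exp(-x) * np.sin(3.0 * x)
lut = LinearInterpLUT(expensive, 0.0, 10.0, 2048)
xs = np.linspace(0.0, 10.0, 1001)
print("max abs error:", np.max(np.abs(lut(xs) - expensive(xs))))
```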

    A Study of Colour Rendering in the In-Camera Imaging Pipeline

    Get PDF
    Consumer cameras, such as digital single-lens reflex (DSLR) cameras and smartphone cameras, have onboard hardware that applies a series of processing steps to transform the initial captured raw sensor image into the final output image that is provided to the user. These processing steps collectively make up the in-camera image processing pipeline. This dissertation aims to study the processing steps related to colour rendering, which can be categorized into two stages. The first stage converts an image's sensor-specific raw colour space to a device-independent perceptual colour space. The second stage further processes the image into a display-referred colour space and includes photo-finishing routines that make the image appear visually pleasing to a human. This dissertation makes four contributions towards the study of camera colour rendering. The first contribution is the development of a software-based research platform that closely emulates the in-camera image processing pipeline hardware. This platform allows the examination of the various image states of the captured image as it is processed from the sensor response to the final display output. Our second contribution is to demonstrate the advantage of having access to intermediate image states within the in-camera pipeline, which provide more accurate colourimetric consistency among multiple cameras. Our third contribution is to analyze the current colourimetric method used by consumer cameras and to propose a modification that improves its colour accuracy. Our fourth contribution is to describe how to customize a camera imaging pipeline using machine vision cameras to produce high-quality perceptual images for dermatological applications. The dissertation concludes with a summary and future directions.
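
    The two stages described can be illustrated with a compact sketch: a white-balance and colour-matrix step into a device-independent space, followed by a display-referred encoding. The white-balance gains and raw-to-XYZ matrix below are hypothetical placeholders; only the XYZ-to-sRGB matrix and gamma are the standard sRGB values.

```python
import numpy as np

# Stage 1 (hypothetical calibration data): sensor-specific raw -> CIE XYZ.
wb_gains = np.array([2.0, 1.0, 1.6])               # white-balance gains
raw_to_xyz = np.array([[0.41, 0.36, 0.18],
                       [0.21, 0.72, 0.07],
                       [0.02, 0.12, 0.95]])

# Stage 2 (standard values): XYZ -> linear sRGB -> gamma-encoded display RGB.
xyz_to_srgb = np.array([[ 3.2406, -1.5372, -0.4986],
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

def render(raw):
    """raw: H x W x 3 linear sensor image with values in [0, 1]."""
    lin = np.clip(raw * wb_gains, 0.0, 1.0)
    xyz = lin @ raw_to_xyz.T                       # device-independent stage
    srgb_lin = np.clip(xyz @ xyz_to_srgb.T, 0.0, 1.0)
    return np.where(srgb_lin <= 0.0031308,         # display-referred stage
                    12.92 * srgb_lin,
                    1.055 * srgb_lin ** (1.0 / 2.4) - 0.055)
```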

    AOIPS 3 user's guide. Volume 2: Program descriptions

    Get PDF
    The Atmospheric and Oceanographic Information Processing System (AOIPS) 3 is the version of the AOIPS software as of April 1989. The AOIPS software was developed jointly by the Goddard Space Flight Center and General Sciences Corporation. A detailed description of every AOIPS program is presented. It is intended to serve as a reference for such items as program functionality, program operational instructions, and input/output variable descriptions. Program descriptions are derived from the on-line help information. Each program description is divided into two sections. The functional description section describes the purpose of the program and contains any pertinent operational information. The program description section lists the program variables as they appear on-line and describes them in detail.

    Video enhancement : content classification and model selection

    Get PDF
    The purpose of video enhancement is to improve the subjective picture quality. The field of video enhancement includes a broad category of research topics, such as removing noise in the video, highlighting specified features, and improving the appearance or visibility of the video content. The common difficulty in this field is how to make images or videos more beautiful, or subjectively better. Traditional approaches involve many iterations between subjective assessment experiments and redesigns of algorithm improvements, which are very time consuming. Researchers have attempted to design a video quality metric to replace subjective assessment, but so far without success. As a way to avoid heuristics in enhancement algorithm design, least mean square methods have received considerable attention. They can optimize filter coefficients automatically by minimizing the difference between processed videos and desired versions through training. However, these methods are only optimal on average, not locally. To solve this problem, one can apply least mean square optimization to individual categories that are classified by local image content. The most interesting example is Kondo's concept of local content adaptivity for image interpolation, which we found could be generalized into an ideal framework for content-adaptive video processing. We identify two parts in the concept: content classification and adaptive processing. By exploring new classifiers for the content classification and new models for the adaptive processing, we have generalized a framework for more enhancement applications. For the content classification part, new classifiers have been proposed to classify different image degradations such as coding artifacts and focal blur. For coding artifacts, a novel classifier has been proposed based on the combination of local structure and contrast, which does not require coding block grid detection. For focal blur, we have proposed a novel local blur estimation method based on edges, which does not require edge orientation detection and shows more robust blur estimation. With these classifiers, the proposed framework has been extended to coding-artifact-robust enhancement and blur-dependent enhancement. With content adaptivity to more image features, the number of content classes can increase significantly. We show that it is possible to reduce the number of classes without sacrificing much performance. For the model selection part, we have introduced several nonlinear filters to the proposed framework. We have also proposed a new type of nonlinear filter, the trained bilateral filter, which combines the advantages of the original bilateral filter and least mean square optimization. With these nonlinear filters, the proposed framework shows better performance than with linear filters. Furthermore, we have shown a proof of concept for a trained approach to obtain contrast enhancement by supervised learning. The transfer curves are optimized based on the classification of global or local image content. This showed that it is possible to obtain the desired effect by learning from other computationally expensive enhancement algorithms or expert-tuned examples through the trained approach. Looking back, the thesis reveals a single versatile framework for video enhancement applications. It widens the application scope by including new content classifiers and new processing models, and offers scalability with solutions to reduce the number of classes, which can greatly accelerate algorithm design.
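
    The classification-plus-trained-filter loop at the heart of this framework can be sketched as follows. The two-class contrast classifier and the 3x3 patches are hypothetical simplifications (and the training data are assumed to contain both classes), but per-class least-squares filter training is the basic mechanism the thesis builds on.

```python
import numpy as np

def patches(img, k=3):
    """All k x k patches of a grayscale image, one row per centre pixel."""
    H, W = img.shape
    offs = [(i, j) for i in range(k) for j in range(k)]
    cols = [img[i:H - k + 1 + i, j:W - k + 1 + j] for i, j in offs]
    return np.stack(cols, axis=-1).reshape(-1, k * k)

def classify(X):
    """Toy content classifier: low vs. high local contrast (two classes)."""
    return (X.std(axis=1) > 0.1).astype(int)

def train(degraded, reference, n_classes=2):
    """Per-class least-squares filter coefficients (LMS-style training)."""
    X = patches(degraded)
    y = patches(reference)[:, 4]                 # centre pixel of 3x3 patch
    labels = classify(X)
    return [np.linalg.lstsq(X[labels == c], y[labels == c], rcond=None)[0]
            for c in range(n_classes)]

def enhance(img, coeffs):
    """Classify each patch, then apply that class's trained filter."""
    X = patches(img)
    labels = classify(X)
    out = np.array([X[i] @ coeffs[labels[i]] for i in range(len(X))])
    return out.reshape(img.shape[0] - 2, img.shape[1] - 2)
```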