    A Gray-Level Dynamic Range Modification Technique for Image Feature Extraction Using Fuzzy Membership Function

    Image features must be unique, so techniques are needed to guarantee this uniqueness. One common technique is to modify the gray-level dynamic range of an image: in principle, the modification maps the gray-level range of the input image to a new gray-level range in the output image using a specific function. This study uses the Trapezoidal Membership Function (MF), a membership function based on the Fuzzy Logic concept, to map the gray-level dynamic range of each RGB component and produce a feature of an RGB image. The aim of this study is to ensure the uniqueness of image features by setting the Trapezoidal MF parameters so that the resulting gray-level dynamic range minimizes the possibility of features other than the selected one. To test the performance of the proposed method, it is applied to signature images, and the Mean Absolute Error (MAE) between feature labels is computed to test authentication between signatures. The results show that pairs of signature images from the same source have a much smaller MAE than pairs of signature images from different sources.
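    The mapping described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the parameter values, the per-channel application, and the rounding into a new 8-bit range are assumptions; the paper's actual MF parameters and feature-label construction are not specified here.

```python
import numpy as np

def trapezoidal_mf(x, a, b, c, d):
    """Standard trapezoidal membership function over gray levels x.

    Rises linearly on [a, b], holds 1 on [b, c], falls linearly on [c, d],
    and is 0 outside [a, d].
    """
    x = np.asarray(x, dtype=float)
    rise = np.clip((x - a) / max(b - a, 1e-9), 0.0, 1.0)
    fall = np.clip((d - x) / max(d - c, 1e-9), 0.0, 1.0)
    return np.minimum(rise, fall)

def remap_channel(channel, a, b, c, d, new_max=255):
    """Map one RGB component's gray levels into a new dynamic range
    by scaling the membership degree back to [0, new_max]."""
    return np.round(trapezoidal_mf(channel, a, b, c, d) * new_max).astype(np.uint8)

def mae(feature_a, feature_b):
    """Mean Absolute Error between two feature-label arrays."""
    return float(np.mean(np.abs(feature_a.astype(float) - feature_b.astype(float))))
```

In this reading, each RGB component would be passed through `remap_channel` with chosen `(a, b, c, d)` parameters, and authentication would compare the resulting feature labels with `mae`: a small MAE suggests the same source, a large one a different source.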

    An Efficient Recurrent Adversarial Framework for Unsupervised Real-Time Video Enhancement

    Video enhancement is a more challenging problem than still-image enhancement, mainly due to its high computational cost, larger data volumes, and the difficulty of achieving consistency in the spatio-temporal domain. In practice, these challenges are often compounded by the lack of example pairs, which inhibits the application of supervised learning strategies. To address them, we propose an efficient adversarial video enhancement framework that learns directly from unpaired video examples. In particular, our framework introduces new recurrent cells that consist of interleaved local and global modules for implicit integration of spatial and temporal information. This design allows our recurrent cells to propagate spatio-temporal information efficiently across frames and reduces the need for high-complexity networks. Our setting enables learning from unpaired videos in a cyclic adversarial manner, where the proposed recurrent units are employed in all architectures. Efficient training is achieved by introducing a single discriminator that learns the joint distribution of the source and target domains simultaneously. The enhancement results demonstrate clear superiority of the proposed video enhancer over state-of-the-art methods in terms of visual quality, quantitative metrics, and inference speed. Notably, our video enhancer can process FullHD video (1080x1920) at over 35 frames per second.
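    The idea of a recurrent cell that interleaves local and global modules while carrying a hidden state across frames can be sketched in plain numpy. Everything here is an assumption for illustration: the box filter stands in for a learned spatial convolution, the global mean for a learned global module, and the fixed blending weights for learned parameters; the paper's actual cell is not reproduced.

```python
import numpy as np

def local_module(frame):
    """Local path: a 3x3 box filter as a stand-in for a learned spatial conv."""
    padded = np.pad(frame, 1, mode="edge")
    out = np.zeros_like(frame, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy : 1 + dy + frame.shape[0],
                          1 + dx : 1 + dx + frame.shape[1]]
    return out / 9.0

def global_module(frame):
    """Global path: broadcast the frame's global mean to every pixel."""
    return np.full_like(frame, frame.mean(), dtype=float)

def recurrent_cell(frame, hidden, alpha=0.5):
    """One recurrent step: mix local and global features of the current
    frame, then blend with the hidden state carried from the previous one."""
    mixed = 0.5 * local_module(frame) + 0.5 * global_module(frame)
    return alpha * hidden + (1 - alpha) * mixed

def enhance_clip(frames):
    """Propagate spatio-temporal information across a clip of frames."""
    hidden = np.zeros_like(frames[0], dtype=float)
    outputs = []
    for frame in frames:
        hidden = recurrent_cell(frame, hidden)
        outputs.append(hidden)
    return outputs
```

The point of the sketch is the data flow: each output frame depends on both the current frame's spatial statistics (local and global) and the accumulated state from earlier frames, which is what lets a lightweight cell enforce temporal consistency without a deep network.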

    Hybrid-Supervised Dual-Search: Leveraging Automatic Learning for Loss-free Multi-Exposure Image Fusion

    Multi-exposure image fusion (MEF) has emerged as a prominent solution to the limitations of digital imaging in representing varied exposure levels. Despite its advancements, the field grapples with challenges, notably the reliance on manual designs for network structures and loss functions, and the constraints of using simulated reference images as ground truths. Consequently, current methodologies often suffer from color distortions and exposure artifacts, further complicating the quest for authentic image representation. To address these challenges, this paper presents a Hybrid-Supervised Dual-Search approach for MEF, dubbed HSDS-MEF, which introduces a bi-level optimization search scheme for the automatic design of both network structures and loss functions. More specifically, we harness a unique dual-search mechanism rooted in a novel weighted structure refinement architecture search. In addition, a hybrid supervised contrast constraint seamlessly guides and integrates with the searching process, enabling a more adaptive and comprehensive search for optimal loss functions. We achieve state-of-the-art performance in comparison with various competitive schemes, yielding a 10.61% and 4.38% improvement in Visual Information Fidelity (VIF) for general and no-reference scenarios, respectively, while producing results with high contrast, rich details, and faithful colors.
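    A bi-level search of the kind described above can be illustrated with a toy nested loop: the outer (upper) level searches over candidate structures, and for each candidate an inner (lower) level first selects the best loss configuration before the structure is scored. The search spaces, the scoring function, and the exhaustive enumeration are all placeholders; the paper's actual architecture operators, loss terms, and optimizer are not reproduced here.

```python
# Toy stand-ins for the two search spaces (names and values are
# assumptions, not the paper's actual operators or loss terms).
STRUCTURE_SPACE = [1, 2, 3]          # e.g. number of refinement blocks
LOSS_WEIGHT_SPACE = [0.1, 0.5, 0.9]  # e.g. weight on a contrast term

def validation_score(structure, loss_weight):
    """Placeholder for training/fusing with a candidate network and loss,
    then scoring the result (e.g. with VIF); here a made-up smooth peak."""
    return -(structure - 2) ** 2 - (loss_weight - 0.5) ** 2

def bilevel_search():
    """Upper level searches structures; the lower level picks the best
    loss configuration for each structure before it is scored."""
    best = None
    for structure in STRUCTURE_SPACE:
        # Lower-level problem: best loss weight given this structure.
        weight = max(LOSS_WEIGHT_SPACE,
                     key=lambda w: validation_score(structure, w))
        score = validation_score(structure, weight)
        if best is None or score > best[0]:
            best = (score, structure, weight)
    return best
```

The nesting is the essential property: the structure is never evaluated with an arbitrary loss, only with the loss that is optimal for it, which is what distinguishes bi-level search from a flat joint grid search.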