
    A Convex Model for Edge-Histogram Specification with Applications to Edge-preserving Smoothing

    The goal of edge-histogram specification is to find an image whose edge image has a histogram matching a given edge-histogram as closely as possible. Mignotte proposed a non-convex model for the problem [M. Mignotte. An energy-based model for the image edge-histogram specification problem. IEEE Transactions on Image Processing, 21(1):379--386, 2012]. In his work, the edge magnitudes of an input image are first modified by histogram specification to match the given edge-histogram. Then, a non-convex model is minimized to find an output image whose edge-histogram matches the modified edge-histogram. The non-convexity of the model hinders computation and the inclusion of useful constraints such as the dynamic range constraint. In this paper, instead of considering edge magnitudes, we directly consider the image gradients and propose a convex model based on them. Furthermore, we include additional constraints in our model for different applications. The convexity of our model allows us to compute the output image efficiently using either the Alternating Direction Method of Multipliers or the Fast Iterative Shrinkage-Thresholding Algorithm. We consider several applications in edge-preserving smoothing, including image abstraction, edge extraction, detail exaggeration, and document scan-through removal. Numerical results illustrate that our method efficiently produces decent results.
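    The preprocessing step mentioned above, classical histogram specification by cumulative-distribution matching, can be sketched as follows. This is a generic illustration of the technique applied to binned values such as edge magnitudes, not the authors' convex model; the bin counts and function names are ours.

    ```python
    def cdf(hist):
        # cumulative distribution of a histogram, normalized to [0, 1]
        total = sum(hist)
        c, acc = [], 0
        for h in hist:
            acc += h
            c.append(acc / total)
        return c

    def specify_histogram(values, src_hist, tgt_hist):
        # map each binned value through the source CDF, then through the
        # inverse of the target CDF (smallest bin whose CDF reaches p)
        src_c, tgt_c = cdf(src_hist), cdf(tgt_hist)
        out = []
        for v in values:
            p = src_c[v]
            j = next(i for i, c in enumerate(tgt_c) if c >= p - 1e-12)
            out.append(j)
        return out
    ```

    With a uniform source histogram and a target concentrated at the extremes, mid-range bins are pushed outward, which is exactly how specification reshapes edge-magnitude distributions.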

    Empirical Study of Car License Plates Recognition

    The number of vehicles on the road has increased drastically in recent years. The license plate is an identity card for a vehicle: it can be mapped to the owner and further information about the vehicle. License plate information is useful to traffic management systems. For example, such systems can check for vehicles moving at speeds not permitted by law, and can also be installed in parking areas to secure the entrance or exit for vehicles. Many researchers have proposed license plate recognition algorithms. License plate recognition requires license plate detection, segmentation, and character recognition: the algorithm detects the position of a license plate and extracts the characters. Various license plate recognition algorithms have been implemented, and each has its strengths and weaknesses. In this research, I implement three algorithms for detecting license plates, three algorithms for segmenting license plates, and two algorithms for recognizing license plate characters. I evaluate each of these algorithms on the same two datasets, one from Greece and one from Thailand. For detecting license plates, the best result is obtained by a Haar cascade algorithm. With the best license plate detection result in hand, a Laplacian-based method has the highest accuracy for the segmentation part. Last, the license plate recognition experiment shows that a neural network has better accuracy than the other algorithms. I summarize and analyze the overall performance of each method for comparison.
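    As an illustration of the segmentation stage, a common baseline is to binarize the plate and cut characters at gaps in the vertical projection profile. The thesis itself uses a Laplacian-based method; this simpler sketch only conveys the idea, and the function name is ours.

    ```python
    def segment_columns(binary_img):
        # binary_img: list of rows of 0/1 values; character pixels are 1s.
        # Sum each column; runs of nonzero columns are character candidates.
        w = len(binary_img[0])
        proj = [sum(row[x] for row in binary_img) for x in range(w)]
        segments, start = [], None
        for x, p in enumerate(proj):
            if p > 0 and start is None:
                start = x                      # a character run begins
            elif p == 0 and start is not None:
                segments.append((start, x))    # the run ends at column x
                start = None
        if start is not None:
            segments.append((start, w))        # run reaches the right edge
        return segments
    ```

    Each returned `(start, end)` column range is then cropped and passed to the character recognizer.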

    Exact Histogram Specification Optimized for Structural Similarity

    An exact histogram specification (EHS) method modifies its input image to have a specified histogram. Applications of EHS include image (contrast) enhancement (e.g., by histogram equalization) and histogram watermarking. Performing EHS on an image, however, reduces its visual quality. Starting from the output of a generic EHS method, we iteratively maximize the structural similarity index (SSIM) between the original image (before EHS) and the result of EHS. Essential in this process is the computationally simple and accurate formula we derive for the SSIM gradient. As it is based on gradient ascent, the proposed EHS always converges. Experimental results confirm that while obtaining the histogram exactly as specified, the proposed method invariably outperforms existing methods in terms of the visual quality of the result. The computational complexity of the proposed method is shown to be of the same order as that of existing methods. Index terms: histogram modification, histogram equalization, optimization for perceptual visual quality, structural similarity gradient ascent, histogram watermarking, contrast enhancement.
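    A generic EHS method of the kind used as a starting point above can be sketched as an ordering-and-assignment step: sort pixels by intensity and hand out gray levels according to the target counts. Real EHS methods break ties more carefully (e.g., using local means), and the SSIM gradient-ascent refinement is not shown; the names here are ours.

    ```python
    def exact_histogram_spec(pixels, target_hist):
        # target_hist[g] = number of pixels that must receive gray level g;
        # counts must sum to the number of pixels for an *exact* match.
        assert sum(target_hist) == len(pixels)
        order = sorted(range(len(pixels)), key=lambda i: pixels[i])
        out = [0] * len(pixels)
        g, remaining = 0, target_hist[0]
        for idx in order:
            while remaining == 0:      # advance to the next level with quota left
                g += 1
                remaining = target_hist[g]
            out[idx] = g               # darkest unassigned pixel gets level g
            remaining -= 1
        return out
    ```

    Because the darkest pixels receive the lowest target levels, the output's histogram equals `target_hist` exactly while preserving the rank order of intensities.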

    Simultaneous multislice acquisition with multi-contrast segmented EPI for separation of signal contributions in dynamic contrast-enhanced imaging

    We present a method to efficiently separate signal in magnetic resonance imaging (MRI) into a base signal S0, representing the mainly T1-weighted component without T2*-relaxation, and its T2*-weighted counterpart by the rapid acquisition of multiple contrasts for advanced pharmacokinetic modelling. This is achieved by incorporating simultaneous multislice (SMS) imaging into a multi-contrast, segmented echo planar imaging (EPI) sequence to allow extended spatial coverage, which covers larger body regions without time penalty. Simultaneous acquisition of four slices was combined with segmented EPI for fast imaging with three gradient echo times in a preclinical perfusion study. Six female domestic pigs, German Landrace or hybrid form, were each scanned for 11 minutes during administration of a gadolinium-based contrast agent. The influence of reconstruction methods and training data was investigated. The separation into T1- and T2*-dependent signal contributions was achieved by fitting a standard analytical model to the acquired multi-echo data. The application of SMS yielded sufficient temporal resolution for the detection of the arterial input function in major vessels, while the anatomical coverage allowed perfusion analysis of muscle tissue. The separation of the MR signal into T1- and T2*-dependent components allowed the correction of susceptibility-related changes. We demonstrate a novel sequence for dynamic contrast-enhanced MRI that meets the requirements of temporal resolution (Δt < 1.5 s) and image quality. The incorporation of SMS into multi-contrast, segmented EPI can overcome existing limitations of dynamic contrast enhancement and dynamic susceptibility contrast methods when applied separately. The new approach allows both techniques to be combined in a single acquisition with a large spatial coverage.
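    Separating S0 from the T2*-dependent part amounts to fitting the standard mono-exponential decay S(TE) = S0 * exp(-TE/T2*) to the signals at the three echo times. A minimal log-linear least-squares fit for a single voxel might look like this (an illustrative sketch; the paper's actual fitting pipeline is not specified here):

    ```python
    import math

    def fit_monoexponential(tes, signals):
        # Linearize: ln S = ln S0 - TE / T2*, then fit a line by least squares.
        ys = [math.log(s) for s in signals]
        n = len(tes)
        mx = sum(tes) / n
        my = sum(ys) / n
        sxx = sum((t - mx) ** 2 for t in tes)
        sxy = sum((t - mx) * (y - my) for t, y in zip(tes, ys))
        slope = sxy / sxx                  # equals -1 / T2*
        s0 = math.exp(my - slope * mx)     # intercept back-transformed
        t2star = -1.0 / slope
        return s0, t2star
    ```

    On noiseless synthetic data the fit recovers S0 and T2* exactly; with real multi-echo data, noise weighting or nonlinear fitting is usually preferred.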

    DeepOtsu: Document Enhancement and Binarization using Iterative Deep Learning

    This paper presents a novel iterative deep learning framework and applies it to document enhancement and binarization. Unlike traditional methods, which predict the binary label of each pixel of the input image, we train the neural network to learn the degradations in document images and to produce uniform versions of the degraded inputs, which allows the network to refine the output iteratively. Two different iterative methods are studied in this paper: recurrent refinement (RR), which uses the same trained neural network in each iteration for document enhancement, and stacked refinement (SR), which uses a stack of different neural networks for iterative output refinement. Given the learned uniform and enhanced image, the binarization map can easily be obtained by a global or local threshold. Experimental results on several public benchmark data sets show that our proposed methods provide a new clean version of the degraded image, suitable for visualization, and promising binarization results using the global Otsu threshold on the enhanced images learned iteratively by the neural network. Comment: Accepted by Pattern Recognition.
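    The global Otsu threshold applied to the enhanced images picks the gray level that maximizes the between-class variance of the histogram. A compact sketch (our own minimal implementation, operating on a histogram list indexed by gray level):

    ```python
    def otsu_threshold(hist):
        # Returns t maximizing between-class variance; pixels with level <= t
        # form one class, the rest form the other.
        total = sum(hist)
        sum_all = sum(g * h for g, h in enumerate(hist))
        best_t, best_var = 0, -1.0
        w0, sum0 = 0, 0.0
        for t, h in enumerate(hist):
            w0 += h                     # pixels in the low class
            sum0 += t * h               # their intensity mass
            if w0 == 0:
                continue
            w1 = total - w0
            if w1 == 0:
                break
            m0 = sum0 / w0              # low-class mean
            m1 = (sum_all - sum0) / w1  # high-class mean
            var = w0 * w1 * (m0 - m1) ** 2
            if var > best_var:
                best_var, best_t = var, t
        return best_t
    ```

    On the clean, near-bimodal images the network produces, this single global cut separates ink from background well, which is the point of enhancing before binarizing.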

    Enhancement of ELDA Tracker Based on CNN Features and Adaptive Model Update

    Appearance representation and the observation model are the most important components in designing a robust visual tracking algorithm for video-based sensors. The exemplar-based linear discriminant analysis (ELDA) model has shown good performance in object tracking. Building on that, we improve the ELDA tracking algorithm with deep convolutional neural network (CNN) features and adaptive model updates. Deep CNN features have been successfully used in various computer vision tasks, but extracting CNN features on all of the candidate windows is time consuming. To address this problem, a two-step CNN feature extraction method is proposed that computes the convolutional layers and the fully-connected layers separately. Owing to the strong discriminative ability of CNN features and the exemplar-based model, we update both the object and background models to improve their adaptivity and to manage the tradeoff between discriminative ability and adaptivity. An object updating method is proposed to select the "good" models (detectors), which are highly discriminative and uncorrelated with the other selected models. Meanwhile, we build the background model as a Gaussian mixture model (GMM) to adapt to complex scenes; it is initialized offline and updated online. The proposed tracker is evaluated on a benchmark dataset of 50 video sequences with various challenges. It achieves the best overall performance among the compared state-of-the-art trackers, which demonstrates the effectiveness and robustness of our tracking algorithm.
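    The "good"-model selection described above (keep detectors that score well but are weakly correlated with those already kept) can be sketched greedily. We use cosine similarity of detector weight vectors as the correlation proxy; the threshold value and all names are our assumptions, not the paper's exact formulation.

    ```python
    import math

    def select_models(scores, weights_list, corr_thresh=0.8):
        # Greedily keep high-scoring detectors whose weight vectors are
        # only weakly similar (cosine) to every detector already kept.
        def cos_sim(a, b):
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return sum(x * y for x, y in zip(a, b)) / (na * nb)

        order = sorted(range(len(scores)), key=lambda i: -scores[i])
        kept = []
        for i in order:
            if all(cos_sim(weights_list[i], weights_list[j]) < corr_thresh
                   for j in kept):
                kept.append(i)
        return kept
    ```

    Near-duplicate detectors are dropped in favor of the strongest one, so the retained set stays both discriminative and diverse.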

    Local Contrast Enhancement Utilizing Bidirectional Switching Equalization Of Separated And Clipped Sub-Histograms

    Digital image contrast enhancement methods based on the histogram equalization (HE) technique are useful in consumer electronic products due to their simple implementation. However, almost all of the proposed enhancement methods use a global processing technique, which does not emphasize local content.
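    Clipping before equalization, as in the method's title, limits how much any single histogram bin can contribute, which tames the over-enhancement that plain HE produces. A minimal single-histogram sketch (the actual method first separates the histogram into sub-histograms and equalizes them bidirectionally; the clip limit and names here are illustrative):

    ```python
    def clipped_equalize(hist, clip_limit):
        # 1) clip each bin at the limit and spread the excess uniformly
        clipped = [min(h, clip_limit) for h in hist]
        excess = sum(hist) - sum(clipped)
        clipped = [h + excess / len(hist) for h in clipped]
        # 2) build the equalization mapping from the cumulative distribution
        total = sum(clipped)
        L = len(hist)
        mapping, acc = [], 0.0
        for h in clipped:
            acc += h
            mapping.append(round((L - 1) * acc / total))
        return mapping  # mapping[g] = output level for input level g
    ```

    Dominant bins can no longer monopolize the output range, so flat regions keep their brightness instead of being stretched into noise.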

    Enhancement of dronogram aid to visual interpretation of target objects via intuitionistic fuzzy hesitant sets

    In this paper, we address the hesitant information in the enhancement task, which is often caused by differences in image contrast. Enhancement approaches generally use certain filters, which generate artifacts or are unable to recover all the object details in images. Typically, the contrast of an image quantifies a unique ratio between the amounts of black and white through a single pixel; however, contrast is better represented by a group of pixels. We propose a novel image enhancement scheme based on intuitionistic hesitant fuzzy sets (IHFSs) for drone images (dronograms) to facilitate better interpretation of target objects. First, a given dronogram is divided into foreground and background areas based on an estimated threshold, from which the proposed model measures the amount of black/white intensity levels. Next, we fuzzify both areas and determine the hesitant score, indicated by the distance between the two areas for each point in the fuzzy plane. Finally, a hyperbolic operator is adopted for each membership grade to improve the photographic quality, leading to enhanced results via defuzzification. The proposed method is tested on a large drone image database. Results demonstrate better contrast enhancement, improved visual quality, and better recognition compared to state-of-the-art methods.
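    The fuzzify / hyperbolic-operator / defuzzify pipeline can be sketched for a list of gray levels as follows. The mean-based threshold, the tanh form of the hyperbolic operator, and its gain are our illustrative assumptions, not the paper's exact formulation, and the hesitant-score step is omitted.

    ```python
    import math

    def fuzzy_enhance(pixels, gmax=255):
        # crude foreground/background threshold: the mean level (assumption)
        t = sum(pixels) / len(pixels)
        out = []
        for g in pixels:
            mu = g / gmax                                    # fuzzification
            # hyperbolic operator: push memberships away from the threshold
            mu = (math.tanh(4.0 * (mu - t / gmax)) + 1) / 2
            out.append(round(mu * gmax))                     # defuzzification
        return out
    ```

    Levels below the threshold are driven toward black and levels above it toward white, which is the contrast-enhancing effect the hyperbolic operator provides.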