
    Acceleration of Histogram-Based Contrast Enhancement via Selective Downsampling

    In this paper, we propose a general framework to accelerate universal histogram-based image contrast enhancement (CE) algorithms. Both spatial and gray-level selective downsampling of digital images are adopted to decrease the computational cost, while the visual quality of the enhanced images is preserved without apparent degradation. Mapping function calibration is newly proposed to reconstruct the pixel mapping on the gray levels missed by downsampling. As two case studies, accelerations of histogram equalization (HE) and of the state-of-the-art global CE algorithm based on spatial mutual information and PageRank (SMIRANK) are presented in detail. Both quantitative and qualitative assessment results verify the effectiveness of the proposed CE acceleration framework. In typical tests, HE and SMIRANK are sped up by factors of about 3.9 and 13.5, respectively. Comment: accepted by IET Image Processing
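
    A minimal sketch of the downsampling idea, assuming an 8-bit grayscale image stored as a NumPy array: the histogram and mapping are built from a spatially subsampled copy, and the resulting 256-entry mapping is applied to the full-resolution image. The function name and the simple full-CDF mapping are illustrative stand-ins for the paper's gray-level selective downsampling and mapping function calibration, which are more elaborate.

    import numpy as np

    def downsampled_histogram_equalization(img, spatial_step=4):
        # Build the gray-level histogram from a spatially subsampled copy only.
        sample = img[::spatial_step, ::spatial_step]
        hist = np.bincount(sample.ravel(), minlength=256).astype(np.float64)
        cdf = np.cumsum(hist)
        cdf /= cdf[-1]
        # Standard HE mapping, defined on all 256 gray levels so that levels
        # missed by the subsampling still receive a value (a crude stand-in
        # for the paper's mapping function calibration step).
        mapping = np.round(255.0 * cdf).astype(np.uint8)
        # Apply the cheaply computed mapping to the full-resolution image.
        return mapping[img]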

    Color image segmentation using a self-initializing EM algorithm

    This paper presents a new method based on the Expectation-Maximization (EM) algorithm that we apply to color image segmentation. Since this algorithm partitions the data based on an initial set of mixtures, the color segmentation provided by the EM algorithm is highly dependent on the starting condition (initialization stage). Usually the initialization procedure selects the color seeds randomly, and this often forces the EM algorithm to converge to one of numerous local minima and produce inappropriate results. In this paper we propose a simple yet effective solution that initializes the EM algorithm with relevant color seeds. The resulting self-initializing EM algorithm has been included in an adaptive image segmentation scheme that has been applied to a large number of color images. The experimental data indicate that the refined initialization procedure leads to improved color segmentation.
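
    A hedged illustration of the general idea of data-driven seeding, not the paper's exact initialization procedure: k-means centers (a stand-in choice) replace random seeds for the mixture means before EM is run. The function name and parameters are hypothetical, and the input is assumed to be an RGB image as a NumPy array.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.mixture import GaussianMixture

    def segment_colors(img, n_segments=5):
        pixels = img.reshape(-1, 3).astype(np.float64)
        # Data-driven color seeds (k-means centers) replace random
        # initialization of the mixture means.
        seeds = KMeans(n_clusters=n_segments, n_init=4, random_state=0).fit(pixels)
        gmm = GaussianMixture(n_components=n_segments,
                              means_init=seeds.cluster_centers_,
                              random_state=0)
        # EM refines the seeded mixture; each pixel receives a component label.
        labels = gmm.fit_predict(pixels)
        return labels.reshape(img.shape[:2])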

    Combining Contrast Invariant L1 Data Fidelities with Nonlinear Spectral Image Decomposition

    This paper focuses on multi-scale approaches for variational methods and corresponding gradient flows. Recently, for convex regularization functionals such as total variation, new theory and algorithms for nonlinear eigenvalue problems via nonlinear spectral decompositions have been developed. These methods open new directions for advanced image filtering. However, for effective use in image segmentation and shape decomposition, a clear interpretation of the spectral response regarding size and intensity scales is needed but lacking in current approaches. In this context, L^1 data fidelities are particularly helpful due to their interesting multi-scale properties such as contrast invariance. Hence, the novelty of this work is the combination of L^1-based multi-scale methods with nonlinear spectral decompositions. We compare L^1 with L^2 scale-space methods in view of spectral image representation and decomposition. We show that the contrast-invariant multi-scale behavior of L^1-TV promotes sparsity in the spectral response, providing more informative decompositions. We provide a numerical method and analyze synthetic and biomedical images for which the decomposition leads to improved segmentation. Comment: 13 pages, 7 figures, conference SSVM 2017
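
    A rough numerical sketch of a nonlinear spectral decomposition built on an L^2-type (ROF) scale space, used here only as a stand-in for the paper's L^1 variant. It assumes `scales` is a uniformly spaced 1-D NumPy array of regularization weights; the bands phi(t) ~ t * d^2u/dt^2 are evaluated with finite differences, and the function name is hypothetical.

    import numpy as np
    from skimage.restoration import denoise_tv_chambolle

    def tv_spectral_response(img, scales):
        # u(t): TV-regularized solutions at increasing regularization weights.
        u = np.stack([denoise_tv_chambolle(img, weight=t) for t in scales])
        dt = scales[1] - scales[0]            # uniform spacing assumed
        # Spectral bands phi(t) ~ t * d^2u/dt^2 via second finite differences.
        phi = scales[1:-1, None, None] * np.diff(u, n=2, axis=0) / dt ** 2
        # Scalar spectral response: total magnitude of each band.
        response = np.abs(phi).reshape(len(phi), -1).sum(axis=1)
        return phi, response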

    Contrast enhancement using grey scale transformation techniques

    The object of this thesis has been to examine grey scale transformation techniques in order to incorporate them into a system for automatically selecting a technique to enhance the contrast in a given image. In order to include existing techniques in the system it was necessary to examine each in detail and to understand under what conditions it gave good results. It was found that a number of techniques had only a limited scope or suffered from some problem in their design. This led to the development of a new technique based on the display capabilities of a monitor; the adaptation of another technique, global histogram equalisation, to make it applicable to a wider range of images; and the modification of the local histogram equalisation algorithm to smooth different sized regions of the image to the same degree. The resultant algorithms, together with those existing in the literature, were included in the system. The system provides an interactive environment for selecting grey scale transformation techniques. The usual method of choosing a contrast enhancement technique is to apply it, look at the result, discard it if the result is not suitable or, if there is a parameter value to be set, modify its value and try the technique again. Here a more systematic approach is tried, using ideas from Knowledge Based Systems and Object Oriented Systems. A model of the way contrast enhancement techniques are selected is encoded into the system and is used, together with information obtained by analysing the image (either automatic analysis done by the system, or interactive analysis done with the aid of the user), to select the most appropriate techniques. The techniques selected by the system have to fulfil three quite demanding criteria, ensuring that the system is a reliable and useful tool.
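
    A purely hypothetical illustration of the rule-based selection idea: simple histogram statistics drive the choice of a grey scale transformation. The thresholds and rules below are placeholders, not the knowledge encoded in the thesis's system.

    import numpy as np

    def suggest_grey_transform(img):
        lo, hi = np.percentile(img, [1, 99])
        mean = img.mean()
        if hi - lo < 128:        # narrow dynamic range
            return "linear contrast stretch"
        if mean < 64:            # globally dark image
            return "gamma correction with gamma < 1"
        if mean > 192:           # globally bright image
            return "gamma correction with gamma > 1"
        return "global histogram equalisation"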

    Improving Image Restoration with Soft-Rounding

    Several important classes of images, such as text, barcode and pattern images, have the property that pixels can only take values from a distinct subset. This knowledge can benefit the restoration of such images, but it has not been widely considered in current restoration methods. In this work, we describe an effective and efficient approach to incorporate the knowledge of the distinct pixel values of the pristine images into the general regularized least squares restoration framework. We introduce a new regularizer that attains zero at the designated pixel values and becomes a quadratic penalty function in the intervals between them. When incorporated into the regularized least squares restoration framework, this regularizer leads to a simple and efficient step that resembles and extends the rounding operation, which we term soft-rounding. We apply the soft-rounding enhanced solution to the restoration of binary text/barcode images and pattern images with multiple distinct pixel values. Experimental results show that soft-rounding enhanced restoration methods achieve significant improvement in both visual quality and quantitative measures (PSNR and SSIM). Furthermore, we show that this regularizer can also benefit the restoration of general natural images. Comment: 9 pages, 6 figures
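
    A hedged sketch of a soft-rounding style step, read off the description above: each pixel is pulled toward its nearest designated level by the proximal step of a quadratic penalty, and letting the strength grow recovers hard rounding. The exact operator used in the paper may differ; the function name is hypothetical and the input is assumed to be a floating-point image array.

    import numpy as np

    def soft_round(x, levels, lam=1.0):
        levels = np.asarray(levels, dtype=np.float64)
        # Nearest designated level for every pixel.
        nearest = levels[np.abs(x[..., None] - levels).argmin(axis=-1)]
        # Proximal step of a quadratic penalty centred at that level: pixels
        # are pulled toward it; lam -> infinity recovers hard rounding.
        return (x + lam * nearest) / (1.0 + lam)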

    Automatic deep-learning-based analysis of annual wood growth in the sawmill industry

    Analysis of wood growth is an important quality control step in a sawmill, as it predicts the structure and load-bearing capabilities of the wood. The annual growth of wood is determined by calculating the distances between the annual rings in a wood end-face. The wood moves fast in a process line, and manual analysis of wood growth is a laborious task that is prone to errors. Automating the process increases the efficiency and throughput of the sawmill and reduces monotonous manual labor, thus providing better working conditions. Automatic counting of annual ring distances has been studied before; however, little research has been done in a sawmill setting, which suffers from difficult imaging conditions and rough wood end-faces with various defects. Previous studies have used traditional image processing methods that rely on handcrafted features and fail to generalize well to wood end-faces with varying conditions and arbitrarily shaped annual rings. This thesis proposes a general solution to the problem by developing complete end-to-end software for detecting annual rings and analyzing wood growth using deep learning methods. The proposed system is described in detail and compared against traditional computer vision methods. Using data from a real sawmill, the deep learning based approach performs better than the traditional methods.
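
    An illustrative post-processing sketch, not the thesis's pipeline: given a binary annual-ring mask (for example, the output of a ring detector) and an assumed pith location (row, col), sample one ray from the pith and return the distances between consecutive ring crossings. All names and parameters are hypothetical.

    import numpy as np

    def ring_spacings(ring_mask, pith, angle_deg=0.0, step=1.0):
        h, w = ring_mask.shape
        theta = np.deg2rad(angle_deg)
        # Sample the mask along one ray starting at the assumed pith location.
        rs = np.arange(0.0, np.hypot(h, w), step)
        ys = np.rint(pith[0] + rs * np.sin(theta)).astype(int)
        xs = np.rint(pith[1] + rs * np.cos(theta)).astype(int)
        inside = (ys >= 0) & (ys < h) & (xs >= 0) & (xs < w)
        on = ring_mask[ys[inside], xs[inside]] > 0
        # Keep only rising edges so that a thick ring is counted once.
        edges = rs[inside][np.flatnonzero(np.diff(on.astype(int)) == 1) + 1]
        return np.diff(edges)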

    STV-based Video Feature Processing for Action Recognition

    In comparison to still-image-based processes, video features can provide rich and intuitive information about dynamic events occurring over a period of time, such as human actions, crowd behaviours, and other subject pattern changes. Although substantial progress has been made in the last decade on image processing, with successful applications in face matching and object recognition, video-based event detection still remains one of the most difficult challenges in computer vision research due to its complex continuous or discrete input signals, arbitrary dynamic feature definitions, and often ambiguous analytical methods. In this paper, a Spatio-Temporal Volume (STV) and region intersection (RI) based 3D shape-matching method is proposed to facilitate the definition and recognition of human actions recorded in videos. The distinctive characteristics and the performance gain of the devised approach stem from a coefficient-factor-boosted 3D region intersection and matching mechanism developed in this research. This paper also reports an investigation into techniques for efficient STV data filtering to reduce the number of voxels (volumetric pixels) that need to be processed in each operational cycle of the implemented system. The encouraging features and the improvements in operational performance registered in the experiments are discussed at the end.
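
    A minimal sketch of the basic region-intersection idea, assuming two boolean spatio-temporal volumes shaped (frames, height, width): plain voxel-wise intersection over union. The paper's coefficient factor-boosted matching is more elaborate than this stand-in, and the function name is hypothetical.

    import numpy as np

    def stv_overlap(stv_a, stv_b):
        # Voxel-wise intersection over union of two boolean spatio-temporal
        # volumes shaped (frames, height, width).
        inter = np.logical_and(stv_a, stv_b).sum()
        union = np.logical_or(stv_a, stv_b).sum()
        return inter / union if union else 0.0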

    Rock Fracture Image Segmentation Algorithms
