93 research outputs found

    Identification of Image Characteristics Based on Entropy and Contrast Values (Identifikasi Karakteristik Citra Berdasarkan pada Nilai Entropi dan Kontras)

    Abstract Determining object boundaries in an image is a necessary process, both to distinguish an object from other objects and to define an object within the image. The acquired image is not always in good condition; it often contains noise and blur. Various edge-detection methods have been developed that add noise-reduction and blur parameters, but because these parameters apply to the entire image, some edges are lost. This study aims to identify the characteristics of image regions, i.e. whether a region is noisy, blurry, or sharp (clear). The image is divided into four regions, and the entropy value and contrast value of each region are calculated. The test results show that changes in region size produce different characteristics, as indicated by the entropy and contrast values of each region. It can thus be concluded that entropy and contrast can be used to identify image characteristics, and that dividing the image into regions provides more detailed image characteristics.
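    The abstract does not give explicit formulas, so the sketch below makes two common assumptions: entropy means Shannon entropy of the intensity histogram, and contrast means RMS (standard-deviation) contrast. It divides an image into four quadrants and computes both values per region:

```python
import numpy as np

def region_entropy_contrast(image, bins=256):
    """Split a grayscale image into four quadrants and return the
    Shannon entropy and RMS contrast of each quadrant (a sketch of
    the region-characterisation step described in the abstract)."""
    h, w = image.shape
    quadrants = [image[:h // 2, :w // 2], image[:h // 2, w // 2:],
                 image[h // 2:, :w // 2], image[h // 2:, w // 2:]]
    stats = []
    for q in quadrants:
        hist, _ = np.histogram(q, bins=bins, range=(0, bins))
        p = hist / hist.sum()
        p = p[p > 0]  # drop empty bins so log2 is defined
        entropy = -np.sum(p * np.log2(p))
        contrast = q.std()  # RMS contrast
        stats.append((entropy, contrast))
    return stats
```

    A flat region yields zero entropy and zero contrast, while a noisy or textured region yields high values of both, which is the distinction the study exploits.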

    Color-compressive bilateral filter and nonlocal means for high-dimensional images

    We propose accelerated implementations of the bilateral filter (BF) and nonlocal means (NLM) called color-compressive bilateral filter (CCBF) and color-compressive nonlocal means (CCNLM). CCBF and CCNLM are random filters whose Monte-Carlo-averaged output images are identical to the output images of conventional BF and NLM, respectively. However, CCBF and CCNLM are considerably faster because the spatial processing of multiple color channels is combined into a single random filtering process. This implies that the complexity of CCBF and CCNLM is less sensitive to color dimension (e.g., hyperspectral images) relative to other BF and NLM methods. We experimentally verified that the execution times of CCBF and CCNLM are shorter than those of existing fast implementations of BF and NLM, respectively.
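    For reference, the conventional bilateral filter that CCBF accelerates weights each neighbour by the product of a spatial Gaussian and a range (intensity) Gaussian. The Monte-Carlo color-compression machinery of the paper is not reproduced here; this is only a brute-force grayscale baseline:

```python
import numpy as np

def bilateral_filter(image, radius=2, sigma_s=2.0, sigma_r=30.0):
    """Brute-force bilateral filter for a 2-D grayscale image.
    Each output pixel is a weighted average of its neighbourhood,
    with weights = spatial Gaussian * range (intensity) Gaussian."""
    img = image.astype(np.float64)
    h, w = img.shape
    out = np.empty_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    padded = np.pad(img, radius, mode='edge')
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-(window - img[i, j])**2 / (2 * sigma_r**2))
            wgt = spatial * rng
            out[i, j] = (wgt * window).sum() / wgt.sum()
    return out
```

    The double loop over pixels, repeated once per color channel, is exactly the cost that grows with color dimension; collapsing the channels into a single random filtering pass is what makes CCBF less sensitive to that dimension.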

    Mitigation of contrast loss in underwater images

    The quality of an underwater image is degraded by light scattering in water, whose effects are resolution loss and contrast loss. Contrast loss, caused by optical back-scatter, is the main degradation problem in underwater images. A method is proposed to improve the contrast of an underwater image by mitigating the effect of optical back-scatter after image acquisition. The proposed method is based on the inverse of an underwater image model, which is validated experimentally in this work. It suggests that the recovered image can be obtained by subtracting the intensity due to optical back-scatter from each degraded image pixel and then scaling the remainder by a factor due to optical extinction. Three filters are proposed to estimate the optical back-scatter in a degraded image; among them, the BS-CostFunc filter performs best. The physical model of optical extinction indicates that the optical extinction can be calculated from the level of optical back-scatter. Results from simulations with synthetic images and experiments with real constrained monochrome images indicate that the maximum optical back-scatter estimation error is less than 5%. The proposed algorithm can significantly improve the contrast of a monochrome underwater image. Results of simulations with synthetic colour images and experiments with real constrained colour images indicate that the proposed method is applicable to colour images with colour fidelity. However, for colour images in wide spectral bands, such as RGB, the colour of the improved images is similar to that of the reference images, yet the improved images are darker than the reference images in terms of intensity. The darkness of the improved images is due to the effect of noise on the estimation errors.
    EThOS - Electronic Theses Online Service. University of Manchester; The Petroleum Institute in Abu Dhabi. United Kingdom
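    The inversion step described above (subtract the back-scatter intensity, then rescale by the extinction factor) can be sketched as follows. The function and parameter names are illustrative, and the thesis's BS-CostFunc back-scatter estimator is not reproduced; the estimated values are simply taken as inputs:

```python
import numpy as np

def restore_underwater(degraded, back_scatter, extinction_factor):
    """Inverse of the underwater image model: subtract the estimated
    optical back-scatter intensity from each pixel, then rescale by
    the factor due to optical extinction. Output is clipped to the
    valid 8-bit intensity range."""
    recovered = (degraded.astype(np.float64) - back_scatter) / extinction_factor
    return np.clip(recovered, 0.0, 255.0)
```

    Because the back-scatter estimate is subtracted before rescaling, any noise in that estimate is amplified by the division, which is consistent with the darkening effect the abstract attributes to estimation errors.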

    Faster and better: a machine learning approach to corner detection

    The repeatability and efficiency of a corner detector determine how likely it is to be useful in a real-world application. Repeatability is important because the same scene viewed from different positions should yield features which correspond to the same real-world 3D locations [Schmid et al 2000]. Efficiency is important because it determines whether the detector, combined with further processing, can operate at frame rate. Three advances are described in this paper. First, we present a new heuristic for feature detection and, using machine learning, derive from it a feature detector which can fully process live PAL video using less than 5% of the available processing time. By comparison, most other detectors cannot even operate at frame rate (Harris detector 115%, SIFT 195%). Second, we generalize the detector, allowing it to be optimized for repeatability, with little loss of efficiency. Third, we carry out a rigorous comparison of corner detectors based on the above repeatability criterion applied to 3D scenes. We show that, despite being principally constructed for speed, our heuristic detector significantly outperforms existing feature detectors on these stringent tests. Finally, the comparison demonstrates that using machine learning produces significant improvements in repeatability, yielding a detector that is both very fast and very high quality.
    Comment: 35 pages, 11 figures
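    The heuristic underlying this family of detectors is the segment test: a pixel is a corner if enough contiguous pixels on a surrounding 16-pixel Bresenham circle are all brighter or all darker than the centre by a threshold. The paper's contribution is to learn a fast decision tree for this test; the plain, unlearned test can be sketched as:

```python
import numpy as np

# Offsets (dy, dx) of the 16-pixel Bresenham circle of radius 3
CIRCLE = [(-3, 0), (-3, 1), (-2, 2), (-1, 3), (0, 3), (1, 3), (2, 2), (3, 1),
          (3, 0), (3, -1), (2, -2), (1, -3), (0, -3), (-1, -3), (-2, -2), (-3, -1)]

def is_corner(image, y, x, t=20, n=12):
    """Plain segment test: (y, x) is a corner if at least n contiguous
    circle pixels are all brighter than p + t or all darker than p - t.
    The learned detector in the paper is an optimisation of this test."""
    p = int(image[y, x])
    ring = [int(image[y + dy, x + dx]) for dy, dx in CIRCLE]
    brighter = [v > p + t for v in ring]
    darker = [v < p - t for v in ring]
    for flags in (brighter, darker):
        doubled = flags + flags  # duplicate to handle wrap-around runs
        run = 0
        for f in doubled:
            run = run + 1 if f else 0
            if run >= n:
                return True
    return False
```

    Evaluating all 16 ring pixels per candidate is what the learned decision tree avoids: it orders the comparisons so most non-corners are rejected after only a few pixel reads.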

    Hardware-accelerated algorithms in visual computing

    This thesis presents new parallel algorithms which accelerate computer vision methods by the use of graphics processors (GPUs) and evaluates them with respect to their speed, scalability, and the quality of their results. It covers the fields of homogeneous and anisotropic diffusion processes, diffusion image inpainting, optic flow, and halftoning. In turn, it compares different solvers for homogeneous diffusion and presents a novel 'extended' box filter. Moreover, it suggests using the fast explicit diffusion scheme (FED) as an efficient and flexible solver for nonlinear and in particular anisotropic parabolic diffusion problems on graphics hardware. For elliptic diffusion-like processes, it recommends cascadic FED or Fast Jacobi schemes. The presented optic flow algorithm represents one of the fastest yet very accurate techniques. Finally, it presents a novel halftoning scheme which yields state-of-the-art results for many applications in image processing and computer graphics.
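    Schemes like FED are built from explicit diffusion steps; FED's trick (not shown here) is to cycle through varying time-step sizes whose combined effect remains stable. A single explicit step of homogeneous diffusion, the building block, can be sketched as:

```python
import numpy as np

def explicit_diffusion_step(u, tau=0.2):
    """One explicit step of homogeneous diffusion u_t = Laplacian(u)
    with reflecting (Neumann) boundaries via edge padding. On a unit
    grid in 2-D, tau <= 0.25 keeps the plain scheme stable; FED cycles
    vary tau over a sequence of such steps."""
    padded = np.pad(u, 1, mode='edge')
    lap = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
           padded[1:-1, :-2] + padded[1:-1, 2:] - 4 * u)
    return u + tau * lap
```

    Each step is a local stencil operation on the whole grid, which is why these solvers map so well onto GPUs.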

    Contributions to the study of Autism Spectrum Brain connectivity

    164 p. Autism Spectrum Disorder (ASD) is a highly prevalent neurodevelopmental condition with a large social and economic impact, affecting the entire life of families. There is an intense search for biomarkers that can be assessed as early as possible in order to initiate treatment and prepare the family to deal with the challenges imposed by the condition. Brain imaging biomarkers are of special interest. Specifically, functional connectivity data extracted from resting-state functional magnetic resonance imaging (rs-fMRI) should allow the detection of brain connectivity alterations. Machine learning pipelines encompass the estimation of the functional connectivity matrix from brain parcellations, feature extraction, and building classification models for ASD prediction. The works reported in the literature are very heterogeneous from the computational and methodological point of view. In this Thesis we carry out a comprehensive computational exploration of the impact of the choices involved in building these machine learning pipelines.
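    The first stage of such pipelines, estimating a functional connectivity matrix from parcellated ROI time series, is commonly done with Pearson correlation, with the upper triangle vectorised as classifier features. A minimal sketch under that assumption (the thesis explores many alternative choices):

```python
import numpy as np

def connectivity_matrix(timeseries):
    """Pearson-correlation functional connectivity matrix from rs-fMRI
    ROI time series (rows = time points, columns = parcellation ROIs)."""
    return np.corrcoef(timeseries, rowvar=False)

def upper_triangle_features(conn):
    """Vectorise the upper triangle of the connectivity matrix
    (excluding the diagonal) as a feature vector for classification."""
    iu = np.triu_indices_from(conn, k=1)
    return conn[iu]
```

    With R ROIs this yields R*(R-1)/2 features per subject, typically far more features than subjects, which is why feature extraction and selection choices weigh so heavily in these pipelines.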