7 research outputs found

    Quality assessment metrics for edge detection and edge-aware filtering: A tutorial review

    The quality assessment of edges in an image is an important topic, as it helps to benchmark the performance of edge detectors and edge-aware filters used in a wide range of image processing tasks. The most popular image quality metrics, such as mean squared error (MSE), peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), are commonly used to assess and justify the quality of edges. However, they do not address the structural and functional accuracy of edges in images with a wide range of natural variabilities. In this review, we provide an overview of the most relevant performance metrics that can be used to benchmark the quality of edges in images. We identify four major groups of metrics and also provide a critical insight into the evaluation protocols and governing equations
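    The baseline metrics named above are simple to compute. As an illustrative sketch (not taken from the review itself), MSE and PSNR for two grayscale images can be written in a few lines of NumPy:

```python
import numpy as np

def mse(ref, test):
    """Mean squared error between two equally sized grayscale images."""
    ref = np.asarray(ref, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    return float(np.mean((ref - test) ** 2))

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    err = mse(ref, test)
    if err == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / err)

a = np.full((4, 4), 100, dtype=np.uint8)
b = a.copy()
b[0, 0] = 110          # a single pixel differs by 10
print(mse(a, b))       # 10**2 / 16 = 6.25
print(round(psnr(a, b), 2))
```

    As the review argues, both metrics score pixel-wise fidelity only; a heavily blurred edge and a slightly shifted edge can receive similar scores, which is why edge-specific metrics are needed.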

    Salt and pepper noise reduction and edge detection algorithm based on neutrosophic logic

    The neutrosophic set (NS) is a powerful tool for dealing with indeterminacy. In this paper, the neutrosophic set is applied to the image domain and a novel edge detection technique is proposed. Noise reduction is a challenging task in image processing, and salt-and-pepper noise is one kind of noise that affects a grayscale image significantly. Generally, the median filter is used to reduce salt-and-pepper noise; it gives optimal results compared to other image filters, but it works only up to a certain level of noise intensity. Here we propose a neighborhood-based image filter, called the nbd-filter, which works well for grayscale images regardless of noise intensity: it reduces salt-and-pepper noise significantly at any noise level and produces a noise-free image. Further, we propose an edge detection algorithm based on the neutrosophic set that detects edges efficiently in both noise-corrupted and noise-free images. Since most real-life images consist of indeterminate regions, neutrosophy is a well-suited tool for edge detection. The main advantage of the proposed edge detector is that it is simple and efficient, and it detects edges more effectively than conventional edge detectors
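    For reference, the median-filter baseline that the abstract compares against can be sketched as a minimal 3x3 filter in NumPy. This is an illustrative sketch of the conventional baseline, not the proposed nbd-filter:

```python
import numpy as np

def median_filter_3x3(img):
    """Classic 3x3 median filter, the conventional baseline for
    salt-and-pepper noise; borders are handled by reflect-padding."""
    padded = np.pad(img, 1, mode="reflect")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + 3, j:j + 3])
    return out

rng = np.random.default_rng(0)
img = np.full((32, 32), 128, dtype=np.uint8)
mask = rng.random(img.shape) < 0.10               # corrupt ~10% of the pixels
img[mask] = rng.choice([0, 255], size=int(mask.sum()))
restored = median_filter_3x3(img)
print(float(np.abs(restored.astype(int) - 128).mean()))  # close to 0
```

    At low noise densities the median of each 3x3 window is almost always an uncorrupted pixel, which is why the filter works; once a majority of a window is corrupted the median itself becomes noise, which is the intensity limit the abstract refers to.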

    Direct field-to-pattern monolithic design of holographic metasurface via residual encoder-decoder convolutional neural network

    Complex-amplitude holographic metasurfaces (CAHMs), with their flexibility in modulating phase and amplitude profiles, have been used to manipulate wavefront propagation to an unprecedented level, leading to higher image-reconstruction quality compared with their natural counterparts. However, prevailing design methods for CAHMs are based on Huygens-Fresnel theory, meta-atom optimization, numerical simulation and experimental verification, which results in a high consumption of computing resources. Here, we applied a residual encoder-decoder convolutional neural network to directly map electric field distributions to input images for monolithic metasurface design. A pretrained network is first trained on electric field distributions calculated by diffraction theory, and is subsequently migrated in a transfer learning framework to map simulated electric field distributions to input images. The training results show that the normalized mean pixel error is about 3% on the dataset. As verification, metasurface prototypes were fabricated, simulated and measured. The reconstructed electric field of the reverse-engineered metasurface exhibits high similarity to the target electric field, which demonstrates the effectiveness of our design. Encouragingly, this work provides a monolithic field-to-pattern design method for CAHMs, which paves a new route for the direct reconstruction of metasurfaces

    Analysis of a robust edge detection system in different color spaces using color and depth images

    Edge detection is a very important technique for revealing significant areas in a digital image, which can aid feature extraction; indeed, edge detection makes it possible to remove unnecessary parts of an image. Many edge detection techniques already exist, but we propose a robust evolutionary system to extract the vital parts of the image. The system is based on a number of pre- and post-processing techniques, such as filters and morphological operations, and applies a modified Ant Colony Optimization edge detection method to the image. The main goal is to test the system in different color spaces and measure its performance. Another novel aspect of the research is the use of depth images along with color ones, with depth data acquired by a Kinect V2 in the validation part, to better understand edge detection on depth data. The system is tested on 10 benchmark test images for color and 5 images for depth, and validated using 7 image quality assessment factors, such as peak signal-to-noise ratio, mean squared error, structural similarity and more (mostly related to edges), in different color spaces, and compared with other well-known edge detection methods under the same conditions. To evaluate the robustness of the system, several types of noise, such as Gaussian, salt-and-pepper, Poisson and speckle, are added to the images to show the proposed system's strength under any condition. The goal is to reach the best edges possible, which requires more computation and slightly increases run time; with today's systems, however, this overhead is minimal and worthwhile. The results obtained are promising and satisfactory compared with the other methods in the validation section of the paper
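    The robustness test described above (adding Gaussian, salt-and-pepper, Poisson and speckle noise before re-running detection) can be sketched as follows; the parameter values are illustrative assumptions, not those used in the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

def add_gaussian(img, sigma=10.0):
    """Additive zero-mean Gaussian noise."""
    noisy = img.astype(np.float64) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def add_salt_pepper(img, amount=0.05):
    """A fraction of pixels is forced to pure black or white."""
    noisy = img.copy()
    mask = rng.random(img.shape) < amount
    noisy[mask] = rng.choice([0, 255], size=int(mask.sum()))
    return noisy

def add_speckle(img, sigma=0.1):
    """Multiplicative noise: I * (1 + n), n ~ N(0, sigma)."""
    noisy = img.astype(np.float64) * (1.0 + rng.normal(0.0, sigma, img.shape))
    return np.clip(noisy, 0, 255).astype(np.uint8)

def add_poisson(img):
    """Signal-dependent shot noise: each pixel drawn from Poisson(I)."""
    return np.clip(rng.poisson(img.astype(np.float64)), 0, 255).astype(np.uint8)

img = np.full((64, 64), 120, dtype=np.uint8)
for fn in (add_gaussian, add_salt_pepper, add_speckle, add_poisson):
    noisy = fn(img)
    print(fn.__name__, noisy.shape, noisy.dtype)
```

    Each corrupted copy would then be passed through the edge detector and scored with the same quality metrics as the clean image to quantify how gracefully performance degrades.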

    Bayesian Dictionary Learning for Single and Coupled Feature Spaces

    Over-complete bases offer the flexibility to represent a much wider range of signals with more elementary basis atoms than the signal dimension. The use of over-complete dictionaries for sparse representation has become a recent trend and is increasingly recognized as providing high performance for applications such as denoising, image super-resolution, inpainting, compression, blind source separation and linear unmixing. This dissertation studies dictionary learning for single or coupled feature spaces and its application to image restoration tasks. A Bayesian strategy using a beta process prior is applied to solve both problems. First, we illustrate how to generalize the existing beta process dictionary learning method (BP) to learn a dictionary for a single feature space. The advantage of this approach is that the number of dictionary atoms and their relative importance may be inferred non-parametrically. Next, we propose a new beta process joint dictionary learning method (BP-JDL) for coupled feature spaces, where the learned dictionaries also reflect the relationship between the two spaces. Compared to previous coupled-feature-space dictionary learning algorithms, our algorithm not only provides dictionaries customized to each feature space, but also yields a more consistent and accurate mapping between the two spaces. This is due to a unique property of the beta process model: the sparse representation can be decomposed into values and dictionary atom indicators. The proposed algorithm is able to learn sparse representations that correspond to the same dictionary atoms, with the same sparsity but different values, in coupled feature spaces, thus providing a consistent and accurate mapping between them. Two applications, single image super-resolution and inverse halftoning, are chosen to evaluate the performance of the proposed Bayesian approach. In both cases, the Bayesian approach, whether for a single feature space or coupled feature spaces, outperforms state-of-the-art methods in comparable domains
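    The key decomposition the dissertation relies on, a sparse code split into binary atom indicators shared across coupled spaces and space-specific weight values, can be illustrated with a toy NumPy example (dimensions, dictionaries and names are hypothetical, not the learned model):

```python
import numpy as np

rng = np.random.default_rng(1)
K, d = 8, 5                        # number of atoms, signal dimension
D_lr = rng.normal(size=(d, K))     # dictionary for the "low-resolution" space
D_hr = rng.normal(size=(d, K))     # dictionary for the "high-resolution" space

z = np.zeros(K, dtype=int)         # binary atom indicators, shared by both spaces
z[[1, 4]] = 1                      # the same two atoms are active in both spaces
w_lr = rng.normal(size=K)          # space-specific weight values
w_hr = rng.normal(size=K)

x_lr = D_lr @ (z * w_lr)           # sparse code = indicators (element-wise) weights
x_hr = D_hr @ (z * w_hr)

# identical sparsity pattern, different coefficient values
print(np.nonzero(z * w_lr)[0], np.nonzero(z * w_hr)[0])
```

    Because the indicator vector z is shared, knowing which atoms represent a patch in one space immediately identifies the corresponding atoms in the other, which is the mechanism behind the "consistent and accurate mapping" claimed above.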

    Evaluation of Stroke in Computed Tomography Using an Ant Colony Optimization Algorithm

    Stroke (acidente vascular cerebral, AVC) is one of the leading causes of death and neurological disability in the world, being the most common and potentially most devastating neurological disease, and for that reason it is the subject of a large amount of research and innovation in the field of medical imaging. In Brazil there is an extremely unequal distribution of good-quality medical resources as a consequence of the country's great territorial extent. Thus, there are many places and health services where no radiology specialist is available to examine computed tomography (CT) images. This motivates the development of computerized systems to aid in the diagnosis of diseases using image processing techniques. Digital image processing techniques can be used to support the medical diagnosis of this pathology, enabling faster diagnosis as well as monitoring of the extent of the ischemic and hemorrhagic lesions caused by ischemic stroke (AVCi) or hemorrhagic stroke (AVCh). The algorithms developed for stroke detection could thus help clinicians, or other health professionals, either refer the patient to a nearby specialized center or begin appropriate treatment as quickly as possible, improving the prognosis of patients affected by the pathology. In this work, five algorithms were developed and implemented to detect and enhance AVCi and AVCh areas in cranial CT images, of which three were used for acute/subacute AVCi detection (in the early stages) and two for AVCh detection. Initially, algorithms for detecting these two pathologies based on thresholding were implemented, followed by an image segmentation algorithm based on ACO (Ant Colony Optimization) and k-means.
    Based on this ACO segmentation, three further algorithms were developed: an AVCh detection algorithm, an algorithm that detects the cerebral ventricles and then detects AVCi using thresholding, and an acute/subacute AVCi detection algorithm. Statistical results were then computed and analyzed for each of the implemented algorithms, evaluating detection per patient, per slice and per pixel, thereby assessing the detection of the two stroke types for each of the developed algorithms. The best results for AVCh detection were obtained with the ACO-based segmentation algorithm, which achieves 100% sensitivity, specificity and accuracy in per-patient detection; per slice, 51% sensitivity, 100% specificity and 99% accuracy; and per pixel, 34% sensitivity, 99% specificity and 99% accuracy. This algorithm processed each patient's set of 22 images in 1 minute and 15 seconds. Similarly, the best results for AVCi detection were obtained with the ACO algorithm for detecting the ischemic area, which achieves 72% sensitivity, 88% specificity and 88% accuracy in per-patient detection; per slice, 27% sensitivity, 98% specificity and 98% accuracy; and per pixel, 12% sensitivity, 99% specificity and 99% accuracy. This algorithm's processing time for a patient's set of 20 images is 1 minute and 5 seconds.
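    One building block of the segmentation pipeline above is k-means clustering of CT pixel intensities. A minimal 1-D sketch, using synthetic intensity clusters as illustrative stand-ins (not real CT data or the thesis implementation):

```python
import numpy as np

def kmeans_1d(values, k=3, iters=20):
    """Plain 1-D k-means on pixel intensities with a deterministic
    percentile-based initialization (illustrative only)."""
    centers = np.percentile(values, np.linspace(10, 90, k))
    for _ in range(iters):
        # assign each value to its nearest center, then recompute the means
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = values[labels == c].mean()
    return labels, np.sort(centers)

# three synthetic intensity clusters standing in for background,
# brain parenchyma and a hyperdense (hemorrhage-like) region
rng = np.random.default_rng(2)
pixels = np.concatenate([
    rng.normal(10, 3, 500),
    rng.normal(35, 4, 500),
    rng.normal(70, 5, 200),
])
labels, centers = kmeans_1d(pixels, k=3)
print(np.round(centers, 1))
```

    In the thesis pipeline, clusters in the hyperdense range would be candidates for hemorrhage and clusters in the hypodense range for ischemia, refined afterwards by the ACO segmentation and thresholding steps.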

    Color difference evaluation for digital pictorial images under various surround conditions

    Department of Human and Systems Engineering
    Nowadays pictorial images are more often shown on displays than on paper. Therefore, display manufacturers have been trying to improve the image quality of their displays to increase their market share. To improve image quality, not only good hardware technology and image processing algorithms but also good color difference equations are needed, to predict the overall color difference between pictorial images shown on different panels or manipulated using different image processing algorithms. Color difference equations have also been developed for industrial purposes; however, they were developed under limited surround conditions and applications. The purpose of this research is to clarify the effect of surround condition and magnitude of color difference on perceptual color difference for complex images. An experiment under four surround conditions was carried out to achieve these purposes. The data collected by psychophysical experiment were used to develop an image color difference metric under various surround conditions. Before conducting the main experiment, a pilot test was conducted to investigate the effect of color difference magnitude on perceptual image difference. The pilot experiment tested the performance of conventional color difference equations such as CIE ΔE*ab, CMC(l:c) and CIEDE2000, while a psychophysical experiment including a wide range of color difference stimuli was conducted to evaluate the perceptual color difference between the original image and the manipulated image. Twenty observers participated in the experiment and 195 stimuli were used for the magnitude estimation. The main experiment investigated how perceptual color difference shifts with changes in surround luminance level and magnitude of color difference. There were four surround conditions: dark, dim, average and bright. 996 stimuli were prepared for the experiment, and 500 randomly selected stimuli were used for each surround condition. Twenty-three observers participated in the main experiment. They were asked to evaluate the perceived color difference between original and manipulated images using the magnitude estimation method.
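    Of the equations tested in the pilot experiment, CIE ΔE*ab (1976) is the simplest: it is the Euclidean distance between two colors in CIELAB coordinates. A minimal sketch (the sample colors are illustrative):

```python
import numpy as np

def delta_e_ab(lab1, lab2):
    """CIE Delta E*ab (1976): Euclidean distance in CIELAB space."""
    diff = np.asarray(lab1, dtype=float) - np.asarray(lab2, dtype=float)
    return float(np.linalg.norm(diff))

ref = (50.0, 10.0, -5.0)     # (L*, a*, b*) of the original colour
mod = (52.0, 10.0, -5.0)     # the same colour with lightness shifted by 2
print(delta_e_ab(ref, mod))  # 2.0
```

    CMC(l:c) and CIEDE2000 build on the same Lab coordinates but weight the lightness, chroma and hue components non-uniformly, which is what the pilot experiment compared against perceptual judgments.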