    An Image Enhancement Approach to Achieve High Speed Using Adaptive Modified Bilateral Filter for Satellite Images Using FPGA

    For real-time image processing applications, satellite imagery has attracted growing interest from researchers because of the informative nature of the images. Satellite images are captured from space with high-quality on-board cameras. Wrong ISO settings, camera vibrations, or faulty sensor settings introduce noise, and the image can be further corrupted during acquisition and transmission to the earth stations, by interference, or by dust particles on the scanner screen. A degraded image yields less reliable results during visual perception, which is a challenging issue for researchers, and if quality-degraded images are used for further processing, wrong information may be extracted. To address this issue, an image filtering or denoising approach is required. Because remote sensing images are captured from space with on-board cameras, a high-speed device is needed that provides good reconstruction quality at low power consumption. Various filtering approaches have been proposed recently; their key challenges are reconstruction quality, operating speed, and preservation of information at image edges. The proposed approach, named the modified bilateral filter, combines a bilateral filter with kernel schemes. To overcome these drawbacks, the modified bilateral filter is implemented on an FPGA, which performs the denoising process in parallel.
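
    The abstract does not give the filter equations, so as a point of reference, below is a minimal NumPy sketch of the plain bilateral filter the modified variant builds on, for a grayscale image. Parameter names and default values are illustrative; the paper's kernel-combination scheme and FPGA parallelization are not reproduced here. Hardware implementations commonly replace the two exponentials with lookup tables to reach real-time rates.

        import numpy as np

        def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=25.0):
            """Plain bilateral filter for a 2-D grayscale array."""
            img = img.astype(np.float64)
            h, w = img.shape
            out = np.zeros_like(img)
            # Spatial (domain) kernel: depends only on pixel offsets,
            # so it is computed once.
            ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
            spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
            pad = np.pad(img, radius, mode='reflect')
            for y in range(h):
                for x in range(w):
                    patch = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
                    # Range kernel: down-weights neighbors whose intensity
                    # differs from the center, which preserves edges.
                    rng = np.exp(-(patch - img[y, x])**2 / (2 * sigma_r**2))
                    wgt = spatial * rng
                    out[y, x] = (wgt * patch).sum() / wgt.sum()
            return out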

    Restoration for blurred noisy images based on guided filtering and inverse filter

    The growing use of images in many fields runs up against the fact that images are degraded during capture, whether by mobile phones, cameras, or inexperienced photographers, so techniques such as image enhancement and image restoration are important for improving images and human perception of them. In this paper, the restoration of noisy, blurred images using a guided filter and inverse filtering is proposed to recover images from different types of degradation. In denoising color images, it is very important to preserve edge and texture information, and eliminating noise enhances image quality. First, color images were taken, and random noise and blur were added to them. The noisy, blurred image was then passed through the guided filter to obtain a denoised image. Finally, an inverse filter was applied to the blurred image by convolving the image with a mask, yielding the enhanced image. The results show good outcomes compared with other methods for removing noise and blur in terms of the PSNR measure, and the method enhances the image while retaining edge details during denoising. The PSNR and SSIM measures were more sensitive to Gaussian noise than to blur.
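
    As a rough sketch of the deblurring stage only (the guided-filter denoising step and the paper's actual mask are not specified in the abstract), a regularized inverse filter can be applied in the frequency domain, which is mathematically equivalent to convolution with the corresponding spatial mask. The eps regularizer and the assumption of a known blur kernel are mine, not the paper's:

        import numpy as np

        def inverse_filter(blurred, kernel, eps=1e-3):
            """Regularized inverse filter in the frequency domain.

            Dividing by the kernel spectrum undoes the blur; eps keeps the
            near-zero frequencies from amplifying residual noise. Assumes
            the kernel's origin is at its top-left corner.
            """
            H = np.fft.fft2(kernel, s=blurred.shape)     # zero-padded kernel spectrum
            G = np.fft.fft2(blurred)
            F = G * np.conj(H) / (np.abs(H)**2 + eps)    # pseudo-inverse division
            return np.real(np.fft.ifft2(F))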

    Integrating IoT and Novel Approaches to Enhance Electromagnetic Image Quality using Modern Anisotropic Diffusion and Speckle Noise Reduction Techniques

    Electromagnetic imaging is becoming more important in many sectors, and reliable analysis requires high-quality images. This study exploits the complementary relationship between IoT and current image processing methods to improve the quality of electromagnetic images. The research presents a new framework for connecting Internet of Things sensors to imaging equipment, allowing instantaneous feedback and adjustment. At the same time, the proposed system uses sophisticated anisotropic diffusion algorithms to bring out key details and suppress noise in electromagnetic images. In addition, a cutting-edge technique for reducing speckle noise is used to combat this persistent issue in electromagnetic imaging. The effectiveness of the proposed system was determined via comparison with standard imaging techniques. The results show a noticeable improvement in visual sharpness, contrast, and overall clarity without any loss of information. Incorporating IoT sensors also enabled faster calibration and real-time adjustment, opening up new possibilities for use in highly variable environments. In fields where electromagnetic imaging plays a crucial role, such as medicine, remote sensing, and aerospace, the ramifications of this study are far-reaching. Our research demonstrates how the Internet of Things (IoT) and cutting-edge image processing can dramatically improve the functionality and versatility of electromagnetic imaging systems.
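
    The framework's own algorithms are not detailed in the abstract; the classical Perona-Malik scheme underlying most anisotropic diffusion methods can be sketched as follows. All parameter values are illustrative, and boundary handling is periodic for brevity:

        import numpy as np

        def anisotropic_diffusion(img, n_iter=20, kappa=30.0, lam=0.2):
            """Perona-Malik diffusion: smooths flat areas, keeps strong edges."""
            u = img.astype(np.float64)
            for _ in range(n_iter):
                # Finite-difference gradients toward the four neighbors.
                dn = np.roll(u, 1, axis=0) - u
                ds = np.roll(u, -1, axis=0) - u
                de = np.roll(u, -1, axis=1) - u
                dw = np.roll(u, 1, axis=1) - u
                # Conduction coefficient decays where gradients are large,
                # so diffusion stops at edges instead of blurring them.
                g = lambda d: np.exp(-(d / kappa)**2)
                u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
            return u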

    Genetic Fuzzy Filter Based on MAD and ROAD to Remove Mixed Impulse Noise

    In this thesis, a genetic fuzzy image filter based on rank-ordered absolute differences (ROAD) and the median of absolute deviations from the median (MAD) is proposed. The proposed method consists of three components: a fuzzy noise detection system, fuzzy switching-scheme filtering, and fuzzy parameter optimization using genetic algorithms (GA), which together perform efficient and effective noise removal. Our idea is to use MAD and ROAD as measures of the noise probability of a pixel. A fuzzy inference system judges the degree to which a pixel can be categorized as noisy, and based on the inference result, a fuzzy switching scheme that adopts the median filter as the main estimator is applied in the filtering. The GA training aims to find the best parameters for the fuzzy sets in the fuzzy noise detection. The experimental results show that the proposed method successfully removes mixed impulse noise at low to medium probabilities while keeping uncorrupted pixels largely unaffected by the median filtering. It surpasses other methods, both classical and soft-computing-based approaches to impulse noise removal, in MAE and PSNR evaluations, and it also removes salt-and-pepper and uniform impulse noise well.
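
    For illustration, the two statistics can be computed for a 3x3 window as below; the fuzzy membership functions and GA-tuned parameters are specific to the thesis and are not reproduced here.

        import numpy as np

        def road(window, m=4):
            """Rank-ordered absolute differences of a 3x3 window's center.

            Sum of the m smallest absolute differences between the center
            and its 8 neighbors; an impulse sits far from all neighbors,
            so a large ROAD value signals a likely noisy pixel.
            """
            diffs = np.abs(window - window[1, 1]).ravel()
            diffs = np.delete(diffs, 4)        # drop the center itself
            return np.sort(diffs)[:m].sum()

        def mad(window):
            """Median of absolute deviations from the window median,
            a robust estimate of local spread."""
            med = np.median(window)
            return np.median(np.abs(window - med))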

    Color Image Processing based on Graph Theory

    Computer vision is one of the fastest-growing fields at present and, along with other technologies such as Biometrics or Big Data, has become the focus of much research; it is considered one of the technologies of the future. This broad field includes a plethora of digital image processing and analysis tasks. To guarantee the success of image analysis and other high-level processing tasks, such as 3D imaging or pattern recognition, it is critical to improve the quality of the raw images acquired. Nowadays all images are affected by factors that hinder optimal image quality, making digital image (pre-)processing a fundamental step prior to any other processing task. The most common of these factors are noise and poor acquisition conditions: noise artefacts hamper proper interpretation of the image, and acquisition under poor lighting or exposure conditions, such as dynamic scenes, causes loss of image information that can be key for certain processing tasks. The image (pre-)processing steps known as smoothing and sharpening are commonly applied to overcome these problems: smoothing aims to reduce noise, while sharpening aims to improve or recover imprecise or damaged information in image details and edges whose insufficient sharpness or blurred content prevents optimal image (post-)processing. There are many methods for smoothing the noise in an image, but in many cases the filtering process blurs the edges and details of the image. Likewise, there are many sharpening techniques that try to combat this loss of information, yet they do not contemplate the existence of noise in the image they process: applied to a noisy image, any sharpening technique will also amplify the noise. Although the intuitive solution would be to filter first and sharpen afterwards, this two-stage approach has proved not to be optimal: the filtering can remove information that, in turn, may not be recoverable in the later sharpening step. In this PhD dissertation we propose a model based on graph theory for color image processing from a vector approach. In this model, a graph is built for each pixel in such a way that its features allow us to characterize and classify that pixel. As we show, the proposed model is robust and versatile, potentially able to adapt to a wide variety of applications. In particular, we apply the model to create new solutions to the two fundamental problems of image processing: smoothing and sharpening. The model is studied in depth as a function of the threshold, the key parameter that ensures the correct classification of the image pixels, and we examine its features and possibilities so as to get the most out of it in each application. To achieve high-performance image smoothing, we use the proposed model to determine whether a pixel belongs to a flat region, taking into account the need for high-precision classification even in the presence of noise. We then build an adaptive soft-switching filter that uses this pixel classification to combine the outputs of a filter with high smoothing capability and a softer filter for edge/detail regions, removing Gaussian noise without blurring edges or losing detail information. A further application of the model uses the pixel characterization to perform simultaneous smoothing and sharpening of color images, combining two operations that are opposed by definition and thereby overcoming the drawbacks of the two-stage approach. We compare all the proposed techniques with other state-of-the-art methods and show that they are competitive from both an objective (numerical) and a visual evaluation point of view.
    Pérez Benito, C. (2019). Color Image Processing based on Graph Theory [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/123955
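
    The thesis defines its own graph construction and features; purely as an illustration of the idea, a toy per-pixel neighborhood graph might connect the center to the neighbors whose color distance falls under the threshold, and use the degree of the center vertex as a flat-versus-edge feature:

        import numpy as np

        def center_degree(window, threshold):
            """Degree of the center vertex in a toy per-pixel graph.

            Edges join the center of a 3x3 color window to the neighbors
            whose Euclidean color distance is below the threshold; a high
            degree suggests a flat region, a low one an edge or detail.
            (Illustrative only; not the thesis' actual construction.)
            """
            center = window[1, 1].astype(np.float64)
            degree = 0
            for i in range(3):
                for j in range(3):
                    if (i, j) == (1, 1):
                        continue
                    if np.linalg.norm(window[i, j] - center) < threshold:
                        degree += 1
            return degree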

    Vision Sensors and Edge Detection

    Vision Sensors and Edge Detection reflects a selection of recent developments in the area of vision sensors and edge detection. The book has two sections: the first presents vision sensors, with applications to panoramic vision sensors, wireless vision sensors, and automated vision sensor inspection; the second covers image processing techniques such as image measurement, image transformations, filtering, and parallel computing.

    Robust Framework For Digital Image Denoising And Deblurring

    Image restoration concerns improving the visual quality of a captured image beyond the achievable limit of the camera. Recent advances in imaging and multimedia technology have promoted interest in image restoration through software, with applications that permeate consumer photography as well as various industries. Unfortunately, captured images often suffer from degradations such as blurring, noise, and unpleasant artifacts due to limitations of the imaging system. Although considerable effort has been channeled into advancing the state-of-the-art methods, these methods are, surprisingly, often slow and designed to handle only a specific degradation model.
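
    For reference, the degradation model usually assumed in this line of work (the abstract does not state the thesis' own formulation) relates the observed image y to the latent sharp image x through a blur operator H plus additive noise n:

        y = H x + n

    Restoration then amounts to estimating x given only y, which is why a method tuned to one particular H and one noise statistic handles only a specific degradation model.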

    Ultrasonic image analysis in real time spot welding applications.

    Biometric encryption system for increased security

    Security is very important in present-day life. In this highly interconnected world, most of our daily activities are computer-based, and data transactions are protected by passwords that identify entities such as bank accounts, mobile phones, etc. People may reuse the same password, or choose passwords related to the individual, which can lead to attacks; indeed, remembering several passwords can become a tedious task. Biometrics is a science that measures an individual's physical characteristics in a unique way, so it can serve as a replacement for the cumbersome use of complex passwords. Our research uses biometric features to efficiently implement a biometric encryption system with a high level of security.

    Development of Impulsive Noise Detection Schemes for Selective Filtering in Images

    Noise suppression is a highly demanded capability in the design of digital imaging systems. Impulsive noise is one such noise, frequently encountered during the acquisition, transmission, and processing of images. In the area of image restoration, many state-of-the-art filters consist of two main processes: classification (detection) and reconstruction (filtering). Classification separates uncorrupted pixels from corrupted ones; reconstruction replaces the corrupted pixels using some approximation technique. In this thesis, such schemes for impulsive noise detection and subsequent filtering are proposed. Impulsive noise can be salt-and-pepper noise (SPN) or random-valued impulsive noise (RVIN); only the RVIN model, in which a corrupted pixel can take any value in the valid range, is considered in this thesis because of its realistic presence. Adaptive threshold selection is emphasized in all four proposed noise detection schemes: threshold values are selected from the noisy image characteristics and their statistics, and incorporating the adaptive threshold into the detection process leads to more reliable and more efficient detection of noise. To validate the efficacy of the proposed filtering schemes, an application to image sharpening under noisy conditions has been investigated. When the noisy image is passed directly through the sharpening scheme, the noise is amplified and the restored results are distorted, whereas prefiltering with the proposed schemes improves the result to a great extent. Extensive simulations and comparisons with competing schemes show that, in general, the proposed schemes suppress impulsive noise better at different noise ratios than their counterparts.
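
    As a generic illustration of the detect-then-filter idea (the thesis' four detectors and their adaptive threshold rules are its own; the rule below, which scales the local MAD by a global factor, is an assumption made for this sketch):

        import numpy as np

        def selective_median_filter(img, k=1.5):
            """Detect-then-filter scheme for random-valued impulse noise.

            A pixel is flagged as noisy when it deviates from its local
            median by more than k times the local MAD, and only flagged
            pixels are replaced; clean pixels pass through untouched.
            """
            img = img.astype(np.float64)
            pad = np.pad(img, 1, mode='reflect')
            out = img.copy()
            h, w = img.shape
            for y in range(h):
                for x in range(w):
                    window = pad[y:y + 3, x:x + 3]
                    med = np.median(window)
                    spread = np.median(np.abs(window - med))
                    # Threshold adapts to local activity; the floor keeps it
                    # from collapsing to zero in perfectly flat regions.
                    if abs(img[y, x] - med) > k * max(spread, 1.0):
                        out[y, x] = med
            return out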