2,060 research outputs found

    Enhancement of dronogram aid to visual interpretation of target objects via intuitionistic fuzzy hesitant sets

    In this paper, we address the hesitant information in the enhancement task, which is often caused by differences in image contrast. Enhancement approaches generally use certain filters which generate artifacts or are unable to recover all the object details in images. Typically, the contrast of an image quantifies a unique ratio between the amounts of black and white through a single pixel. However, contrast is better represented by a group of pixels. We have proposed a novel image enhancement scheme based on intuitionistic hesitant fuzzy sets (IHFSs) for drone images (dronograms) to facilitate better interpretation of target objects. First, a given dronogram is divided into foreground and background areas based on an estimated threshold, from which the proposed model measures the amount of black/white intensity levels. Next, we fuzzify both areas and determine the hesitant score, indicated by the distance between the two areas, for each point in the fuzzy plane. Finally, a hyperbolic operator is applied to each membership grade to improve the photographic quality, leading to enhanced results via defuzzification. The proposed method is tested on a large drone image database. Results demonstrate better contrast enhancement, improved visual quality, and better recognition compared to state-of-the-art methods.
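
    As a rough illustration of the enhancement recipe outlined above (fuzzification, a hyperbolic-style operator, defuzzification), the following Python sketch applies a generic fuzzy hyperbolic contrast stretch. It is not the authors' IHFS formulation; the operator, the threshold estimate, and the parameter beta are illustrative assumptions.

        # Hedged sketch: a generic fuzzy contrast-enhancement pass in the spirit of
        # the abstract (threshold split, fuzzification, hyperbolic-style operator,
        # defuzzification). It is NOT the authors' IHFS method; all formulas here
        # are illustrative assumptions.
        import numpy as np

        def fuzzy_hyperbolic_enhance(gray, beta=1.5):
            """gray: 2-D uint8 array; beta: assumed steepness of the hyperbolic operator."""
            g = gray.astype(np.float64)
            g_min, g_max = g.min(), g.max()
            # Fuzzification: map intensities to membership grades in [0, 1].
            mu = (g - g_min) / (g_max - g_min + 1e-12)
            # Foreground/background split around a crude threshold (here: the mean membership).
            t = mu.mean()
            # Hyperbolic-style operator: stretch memberships away from the threshold.
            mu_enh = 0.5 * (np.tanh(beta * (mu - t)) / np.tanh(beta * max(t, 1 - t)) + 1.0)
            # Defuzzification: map enhanced memberships back to the original gray range.
            return np.clip(g_min + mu_enh * (g_max - g_min), 0, 255).astype(np.uint8)

        # Example: enhance a synthetic low-contrast image.
        img = (np.random.rand(64, 64) * 60 + 90).astype(np.uint8)
        out = fuzzy_hyperbolic_enhance(img)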

    Adaptive Filters for 2-D and 3-D Digital Images Processing

    The thesis is concerned with adaptive filters for the visualization of high-resolution images. In the theoretical part, the principle of confocal microscopy is described and the term digital image is defined in a mathematically correct way. Both a frequency approach (using the 2-D and 3-D discrete Fourier transform and frequency filters) and a digital-geometry approach (using adaptive histogram equalization with an adaptive neighbourhood) are chosen for the processing of images. The adjustments needed when working with non-ideal images containing additive and impulse noise are described as well. The last part of the thesis deals with the 3-D reconstruction of objects from their optical sections. All the procedures and algorithms are also implemented in the software developed as part of this thesis.
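
    As a small illustration of the frequency-domain approach mentioned in the abstract, the sketch below applies a 2-D Gaussian low-pass filter via the discrete Fourier transform. The cut-off value and the function name are assumptions made for illustration, not code from the thesis.

        # Hedged sketch of the frequency approach: a 2-D Gaussian low-pass filter
        # applied via the discrete Fourier transform.
        import numpy as np

        def gaussian_lowpass_fft(image, sigma=20.0):
            """image: 2-D float array; sigma: assumed cut-off (in frequency-index units)."""
            rows, cols = image.shape
            u = np.fft.fftfreq(rows)[:, None] * rows   # vertical frequency indices
            v = np.fft.fftfreq(cols)[None, :] * cols   # horizontal frequency indices
            h = np.exp(-(u ** 2 + v ** 2) / (2.0 * sigma ** 2))  # Gaussian transfer function
            spectrum = np.fft.fft2(image)
            return np.real(np.fft.ifft2(spectrum * h))

        # Example on a noisy test image.
        img = np.random.rand(128, 128)
        smoothed = gaussian_lowpass_fft(img, sigma=15.0)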

    The beneficial techniques in preprocessing step of skin cancer detection system comparing

    © 2014 The Authors. Automatic diagnosis of skin cancer is one of the most challenging problems in medical image processing. It helps physicians to decide whether a skin melanoma is benign or malignant. Determining more efficient detection methods that reduce the rate of errors is therefore a vital issue among researchers. Preprocessing is the first stage of detection; it improves the quality of the images by removing irrelevant noise and unwanted parts in the background of the skin images. The purpose of this paper is to gather the preprocessing approaches that can be used on skin cancer images. It provides a good starting point for researchers building automatic skin cancer detection systems.
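
    The sketch below illustrates the kind of preprocessing such surveys gather, using a common hair-removal-plus-smoothing recipe (black-hat masking, inpainting, median filtering) with OpenCV. The kernel sizes and threshold are assumptions; it is not a method taken from the paper.

        # Hedged sketch of preprocessing steps commonly reported for dermoscopic
        # images (noise smoothing plus hair-artifact removal).
        import cv2
        import numpy as np

        def preprocess_skin_image(bgr):
            """bgr: 8-bit colour image as loaded by cv2.imread."""
            gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
            # Black-hat transform highlights thin dark structures such as hairs.
            kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 17))
            blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)
            # Threshold the hair mask and fill the hair pixels by inpainting.
            _, mask = cv2.threshold(blackhat, 10, 255, cv2.THRESH_BINARY)
            hairless = cv2.inpaint(bgr, mask, 3, cv2.INPAINT_TELEA)
            # Median filtering suppresses residual impulse-like noise.
            return cv2.medianBlur(hairless, 5)

        # Example (path is a placeholder):
        # clean = preprocess_skin_image(cv2.imread("lesion.jpg"))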

    Thermogram Breast Cancer Detection: A Comparative Study of Two Machine Learning Techniques

    Breast cancer is considered one of the major threats to women's health all over the world. The World Health Organization (WHO) has reported that 1 in every 12 women could be subject to a breast abnormality during her lifetime. To increase survival rates, early detection of breast cancer is very effective, and mammography-based screening is the leading technology for achieving this aim. However, it still cannot deal with patients with dense breasts or with tumors smaller than 2 mm. Thermography-based breast cancer detection can address these problems. In this paper, a thermogram-based breast cancer detection approach is proposed. This approach consists of four phases: (1) image pre-processing using homomorphic filtering, the top-hat transform, and adaptive histogram equalization; (2) ROI segmentation using binary masking and K-means clustering; (3) feature extraction using the signature boundary; and (4) classification, in which two classifiers, the Extreme Learning Machine (ELM) and the Multilayer Perceptron (MLP), were used and compared. The proposed approach is evaluated on the public DMR-IR dataset. Various experimental scenarios (e.g., the integration of geometrical and textural feature extraction) were designed and evaluated using different measurements (i.e., accuracy, sensitivity, and specificity). The results showed that the ELM-based results were better than the MLP-based ones by more than 19%.
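
    To make the first two phases concrete, the sketch below chains a top-hat transform and CLAHE with K-means intensity clustering in OpenCV. The homomorphic filtering step is omitted and all parameter values are assumptions rather than the authors' settings.

        # Hedged sketch of phases (1) and (2): contrast-oriented preprocessing
        # followed by K-means based ROI segmentation; not the authors' exact pipeline.
        import cv2
        import numpy as np

        def preprocess_and_segment(thermo_gray, k=2):
            """thermo_gray: 8-bit single-channel thermogram."""
            # Top-hat transform emphasises bright details against the background.
            kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
            tophat = cv2.morphologyEx(thermo_gray, cv2.MORPH_TOPHAT, kernel)
            # Adaptive histogram equalisation (CLAHE) improves local contrast.
            clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
            enhanced = clahe.apply(cv2.add(thermo_gray, tophat))
            # K-means clustering of intensities gives a coarse ROI/background split.
            samples = enhanced.reshape(-1, 1).astype(np.float32)
            criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
            _, labels, centers = cv2.kmeans(samples, k, None, criteria, 5,
                                            cv2.KMEANS_RANDOM_CENTERS)
            roi_label = int(np.argmax(centers))          # warmest cluster as candidate ROI
            mask = (labels.reshape(enhanced.shape) == roi_label).astype(np.uint8) * 255
            return enhanced, mask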

    The Second Hungarian Workshop on Image Analysis: Budapest, June 7-9, 1988.


    Color Image Processing based on Graph Theory

    Computer vision is one of the fastest-growing fields at present and, along with other technologies such as Biometrics or Big Data, has become the focus of interest of many research projects; it is considered one of the technologies of the future. This broad field includes a plethora of digital image processing and analysis tasks. To guarantee the success of image analysis and other high-level processing tasks, such as 3D imaging or pattern recognition, it is critical to improve the quality of the raw images acquired. Nowadays all images are affected by different factors that hinder the achievement of optimal image quality, making digital image processing a fundamental step prior to any other practical application. The most common of these factors are noise and poor acquisition conditions: noise artefacts hamper proper interpretation of the image, and acquisition in poor lighting or exposure conditions, such as dynamic scenes, causes loss of image information that can be key for certain processing tasks. The image (pre-)processing steps known as smoothing and sharpening are commonly applied to overcome these inconveniences: smoothing aims to reduce noise, while sharpening aims to improve or recover imprecise or damaged information of image details and edges whose insufficient sharpness or blurred content prevents optimal image (post-)processing. There are many methods for smoothing the noise in an image; however, in many cases the filtering process causes blurring at the edges and details of the image. There are also many sharpening techniques that try to combat this loss of information, but they need to contemplate the existence of noise in the image they process: when dealing with a noisy image, any sharpening technique may amplify the noise. Although the intuitive idea to solve this last case would be filtering first and sharpening later, this approach has proved not to be optimal: the filtering could remove information that, in turn, may not be recoverable in the later sharpening step. In the present PhD dissertation we propose a model based on graph theory for color image processing from a vector approach. In this model, a graph is built for each pixel in such a way that its features allow us to characterize and classify that pixel. As we will show, the proposed model is robust and versatile: potentially able to adapt to a variety of applications. In particular, we apply the model to create new solutions for the two fundamental problems in image processing: smoothing and sharpening. To approach high-performance image smoothing, we use the proposed model to determine whether a pixel belongs to a flat region or not, taking into account the need to achieve a high-precision classification even in the presence of noise. Thus, we build an adaptive soft-switching filter by employing the pixel classification to combine the outputs of a filter with high smoothing capability and a softer one that smooths edge/detail regions. Further, another application of our model uses the pixel characterization to successfully perform simultaneous smoothing and sharpening of color images, thereby addressing one of the classical challenges within the image processing field. We compare all the proposed image processing techniques with other state-of-the-art methods to show that they are competitive from both an objective (numerical) and a visual evaluation point of view.
    Pérez Benito, C. (2019). Color Image Processing based on Graph Theory [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/123955
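
    The soft-switching idea described above can be illustrated structurally with the Python sketch below, where a plain Sobel gradient magnitude stands in for the dissertation's graph-based pixel classification. The blending rule and all parameters are assumptions, not the actual model.

        # Hedged illustration of soft-switching smoothing: blend a strongly
        # smoothing filter with a gentler one according to a per-pixel
        # edge/flat classification (here a simple gradient-based stand-in).
        import numpy as np
        from scipy import ndimage

        def soft_switching_smooth(image, strong_sigma=2.0, soft_sigma=0.5, tau=0.1):
            """image: 2-D float array in [0, 1]; tau: assumed edge-sensitivity threshold."""
            strong = ndimage.gaussian_filter(image, strong_sigma)   # heavy smoothing
            soft = ndimage.gaussian_filter(image, soft_sigma)       # mild smoothing
            gx = ndimage.sobel(image, axis=1)
            gy = ndimage.sobel(image, axis=0)
            edges = np.hypot(gx, gy)
            w = np.clip(edges / (edges.max() + 1e-12) / tau, 0.0, 1.0)  # weight 1 near edges
            return w * soft + (1.0 - w) * strong

        img = np.random.rand(64, 64)
        out = soft_switching_smooth(img)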

    Enhancement in Footprint Image using Diverse Filtering Technique

    Footprint identification is the measurement of footprint features for recognizing the identity of a user. The footprint is universal, easy to capture, and does not change over time. Image enhancement is carried out to obtain an accurate image. One of the main objectives of any image filter is to enhance the pictorial information of the image. Hence, spatial-domain techniques are used, which directly process the pixel array of the given input image. In this paper, we have used various filtering techniques to enhance the image and compared them to determine which proves better in terms of accuracy.
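
    As a minimal illustration of applying diverse spatial-domain filters for comparison, the sketch below builds a small filter bank with OpenCV. The particular filters and kernel sizes are assumptions, not necessarily the set evaluated in the paper.

        # Hedged sketch of a spatial-domain filter bank for comparing enhancement results.
        import cv2
        import numpy as np

        def spatial_filter_bank(gray):
            """gray: 8-bit single-channel footprint image; returns name -> filtered image."""
            blur = cv2.blur(gray, (5, 5))                       # mean filter
            median = cv2.medianBlur(gray, 5)                    # median filter
            gauss = cv2.GaussianBlur(gray, (5, 5), 1.0)         # Gaussian filter
            # Unsharp masking: boost detail by adding back the high-pass residual.
            sharp = cv2.addWeighted(gray, 1.5, gauss, -0.5, 0)
            return {"mean": blur, "median": median, "gaussian": gauss, "unsharp": sharp}

        # Example (path is a placeholder):
        # results = spatial_filter_bank(cv2.imread("footprint.png", cv2.IMREAD_GRAYSCALE))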

    Automatic Blood Vessel Extraction of Fundus Images Employing Fuzzy Approach

    Diabetic retinopathy is a retinal vascular disease characterized by progressive deterioration of the blood vessels in the retina and distinguished by the appearance of different types of clinical lesions such as microaneurysms, hemorrhages, and exudates. Automated detection of these lesions plays a significant role in early diagnosis, enabling medication for the treatment of severe eye diseases and preventing visual loss. Extraction of blood vessels can facilitate ophthalmic services by automating computer-aided screening of fundus images. This paper presents blood vessel extraction algorithms with an ensemble of pre-processing and post-processing steps that enhance image quality for better analysis of retinal images for automated detection. Extensive performance-based evaluation of the proposed approaches is carried out over four databases on the basis of statistical parameters. Comparison of the two blood vessel extraction techniques on the different databases reveals that the fuzzy-based approach gives better results than the Kirsch-based algorithm: the proposed MBVEKA offers an average accuracy of 89%, and the proposed BVEFA 98%.
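
    For reference, the Kirsch-based variant relies on the classical compass operator, sketched below in Python. The kernels are the standard Kirsch masks, but the thresholding and post-processing used in the paper are omitted and the function name is an assumption.

        # Hedged sketch of the Kirsch compass operator: convolve with eight rotated
        # kernels and keep the strongest directional response per pixel.
        import numpy as np
        from scipy import ndimage

        def kirsch_response(gray):
            """gray: 2-D float array; returns per-pixel maximum compass response."""
            ring = np.array([5, 5, 5, -3, -3, -3, -3, -3])  # border of the base (north) kernel
            positions = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
            responses = []
            for shift in range(8):                 # eight compass directions
                kernel = np.zeros((3, 3))
                for value, (r, c) in zip(np.roll(ring, shift), positions):
                    kernel[r, c] = value
                responses.append(ndimage.convolve(gray, kernel, mode="nearest"))
            return np.max(responses, axis=0)

        img = np.random.rand(64, 64)
        edges = kirsch_response(img)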

    Expert System with an Embedded Imaging Module for Diagnosing Lung Diseases

    Lung diseases are one of the major causes of suffering and death in the world. Improved survival rates could be obtained if these diseases were detected at an early stage. Specialist doctors with the expertise and experience to interpret medical images and diagnose complex lung diseases are scarce. In this work, a rule-based expert system with an embedded imaging module is developed to assist general physicians in hospitals and clinics in diagnosing lung diseases whenever the services of specialist doctors are not available. The rule-based expert system contains a large knowledge base of data from various categories such as the patient's personal and medical history, clinical symptoms, clinical test results, and radiological information. An imaging module is integrated into the expert system for the enhancement of chest X-ray images. The goal of this module is to enhance the chest X-ray images so that they can provide details similar to more expensive methods such as MRI and CT scans. A new algorithm, a modified morphological grayscale top-hat transform, is introduced to increase the visibility of lung nodules in chest X-rays. A fuzzy inference technique is used to predict the probability of malignancy of the nodules. The output generated by the expert system was compared with the diagnoses made by specialist doctors. The system is able to produce results which are similar to the diagnoses made by the doctors and are acceptable by clinical standards.
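
    The modified top-hat transform itself is not specified in the abstract, so the sketch below shows only the standard grayscale top-hat enhancement it builds on, using OpenCV; the structuring-element size and gain are assumptions.

        # Hedged sketch of a plain grayscale top-hat enhancement for chest X-rays.
        import cv2

        def tophat_enhance(xray_gray, size=25, gain=2):
            """xray_gray: 8-bit chest X-ray; size: assumed structuring-element diameter."""
            kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (size, size))
            # Top-hat = image minus its morphological opening: keeps small bright blobs
            # (nodule candidates) and suppresses the large-scale background.
            tophat = cv2.morphologyEx(xray_gray, cv2.MORPH_TOPHAT, kernel)
            # Add the amplified top-hat response back to increase nodule visibility.
            return cv2.addWeighted(xray_gray, 1.0, tophat, float(gain), 0)

        # Example (path is a placeholder):
        # enhanced = tophat_enhance(cv2.imread("chest_xray.png", cv2.IMREAD_GRAYSCALE))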