
    Identification of Shadowed Areas to Improve Ragweed Leaf Segmentation

    As part of a project targeting geometrical structure analysis and identification of ragweed leaves, sample images were created. Although the images were taken under near-optimal conditions, the samples still contain cast-shadow noise. The proposed method improves the efficiency of chromaticity-based primary shape segmentation by identifying and re-classifying the shadowed areas. The primary classification of each point is based on thresholding the Hue channel of the Hue/Saturation/Value color space. In this work, the primary classification is enhanced by thresholding an intra-class normalized weight computed from the class-specific Value channel. The corrective step removes areas marked as shadow from the object class. The idea rests on the assumption that the image contains a single, flat leaf in front of a homogeneous background, with no restrictions on color or illumination. Thus, the parameters of the imaging system and the light sources influence the homogeneity of image parts; however, vague shadows differ only in intensity, and hard shadows can only be cast on the background.
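    The two-stage classification described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the hue range, the min-max form of the intra-class normalized weight, and the 0.5 shadow threshold are all assumptions.

```python
def segment_leaf(pixels, hue_lo=60, hue_hi=180, shadow_weight=0.5):
    """pixels: list of (hue, sat, value) tuples, hue in [0, 360), value in [0, 1].

    Stage 1: primary classification by thresholding the Hue channel.
    Stage 2: within the object class, compute an intra-class normalized
    weight from the Value channel and drop low-weight (shadowed) pixels.
    """
    # Stage 1: a hue threshold selects green-ish object pixels.
    obj = [i for i, (h, s, v) in enumerate(pixels) if hue_lo <= h <= hue_hi]
    if not obj:
        return []
    # Stage 2: normalize Value within the object class to [0, 1].
    vals = [pixels[i][2] for i in obj]
    v_min, v_max = min(vals), max(vals)
    span = (v_max - v_min) or 1.0
    # Keep only pixels whose normalized weight exceeds the shadow threshold.
    return [i for i in obj
            if (pixels[i][2] - v_min) / span >= shadow_weight]
```

    Here a dark pixel whose hue matches the leaf is re-classified as shadow rather than object, which is exactly the corrective step the abstract describes.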

    Automatic Segmentation of Trees in Dynamic Outdoor Environments

    Segmentation in dynamic outdoor environments can be difficult when the illumination levels and other aspects of the scene cannot be controlled. Specifically, in orchard and vineyard automation contexts, a background material is often used to shield a camera's field of view from other rows of crops. In this paper, we describe a method that uses superpixels to determine low-texture regions of the image that correspond to the background material, and then show how this information can be integrated with the color distribution of the image to compute optimal segmentation parameters for the objects of interest. Quantitative and qualitative experiments demonstrate the suitability of this approach for dynamic outdoor environments, specifically for tree reconstruction and apple flower detection applications.
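    The low-texture test at the heart of this approach can be sketched with a simplification: a fixed block grid stands in for real superpixels, and per-block intensity variance stands in for the paper's texture measure. Both substitutions, and the variance threshold, are assumptions for illustration only.

```python
def low_texture_blocks(image, block=2, var_thresh=0.01):
    """image: 2D list of grayscale intensities in [0, 1].

    Returns (row, col) block coordinates whose intensity variance falls
    below the threshold, i.e. candidate background-material regions."""
    flagged = []
    rows, cols = len(image), len(image[0])
    for r in range(0, rows - block + 1, block):
        for c in range(0, cols - block + 1, block):
            vals = [image[r + i][c + j] for i in range(block) for j in range(block)]
            mean = sum(vals) / len(vals)
            var = sum((v - mean) ** 2 for v in vals) / len(vals)
            if var < var_thresh:  # nearly uniform block: low texture
                flagged.append((r // block, c // block))
    return flagged
```

    A real superpixel algorithm (e.g. SLIC) would replace the rigid grid with regions that follow image boundaries, but the variance criterion plays the same role.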

    Microcalcifications Detection Using Image And Signal Processing Techniques For Early Detection Of Breast Cancer

    Breast cancer has become a severe health problem around the world. Early diagnosis is an important factor in surviving this disease. The earliest signs of potential breast cancer distinguishable by current screening techniques are microcalcifications (MCs). MCs are small crystals of calcium apatite, ranging in size from single crystals of 0.1 mm to 0.5 mm up to groups a few centimeters in diameter. They are the first indication of breast cancer in more than 40% of all breast cancer cases, making their diagnosis critical. This dissertation proposes several segmentation techniques for detecting and isolating point microcalcifications: Otsu's Method, Balanced Histogram Thresholding, the Iterative Method, Maximum Entropy, Moment Preserving, and a Genetic Algorithm. These methods were applied to medical images to detect microcalcifications. Results from the application of these techniques are presented and their efficiency for early detection of breast cancer is explained, together with the theories and algorithms underlying them.
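    Of the thresholding techniques listed, Otsu's method is the most standard and can be stated compactly: choose the gray level that maximizes the between-class variance of the two resulting pixel classes. A minimal histogram-based version (not the dissertation's code) looks like this:

```python
def otsu_threshold(histogram):
    """Otsu's method: return the gray level t that maximizes between-class
    variance, where levels <= t are background and levels > t foreground.
    histogram[g] = number of pixels with gray level g."""
    total = sum(histogram)
    total_sum = sum(g * h for g, h in enumerate(histogram))
    best_t, best_var = 0, -1.0
    w_bg = sum_bg = 0
    for t, count in enumerate(histogram):
        w_bg += count                    # background pixel count
        if w_bg == 0:
            continue
        w_fg = total - w_bg              # foreground pixel count
        if w_fg == 0:
            break
        sum_bg += t * count
        mu_bg = sum_bg / w_bg            # background mean level
        mu_fg = (total_sum - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mu_bg - mu_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

    On a bimodal histogram (bright MCs against darker tissue) the maximizer lands in the valley between the two modes, which is what makes the method suitable for isolating bright point calcifications.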

    An Efficient Machine Learning Approach for Prediction of Conjunctiva Hyperemia Assessment using Feature Extraction Methods

    The human eye is one of the most intricate sense organs. Protecting the eyes against disorders that can cause vision loss if left untreated is crucial to maintaining good sight. Early detection of eye diseases is therefore essential to prevent unintended consequences and to control a disease's continued progression. Conjunctivitis is one such eye condition, characterized by conjunctival inflammation and resulting in symptoms such as hyperemia (redness) due to increased blood flow. With the best treatments, modern techniques, and early, precise diagnosis by professionals, it can be cured or greatly reduced. However, proper diagnosis of the underlying cause of visual problems is frequently postponed or never carried out because of a shortage of diagnostic experts, which leads to insufficient or delayed corrective treatment. For diagnosing and evaluating conjunctivitis, segmentation methods are essential for locating and measuring hyperemic regions. In the present study, segmentation techniques are applied along with feature extraction techniques to provide an effective machine learning framework for the prediction of eye problems. Using the discrete cosine transform (DCT), the segmented regions of interest are converted into feature vectors. These feature vectors are then used to train machine learning classifiers, including random forests and neural networks, which achieve a promising accuracy of 95.92%. This approach enables ophthalmologists to make more objective and accurate assessments, aiding in disease severity evaluation.
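    The DCT feature-extraction step can be illustrated in one dimension. The study would apply a 2-D DCT to segmented image regions and train classifiers on the result; the sketch below only shows the core idea, that the first few type-II DCT coefficients of an intensity signal form a compact feature vector. The function name and the choice of four coefficients are assumptions.

```python
import math

def dct_features(signal, n_coeffs=4):
    """Type-II DCT of a 1-D intensity signal; the leading coefficients
    summarize the signal's low-frequency content and serve as features."""
    n = len(signal)
    feats = []
    for k in range(n_coeffs):
        # X_k = sum_i x_i * cos(pi * (i + 0.5) * k / n)
        feats.append(sum(x * math.cos(math.pi * (i + 0.5) * k / n)
                         for i, x in enumerate(signal)))
    return feats
```

    For a constant region the energy collapses into the k = 0 coefficient, while textured or hyperemic regions spread energy into higher coefficients, which is what gives the classifier something to separate.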

    A novel regional-minima image segmentation method for fluid transport simulations in unresolved rock images

    This study would not be possible without digital rock images provided by the Digital Rocks Portal and its contributors (https://www.digitalrocksportal.org/). Peer reviewed.

    Automatic expert system based on images for accuracy crop row detection in maize fields

    This paper proposes an automatic expert system for accurate crop row detection in maize fields based on images acquired from a vision system. Different applications in maize, particularly those based on site-specific treatments, require the identification of the crop rows. The vision system is designed with a defined geometry and installed onboard a mobile agricultural vehicle, and is therefore subject to vibrations, gyros, and uncontrolled movements. Crop rows can be estimated by applying geometrical parameters under image perspective projection. Because of these undesired effects, the estimation most often turns out inaccurate compared to the real crop rows. The proposed expert system exploits human knowledge, which is mapped into two modules based on image processing techniques. The first separates green plants (crops and weeds) from the rest (soil, stones, and others). The second is based on the system geometry: the expected crop lines are mapped onto the image, and a correction is then applied through the well-tested and robust Theil–Sen estimator to adjust them to the real ones. Its performance compares favorably against the classical Pearson product–moment correlation coefficient.
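    The Theil–Sen estimator mentioned in the correction step is a standard robust line fit: the slope is the median of all pairwise slopes, so a minority of outlier points (here, stray plant pixels off the true crop row) cannot pull the line away. A minimal version, not the paper's implementation:

```python
import statistics

def theil_sen(points):
    """Robust line fit y = slope * x + intercept for a list of (x, y) points.
    The slope is the median of all pairwise slopes; the intercept is the
    median residual under that slope."""
    slopes = [(y2 - y1) / (x2 - x1)
              for i, (x1, y1) in enumerate(points)
              for (x2, y2) in points[i + 1:]
              if x2 != x1]
    slope = statistics.median(slopes)
    intercept = statistics.median(y - slope * x for x, y in points)
    return slope, intercept
```

    With four collinear points on y = 2x + 1 and one gross outlier, the fit still recovers slope 2 and intercept 1, which is exactly the robustness property that makes it suitable for noisy, vibration-affected field images.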

    Smoke plume segmentation of wildfire images

    This work is framed within the field of neural networks in deep learning. The aim of the project is to analyze and apply the neural networks available on the market today to solve a specific problem: the segmentation of smoke plumes in wildfire images. A study of the neural networks used to solve image segmentation problems was carried out, together with a subsequent 3D reconstruction of these smoke plumes. The algorithm finally chosen is the UNet model, a convolutional neural network based on the autoencoder structure with skip connections, which performs self-learning tasks to finally obtain a prediction of the trained class to be segmented, in this case smoke plumes.
    A comparison between traditional algorithms and the UNet model applied using deep learning was then carried out, showing that both quantitatively and qualitatively the best results are achieved with the UNet model, although at the cost of more computation time. All these models were developed in the Python programming language using the TensorFlow and Keras machine learning libraries. Within the UNet model, multiple experiments were carried out to obtain the hyperparameter values best suited to the project application, yielding an accuracy of 93.45% in the final model for smoke segmentation in wildfire images.
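    The UNet's defining feature, the skip connection between encoder and decoder, can be shown with a toy 1-D data-flow sketch. This is purely structural: there is no learning, average pooling stands in for the contracting convolutions, repetition stands in for up-convolutions, and averaging stands in for concatenation-plus-convolution. None of it is the project's actual Keras model.

```python
def encode(signal):
    """Downsample by averaging adjacent pairs (a stand-in for pooling)."""
    return [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]

def decode(coarse, skip):
    """Upsample by repetition, then fuse with the skip connection by
    averaging, so fine detail from the encoder reaches the output."""
    up = [v for v in coarse for _ in (0, 1)]
    return [(u + s) / 2 for u, s in zip(up, skip)]

def unet_like(signal):
    skip = signal                 # high-resolution features saved for the skip
    coarse = encode(signal)       # contracting path
    return decode(coarse, skip)   # expanding path fused with the skip
```

    The point of the skip is visible in the output: without it, the decoder could only reproduce the blurred, downsampled signal; with it, per-position detail survives the bottleneck, which is why UNet outputs have sharp segmentation boundaries.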

    A novel license plate character segmentation method for different types of vehicle license plates.

    License plate character segmentation (LPCS) is a very important part of a vehicle license plate recognition (LPR) system. The accuracy of an LPR system largely depends on two parts, namely license plate detection (LPD) and LPCS. Different countries have different types and shapes of LPs. Based on character position on the LP, two types of LPs are found worldwide: single-row (SR) and double-row (DR) plates. Most LPCS methods are designed only for SR LPs. This paper proposes a novel LPCS method for both SR and DR types of LPs. Experimental results show the real-time effectiveness of the proposed method: its accuracy is 99.05% and its average computation time is 27 ms, outperforming other existing methods.
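    A common LPCS baseline, which the abstract's novel method would be compared against, is vertical-projection segmentation: sum each column of the binarized plate, and treat runs of non-empty columns as candidate characters. The sketch below is that baseline, not the paper's method; for a DR plate it would be applied to each row band separately.

```python
def segment_characters(binary):
    """binary: 2D list of 0/1 values for one row band of a binarized plate.
    Returns (start_col, end_col) spans of the candidate characters."""
    # Vertical projection: count foreground pixels in each column.
    cols = [sum(row[c] for row in binary) for c in range(len(binary[0]))]
    spans, start = [], None
    for c, v in enumerate(cols):
        if v and start is None:
            start = c                      # a character run begins
        elif not v and start is not None:
            spans.append((start, c - 1))   # the run ends at the empty column
            start = None
    if start is not None:                  # run extends to the plate edge
        spans.append((start, len(cols) - 1))
    return spans
```

    Projection methods are fast, which is why competitive LPCS approaches must justify themselves on accuracy for touching, tilted, or double-row characters where simple projections fail.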

    Fast marching over the 2D Gabor magnitude domain for tongue body segmentation

    Author name used in this publication: David Zhang. Version of Record. Published.

    Automatic segmentation in CMR - Development and validation of algorithms for left ventricular function, myocardium at risk and myocardial infarction

    In this thesis, four new algorithms are presented for automatic segmentation in cardiovascular magnetic resonance (CMR): automatic segmentation of the left ventricle, myocardial infarction, and myocardium at risk in two different image types. All four algorithms were implemented in freely available software for image analysis and were validated against reference delineations, showing low bias and high regional agreement. CMR is the most accurate and reproducible method for assessment of left ventricular mass and volumes, and the reference standard for assessment of myocardial infarction. CMR has also been validated against single photon emission computed tomography (SPECT) for assessment of myocardium at risk up to one week after acute myocardial infarction. However, the clinical standard for quantification of left ventricular mass and volumes is manual delineation, which has been shown to have a large bias between observers from different sites; for myocardium at risk and myocardial infarction there is no clinical standard, owing to the varying results reported for previously suggested threshold methods. The new automatic algorithms were all based on intensity classification by Expectation Maximization (EM) and the incorporation of a priori information specific to each application. Validation was performed in large patient cohorts with regard to bias in clinical parameters and regional agreement measured as the Dice Similarity Coefficient (DSC). Further, images with reference delineations of the left ventricle were made available for future benchmarking of left ventricular segmentation, and the new automatic algorithms for segmentation of myocardium at risk and myocardial infarction were directly compared to the previously suggested intensity threshold methods.
    Combining intensity classification by EM with a priori information, as in the new automatic algorithms, was shown to be superior to previous methods, and specifically to the previously suggested threshold methods for myocardium at risk and myocardial infarction. The added value of using a priori information and intensity correction was significant as measured by DSC, even though it was not significant for bias. The previously suggested methods for infarct quantification performed more poorly on the new multi-center, multi-vendor patient data than in their original validation in animal studies or single-center patient studies. Thus, the results in this thesis also show the importance of using both bias and DSC for validation, and of performing validation on images of representative quality, as in multi-center, multi-vendor patient studies.
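    The EM intensity-classification core can be sketched in one dimension as a two-component Gaussian mixture fit. This is only the generic EM building block the thesis starts from; the a priori anatomical information and intensity correction that the thesis adds on top, and the initialization used here, are not from the source.

```python
import math

def em_two_gaussians(xs, iters=50):
    """Fit a 1-D two-component Gaussian mixture to intensities xs by EM
    and return the two component means (e.g. healthy vs. infarcted tissue)."""
    xs = sorted(xs)
    # Crude initialization (an assumption): split the sorted data in half.
    half = len(xs) // 2
    mu = [sum(xs[:half]) / half, sum(xs[half:]) / (len(xs) - half)]
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each sample.
        resp = []
        for x in xs:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append((p[0] / s, p[1] / s))
        # M-step: re-estimate weights, means, and variances.
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, xs)) / nk or 1e-6
    return mu
```

    Classification then assigns each voxel to the component with the higher responsibility; the thesis's contribution is constraining this purely intensity-driven step with application-specific a priori information.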