
    Identification of Mouth Cancer laceration Using Machine Learning Approach

    This paper describes the identification of mouth cancer lacerations using a machine learning approach. The SVM algorithm is used for this purpose. Image segmentation operations are performed using resizing of the image, grayscale conversion, histogram equalization, and classification of the segmented image using SVM. SVM is used to reduce the complexity of the existing system, which comprises texture segmentation and an Artificial Neural Network (ANN) algorithm; SVM is a simpler machine learning algorithm compared to ANN. The outcome of the paper is to segment and classify the malignant region apart from the non-malignant region using the SVM classifier. SVM performs the classification based on a dataset that contains the training images.
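    As a rough illustration of the pipeline this abstract describes (resizing, grayscale conversion, histogram equalization, then SVM classification), a minimal sketch using OpenCV and scikit-learn follows; the library choices, kernel, and dataset layout are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of the described pipeline: resize, grayscale conversion,
# histogram equalization, then SVM classification. Library choices
# (OpenCV, scikit-learn) and all parameters are illustrative assumptions.
import cv2
import numpy as np
from sklearn.svm import SVC

def preprocess(path, size=(128, 128)):
    img = cv2.imread(path)                        # load image from disk
    img = cv2.resize(img, size)                   # resizing
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # grayscale conversion
    eq = cv2.equalizeHist(gray)                   # histogram equalization
    return eq.flatten() / 255.0                   # flatten into a feature vector

# Hypothetical training data: lists of image paths and 0/1 labels
# (0 = non-malignant, 1 = malignant).
def train_and_predict(train_paths, train_labels, test_paths):
    X_train = np.array([preprocess(p) for p in train_paths])
    X_test = np.array([preprocess(p) for p in test_paths])
    clf = SVC(kernel="rbf")                       # SVM classifier
    clf.fit(X_train, train_labels)
    return clf.predict(X_test)                    # predicted class per test image
```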

    Deep Learning of Unified Region, Edge, and Contour Models for Automated Image Segmentation

    Image segmentation is a fundamental and challenging problem in computer vision with applications spanning multiple areas, such as medical imaging, remote sensing, and autonomous vehicles. Recently, convolutional neural networks (CNNs) have gained traction in the design of automated segmentation pipelines. Although CNN-based models are adept at learning abstract features from raw image data, their performance is dependent on the availability and size of suitable training datasets. Additionally, these models are often unable to capture the details of object boundaries and generalize poorly to unseen classes. In this thesis, we devise novel methodologies that address these issues and establish robust representation learning frameworks for fully automatic semantic segmentation in medical imaging and mainstream computer vision. In particular, our contributions include (1) state-of-the-art 2D and 3D image segmentation networks for computer vision and medical image analysis, (2) an end-to-end trainable image segmentation framework that unifies CNNs and active contour models with learnable parameters for fast and robust object delineation, (3) a novel approach for disentangling edge and texture processing in segmentation networks, and (4) a novel few-shot learning model for both supervised and semi-supervised settings, where synergies between latent and image spaces are leveraged to learn to segment images given limited training data. Comment: PhD dissertation, UCLA, 202
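    The specific networks proposed in the thesis are not reproduced here; the following minimal encoder-decoder in PyTorch is only a sketch of the general class of CNN segmentation models the work builds on, with all layer sizes chosen arbitrarily.

```python
# Minimal encoder-decoder segmentation network in PyTorch, shown only to
# illustrate the general class of CNN segmentation models discussed above;
# it is not the architecture proposed in the thesis.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self, in_ch=3, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                          # downsample by 2
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2),  # upsample back to input size
            nn.ReLU(),
            nn.Conv2d(16, n_classes, 1),              # per-pixel class scores
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))          # logits of shape (N, C, H, W)

# Example: a batch of two 64x64 RGB images -> per-pixel class logits
logits = TinySegNet()(torch.randn(2, 3, 64, 64))
print(logits.shape)  # torch.Size([2, 2, 64, 64])
```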

    A SURVEY ON COLOR IMAGE SEGMENTATION THROUGH LEAKY INTEGRATE AND FIRE MODEL OF SPIKING NEURAL NETWORKS

    Neurological research shows that biological neurons store information in the timing of spikes. Spiking neural networks (SNNs) are the third generation of neural networks, which take into account the precise firing times of neurons for information encoding. In SNNs, computation is performed in the temporal (time-related) domain and relies on the timings between spikes. The leaky integrate-and-fire (LIF) neuron is probably the best-known example of a formal spiking neuron model. In this paper, we have simulated the LIF model of an SNN for performing image segmentation using K-Means clustering. Clustering here refers to grouping similar images in a database, based on different attributes of an image such as size, color, and texture. The purpose of clustering is to obtain meaningful results, effective storage, and fast retrieval in various areas. Image segmentation is the first step and also one of the most critical tasks of image analysis. Because of its simplicity and efficiency, a clustering approach is used for the segmentation of (textured) natural images. After the extraction of the image features using wavelets, the feature samples, handled as vectors, are grouped together in compact but well-separated clusters corresponding to each class of the image. Simulation results therefore demonstrate how SNNs can be applied with efficacy in image segmentation.
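    A minimal sketch of the two ingredients mentioned above, a leaky integrate-and-fire neuron and K-Means clustering of wavelet features, is shown below; the constants and the way the pieces would be combined are illustrative assumptions, not the exact scheme of the surveyed work.

```python
# Sketch of a leaky integrate-and-fire (LIF) neuron and of K-Means clustering
# over wavelet features for segmentation. All constants, the choice of PyWavelets
# and scikit-learn, and the single-level Haar transform are illustrative assumptions.
import numpy as np
import pywt
from sklearn.cluster import KMeans

def lif_spike_times(current, dt=1.0, tau=20.0, v_rest=0.0, v_th=1.0):
    """Simulate a LIF neuron driven by an input current; return spike times."""
    v, spikes = v_rest, []
    for t, i_t in enumerate(current):
        v += dt * (-(v - v_rest) + i_t) / tau   # leaky integration of the input
        if v >= v_th:                           # threshold crossing -> emit a spike
            spikes.append(t * dt)
            v = v_rest                          # reset the membrane potential
    return spikes

def kmeans_segment(gray_image, n_clusters=3):
    """Cluster wavelet-based pixel features into segments."""
    approx, _ = pywt.dwt2(gray_image, "haar")   # wavelet approximation band
    features = approx.reshape(-1, 1)            # one feature per coefficient
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)
    return labels.reshape(approx.shape)         # label map at half resolution
```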

    Convolutional Neural Network on Three Orthogonal Planes for Dynamic Texture Classification

    Dynamic Textures (DTs) are sequences of images of moving scenes, such as smoke, vegetation, and fire, that exhibit certain stationarity properties in time. The analysis of DTs is important for recognition, segmentation, synthesis, and retrieval in a range of applications including surveillance, medical imaging, and remote sensing. Deep learning methods have shown impressive results and are now the new state of the art for a wide range of computer vision tasks, including image and video recognition and segmentation. In particular, Convolutional Neural Networks (CNNs) have recently proven to be well suited for texture analysis with a design similar to a filter-bank approach. In this paper, we develop a new approach to DT analysis based on a CNN method applied on three orthogonal planes xy, xt and yt. We train CNNs on spatial frames and temporal slices extracted from the DT sequences and combine their outputs to obtain a competitive DT classifier. Our results on a wide range of commonly used DT classification benchmark datasets prove the robustness of our approach. Significant improvement over the state of the art is shown on the larger datasets. Comment: 19 pages, 10 figures
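    A small sketch of the three-orthogonal-plane idea follows: slice a (T, H, W) video volume along the xy, xt and yt planes, score each slice with a per-plane classifier, and average the scores. Extracting only one central slice per plane and fusing by simple averaging are simplifying assumptions for illustration, not the paper's exact procedure.

```python
# Extract one central slice per orthogonal plane of a video volume and fuse
# per-plane classifier outputs by averaging. The single-slice extraction and
# the averaging fusion rule are illustrative simplifications.
import numpy as np

def orthogonal_planes(video):
    """video: array of shape (T, H, W). Return one central slice per plane."""
    T, H, W = video.shape
    xy = video[T // 2, :, :]   # spatial frame at the middle time step
    xt = video[:, H // 2, :]   # temporal slice at a fixed row (y)
    yt = video[:, :, W // 2]   # temporal slice at a fixed column (x)
    return xy, xt, yt

def fuse_predictions(video, plane_classifiers):
    """Average class-probability vectors from three per-plane classifiers.

    plane_classifiers is assumed to be a (clf_xy, clf_xt, clf_yt) tuple of
    callables, each mapping a 2-D slice to a vector of class probabilities;
    in the paper these would be the CNNs trained on each plane.
    """
    slices = orthogonal_planes(video)
    probs = [clf(s) for clf, s in zip(plane_classifiers, slices)]
    return np.mean(probs, axis=0)   # simple late fusion by averaging
```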