
    A Review of Skin Melanoma Detection Based on Machine Learning

    Dermatological malignancies such as skin cancer are among the most common human cancers in people with fair skin. Although malignant melanoma is the type of skin cancer associated with the highest mortality rate, non-melanoma skin tumors are far more common. According to the National Cancer Institute, the incidence of both melanoma and non-melanoma skin cancers is rising at a fairly steady rate. Early detection of skin cancer reduces mortality and helps patients live longer. In this research, we survey and compare various approaches for early-stage melanoma skin cancer detection. Pathologists diagnose skin lesions using biopsies, basing their decisions largely on cell anatomy and tissue distribution. In many cases, however, the decision is subjective and results in significant inter-observer variability. The application of quantitative measures by computer-aided diagnostic tools, on the other hand, allows for more accurate and objective judgment. This research examines both earlier work and current advancements in the field of machine-aided skin cancer detection (MASCD).

    Part-aware Prototype Network for Few-shot Semantic Segmentation

    Few-shot semantic segmentation aims to learn to segment new object classes with only a few annotated examples, which has a wide range of real-world applications. Most existing methods either focus on the restrictive setting of one-way few-shot segmentation or suffer from incomplete coverage of object regions. In this paper, we propose a novel few-shot semantic segmentation framework based on the prototype representation. Our key idea is to decompose the holistic class representation into a set of part-aware prototypes, capable of capturing diverse and fine-grained object features. In addition, we propose to leverage unlabeled data to enrich our part-aware prototypes, resulting in better modeling of intra-class variations of semantic objects. We develop a novel graph neural network model to generate and enhance the proposed part-aware prototypes based on labeled and unlabeled images. Extensive experimental evaluations on two benchmarks show that our method outperforms the prior art by a sizable margin. (Comment: ECCV 2020)
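The prototype idea described above can be illustrated with a minimal NumPy sketch: a class prototype is obtained by masked average pooling of support-image features, "part" prototypes by a toy k-means over the foreground features, and query pixels are then scored by cosine similarity to the nearest prototype. This is an illustrative sketch under simplified assumptions, not the paper's graph-neural-network method; the function names and the plain k-means clustering step are assumptions.

```python
import numpy as np

def masked_average_prototype(features, mask):
    # features: (H, W, C) feature map; mask: (H, W) binary foreground mask.
    fg = features[mask.astype(bool)]          # (N, C) foreground feature vectors
    return fg.mean(axis=0)                    # (C,) holistic class prototype

def part_prototypes(features, mask, k=3, iters=10):
    # Toy k-means over foreground features to obtain k "part" prototypes.
    fg = features[mask.astype(bool)]
    rng = np.random.default_rng(0)
    centers = fg[rng.choice(len(fg), k, replace=False)]
    for _ in range(iters):
        d = ((fg[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # (N, k)
        assign = d.argmin(axis=1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = fg[assign == j].mean(axis=0)
    return centers                            # (k, C) part prototypes

def segment_query(query_features, prototypes):
    # Score each query pixel by cosine similarity to its nearest prototype.
    q = query_features / np.linalg.norm(query_features, axis=-1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=-1, keepdims=True)
    sim = np.einsum('hwc,kc->hwk', q, p)      # (H, W, k) similarities
    return sim.max(axis=-1)                   # (H, W) foreground score map
```

Thresholding the returned score map yields a binary segmentation; multiple part prototypes let different object regions match different centers instead of a single global average.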

    Leveraging siamese networks for one-shot intrusion detection model

    The use of supervised Machine Learning (ML) to enhance Intrusion Detection Systems (IDS) has been the subject of significant research. Supervised ML is based upon learning by example, demanding significant volumes of representative instances for effective training and requiring the model to be retrained for every unseen cyber-attack class. However, retraining the models in situ renders the network susceptible to attacks owing to the time window required to acquire a sufficient volume of data. Although anomaly detection systems provide a coarse-grained defence against unseen attacks, these approaches are significantly less accurate and suffer from high false-positive rates. Here, a complementary approach referred to as “One-Shot Learning”, whereby a limited number of examples of a new attack class are used to identify it among many classes, is detailed. The model grants a new cyber-attack classification opportunity for classes that were not seen during training, without retraining. A Siamese Network is trained to differentiate between classes based on pair similarity rather than on individual features, allowing it to identify new and previously unseen attacks. The performance of a pre-trained model in classifying new attack classes from only one example is evaluated using three mainstream IDS datasets: CICIDS2017, NSL-KDD, and KDD Cup’99. The results confirm the adaptability of the model in classifying unseen attacks and the trade-off between performance and the need for distinctive class representations.
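The pair-similarity mechanism can be sketched in a few lines: a shared embedding maps two inputs into the same space, a similarity score is computed from the distance between embeddings, and a query is assigned to whichever class's single support example it most resembles. This is a minimal NumPy sketch, not the paper's trained network; the linear-plus-tanh embedding, the L1-based similarity, and all function names are illustrative assumptions.

```python
import numpy as np

def embed(x, W):
    # Toy linear embedding standing in for the trained Siamese sub-network.
    return np.tanh(x @ W)

def pair_similarity(a, b, W):
    # Similarity from the L1 distance between the two embeddings,
    # as is common in Siamese one-shot setups.
    d = np.abs(embed(a, W) - embed(b, W)).sum()
    return np.exp(-d)          # in (0, 1]; identical inputs score 1

def one_shot_classify(query, support, W):
    # support: {class_name: single labeled example vector}.
    # The query is labeled with the class of its most similar support example,
    # so new attack classes need only one example, not retraining.
    scores = {c: pair_similarity(query, ex, W) for c, ex in support.items()}
    return max(scores, key=scores.get)
```

Because classification reduces to a nearest-support lookup, adding a new attack class amounts to adding one labeled vector to `support`.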

    A Parameter-Efficient Deep Dense Residual Convolutional Neural Network for Volumetric Brain Tissue Segmentation from Magnetic Resonance Images

    Brain tissue segmentation is a common medical image processing problem that deals with identifying a region of interest in the human brain from medical scans. It is a fundamental step towards neuroscience research and clinical diagnosis. Magnetic resonance (MR) images are widely used for segmentation in view of their non-invasive acquisition, high spatial resolution, and varied contrast information. Accurate segmentation of brain tissues from MR images is very challenging due to the presence of motion artifacts, low signal-to-noise ratio, intensity overlaps, and intra- and inter-subject variability. Convolutional neural networks (CNNs) recently employed for segmentation provide remarkable advantages over traditional and manual segmentation methods; however, their complex architectures and large number of parameters make them computationally expensive and difficult to optimize. In this thesis, a novel learning-based algorithm using a three-dimensional deep convolutional neural network is proposed for efficient parameter reduction and compact feature representation, learning an end-to-end mapping of T1-weighted (T1w) and/or T2-weighted (T2w) brain MR images to the probability of each voxel belonging to the different brain tissue labels, namely white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF). The basic idea in the proposed method is to use densely connected convolutional layers and residual skip-connections to increase representation capacity, facilitate better gradient flow, improve learning, and significantly reduce the number of parameters in the network. The network is independently trained on three different loss functions: cross-entropy, Dice similarity, and a combination of the two; the results are compared to identify the better loss function for training.
    The proposed model has the number of network parameters reduced by a significant amount compared to state-of-the-art methods in brain tissue segmentation. Experiments are performed using the single-modality IBSR18 dataset, containing high-resolution T1-weighted MR scans of diverse age groups, and the multi-modality iSeg-2017 dataset, containing T1w and T2w MR scans of infants. It is shown that the proposed method provides the best performance on the test sets of both datasets amongst all existing deep-learning-based methods for brain tissue segmentation from MR images, and achieves competitive performance in the iSeg-2017 challenge with 47% to 98% fewer parameters than the other deep-learning-based architectures.
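The three training objectives compared in the thesis can be written compactly. The following is a generic NumPy sketch of a soft Dice loss and a weighted cross-entropy/Dice combination over per-voxel class probabilities, not the thesis code; the equal `alpha` weighting of the combined loss is an assumption.

```python
import numpy as np

def soft_dice_loss(probs, target, eps=1e-6):
    # probs:  (V, L) predicted per-voxel class probabilities
    # target: (V, L) one-hot ground-truth labels (e.g. WM/GM/CSF)
    inter = (probs * target).sum(axis=0)              # per-class overlap
    denom = probs.sum(axis=0) + target.sum(axis=0)    # per-class mass
    dice = (2.0 * inter + eps) / (denom + eps)        # soft Dice per class
    return 1.0 - dice.mean()                          # 0 for a perfect match

def combined_loss(probs, target, alpha=0.5, eps=1e-12):
    # Weighted sum of cross-entropy and Dice, the third objective the
    # thesis compares; alpha = 0.5 is an illustrative assumption.
    ce = -(target * np.log(probs + eps)).sum(axis=1).mean()
    return alpha * ce + (1.0 - alpha) * soft_dice_loss(probs, target)
```

Cross-entropy penalizes each voxel independently, while the Dice term directly optimizes per-class overlap, which matters when tissue classes are imbalanced; combining them trades off the two behaviors.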