    An Overview of Inflammatory Spondylitis for Biomedical Imaging Using Deep Neural Networks

    Ankylosing Spondylitis (AS) is a chronic inflammatory illness of the axial spine that can present with a range of clinical symptoms and signs. The disease is most frequently characterized by progressive spinal stiffness and persistent back pain, and it primarily involves the sacroiliac joints, spine, peripheral joints, entheses, and digits. AS symptoms include reduced spinal mobility, aberrant posture, hip involvement, dactylitis, enthesitis, peripheral arthritis, and buttock pain. With their exceptional image classification ability, deep learning techniques in artificial intelligence (AI) have transformed the diagnosis of AS. Despite these excellent results, adoption in clinical practice remains modest. Because of safety and health concerns, medical imaging applications of deep learning must be viewed with caution: false positives and false negatives have far-reaching effects on patient well-being and must be taken into account. These concerns stem from the fact that state-of-the-art deep learning (DL) algorithms, compared with conventional machine learning (ML) algorithms, have complicated interconnected structures, millions of parameters, a "black box" character, and poorly understood internal workings. Explainable AI (XAI) approaches make model predictions easier to comprehend, which promotes system reliability, speeds up the diagnosis of AS, and supports compliance with legal requirements.
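
    The review's point about XAI can be made concrete with a very simple explanation technique. The sketch below computes a vanilla gradient saliency map for a placeholder CNN; the backbone, the two-class head, and the random tensor standing in for a real imaging slice are all assumptions for illustration, not a method from any study surveyed here.

```python
# Minimal sketch of a gradient saliency map, one of the simpler XAI techniques.
# The backbone, the 2-class head, and the random "image" are placeholders.
import torch
import torchvision.models as models

model = models.resnet18(weights=None)                   # hypothetical backbone
model.fc = torch.nn.Linear(model.fc.in_features, 2)     # assumed AS / non-AS head
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for an imaging slice

logits = model(image)
score = logits[0, 1]                                    # score of the assumed "AS" class
score.backward()

# Pixel-wise importance: magnitude of the gradient of the class score w.r.t. the input.
saliency = image.grad.abs().max(dim=1)[0]               # shape (1, 224, 224)
```

    Overlaying such a map on the input lets a clinician check whether the network attends to anatomically plausible regions rather than imaging artefacts, which is the kind of reliability check the abstract argues for.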

    KNN Algorithm for Identification of Tomato Disease Based on Image Segmentation Using Enhanced K-Means Clustering

    Image segmentation is an important step in identifying tomato diseases, and k-means clustering is a technique frequently used for it. One of the main problems with this technique is local minima: the clusters formed are unsuitable because the initial centroids are chosen poorly. For image data, this leads to poor segmentation, in which parts that are actually important are erased or background remains in the recognition process, which in turn lowers accuracy. This research proposes an image segmentation method based on the k-means clustering algorithm combined with cosine similarity as its contribution. The cosine method determines the initial centroids by calculating the similarity of each image feature based on color and dividing the features into several categories (low, medium, and high values). Based on the results obtained, the proposed algorithm is able to segment and distinguish leaf and background images well: kNN with the proposed segmentation reaches 94.90% accuracy, 99.50% sensitivity, and 93.75% specificity. kNN with standard k-means segmentation obtains 92.46% accuracy, 96.30% sensitivity, and 91.50% specificity, while kNN without segmentation obtains 90.22% accuracy, 93.30% sensitivity, and 89.45% specificity.
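
    As a rough illustration of the seeding idea described above, the sketch below picks initial k-means centroids from pixel colors binned by cosine similarity into low, medium, and high groups. The reference vector (the mean color) and the tercile split are assumptions on my part; the paper's exact procedure may differ.

```python
# Hedged sketch: cosine-similarity-seeded k-means for leaf/background segmentation.
import numpy as np
from sklearn.cluster import KMeans

def cosine_seeded_kmeans(pixels, n_clusters=3):
    """pixels: (N, 3) float array of RGB values; returns a cluster label per pixel."""
    ref = pixels.mean(axis=0)                            # assumed reference color
    sim = pixels @ ref / (np.linalg.norm(pixels, axis=1) * np.linalg.norm(ref) + 1e-9)
    edges = np.quantile(sim, [1 / 3, 2 / 3])             # tercile boundaries
    groups = np.digitize(sim, edges)                     # 0 = low, 1 = medium, 2 = high
    init = np.stack([pixels[groups == g].mean(axis=0) for g in range(n_clusters)])
    return KMeans(n_clusters=n_clusters, init=init, n_init=1).fit_predict(pixels)
```

    The labels can then be reshaped back to the image grid, and the kNN classifier is trained on features extracted from the segmented leaf region.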

    Machine learning methods for discriminating natural targets in seabed imagery

    The research in this thesis concerns feature-based machine learning processes and methods for discriminating qualitative natural targets in seabed imagery. The applications considered typically involve time-consuming manual processing stages in an industrial setting. An aim of the research is to assist human analysts by expediting the tedious interpretative tasks using machine methods. Some novel approaches are devised and investigated for solving the application problems. These investigations are organised into four coherent case studies linked by common underlying technical themes and methods. The first study addresses pockmark discrimination in a digital bathymetry model. Manual identification and mapping of even a relatively small number of these landform objects is an expensive process. A novel, supervised machine learning approach to automating the task is presented. The process maps the boundaries of ≈ 2000 pockmarks in seconds - a task that would take a human analyst days to complete. The second case study investigates different feature creation methods for automatically discriminating sidescan sonar image textures characteristic of Sabellaria spinulosa colonisation. Results from a comparison of several textural feature creation methods on sonar waterfall imagery show that Gabor filter banks yield some of the best results. A further empirical investigation into the filter bank features created on sonar mosaic imagery leads to the identification of a useful configuration and filter parameter ranges for discriminating the target textures in the imagery. Feature saliency estimation is a vital stage in the machine process. Case study three concerns distance measures for the evaluation and ranking of features on sonar imagery. Two novel consensus methods for creating a more robust ranking are proposed. Experimental results show that the consensus methods can improve robustness over a range of feature parameterisations and various seabed texture classification tasks. The final case study is more qualitative in nature and brings together a number of ideas, applied to the classification of target regions in real-world sonar mosaic imagery. A number of technical challenges arose and were surmounted by devising a novel, hybrid unsupervised method. This fully automated machine approach was compared with a supervised approach in an application to image-based sediment type discrimination. The hybrid unsupervised method produces a plausible class map in a few minutes of processing time. It is concluded that the versatile, novel process should generalise to the discrimination of other subjective natural targets in real-world seabed imagery, such as Sabellaria textures and pockmarks (with appropriate features and feature tuning). Further, the full automation of pockmark and Sabellaria discrimination is feasible within this framework.
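
    For the second case study, a Gabor filter bank is a standard texture descriptor, so a generic extractor is easy to sketch. The frequencies and orientations below are illustrative defaults, not the tuned configuration identified in the thesis.

```python
# Hedged sketch of a generic Gabor filter-bank texture descriptor.
import numpy as np
from skimage.filters import gabor

def gabor_features(image, frequencies=(0.1, 0.2, 0.4), n_orientations=4):
    """image: 2-D greyscale array. Returns mean and variance of each filter response."""
    feats = []
    for f in frequencies:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            real, imag = gabor(image, frequency=f, theta=theta)
            mag = np.hypot(real, imag)                   # response magnitude
            feats.extend([mag.mean(), mag.var()])        # simple per-filter statistics
    return np.asarray(feats)
```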

    A Survey on Unsupervised Anomaly Detection Algorithms for Industrial Images

    In line with the development of Industry 4.0, surface defect detection/anomaly detection has become a topical subject in industry. Improving efficiency and saving labor costs have steadily become matters of great concern in practice, and deep learning-based algorithms have outperformed traditional vision inspection methods in recent years. However, existing deep learning-based algorithms are biased towards supervised learning, which not only necessitates a huge amount of labeled data and human labor, but also brings inefficiency and limitations. In contrast, recent research shows that unsupervised learning has great potential in tackling these disadvantages for visual industrial anomaly detection. In this survey, we summarize current challenges and provide a thorough overview of recently proposed unsupervised algorithms for visual industrial anomaly detection, covering five categories whose innovation points and frameworks are described in detail. Publicly available datasets for industrial anomaly detection are also introduced. By comparing different classes of methods, we summarize the advantages and disadvantages of anomaly detection algorithms. Based on the current research framework, we point out the core issues that remain to be resolved and provide directions for further improvement. Based on the latest technological trends, we also offer insights into future research directions. This survey is expected to help both the research community and industry develop a broader, cross-domain perspective.
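
    One family such surveys cover is reconstruction-based detection, which is simple enough to sketch. The toy convolutional autoencoder below is trained on defect-free images only and flags pixels with large reconstruction error; the architecture, image sizes, and omitted optimizer loop are placeholders, not a method from the survey.

```python
# Hedged sketch of reconstruction-based anomaly detection with a toy autoencoder.
import torch
import torch.nn as nn

class ConvAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

model = ConvAE()
good_parts = torch.rand(8, 3, 128, 128)                  # placeholder nominal images
loss = nn.functional.mse_loss(model(good_parts), good_parts)
loss.backward()                                          # training step (optimizer omitted)

# At test time the per-pixel squared error acts as an anomaly heat map.
with torch.no_grad():
    test = torch.rand(1, 3, 128, 128)
    anomaly_map = (model(test) - test).pow(2).mean(dim=1)
```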

    Intelligent X-ray imaging inspection system for the food industry.

    The inspection process of a product is an important stage of a modern production factory. This research presents a generic X-ray imaging inspection system, applied to the detection of foreign bodies in a meat product for the food industry. The most important modules in the system are the image processing module and the high-level detection system. This research discusses the use of neural networks for image processing and fuzzy logic for the detection of potential foreign bodies found in X-ray images of chicken breast meat after the de-boning process. The meat product is passed under a solid-state X-ray sensor that acquires a dual-band two-dimensional image of the meat (a low- and a high-energy image). A series of image processing operations are applied to the acquired image (pre-processing, noise removal, contrast enhancement). The most important step of the image processing is the segmentation of the image into meaningful objects. The segmentation task is difficult due to the lack of clarity of the acquired X-ray images; the resulting segmented image contains not only correctly identified foreign bodies but also regions caused by overlapping muscle in the meat, which appear very similar to foreign bodies in the X-ray image. A Hopfield neural network architecture was proposed for the segmentation of the dual-band X-ray image. A number of measurements were made on each segmented object (geometrical and grey-level based statistical features), and these features were used as the input to a fuzzy-logic-based high-level detection system whose function was to differentiate between bone and non-bone segmented regions. The results show that the system's performance is considerably improved over non-fuzzy or crisp methods. Possible noise affecting the system is also investigated. The proposed system proved to be robust and flexible while achieving a high level of performance. Furthermore, it is possible to use the same approach when analysing images from other application areas, from the automotive industry to medicine.
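
    The fuzzy high-level detection stage can be illustrated with a toy rule. The membership ranges and the single rule below are invented for illustration and are not the thesis' actual rule base.

```python
# Illustrative fuzzy-inference sketch; membership ranges and the rule are invented.
def tri(x, a, b, c):
    """Triangular membership function peaking at b on the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def bone_likelihood(area_mm2, mean_grey_ratio):
    """Degree to which a segmented region behaves like a bone fragment."""
    small_area = tri(area_mm2, 0, 4, 20)                 # assumed: bones are small regions
    dense = tri(mean_grey_ratio, 0.5, 0.8, 1.0)          # assumed: bones attenuate strongly
    # Rule: IF area is small AND region is dense THEN region is bone (min acts as AND).
    return min(small_area, dense)

print(bone_likelihood(6.0, 0.85))                        # 0.75 -> likely a bone fragment
```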

    A Comprehensive Review of Vehicle Detection Techniques Under Varying Moving Cast Shadow Conditions Using Computer Vision and Deep Learning

    Design of a vision-based traffic analytic system for urban traffic video scenes has great potential in the context of Intelligent Transportation Systems (ITS). It offers useful traffic-related insights at much lower cost than conventional sensor-based counterparts. However, it remains a challenging problem due to complexity factors such as camera hardware constraints, camera movement, object occlusion, object speed, object resolution, traffic flow density, and lighting conditions. ITS has many applications, including but not limited to queue estimation, speed detection, and detection of various anomalies. All of these applications depend primarily on sensing vehicle presence to form a basis for analysis. Moving cast shadows of vehicles are one of the major problems affecting vehicle detection, as they can cause detection and tracking inaccuracies. It is therefore exceedingly important to distinguish dynamic objects from their moving cast shadows for accurate vehicle detection and recognition. This paper provides an in-depth comparative analysis of conventional and state-of-the-art shadow detection and removal algorithms focused on the traffic paradigm. To date, there has been only one survey highlighting shadow removal methodologies specifically for the traffic paradigm. In this paper, a total of 70 research papers containing results on urban traffic scenes, published over the last three decades, have been shortlisted to give a comprehensive overview of the work done in this area. The study reveals that the preferable way to make a comparative evaluation is to use the existing Highway I, II, and III datasets, which are frequently used for qualitative or quantitative analysis of shadow detection or removal algorithms. Furthermore, the paper not only provides cues for solving moving cast shadow problems, but also shows that even after the advent of Convolutional Neural Network (CNN)-based vehicle detection methods, the problems caused by moving cast shadows persist. The paper therefore proposes a hybrid approach that uses a combination of conventional and state-of-the-art techniques as a pre-processing step for shadow detection and removal before applying a CNN for vehicle detection. The results indicate a significant improvement in vehicle detection accuracy when the proposed approach is used.
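
    A conventional pre-processing step of the kind the paper advocates can be sketched with OpenCV's MOG2 background subtractor, which labels detected shadow pixels with the value 127 so they can be dropped from the foreground mask before the frames reach a CNN detector. The video path and kernel size are placeholders, and this is not the paper's specific hybrid pipeline.

```python
# Hedged sketch: strip moving cast shadows from the foreground mask before CNN detection.
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
cap = cv2.VideoCapture("traffic.mp4")                    # placeholder video path

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    # MOG2 marks shadow pixels as 127; keep only true foreground (255).
    vehicles = cv2.inRange(mask, 200, 255)
    vehicles = cv2.morphologyEx(vehicles, cv2.MORPH_OPEN, kernel)
    # `vehicles` is now a shadow-free foreground mask that a CNN detector can use
    # to crop or weight candidate vehicle regions.

cap.release()
```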

    Multi-fractal dimension features by enhancing and segmenting mammogram images of breast cancer

    Breast cancer is the most common malignancy causing deaths in women. Early detection of breast cancer using mammographic images can help reduce the mortality rate and the probability of recurrence. Through mammographic examination, breast lesions can be detected and classified. Breast lesions can be detected using many popular tools, such as Magnetic Resonance Imaging (MRI), ultrasonography, and mammography. Although mammography is very useful in the diagnosis of breast cancer, the pattern similarities between normal and pathologic cases make the process of diagnosis difficult. Therefore, in this thesis Computer-Aided Diagnosis (CAD) systems have been developed to help doctors and technicians in detecting lesions. The thesis aims to increase the accuracy of diagnosing breast cancer for optimal classification of cancer, achieved using Machine Learning (ML) and image processing techniques on mammogram images. The thesis also proposes an improved automated extraction of powerful texture features for classification by enhancing and segmenting the breast cancer mammogram images. The proposed CAD system consists of five stages: pre-processing, segmentation, feature extraction, feature selection, and classification. The first stage, pre-processing, is used for noise reduction, since mammogram images contain noise. Based on the frequency domain, this thesis employs the wavelet transform to enhance mammogram images in the pre-processing stage for two purposes: to highlight the borders of mammogram images for the segmentation stage, and to enhance the region of interest (ROI) using adaptive thresholding for the feature extraction stage. The second stage is segmentation, which identifies the ROI in mammogram images. It is a difficult task because of several landmarks, such as the breast boundary and artifacts, as well as the pectoral muscle in Medio-Lateral Oblique (MLO) views. Thus, this thesis presents an automatic segmentation algorithm based on new thresholding combined with image processing techniques. Experimental results demonstrate that the proposed model increases the segmentation accuracy of the ROI against the breast background, landmarks, and pectoral muscle. The third stage is feature extraction, where an enhancement model based on fractal dimension is proposed to derive significant texture features from the mammogram images; based on the proposed model, powerful texture features for classification are extracted. The fourth stage is feature selection, where a Genetic Algorithm (GA) has been used to select the important features. In the final classification stage, an Artificial Neural Network (ANN) has been used to differentiate between benign and malignant classes of cancer using the most relevant texture features. In conclusion, the classification accuracy, sensitivity, and specificity obtained by the proposed CAD system are improved in comparison to previous studies. This thesis makes a practical contribution to the identification of breast cancer from mammogram images and to better classification accuracy of benign and malignant lesions using ML and image processing techniques.
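
    The fractal-dimension feature stage can be illustrated with the basic box-counting estimator below; the thesis' enhanced multi-fractal features are more involved, so treat this only as a sketch that assumes a binarised, non-empty ROI.

```python
# Basic box-counting fractal dimension estimate on a binarised ROI (sketch only).
import numpy as np

def box_counting_dimension(binary_roi, box_sizes=(2, 4, 8, 16, 32)):
    """binary_roi: 2-D boolean array, True on the texture of interest (non-empty)."""
    counts = []
    h, w = binary_roi.shape
    for s in box_sizes:
        # Count boxes of side s that contain at least one foreground pixel.
        n = sum(
            binary_roi[i:i + s, j:j + s].any()
            for i in range(0, h, s)
            for j in range(0, w, s)
        )
        counts.append(n)
    # Slope of log(count) versus log(1/size) estimates the fractal dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)), np.log(counts), 1)
    return slope
```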

    A review on deep-learning-based cyberbullying detection

    Bullying is described as undesirable behavior by others that harms an individual physically, mentally, or socially. Cyberbullying is a virtual form (e.g., textual or image-based) of bullying or harassment, also known as online bullying. Cyberbullying detection is a pressing need in today’s world, as the prevalence of cyberbullying is continually growing, resulting in mental health issues. Conventional machine learning models were previously used to identify cyberbullying. However, current research demonstrates that deep learning (DL) surpasses traditional machine learning algorithms in identifying cyberbullying for several reasons, including its ability to handle extensive data, classify text and images efficiently, and extract features automatically through hidden layers, among others. This paper reviews the existing surveys and identifies the gaps in those studies. We also present a deep-learning-based defense ecosystem for cyberbullying detection, including data representation techniques and different deep-learning-based models and frameworks. We critically analyze the existing DL-based cyberbullying detection techniques and identify their significant contributions and the future research directions they present. We also summarize the datasets being used, including the DL architectures applied and the tasks accomplished for each dataset. Finally, several challenges faced by existing researchers and the open issues to be addressed in the future are presented.
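
    As a minimal example of the model family such reviews cover, the sketch below is an embedding-plus-LSTM classifier over tokenised messages in PyTorch. The vocabulary size, dimensions, and the binary bully/non-bully labels are assumptions, not a model from any surveyed work.

```python
# Hedged sketch: embedding + LSTM text classifier; sizes and labels are assumptions.
import torch
import torch.nn as nn

class BullyingClassifier(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=128, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)                 # assumed bullying vs. benign

    def forward(self, token_ids):                        # (batch, seq_len) of token ids
        emb = self.embed(token_ids)
        _, (h_n, _) = self.lstm(emb)
        return self.head(h_n[-1])                        # logits from last hidden state

model = BullyingClassifier()
batch = torch.randint(1, 20000, (4, 50))                 # 4 padded, tokenised messages
logits = model(batch)                                    # shape (4, 2)
```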