
    Solution of Dual Fuzzy Equations Using a New Iterative Method

    In this paper, a new hybrid scheme based on the learning algorithm of a fuzzy neural network (FNN) is proposed in order to extract the approximate solution of fully fuzzy dual polynomials (FFDPs). The FNN used here is a five-layer feed-back FNN with the identity activation function. The input-output relation of each unit is defined by Zadeh's extension principle. The output of this neural network, which is also a fuzzy number, is numerically compared with the target output. A comparison of the feed-back FNN method with the feed-forward FNN method shows that the feed-back FNN method yields a smaller error. An application-based example is given to illustrate the concepts discussed in this paper.
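    A minimal sketch of the fuzzy arithmetic that underlies such a network, assuming triangular fuzzy numbers: under Zadeh's extension principle, addition and multiplication by a non-negative crisp coefficient act component-wise on a (left, peak, right) representation. The TriFuzzy class and its methods are illustrative assumptions, not the paper's implementation.

        # Illustrative only: triangular fuzzy numbers as (left, peak, right) triples,
        # with addition and non-negative scalar multiplication following Zadeh's
        # extension principle.
        from dataclasses import dataclass

        @dataclass
        class TriFuzzy:
            left: float    # lower bound of the support
            peak: float    # core value (membership = 1)
            right: float   # upper bound of the support

            def __add__(self, other: "TriFuzzy") -> "TriFuzzy":
                # Extension-principle addition: endpoints add component-wise.
                return TriFuzzy(self.left + other.left,
                                self.peak + other.peak,
                                self.right + other.right)

            def scale(self, k: float) -> "TriFuzzy":
                # Multiplication by a crisp, non-negative coefficient.
                assert k >= 0, "sketch covers non-negative coefficients only"
                return TriFuzzy(k * self.left, k * self.peak, k * self.right)

        # Evaluate 2*x + y for fuzzy x and y, as a network unit with the identity
        # activation would.
        x = TriFuzzy(1.0, 2.0, 3.0)
        y = TriFuzzy(0.5, 1.0, 1.5)
        print(x.scale(2.0) + y)    # TriFuzzy(left=2.5, peak=5.0, right=7.5)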

    Assessing the Performance of a Speech Recognition System Embedded in Low-Cost Devices

    The main purpose of this research is to investigate how an Amazigh speech recognition system can be integrated into a low-cost minicomputer, specifically the Raspberry Pi, in order to improve the system's automatic speech recognition capabilities. The study focuses on optimizing system parameters to achieve a balance between performance and the limited system resources. To achieve this, the system employs a combination of Hidden Markov Models (HMMs), Gaussian Mixture Models (GMMs), and Mel Frequency Cepstral Coefficients (MFCCs) with a speaker-independent approach. The system has been developed to recognize 20 Amazigh words, comprising 10 commands and the first ten Amazigh digits. The results indicate that the recognition rate achieved on the Raspberry Pi system is 89.16% using 3 HMMs, 16 GMMs, and 39 MFCC coefficients. These findings demonstrate that it is feasible to create effective embedded Amazigh speech recognition systems using a low-cost minicomputer such as the Raspberry Pi. Furthermore, an Amazigh linguistic analysis has been carried out to ensure the accuracy of the designed embedded speech system.
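    As a hedged illustration of the acoustic front end described above, the sketch below builds the 39-dimensional MFCC feature vectors (13 static coefficients plus delta and delta-delta) commonly fed to HMM/GMM recognizers; it assumes the librosa library, and the audio file name is a placeholder rather than part of the study's corpus.

        # Requires librosa; "command.wav" is a placeholder file name.
        import numpy as np
        import librosa

        y, sr = librosa.load("command.wav", sr=16000)        # mono audio at 16 kHz
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)    # 13 static coefficients
        delta = librosa.feature.delta(mfcc)                   # first-order deltas
        delta2 = librosa.feature.delta(mfcc, order=2)         # second-order deltas
        features = np.vstack([mfcc, delta, delta2])           # shape (39, n_frames)
        print(features.shape)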

    List of 121 papers citing one or more skin lesion image datasets

    Automatic quantification of abdominal subcutaneous and visceral adipose tissue in children, through MRI study, using total intensity maps and Convolutional Neural Networks

    Childhood overweight and obesity is one of the main health problems in the world, since it is related to the early appearance of different diseases, in addition to being a risk factor for later developing obesity in adulthood with its health and economic consequences. Visceral adipose tissue (VAT) is strongly related to the development of metabolic and cardiovascular diseases compared to abdominal subcutaneous adipose tissue (ASAT). Therefore, precise and automatic VAT and ASAT quantification methods would allow better diagnosis, monitoring and prevention of diseases caused by obesity at any stage of life. Currently, magnetic resonance imaging is the standard for fat quantification, with Dixon sequences being the most useful. Different semi-automatic and automatic ASAT and VAT quantification methodologies have been proposed. In particular, the semi-automated quantification methodology offered commercially through the cloud-based service AMRA® Researcher stands out due to its extensive validation in different studies. In the present work, a database made up of Dixon MRI sequences, obtained from children between 7 and 9 years of age, was studied. After a preprocessing step that produces what we call total intensity maps, a convolutional neural network (CNN) was proposed for the automatic quantification of ASAT and VAT. The quantifications obtained with the proposed methodology were compared with quantifications previously made with AMRA® Researcher. For the comparison, correlation analysis, Bland-Altman plots and non-parametric statistical tests were used. The results indicated a high correlation and similar precision between the quantifications of this work and those of AMRA® Researcher. The final objective is for the proposed methodology to serve as an accessible and free tool for the diagnosis, monitoring and prevention of diseases related to childhood obesity.
    Comment: 14 pages, 9 figures, 3 tables
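    A minimal sketch of the Bland-Altman agreement analysis mentioned above, assuming two arrays of fat volumes from the compared methods; the numbers below are synthetic placeholders, not study data.

        import numpy as np

        vat_cnn = np.array([1.20, 0.95, 1.45, 0.80, 1.10])   # proposed CNN (litres), synthetic
        vat_ref = np.array([1.25, 0.90, 1.40, 0.85, 1.05])   # reference quantification, synthetic

        diff = vat_cnn - vat_ref                              # per-subject differences
        bias = diff.mean()                                    # mean difference (bias)
        loa = 1.96 * diff.std(ddof=1)                         # 95% limits of agreement
        print(f"bias = {bias:.3f} L, LoA = [{bias - loa:.3f}, {bias + loa:.3f}] L")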

    Application of Machine Learning in Melanoma Detection and the Identification of 'Ugly Duckling' and Suspicious Naevi: A Review

    Skin lesions known as naevi exhibit diverse characteristics such as size, shape, and colouration. The concept of an "Ugly Duckling Naevus" comes into play when monitoring for melanoma: it refers to a lesion whose distinctive features set it apart from the other lesions in its vicinity. As lesions within the same individual typically share similarities and follow a predictable pattern, an ugly duckling naevus stands out as unusual and may indicate the presence of a cancerous melanoma. Computer-aided diagnosis (CAD) has become a significant player in research and development, as it combines machine learning techniques with a variety of patient analysis methods. Its aim is to increase accuracy and simplify decision-making, all while responding to the shortage of specialized professionals. These automated systems are especially important in skin cancer diagnosis, where specialist availability is limited; as a result, their use could lead to life-saving benefits and cost reductions within healthcare. Given the drastic difference in survival between early-stage and late-stage melanoma, early detection is vital for effective treatment and patient outcomes. Machine learning (ML) and deep learning (DL) techniques have gained popularity in skin cancer classification, effectively addressing challenges and providing results equivalent to those of specialists. This article extensively covers modern machine learning and deep learning algorithms for detecting melanoma and suspicious naevi. It begins with general information on skin cancer and the different types of naevi, then introduces AI, ML, DL, and CAD. The article then discusses the successful applications of various ML techniques, such as convolutional neural networks (CNNs), for melanoma detection compared with dermatologists' performance. Lastly, it examines ML methods for ugly duckling (UD) naevus detection and the identification of suspicious naevi.
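    As a hedged illustration of the CNN-based classification the review covers, the sketch below shows the common transfer-learning pattern (an ImageNet-pretrained backbone with a two-class head). The architecture choice, hyperparameters and dummy batch are assumptions for illustration, not any specific system from the review, and the weights argument assumes a recent torchvision.

        import torch
        import torch.nn as nn
        from torchvision import models

        backbone = models.resnet18(weights="DEFAULT")          # ImageNet-pretrained CNN
        backbone.fc = nn.Linear(backbone.fc.in_features, 2)    # melanoma vs. benign naevus

        criterion = nn.CrossEntropyLoss()
        optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)

        # One illustrative training step on a dummy batch of dermoscopic-sized images.
        images = torch.randn(8, 3, 224, 224)
        labels = torch.randint(0, 2, (8,))
        optimizer.zero_grad()
        loss = criterion(backbone(images), labels)
        loss.backward()
        optimizer.step()
        print(float(loss))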

    Convolutional Neural Network to Classify Infrared Thermal Images of Fractured Wrists in Pediatrics

    Convolutional neural network (CNN) models were devised and evaluated to classify infrared thermal (IRT) images of pediatric wrist fractures. The images were recorded from 19 participants with a wrist fracture and 21 without a fracture (sprain); the injury diagnosis was by X-ray radiography. For each participant, 299 IRT images of their wrists were recorded, giving 11,960 images in total (40 participants × 299 images). For each image, the wrist region of interest (ROI) was selected and fast Fourier transformed (FFT) to obtain a magnitude frequency spectrum. The spectrum was resized to 100 × 100 pixels around its centre, as this region contains the main frequency components. Image augmentations of rotation, translation and shearing were applied to the 11,960 magnitude frequency spectra to aid CNN generalization during training. The CNN had 34 layers associated with convolution, batch normalization, rectified linear units, maximum pooling, softmax and classification. The images were split 70:30 between training and testing. The effects of augmentation and dropout on CNN performance were explored. A wrist fracture identification sensitivity and accuracy of 88% and 76%, respectively, were achieved. The CNN model was able to identify wrist fractures; however, a larger sample size would improve accuracy.
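    A short sketch of the described preprocessing, assuming NumPy: the 2-D FFT of a wrist ROI is shifted so the zero frequency sits at the centre, its magnitude is taken, and a 100 × 100 window around the centre is kept. The random array stands in for a real infrared thermal ROI.

        import numpy as np

        roi = np.random.rand(240, 320)                          # placeholder wrist IRT ROI
        spectrum = np.abs(np.fft.fftshift(np.fft.fft2(roi)))    # centred magnitude spectrum

        cy, cx = spectrum.shape[0] // 2, spectrum.shape[1] // 2
        crop = spectrum[cy - 50:cy + 50, cx - 50:cx + 50]       # central 100 x 100 region
        print(crop.shape)                                       # (100, 100)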

    A novel approach to the Orienteering Problem based on the Harmony Search algorithm

    This article presents a new approach to designing a Harmony Search (HS) algorithm adapted to solve Orienteering Problem (OP) instances. The OP is an important NP-hard problem with considerable practical applications, which calls for an effective method of determining its solutions. The proposed HS demonstrated its effectiveness by reaching the optimum for nearly every task in the six most popular benchmark sets; for the marginal number of remaining tasks it closely approximated the best known results, with an average error below 0.01%. The article details the application of the described algorithm and compares its results with those of state-of-the-art methods, indicating the significant efficiency of the proposed approach.
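    For orientation, the sketch below shows a generic Harmony Search improvisation loop for continuous minimisation; the OP-specific solution encoding, feasibility handling and parameter tuning of the proposed method are not reproduced here, and all parameter values are illustrative defaults.

        import random

        def harmony_search(objective, dim, bounds, hms=10, hmcr=0.9, par=0.3,
                           bandwidth=0.05, iterations=2000):
            lo, hi = bounds
            memory = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
            scores = [objective(h) for h in memory]
            for _ in range(iterations):
                new = []
                for d in range(dim):
                    if random.random() < hmcr:                   # memory consideration
                        value = random.choice(memory)[d]
                        if random.random() < par:                # pitch adjustment
                            value += random.uniform(-bandwidth, bandwidth)
                    else:                                        # random selection
                        value = random.uniform(lo, hi)
                    new.append(min(max(value, lo), hi))
                new_score = objective(new)
                worst = max(range(hms), key=lambda i: scores[i])
                if new_score < scores[worst]:                    # replace the worst harmony
                    memory[worst], scores[worst] = new, new_score
            best = min(range(hms), key=lambda i: scores[i])
            return memory[best], scores[best]

        # Example: minimise the 5-dimensional sphere function.
        sol, val = harmony_search(lambda x: sum(v * v for v in x), dim=5, bounds=(-5, 5))
        print(round(val, 4))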

    Agent programming in the cognitive era

    It is claimed that, in the nascent ‘Cognitive Era’, intelligent systems will be trained using machine learning techniques rather than programmed by software developers. A contrary point of view argues that machine learning has limitations and, taken in isolation, cannot form the basis of autonomous systems capable of intelligent behaviour in complex environments. In this paper, we explore the contributions that agent-oriented programming can make to the development of future intelligent systems. We briefly review the state of the art in agent programming, focussing particularly on BDI-based agent programming languages, and discuss previous work on integrating AI techniques (including machine learning) into agent-oriented programming. We argue that the unique strengths of BDI agent languages provide an ideal framework for integrating the wide range of AI capabilities necessary for progress towards the next generation of intelligent systems. We identify a range of possible approaches to integrating AI into a BDI agent architecture. Some of these approaches, e.g., ‘AI as a service’, exploit immediate synergies between rapidly maturing AI techniques and agent programming, while others, e.g., ‘AI embedded into agents’, raise more fundamental research questions, and we sketch a programme of research directed towards identifying the most appropriate ways of integrating AI capabilities into agent programs.
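    A loose sketch of the ‘AI as a service’ idea, assuming nothing about any particular BDI language: the deliberation loop and beliefs stay inside the agent, while a plan body delegates perception to an external ML classifier. All names in the sketch (agent_loop, classify_image, the waste-sorting goal) are hypothetical.

        from typing import Callable

        def agent_loop(beliefs: dict, classify_image: Callable[[bytes], str]) -> None:
            desires = ["sort_waste"]                                  # hypothetical goal
            for desire in desires:
                if desire == "sort_waste":
                    # Plan body: query the external ML service, update beliefs, act.
                    label = classify_image(beliefs["camera_frame"])   # AI as a service
                    beliefs["item_type"] = label
                    intention = "recycle" if label == "plastic" else "discard"
                    print(f"adopted intention: {intention} ({label})")

        # A stub classifier stands in for the remote model.
        agent_loop({"camera_frame": b"\x00" * 16}, classify_image=lambda _: "plastic")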

    Design and Implementation of Selected Evolution Strategies for Optimization of Regional Segmentation Models with the Aim of Objects Identification from Medical Images

    The topic of this diploma thesis is testing the effectiveness of segmentation algorithms on medical image data acquired using MRI, a fundus camera and ultrasound. In the second part of the thesis, dealing with the segmentation of selected objects of interest, CT, MRI and ultrasound images were used. Noise in an image is an undesirable additive component that changes the brightness intensity of the pixels, so errors can occur when classifying pixels into individual segmentation regions. In this work, the pair of algorithms Fuzzy-ABC and F-FCM, which are based on fuzzy logic and supplemented with local statistical aggregation to suppress the influence of noise, was tested on the medical images. The other pair of algorithms comprises the K-means and Otsu thresholding methods. These two are so-called conventional algorithms, and their segmentation efficiency was compared with that of both fuzzy algorithms. The theoretical part of the thesis is briefly devoted to the basic principles of image data segmentation and to selected evolutionary strategies for image segmentation. A review of evolutionary strategies used to optimize image segmentation was also carried out. The main goal of the thesis was to analyze the effectiveness and robustness of the segmentation methods in the context of variable deterministic noise (Gaussian, salt & pepper, speckle) with dynamic intensity, followed by a comparative analysis and modelling of the segmentation effectiveness of the tested methods depending on the parameters of the segmentation strategies. Objective evaluation metrics (correlation, MSE and SSIM) were used to evaluate the results.
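    A small sketch of the kind of evaluation described above, assuming scikit-image: an image is segmented by Otsu thresholding before and after Gaussian noise is added, and the noisy segmentation is scored against the clean one with MSE and SSIM. The built-in test image is a stand-in, not thesis data.

        from skimage import data, filters, util
        from skimage.metrics import mean_squared_error, structural_similarity

        image = util.img_as_float(data.camera())                      # stand-in test image
        noisy = util.random_noise(image, mode="gaussian", var=0.01)   # noisy variant

        clean_mask = image > filters.threshold_otsu(image)            # reference segmentation
        noisy_mask = noisy > filters.threshold_otsu(noisy)            # segmentation under noise

        mse = mean_squared_error(clean_mask.astype(float), noisy_mask.astype(float))
        ssim = structural_similarity(clean_mask.astype(float), noisy_mask.astype(float),
                                     data_range=1.0)
        print(f"MSE = {mse:.4f}, SSIM = {ssim:.4f}")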