
    Toward enhancement of deep learning techniques using fuzzy logic: a survey

    Deep learning has recently emerged as a branch of artificial intelligence (AI) and machine learning (ML) that imitates the way humans acquire certain kinds of knowledge. It is considered an essential element of data science, which comprises statistics and predictive modeling, and it makes collecting, interpreting, and analyzing big data easier and faster. Deep neural networks are a kind of ML model in which non-linear processing units are layered to extract particular features from the inputs. However, training such networks is very expensive and depends on the optimization method used, so optimal results may not be achieved. Deep learning techniques are also vulnerable to data noise. For these reasons, fuzzy systems are used to improve the performance of deep learning algorithms, especially in combination with neural networks, where they improve the representation accuracy of deep learning models. This survey reviews deep learning based fuzzy logic models and techniques proposed in previous studies, in which fuzzy logic is used to improve deep learning performance. The approaches are divided into two categories based on how the two paradigms are combined. Furthermore, the models' practicality in the real world is discussed.
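
    A minimal sketch (not taken from the survey itself) of one common way to combine fuzzy logic with a neural network: inputs are fuzzified through Gaussian membership functions before being fed to a dense layer, so the network learns from degrees of membership rather than raw, possibly noisy values. The three "low/medium/high" centers below are illustrative assumptions.

```python
import numpy as np

def gaussian_membership(x, centers, sigma=1.0):
    """Fuzzify each scalar feature into one membership degree per center."""
    # x: (n_samples, n_features); centers: (n_centers,)
    diff = x[..., None] - centers          # (n_samples, n_features, n_centers)
    return np.exp(-(diff ** 2) / (2 * sigma ** 2))

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))                # toy batch of raw inputs
centers = np.array([-1.0, 0.0, 1.0])       # assumed "low/medium/high" fuzzy sets

fuzzy = gaussian_membership(x, centers).reshape(4, -1)  # flatten memberships
w = rng.normal(size=(fuzzy.shape[1], 2))   # one dense layer with 2 outputs
hidden = np.tanh(fuzzy @ w)                # non-linear processing unit
print(hidden.shape)                        # (4, 2)
```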

    Analysis and automated classification of images of blood cells to diagnose acute lymphoblastic leukemia

    Analysis of white blood cells can help detect Acute Lymphoblastic Leukemia, a blood cancer that is potentially fatal if left untreated. Morphological analysis of blood cell images is typically performed manually by an expert; however, this method has numerous drawbacks: the analysis is slow, the precision is low, and the results depend on the operator's skill. We have developed, and present here, an automated method for the identification and classification of white blood cells in microscopic images of peripheral blood smears. Once an image has been obtained, we propose describing it using brightness, contrast, and micro-contour orientation histograms. Each of these descriptions encodes the image and in turn provides n parameters. The extracted features are presented to an encoder's input; the encoder generates a high-dimensional binary output vector, which is fed to the input of the neural classifier. This paper presents the performance of one classifier, the Random Threshold Classifier, whose output is the recognized class: either a healthy cell or an Acute Lymphoblastic Leukemia-affected cell. The proposed neural Random Threshold Classifier achieved a recognition rate of 98.3% when the data were partitioned into an 80% training set and a 20% testing set. Our image recognition system is evaluated on the public dataset of peripheral blood samples from the Acute Lymphoblastic Leukemia Image Database. Notably, our system could also be implemented as a computational tool for the detection of other diseases in which blood cells undergo alterations, such as COVID-19.
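
    A minimal sketch, under hypothetical parameter choices, of the kind of histogram description the paper applies to blood-cell images: a brightness histogram and a crude local-contrast histogram are concatenated into one feature vector that a downstream classifier can consume. The micro-contour orientation descriptor is omitted here, and the random image stands in for a real microscope image.

```python
import numpy as np

def describe(image, n_bins=32):
    """Return concatenated brightness and local-contrast histograms."""
    brightness, _ = np.histogram(image, bins=n_bins, range=(0, 255), density=True)
    # crude local contrast: absolute difference between horizontal neighbours
    contrast = np.abs(np.diff(image.astype(float), axis=1))
    contrast_hist, _ = np.histogram(contrast, bins=n_bins, range=(0, 255), density=True)
    return np.concatenate([brightness, contrast_hist])   # the "n parameters"

rng = np.random.default_rng(1)
cell = rng.integers(0, 256, size=(64, 64))     # stand-in for a blood-smear image
features = describe(cell)
print(features.shape)                          # (64,)
```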

    Machine-learning-based condition assessment of gas turbine: a review

    Condition monitoring, diagnostics, and prognostics are key factors in today’s competitive industrial sector. Equipment digitalisation has increased the amount of data available throughout the industrial process, and the development of new and more advanced techniques has significantly improved the performance of industrial machines. This publication surveys the last decade of evolution of condition monitoring, diagnostic, and prognostic techniques using machine-learning (ML)-based models to improve the operational performance of gas turbines. A comprehensive review of the literature led to a performance assessment of ML models and their applications to gas turbines, as well as a discussion of the major challenges and opportunities for research on this kind of engine. The paper concludes that combining the information captured by the collectors with ML techniques shows promising results in increasing the accuracy, robustness, precision, and generalisation of industrial gas turbine equipment. This research was funded by Siemens Energy.

    Heath-PRIOR: An Intelligent Ensemble Architecture to Identify Risk Cases in Healthcare

    Smart city environments, when applied to healthcare, improve people's quality of life, enabling, for instance, disease prediction and treatment monitoring. In medical settings, case prioritization is of great importance, with beneficial outcomes both for patient health and for physicians' daily work. Recommender systems are an alternative that automatically integrates the data generated in such environments with predictive models and recommends actions, content, or services. The data produced by smart devices are accurate and reliable for predictive and decision-making contexts. The main purpose of this study is to assist patients and doctors in the early detection of disease or the prediction of postoperative worsening through constant monitoring. To achieve this objective, the study proposes an architecture for recommender systems applied to healthcare that can prioritize emergency cases. The architecture brings an ensemble approach to prediction, adopting multiple machine learning algorithms. The methodology followed three steps: first, a systematic literature mapping; second, the construction and development of the architecture; and third, an evaluation through two case studies. The results demonstrated the feasibility of the proposal: the predictions are promising and suited to the application context for accurate datasets with little noise and few missing values.
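
    A minimal sketch of the ensemble idea the architecture describes: several ML algorithms vote on whether a monitored case is high risk. The synthetic features and specific estimators below are hypothetical stand-ins, not taken from the Heath-PRIOR case studies.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# toy stand-in for patient monitoring data (label 1 = risk case)
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("tree", DecisionTreeClassifier(random_state=0)),
        ("rf", RandomForestClassifier(random_state=0)),
    ],
    voting="soft",  # average predicted probabilities across the models
)
ensemble.fit(X_tr, y_tr)
print("held-out accuracy:", ensemble.score(X_te, y_te))
```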

    A Cloud-Edge-aided Incremental High-order Possibilistic c-Means Algorithm for Medical Data Clustering

    The Medical Internet of Things generates a large volume of data to enable smart medicine, which aims to offer computer-aided medical and healthcare services with artificial intelligence techniques such as deep learning and clustering. However, analyzing large medical data is challenging for deep learning and clustering algorithms because of their high computational complexity, which hinders the progress of smart medicine. In this paper, we present an incremental high-order possibilistic c-means algorithm on a cloud-edge computing system to achieve medical data co-clustering across multiple hospitals in different locations. Specifically, each hospital employs a deep computation model to learn a feature tensor for each medical data object on its local edge computing system and then uploads the feature tensors to the cloud computing platform. The high-order possibilistic c-means algorithm (HoPCM) is performed on the cloud system to cluster the uploaded feature tensors. Once new medical data feature tensors arrive at the cloud computing platform, the incremental high-order possibilistic c-means algorithm (IHoPCM) is performed on the combination of the new feature tensors and the previous clustering centers to obtain clustering results for all feature tensors received to date. In this way, repeated clustering of the previous feature tensors is avoided, improving clustering efficiency. In the experiments, we compare different algorithms on two medical datasets with regard to clustering accuracy and clustering efficiency. Results show that the presented IHoPCM method achieves great improvements over the compared algorithms in both respects.
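
    A minimal sketch of the possibilistic c-means (PCM) update at the core of HoPCM/IHoPCM, shown on plain vectors rather than feature tensors. The single shared eta value and the initialisation are simplified assumptions; the paper's high-order and incremental machinery is not reproduced here.

```python
import numpy as np

def pcm(X, centers, m=2.0, eta=1.0, n_iter=20):
    """Alternate typicality and center updates (Krishnapuram-Keller style)."""
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # (n, c)
        u = 1.0 / (1.0 + (d2 / eta) ** (1.0 / (m - 1.0)))          # typicalities
        um = u ** m
        centers = (um.T @ X) / um.sum(0)[:, None]                  # new centers
    return centers, u

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(5, 1, (50, 4))])
init = X[rng.choice(len(X), 2, replace=False)]
centers, u = pcm(X, init)
# The incremental (IHoPCM) idea: when new data arrives, cluster it together
# with these centers instead of re-clustering all previous feature tensors.
print(centers)
```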

    Role of sentiment classification in sentiment analysis: a survey

    Through a survey of the literature, the role of sentiment classification in sentiment analysis has been reviewed. The review identifies the research challenges involved in tackling sentiment classification. A total of 68 articles published during 2015–2017 have been reviewed along six dimensions: sentiment classification, feature extraction, cross-lingual sentiment classification, cross-domain sentiment classification, lexicon and corpus creation, and multi-label sentiment classification. This study discusses the prominence and effects of sentiment classification in sentiment evaluation and concludes that much further research is needed to achieve productive results.

    Fuzzy Logic

    Fuzzy logic is becoming an essential method of solving problems in all domains. It has a tremendous impact on the design of autonomous intelligent systems. The purpose of this book is to introduce hybrid algorithms, techniques, and implementations of fuzzy logic. The book consists of thirteen chapters highlighting models and principles of fuzzy logic and issues concerning its techniques and implementations. The intended readers are engineers, researchers, and graduate students interested in fuzzy logic systems.

    Using Artificial Neural Networks to Determine Ontologies Most Relevant to Scientific Texts

    This paper provides insight into how artificial neural networks can find the ontologies most relevant to scientific texts. The basic idea of the presented approach is to select a representative paragraph from a source text file, embed it into a vector space with a pre-trained, fine-tuned transformer, and classify the embedded vector according to its relevance to a target ontology. We considered several classifiers to categorize the transformer's output: random forest, support vector machine, multilayer perceptron, k-nearest neighbors, and Gaussian process classifiers. Their suitability was evaluated in a use case with ontologies and scientific texts concerning catalysis research. The results show that random forest performed worst, while the support vector machine classifier performed best on this task.
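
    A minimal sketch of the described pipeline: embed a representative paragraph with a pre-trained transformer, then classify the vector by target ontology. The model name, example paragraphs, and two ontology labels are stand-ins; the paper fine-tunes its own transformer on catalysis texts.

```python
from sentence_transformers import SentenceTransformer
from sklearn.svm import SVC

paragraphs = ["Zeolite catalysts accelerate cracking reactions...",
              "The patient cohort was randomised into two arms..."]
labels = ["catalysis-ontology", "clinical-ontology"]   # hypothetical targets

encoder = SentenceTransformer("all-MiniLM-L6-v2")      # pre-trained embedder
X = encoder.encode(paragraphs)                         # one vector per paragraph

clf = SVC()    # SVM was the best-performing classifier in the study
clf.fit(X, labels)
print(clf.predict(encoder.encode(["Acid sites govern catalytic activity."])))
```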

    Machine Learning-assisted Bayesian Inference for Jamming Detection in 5G NR

    The increased flexibility and density of spectrum access in 5G NR have made jamming detection a critical research area. To detect coexisting jamming and subtle interference that can affect the performance of legitimate communications, we introduce machine learning (ML)-assisted Bayesian inference methodologies for jamming detection. Our methodology leverages cross-layer critical signaling data collected on a 5G NR Non-Standalone (NSA) testbed via supervised learning models, which are further assessed, calibrated, and explained using Bayesian Network Model (BNM)-based inference. The models can operate on both instantaneous and sequential time-series data samples, achieving an Area Under the Curve (AUC) in the range of 0.947 to 1 for instantaneous models and 0.933 to 1 for sequential models, including the echo state network (ESN) from the reservoir computing (RC) family, across jamming scenarios spanning multiple frequency bands and power levels. Our approach not only serves as a validation method and a resilience-enhancement tool for ML-based jamming detection but also enables root-cause identification for any observed performance degradation. Our proof of concept successfully addresses 72.2% of the erroneous predictions in sequential models caused by insufficient data samples collected in the observation period, demonstrating its applicability in 5G NR and Beyond-5G (B5G) network infrastructure and user devices.
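
    A minimal sketch of the supervised-learning side of such a methodology: train a classifier on cross-layer signaling features and report AUC. The two synthetic KPIs and the toy ground-truth rule below stand in for real 5G NR testbed measurements, and the BNM-based calibration step is not reproduced here.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 1000
sinr = rng.normal(20, 5, n)                # hypothetical KPI: SINR (dB)
bler = rng.uniform(0, 1, n)                # hypothetical KPI: block error rate
jammed = ((sinr < 17) & (bler > 0.5)).astype(int)   # toy ground-truth label

X = np.column_stack([sinr, bler])
X_tr, X_te, y_tr, y_te = train_test_split(X, jammed, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]     # jamming probability per sample
print("AUC:", roc_auc_score(y_te, scores))
```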

    A Survey on Evolutionary Computation for Computer Vision and Image Analysis: Past, Present, and Future Trends

    Computer vision (CV) is a large and important field of artificial intelligence covering a wide range of applications. Image analysis is a major CV task that aims to extract, analyse, and understand the visual content of images. However, image-related tasks are very challenging due to many factors, e.g., high variation across images, high dimensionality, the need for domain expertise, and image distortions. Evolutionary computation (EC) approaches have been widely used for image analysis with significant achievements, yet there is no comprehensive survey of existing EC approaches to image analysis. To fill this gap, this paper provides a comprehensive survey covering all essential EC approaches to important image analysis tasks, including edge detection, image segmentation, image feature analysis, image classification, object detection, and others. The survey aims to provide a better understanding of evolutionary computer vision (ECV) by discussing the contributions of different approaches and exploring how and why EC is used for CV and image analysis. The applications, challenges, issues, and trends associated with this research field are also discussed and summarised to provide guidelines and opportunities for future research.
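
    A minimal sketch of EC applied to one of the image analysis tasks listed above: a toy genetic algorithm evolves a segmentation threshold by maximising Otsu-style between-class variance. The population size, mutation scale, and synthetic two-mode "image" are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
# synthetic pixel intensities: dark background plus bright foreground
image = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 10, 5000)])

def fitness(t):
    """Between-class variance of a foreground/background split at threshold t."""
    fg, bg = image[image >= t], image[image < t]
    if len(fg) == 0 or len(bg) == 0:
        return 0.0
    w_fg, w_bg = len(fg) / len(image), len(bg) / len(image)
    return w_fg * w_bg * (fg.mean() - bg.mean()) ** 2

pop = rng.uniform(0, 255, 20)                       # candidate thresholds
for _ in range(30):                                 # generations
    scores = np.array([fitness(t) for t in pop])
    parents = pop[np.argsort(scores)[-10:]]         # select the fittest half
    children = parents + rng.normal(0, 5, 10)       # Gaussian mutation
    pop = np.concatenate([parents, children])
print("evolved threshold:", pop[np.argmax([fitness(t) for t in pop])])
```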