990 research outputs found

    Sustainable Collaboration: Federated Learning for Environmentally Conscious Forest Fire Classification in Green Internet of Things (IoT)

    Get PDF
    Forests are an invaluable natural resource, playing a crucial role in the regulation of both local and global climate patterns. Additionally, they offer a plethora of benefits such as medicinal plants, food, and non-timber forest products. However, with the growing global population, the demand for forest resources has escalated, leading to a decline in their abundance. The reduction in forest density has detrimental impacts on global temperatures and raises the likelihood of forest fires. To address these challenges, this paper introduces a Federated Learning framework empowered by the Internet of Things (IoT). The proposed framework integrates with an intelligent system, leveraging cameras strategically mounted in highly vulnerable areas susceptible to forest fires. This integration enables the timely detection and monitoring of forest fire occurrences and helps avert major catastrophes. The proposed framework incorporates the Federated Stochastic Gradient Descent (FedSGD) technique to aggregate the global model in the cloud. The dataset employed in this study comprises two classes: fire and non-fire images. This dataset is distributed among five nodes, allowing each node to independently train the model on its own device. Following the local training, the learned parameters are shared with the cloud for aggregation, ensuring a collective and comprehensive global model. The effectiveness of the proposed framework is assessed by comparing its performance metrics with recent work. The proposed algorithm achieved an accuracy of 99.27% and stands out by leveraging the concept of collaborative learning. This approach distributes the workload among nodes, relieving the server from excessive burden. Each node is empowered to obtain the best possible model for classification, even if it possesses limited data. This collaborative learning paradigm enhances the overall efficiency and effectiveness of the classification process, ensuring optimal results in scenarios where data availability may be constrained.
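
    For readers unfamiliar with FedSGD, the aggregation step described above can be illustrated with a short sketch. The following Python/NumPy snippet shows a generic FedSGD round in which the cloud averages client gradients weighted by local dataset size and applies one gradient step to the global model; it is a minimal illustration, not the authors' implementation, and the function name, learning rate, layer shapes, and per-node sample counts are assumptions.

        import numpy as np

        def fedsgd_round(global_weights, client_grads, client_sizes, lr=0.01):
            # One FedSGD round: the cloud averages client gradients, weighted by
            # local dataset size, and applies a single gradient step to the
            # global model. Weight and gradient lists hold NumPy arrays.
            total = float(sum(client_sizes))
            avg_grads = [
                sum(n * g[i] for n, g in zip(client_sizes, client_grads)) / total
                for i in range(len(global_weights))
            ]
            return [w - lr * g for w, g in zip(global_weights, avg_grads)]

        # Illustrative use with five nodes, mirroring the paper's setup; the layer
        # shapes, sample counts, and learning rate are made-up values.
        weights = [np.zeros((4, 2)), np.zeros(2)]
        grads = [[np.random.randn(4, 2), np.random.randn(2)] for _ in range(5)]
        sizes = [120, 80, 100, 150, 90]
        weights = fedsgd_round(weights, grads, sizes)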

    Smart Gas Sensors: Materials, Technologies, Practical ‎Applications, and Use of Machine Learning – A Review

    Get PDF
    The electronic nose, popularly known as the E-nose, which combines gas sensor arrays (GSAs) with machine learning, has gained a strong foothold in gas sensing technology. The E-nose, designed to mimic the human olfactory system, is used for the detection and identification of various volatile compounds. The GSAs develop a unique signal fingerprint for each volatile compound, enabling pattern recognition using machine learning algorithms. The inexpensive, portable, and non-invasive characteristics of the E-nose system have rendered it indispensable within the gas-sensing arena. As a result, E-noses have been widely employed in several applications in the areas of the food industry, health management, disease diagnosis, water and air quality control, and toxic gas leakage detection. This paper reviews the various sensor fabrication technologies of GSAs and highlights the main operational framework of the E-nose system. The paper details vital signal pre-processing techniques of feature extraction and feature selection, in addition to machine learning algorithms such as SVM, kNN, ANN, and Random Forests, for determining the type of gas and estimating its concentration in a competitive environment. The paper further explores the potential applications of E-noses for diagnosing diseases, monitoring air quality, assessing the quality of food samples and estimating concentrations of volatile organic compounds (VOCs) in air and in food samples. The review concludes with some challenges faced by E-noses, alternative ways to tackle them, and recommendations as potential future work for the further development and design enhancement of E-noses.
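
    As a concrete illustration of the pattern-recognition stage reviewed here, the sketch below assembles a typical scikit-learn pipeline (scaling, simple feature selection, and an SVM classifier) over pre-extracted GSA features. The feature loader is hypothetical, and any of the reviewed classifiers (kNN, ANN, Random Forest) could be substituted for the SVM.

        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # load_gsa_features() is a hypothetical loader, not a real API: X holds one
        # row per exposure with features extracted from the sensor-array response
        # (e.g. peak response and rise time per sensor); y holds gas identities.
        X, y = load_gsa_features()

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

        clf = make_pipeline(
            StandardScaler(),              # normalise sensor responses
            SelectKBest(f_classif, k=20),  # simple feature selection (assumes >= 20 features)
            SVC(kernel="rbf", C=10.0),     # SVM; kNN, ANN or a Random Forest could be used instead
        )
        clf.fit(X_tr, y_tr)
        print("gas identification accuracy:", clf.score(X_te, y_te))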

    Emotion Detection in Arabic Text using Machine Learning Methods

    Get PDF
    Emotions are essential to all languages and are notoriously challenging to grasp. While numerous studies discuss the recognition of emotion in English, Arabic emotion recognition research remains in its early stages. Textual data with embedded emotions has increased considerably with the Internet and social networking platforms. This study aims to tackle the challenging problem of emotion detection in Arabic text. Recent studies found that dialect diversity and morphological complexity in the Arabic language, together with limited access to annotated training datasets for Arabic emotions, pose the most significant challenges to Arabic emotion detection. Social media is becoming a more popular form of communication where users can share their thoughts and express emotions such as joy, sadness, anger, surprise, hate, and fear on a range of subjects in ways they would not typically do in person. Social media also presents additional challenges, including spelling mistakes, new slang, and incorrect use of grammar. The previous few years have seen a large increase in interest in text emotion detection.

    Exploiting molecular vulnerabilities in genetically defined lung cancer models

    Get PDF
    Lung cancer is the leading cause of cancer-related death worldwide, with approximately 1.8 million deaths in 2020. Based on histology, lung cancer is divided into non-small cell lung cancer (NSCLC) (85 %) and small cell lung cancer (SCLC) (15 %). The most common types of NSCLC are lung squamous cell carcinoma (LUSC), large-cell carcinoma (LCC), and lung adenocarcinoma (LUAD). LUAD, the largest subgroup of NSCLC, is characterized by genomic alterations in oncogenic driver genes such as KRAS or EGFR. Mutations in the kinase domain of EGFR result in aberrant signaling activation and subsequent cancer development. Tyrosine kinase inhibitors (TKIs) selectively target and inhibit mutant kinases, thereby killing oncogene-addicted cancer cells. The introduction of TKIs into clinical practice shifted NSCLC treatment from cytotoxic chemotherapy towards precision medicine, improving both survival and the quality of life during therapy. Patients with canonical EGFR mutations, such as the L858R point mutation or exon 19 deletions, which account for the majority of EGFR mutations, respond well to EGFR-targeted TKIs. However, rare mutations such as exon 20 insertions still represent challenging drug targets. C-helix–4-loop insertion mutations in exon 20 push the C-helix into the active, inward position without altering the binding site for TKIs. This leaves the binding site for TKIs in kinases with exon 20ins mutations highly similar to wild type (WT) EGFR. Thus, the challenge in the development of exon 20 inhibitors is the design of wild-type-sparing small molecules. Here, we analyzed a novel small molecule EGFR inhibitor (LDC0496) targeting an emerging cleft in exon 20-mutated EGFR to achieve selectivity over the wild type. In contrast to classical EGFR TKIs, LDC0496 reduces the cellular viability of EGFR exon 20-mutated cells but spares wild type EGFR. Targeted therapy inevitably results in the development of on- or off-target resistance. Drug-induced resistance mutations require the constant development of novel drugs targeting the diverse landscape of resistance mechanisms. We detected BRAF mutations in EGFR-driven lung cancer patients as a resistance mechanism to EGFR inhibitors. Notably, we also detected co-occurrence of EGFR and BRAF mutations before the start of treatment. Combination treatment with EGFR and mitogen-activated protein kinase kinase (MEK) inhibitors displayed activity in BRAF- and EGFR-mutated xenograft studies, therefore providing a treatment strategy to overcome BRAF mutations as a resistance mechanism. Compared to NSCLC, SCLC lacks druggable targets, and its initial chemosensitive state rapidly turns into a chemoresistant state. SCLC is genetically defined by a biallelic loss of the tumor suppressors RB1 and TP53 and alterations of MYC family members. The transcription factor MYC is a challenging target that cannot be targeted directly. Therefore, alternative strategies are needed, for example targeting its co-factors, such as the MYC-interacting zinc finger protein 1 (MIZ1). To study the complex interplay of Myc–Miz1 in SCLC, we developed a novel mouse model with a truncated Miz1 that is unable to stably bind chromatin (RPMM: Rb1fl/flTrp53fl/flMycLSL/LSLMIZ1∆POZfl/fl). Compared to Miz1 wild type, characterization of the novel mouse model revealed tumor onset, localization, size, and immune infiltration to be unaffected by ablation of the Miz1-POZ domain, but mice with Miz1-∆POZ live longer, exhibit an increased number of apoptotic cells, and are more sensitive to chemotherapy. We found that truncated Miz1 alters SCLC tumorigenesis towards a less aggressive phenotype and prolongs the chemosensitive state. Our study highlights alternative strategies to define novel vulnerabilities and options to overcome chemoresistance.

    A Robust Multilabel Method Integrating Rule-based Transparent Model, Soft Label Correlation Learning and Label Noise Resistance

    Full text link
    Model transparency, label correlation learning and the robustness to label noise are crucial for multilabel learning. However, few existing methods study these three characteristics simultaneously. To address this challenge, we propose the robust multilabel Takagi-Sugeno-Kang fuzzy system (R-MLTSK-FS) with three mechanisms. First, we design a soft label learning mechanism to reduce the effect of label noise by explicitly measuring the interactions between labels, which is also the basis of the other two mechanisms. Second, the rule-based TSK FS is used as the base model to efficiently model the inference relationship between features and soft labels in a more transparent way than many existing multilabel models. Third, to further improve the performance of multilabel learning, we build a correlation enhancement learning mechanism based on the soft label space and the fuzzy feature space. Extensive experiments are conducted to demonstrate the superiority of the proposed method. Comment: This paper has been accepted by IEEE Transactions on Fuzzy Systems
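
    The soft-label mechanism is described only at a high level in the abstract. One common way to realize the general idea, shown below purely as an illustration and not as the R-MLTSK-FS formulation, is to soften each binary label vector using a label-correlation matrix estimated from the training labels.

        import numpy as np

        def soften_labels(Y, alpha=0.2):
            # Generic soft-label construction: mix each 0/1 label vector with the
            # labels of positively correlated classes. Y is (n_samples, n_labels);
            # alpha controls how strongly correlated labels bleed into each other.
            # This is an illustration only, not the R-MLTSK-FS formulation.
            C = np.nan_to_num(np.corrcoef(Y, rowvar=False))  # label-label correlation
            C = np.clip(C, 0.0, 1.0)                         # keep positive correlations
            np.fill_diagonal(C, 1.0)                         # each label keeps itself
            C = C / C.sum(axis=1, keepdims=True)             # row-normalise
            soft = (1.0 - alpha) * Y + alpha * (Y @ C)
            return np.clip(soft, 0.0, 1.0)                   # soft labels clipped to [0, 1]

        Y = np.array([[1, 0, 1], [0, 1, 1], [1, 1, 0]], dtype=float)
        print(soften_labels(Y))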

    Multi-Network Feature Fusion Facial Emotion Recognition using Nonparametric Method with Augmentation

    Get PDF
    Facial expression emotion identification and prediction is one of the most difficult problems in computer science. Pre-processing and feature extraction are crucial components of the more conventional methods. For the purpose of emotion identification and prediction from 2D facial expressions, this study targets the Face Expression Recognition dataset and presents the implementation and assessment of learning algorithms such as various CNNs. Due to its vast potential in areas like artificial intelligence, emotion detection from facial expressions has become an essential requirement. Much effort has been devoted to the subject, as it is both a challenging and fascinating problem in computer vision. The focus of this study is on using a convolutional neural network supplemented with data augmentation to build a facial emotion recognition system. This method can use face images to identify seven fundamental emotions: anger, contempt, fear, happiness, neutrality, sadness, and surprise. As well as improving upon the validation accuracy of current models, a convolutional neural network that makes use of data augmentation, feature fusion, and the NCA feature selection approach may help address some of their drawbacks. Researchers in this area are focused on improving computer predictions by creating methods to read and codify facial expressions. With deep learning's striking success, many architectures within the framework are being used to further the method's efficacy. We highlight the contributions addressed, the architectures and databases used, and demonstrate progress by contrasting the proposed approaches and the outcomes produced. The purpose of this study is to aid and direct future researchers in the subject by reviewing relevant recent studies and offering suggestions on how to further the field. An innovative feature-based transfer learning technique is created using the pre-trained networks MobileNetV2 and DenseNet-201. The proposed system's recognition rate is 75.31%, which is a significant improvement over the results of the prior feature fusion study.
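
    A minimal sketch of the kind of two-network feature fusion with an NCA step described above is given below, using Keras pre-trained backbones and scikit-learn. The input size, the downstream kNN classifier, and the data loader are assumptions rather than the authors' exact setup.

        import numpy as np
        from tensorflow.keras.applications import DenseNet201, MobileNetV2
        from sklearn.neighbors import KNeighborsClassifier, NeighborhoodComponentsAnalysis
        from sklearn.pipeline import make_pipeline

        # Pre-trained backbones used as fixed feature extractors via global average pooling.
        mobilenet = MobileNetV2(weights="imagenet", include_top=False, pooling="avg")
        densenet = DenseNet201(weights="imagenet", include_top=False, pooling="avg")

        def fused_features(images):
            # Feature fusion: concatenate the deep features from both networks.
            # `images` is a batch of face crops resized to 224x224x3 and preprocessed.
            f1 = mobilenet.predict(images, verbose=0)  # (n, 1280)
            f2 = densenet.predict(images, verbose=0)   # (n, 1920)
            return np.concatenate([f1, f2], axis=1)

        # load_face_crops() is a hypothetical helper returning preprocessed image
        # batches and integer labels for the seven emotions; it is not a real API.
        X_train, y_train, X_test, y_test = load_face_crops()
        F_train, F_test = fused_features(X_train), fused_features(X_test)

        clf = make_pipeline(
            NeighborhoodComponentsAnalysis(n_components=128, random_state=0),  # NCA step
            KNeighborsClassifier(n_neighbors=5),
        )
        clf.fit(F_train, y_train)
        print("emotion recognition accuracy:", clf.score(F_test, y_test))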

    On the Utility of Representation Learning Algorithms for Myoelectric Interfacing

    Get PDF
    Electrical activity produced by muscles during voluntary movement is a reflection of the firing patterns of relevant motor neurons and, by extension, the latent motor intent driving the movement. Once transduced via electromyography (EMG) and converted into digital form, this activity can be processed to provide an estimate of the original motor intent and is as such a feasible basis for non-invasive efferent neural interfacing. EMG-based motor intent decoding has so far received the most attention in the field of upper-limb prosthetics, where alternative means of interfacing are scarce and the utility of better control apparent. Although myoelectric prostheses have been available since the 1960s, available EMG control interfaces still lag behind the mechanical capabilities of the artificial limbs they are intended to steer—a gap at least partially due to limitations in current methods for translating EMG into appropriate motion commands. As the relationship between EMG signals and concurrent effector kinematics is highly non-linear and apparently stochastic, finding ways to accurately extract and combine relevant information from across electrode sites is still an active area of inquiry. This dissertation comprises an introduction and eight papers that explore issues afflicting the status quo of myoelectric decoding and possible solutions, all related through their use of learning algorithms and deep Artificial Neural Network (ANN) models. Paper I presents a Convolutional Neural Network (CNN) for multi-label movement decoding of high-density surface EMG (HD-sEMG) signals. Inspired by the successful use of CNNs in Paper I and the work of others, Paper II presents a method for automatic design of CNN architectures for use in myocontrol. Paper III introduces an ANN architecture with an appertaining training framework from which simultaneous and proportional control emerges. Paper IV introduces a dataset of HD-sEMG signals for use with learning algorithms. Paper V applies a Recurrent Neural Network (RNN) model to decode finger forces from intramuscular EMG. Paper VI introduces a Transformer model for myoelectric interfacing that does not need additional training data to function with previously unseen users. Paper VII compares the performance of a Long Short-Term Memory (LSTM) network to that of classical pattern recognition algorithms. Lastly, Paper VIII describes a framework for synthesizing EMG from multi-articulate gestures intended to reduce training burden.

    Multi-label Classification Using Vector Generalized Additive Model via Cross-Validation

    Get PDF
    Multi-label classification is a distinct challenge in machine learning in which two targets, each containing one or more classes, must be predicted. This problem can be addressed with several methods, including classifying the targets individually or simultaneously. However, most models cannot classify the targets simultaneously, which is a limitation from a modeling standpoint. This study proposes a novel solution, a Vector Generalized Additive Model using Cross-Validation (VGAMCV), to address these problems. The proposed method leverages the Vector Generalized Additive Model (VGAM), a semi-parametric model combining parametric and non-parametric components, as the underlying base model. Cross-validation was applied to tune the parameters and optimize the performance of the method. The methodology of VGAMCV was compared with a tree-based model, Random Forest, commonly used in multi-label classification, to evaluate its effectiveness based on fourteen metric scores. The results showed positive outcomes, with an average accuracy of 0.703 and an Area Under Curve (AUC) of 0.601, although these improvements were not statistically significant. Nevertheless, the method offers a viable alternative for multi-label classification tasks, and its introduction contributes to the expanding repertoire of methods available for this purpose.
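
    VGAM models are typically fitted with the R package of the same name. As an illustration of the cross-validated multi-label evaluation protocol the study describes, the Python sketch below scores only the Random Forest comparison baseline on a synthetic two-target problem with accuracy and AUC; the dataset, fold count, and metric choices are assumptions, not the study's setup.

        import numpy as np
        from sklearn.datasets import make_multilabel_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import accuracy_score, roc_auc_score
        from sklearn.model_selection import KFold
        from sklearn.multioutput import MultiOutputClassifier

        # Synthetic stand-in for a two-target multi-label problem (not the study's data).
        X, Y = make_multilabel_classification(n_samples=500, n_features=20,
                                              n_classes=2, random_state=0)

        accs, aucs = [], []
        for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
            model = MultiOutputClassifier(RandomForestClassifier(random_state=0))
            model.fit(X[train_idx], Y[train_idx])
            pred = model.predict(X[test_idx])
            # predict_proba returns one (n, 2) array per target; keep P(label = 1).
            proba = np.column_stack([p[:, 1] for p in model.predict_proba(X[test_idx])])
            accs.append(accuracy_score(Y[test_idx], pred))   # subset accuracy
            aucs.append(roc_auc_score(Y[test_idx], proba, average="macro"))

        print(f"mean accuracy {np.mean(accs):.3f}, mean AUC {np.mean(aucs):.3f}")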

    Algorithm selection using edge ML and case-based reasoning

    Get PDF
    In practical data mining, a wide range of classification algorithms is employed for prediction tasks. However, selecting the best algorithm poses a challenging task for machine learning practitioners and experts, primarily due to the inherent variability in the characteristics of classification problems, referred to as datasets, and the unpredictable performance of these algorithms. Dataset characteristics are quantified in terms of meta-features, while classifier performance is evaluated using various performance metrics. The assessment of classifiers through empirical methods across multiple classification datasets, while considering multiple performance metrics, presents a computationally expensive and time-consuming obstacle in the pursuit of selecting the optimal algorithm. Furthermore, the scarcity of sufficient training data, denoted by dimensions representing the number of datasets and the feature space described by meta-feature perspectives, adds further complexity to the process of algorithm selection using classical machine learning methods. This research paper presents an integrated framework called eML-CBR that combines edge-ML and case-based reasoning methodologies to accurately address the algorithm selection problem. It adapts a multi-level, multi-view case-based reasoning methodology, considering data from diverse feature dimensions and algorithms from multiple performance aspects, and distributes computations to both cloud edges and centralized nodes. On the edge, the first-level reasoning employs machine learning methods to recommend a family of classification algorithms, while at the second level it recommends a list of the top-k algorithms within that family. This list is further refined by an algorithm conflict resolver module. The eML-CBR framework offers a suite of contributions, including integrated algorithm selection, multi-view meta-feature extraction, innovative performance criteria, improved algorithm recommendation, data scarcity mitigation through incremental learning, and an open-source CBR module, reshaping research paradigms. The CBR module, trained on 100 datasets and tested with 52 datasets using 9 decision tree algorithms, achieved an accuracy of 94% for correct classifier recommendations within the top k=3 algorithms, making it highly suitable for practical classification applications.
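
    The retrieval idea behind the case-based reasoning step can be sketched compactly: find the stored datasets whose meta-feature vectors are closest to the query dataset and recommend the top-k algorithms that performed best on them. The snippet below illustrates that retrieval step only; the meta-features, case base, and algorithm names are hypothetical, and it does not reproduce eML-CBR's multi-level, edge-distributed design or its conflict resolver.

        import numpy as np

        def recommend_top_k(query_meta, case_meta, case_perf, algo_names, k=3, n_neighbors=5):
            # Case retrieval sketch: find the stored datasets (cases) whose
            # meta-feature vectors are closest to the query dataset, average the
            # algorithms' recorded performance over those cases, and return the
            # top-k algorithms.
            dists = np.linalg.norm(case_meta - query_meta, axis=1)
            nearest = np.argsort(dists)[:n_neighbors]
            mean_perf = case_perf[nearest].mean(axis=0)   # mean accuracy per algorithm
            ranked = np.argsort(mean_perf)[::-1][:k]
            return [(algo_names[i], float(mean_perf[i])) for i in ranked]

        # Hypothetical case base: 100 datasets x 8 meta-features, 9 candidate algorithms.
        rng = np.random.default_rng(0)
        case_meta = rng.random((100, 8))
        case_perf = rng.random((100, 9))
        algos = [f"DT-{i}" for i in range(1, 10)]
        print(recommend_top_k(rng.random(8), case_meta, case_perf, algos))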

    A review of technical factors to consider when designing neural networks for semantic segmentation of Earth Observation imagery

    Full text link
    Semantic segmentation (classification) of Earth Observation imagery is a crucial task in remote sensing. This paper presents a comprehensive review of technical factors to consider when designing neural networks for this purpose. The review focuses on Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs), and transformer models, discussing prominent design patterns for these ANN families and their implications for semantic segmentation. Common pre-processing techniques for ensuring optimal data preparation are also covered. These include methods for image normalization and chipping, as well as strategies for addressing data imbalance in training samples, and techniques for overcoming limited data, including augmentation techniques, transfer learning, and domain adaptation. By encompassing both the technical aspects of neural network design and the data-related considerations, this review provides researchers and practitioners with a comprehensive and up-to-date understanding of the factors involved in designing effective neural networks for semantic segmentation of Earth Observation imagery. Comment: 145 pages with 32 figures
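
    As an example of the chipping and normalization pre-processing the review covers, the sketch below cuts a large scene and its label mask into fixed-size tiles and applies per-band standardization; the tile size, band count, and the skipping of partial edge tiles are illustrative choices, not recommendations from the paper.

        import numpy as np

        def chip_image(image, mask, tile=256, stride=256):
            # Cut a large Earth Observation scene and its label mask into fixed-size
            # chips for training a segmentation network. `image` is (H, W, C) and
            # `mask` is (H, W); incomplete edge tiles are simply skipped here.
            chips, labels = [], []
            h, w = mask.shape
            for r in range(0, h - tile + 1, stride):
                for c in range(0, w - tile + 1, stride):
                    chips.append(image[r:r + tile, c:c + tile])
                    labels.append(mask[r:r + tile, c:c + tile])
            return np.stack(chips), np.stack(labels)

        def normalize(chips):
            # Per-band standardisation to zero mean and unit variance, a common
            # choice for multispectral imagery (min-max scaling is another option).
            mean = chips.mean(axis=(0, 1, 2), keepdims=True)
            std = chips.std(axis=(0, 1, 2), keepdims=True)
            return (chips - mean) / (std + 1e-8)

        scene = np.random.rand(1024, 1024, 4)           # e.g. a 4-band scene (illustrative)
        labels = np.random.randint(0, 5, (1024, 1024))  # 5 land-cover classes (illustrative)
        x, y = chip_image(scene, labels)
        x = normalize(x)
        print(x.shape, y.shape)                         # (16, 256, 256, 4) (16, 256, 256)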