13 research outputs found

    Parameters Optimization of Deep Learning Models using Particle Swarm Optimization

    Deep learning has been successfully applied in several fields, such as machine translation, manufacturing, and pattern recognition. However, successful application of deep learning depends on setting its parameters appropriately to achieve high-quality results. The number of hidden layers and the number of neurons in each layer of a deep learning network are two key parameters with a major influence on the performance of the algorithm. Manual parameter setting and grid search somewhat ease the user's task of setting these important parameters; nonetheless, both techniques can be very time-consuming. In this paper, we show that the particle swarm optimization (PSO) technique holds great potential for optimizing parameter settings, saving valuable computational resources during the tuning of deep learning models. Specifically, we use a dataset collected from a Wi-Fi campus network to train deep learning models that predict the number of occupants and their locations. Our preliminary experiments indicate that PSO provides a more efficient approach than grid search for tuning the number of hidden layers and the number of neurons in each layer of the deep learning algorithm. Our experiments illustrate that the exploration of the configuration landscape needed to find the optimal parameters is reduced by 77%-85%; in fact, PSO yields even better accuracy results.
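
    The abstract gives no code, so the following is a minimal, generic PSO sketch over the two integer hyperparameters it names (number of hidden layers, neurons per layer). The fitness function is a hypothetical stand-in for training a model on the Wi-Fi occupancy data and returning a validation error.

    ```python
    # Minimal PSO sketch for tuning (n_layers, n_neurons).
    import random

    def fitness(n_layers, n_neurons):
        # Hypothetical stand-in: in the paper this would be the validation
        # error of a deep model trained on the Wi-Fi occupancy dataset.
        return (n_layers - 3) ** 2 + (n_neurons - 64) ** 2 / 100.0

    def pso(n_particles=10, iters=30, w=0.7, c1=1.5, c2=1.5):
        bounds = [(1, 10), (8, 256)]  # (hidden layers, neurons per layer)
        pos = [[random.uniform(lo, hi) for lo, hi in bounds]
               for _ in range(n_particles)]
        vel = [[0.0, 0.0] for _ in range(n_particles)]
        pbest = [p[:] for p in pos]
        pbest_f = [fitness(round(p[0]), round(p[1])) for p in pos]
        g = pbest[pbest_f.index(min(pbest_f))][:]
        for _ in range(iters):
            for i, p in enumerate(pos):
                for d in range(2):
                    r1, r2 = random.random(), random.random()
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * r1 * (pbest[i][d] - p[d])
                                 + c2 * r2 * (g[d] - p[d]))
                    p[d] = min(max(p[d] + vel[i][d], bounds[d][0]), bounds[d][1])
                f = fitness(round(p[0]), round(p[1]))
                if f < pbest_f[i]:
                    pbest[i], pbest_f[i] = p[:], f
                    if f < fitness(round(g[0]), round(g[1])):
                        g = p[:]
        return round(g[0]), round(g[1])

    print(pso())  # converges toward (3, 64) for the toy fitness above
    ```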

    Artificial Intelligence in Medical Technology: A Survey Paper

    In this paper, we provide an overview of the application of artificial intelligence in the medical field, particularly for decision making and classification in diagnostics based on biomedical images. Several artificial intelligence (AI) technologies have proven capable of optimizing the classification of biomedical images. This study collects representative works showing how AI is used to solve problems in diagnostics. It also identifies the AI methods most frequently used to solve diagnostic problems, such as artificial neural networks, support vector machines, decision trees, and particle swarm optimization. Diagnostic problems that can be solved with these methods include MRI brain tumor analysis and breast cancer detection. Based on the survey the authors conducted, the most effective and efficient method for diagnosis in the medical field is the CNN; however, CNN requires a fairly large amount of data to perform classification.

    Study of Different Deep Learning Approach with Explainable AI for Screening Patients with COVID-19 Symptoms: Using CT Scan and Chest X-ray Image Dataset

    The outbreak of COVID-19 disease has caused more than 100,000 deaths so far in the USA alone. Initial screening of patients with COVID-19 symptoms is necessary to control the spread of the disease, but conducting tests with the available testing kits is becoming laborious due to the growing number of patients. Some studies have proposed CT scan or chest X-ray images as an alternative, so it is essential to use every available resource, rather than relying on either CT scans or chest X-rays alone, to conduct a large number of tests simultaneously. This study therefore aims to develop a deep learning-based model that can detect COVID-19 patients with better accuracy on both CT scan and chest X-ray image datasets. In this work, eight deep learning approaches, namely VGG16, InceptionResNetV2, ResNet50, DenseNet201, VGG19, MobileNetV2, NasNetMobile, and ResNet152V2, have been tested on two datasets: one containing 400 CT scan images and the other containing 400 chest X-ray images. In addition, Local Interpretable Model-agnostic Explanations (LIME) is used to explain the model's interpretability. Using LIME, the test results demonstrate that it is possible to interpret the top features, which should help build a trustworthy AI framework for distinguishing patients with COVID-19 symptoms from other patients. Comment: this is a work in progress; it should not be relied upon without context to guide clinical practice or health-related behavior, and should not be reported in news media as established information without consulting multiple experts in the field.
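
    As a concrete illustration of the LIME step the abstract describes, below is a hedged sketch using the lime package's image explainer. A real run would pass a trained classifier such as VGG16 and an actual CT or X-ray image; the predict_fn and random image here are toy stand-ins so the snippet stays self-contained.

    ```python
    # Hedged LIME sketch: explain which image regions drive a prediction.
    import numpy as np
    from lime import lime_image
    from skimage.segmentation import mark_boundaries

    def predict_fn(images):
        # Toy stand-in for model.predict(): scores class 1 higher for
        # brighter images; shape in (n, H, W, 3), shape out (n, 2).
        brightness = images.mean(axis=(1, 2, 3))
        p = 1.0 / (1.0 + np.exp(-(brightness - 0.5) * 10.0))
        return np.stack([1.0 - p, p], axis=1)

    image = np.random.rand(224, 224, 3)  # placeholder for a CT/X-ray image
    explainer = lime_image.LimeImageExplainer()
    explanation = explainer.explain_instance(
        image, predict_fn, top_labels=2, hide_color=0, num_samples=500)
    img, mask = explanation.get_image_and_mask(
        explanation.top_labels[0], positive_only=True, num_features=5)
    overlay = mark_boundaries(img, mask)  # highlights the top regions
    ```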

    Real-Time Induction Motor Health Index Prediction in A Petrochemical Plant using Machine Learning

    This paper presents real-time health prediction of induction motors (IMs) utilised in a petrochemical plant through the application of intelligent sensors and machine learning (ML) models. At present, the company's maintenance engineers implement time-based and condition-based maintenance techniques, periodically examining and diagnosing the health of IMs; nevertheless, sporadic breakdowns of IMs still occur. Such breakdowns sometimes force the entire production process to stop for emergency maintenance, resulting in a huge loss of revenue for the company. Hence, top management decided to switch operational practice to real-time predictive maintenance instead. Intelligent sensors are installed on the IMs to collect the necessary information on their working status. ML exploits the real-time information received from the intelligent sensors to flag abnormalities in the mechanical or electrical components of IMs before potential failures occur. Four ML models are investigated to evaluate which performs best: Artificial Neural Network (ANN), Particle Swarm Optimization (PSO), Gradient Boosting Tree (GBT), and Random Forest (RF). Standard performance metrics, including Precision, Recall, Accuracy, F1-score, and the AUC-ROC curve, are used to compare the relative effectiveness of the models. The results reveal that PSO not only obtains the highest average weighted Accuracy but also differentiates the statuses (Class 0 – Class 3) of the IM more correctly than the other models.
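
    To make the comparison step concrete, here is a hedged sketch scoring several candidate models with the metrics the paper lists. The data is synthetic; in the plant setting the features would come from the intelligent sensors and the labels would be the health classes 0-3. A PSO-based classifier is not a standard scikit-learn estimator, so only the ANN, GBT, and RF counterparts are shown.

    ```python
    # Compare classifiers on Precision/Recall/F1/Accuracy and AUC-ROC.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
    from sklearn.metrics import classification_report, roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Synthetic stand-in for sensor features labelled with classes 0-3.
    X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                               n_classes=4, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    models = {
        "ANN": MLPClassifier(max_iter=500, random_state=0),
        "GBT": GradientBoostingClassifier(random_state=0),
        "RF": RandomForestClassifier(random_state=0),
    }
    for name, m in models.items():
        m.fit(X_tr, y_tr)
        print(name, "AUC-ROC:",
              roc_auc_score(y_te, m.predict_proba(X_te), multi_class="ovr"))
        print(classification_report(y_te, m.predict(X_te)))
    ```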

    Trust-Based Cloud Machine Learning Model Selection For Industrial IoT and Smart City Services

    With Machine Learning (ML) services now used in a number of mission-critical, human-facing domains, ensuring the integrity and trustworthiness of ML models becomes all-important. In this work, we consider the paradigm in which cloud service providers collect big data from resource-constrained devices to build ML-based prediction models that are then sent back to be run locally on the intermittently connected, resource-constrained devices. Our proposed solution comprises an intelligent polynomial-time heuristic that maximizes the level of trust of ML models by selecting and switching among a subset of the ML models from a superset of models, maximizing trustworthiness while respecting the given reconfiguration budget/rate and reducing the cloud communication overhead. We evaluate the performance of our proposed heuristic using two case studies. First, we consider Industrial IoT (IIoT) services and, as a proxy for this setting, use the turbofan engine degradation simulation dataset to predict the remaining useful life of an engine. Our results in this setting show that the trust level of the selected models is 0.49% to 3.17% lower than the results obtained using Integer Linear Programming (ILP). Second, we consider Smart City services and, as a proxy for this setting, use an experimental transportation dataset to predict the number of cars. Our results show that the selected models' trust level is 0.7% to 2.53% lower than the ILP results. We also show that our proposed heuristic achieves an optimal competitive ratio in a polynomial-time approximation scheme for the problem.
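
    The abstract does not spell out the heuristic, so the sketch below is only a simplified, hypothetical greedy variant of the idea it describes: in each time slot, switch to the model with the highest trust only while a reconfiguration budget remains. All trust values are made up.

    ```python
    # Greedy, budget-aware model switching (hypothetical simplification).
    def select_models(trust_per_slot, budget, switch_cost=1):
        """trust_per_slot[t][m] = trust of model m in time slot t."""
        schedule, switches = [], 0
        current = max(range(len(trust_per_slot[0])),
                      key=lambda m: trust_per_slot[0][m])
        for slots in trust_per_slot:
            best = max(range(len(slots)), key=lambda m: slots[m])
            if best != current and switches + switch_cost <= budget:
                current, switches = best, switches + switch_cost
            schedule.append(current)
        return schedule, switches

    trust = [[0.90, 0.85], [0.70, 0.88], [0.91, 0.80]]  # 3 slots, 2 models
    print(select_models(trust, budget=1))  # -> ([0, 1, 1], 1)
    ```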

    Deep learning: parameter optimization using proposed novel hybrid bees Bayesian convolutional neural network

    Deep Learning (DL) is a type of machine learning used to model big data and extract complex relationships, with the advantage of automatic feature extraction. This paper presents a review of DL covering its network topologies along with their advantages, limitations, and applications. The most popular Deep Neural Network (DNN) is the Convolutional Neural Network (CNN), and the review finds that the most important open issue is designing better CNN topologies, which needs to be addressed to improve CNN performance further. This paper addresses the problem by proposing a novel nature-inspired hybrid algorithm that combines the Bees Algorithm (BA), which mimics the foraging behavior of honey bees, with Bayesian Optimization (BO) in order to increase the overall performance of the CNN; the result is referred to as BA-BO-CNN. Applying the hybrid algorithm to the Cifar10DataDir benchmark image data yielded an increase in validation accuracy from 80.72% to 82.22%. Applying it to digits datasets showed the same accuracy as the existing original CNN and BO-CNN but with a reduction in computational time of 3 min and 12 s, and applying it to concrete crack images produced results almost identical to those of existing algorithms.
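
    The abstract does not specify how BA and BO are coupled; the sketch below shows one plausible reading, in which bee-style scouting and local search warm-start scikit-optimize's gp_minimize. The objective is a toy surrogate for CNN validation error over two hypothetical hyperparameters (log10 learning rate, filter count), not the paper's actual setup.

    ```python
    # One possible BA + BO hybrid: bees explore, then a GP model refines.
    import random
    from skopt import gp_minimize

    def objective(x):
        lr_exp, n_filters = x
        # Toy surrogate for CNN validation error; minimum near (-3, 32).
        return (lr_exp + 3) ** 2 + (n_filters - 32) ** 2 / 50.0

    bounds = [(-5.0, -1.0), (8, 64)]
    # Scout bees sample random sites; foragers search around elite sites.
    sites = [[random.uniform(*bounds[0]), random.randint(*bounds[1])]
             for _ in range(8)]
    sites.sort(key=objective)
    for s in sites[:3]:                        # elite sites
        for _ in range(4):                     # recruited foragers
            n = [min(max(s[0] + random.uniform(-0.2, 0.2), -5.0), -1.0),
                 min(max(s[1] + random.randint(-4, 4), 8), 64)]
            if objective(n) < objective(s):
                s[:] = n
    # Bayesian optimisation warm-started with the bee-evaluated points.
    res = gp_minimize(objective, bounds, x0=sites,
                      y0=[objective(s) for s in sites],
                      n_calls=20, random_state=0)
    print(res.x, res.fun)
    ```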

    Hyperparameter optimization in decision tree algorithms using evolutionary computation

    Some machine learning algorithms are parameterizable: they allow the configuration of parameters in order to increase performance on a given task. In most cases, these parameters are found empirically by the developer. Another approach is to use an optimization technique to find an optimized set of parameters. The aim of this project is the application of evolutionary algorithms, namely the Genetic Algorithm (GA), the Fluid Genetic Algorithm (FGA), and the Genetic Algorithm using Theory of Chaos (GATC), to optimize the search for hyperparameters in decision tree algorithms. This work presents satisfactory results on the datasets tested, where the Classification and Regression Trees (CART) algorithm was used as the classifier. Decision trees generated from the default hyperparameter values are compared with those optimized by the proposed approach. We sought to optimize the accuracy and the final size of the generated tree, both of which were successfully optimized by the proposed algorithms.
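
    As a minimal illustration of the approach (a plain GA; FGA and GATC differ in their operators), the sketch below evolves three CART hyperparameters against a fitness that rewards cross-validated accuracy and penalizes tree size. The iris data and the 0.001 size penalty are stand-ins, not the paper's setup.

    ```python
    # Toy GA over CART hyperparameters: (max_depth, min_split, min_leaf).
    import random
    from sklearn.datasets import load_iris
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    BOUNDS = [(2, 20), (2, 20), (1, 10)]

    def fitness(g):
        clf = DecisionTreeClassifier(max_depth=g[0], min_samples_split=g[1],
                                     min_samples_leaf=g[2], random_state=0)
        acc = cross_val_score(clf, X, y, cv=5).mean()
        clf.fit(X, y)
        return acc - 0.001 * clf.tree_.node_count  # accuracy vs. tree size

    def mutate(g):
        g = list(g)
        i = random.randrange(3)
        g[i] = random.randint(*BOUNDS[i])
        return g

    pop = [[random.randint(*b) for b in BOUNDS] for _ in range(12)]
    for _ in range(15):                         # generations
        pop.sort(key=fitness, reverse=True)
        elite = pop[:4]                         # elitist selection
        pop = elite[:]
        while len(pop) < 12:
            a, b = random.sample(elite, 2)
            child = [random.choice(pair) for pair in zip(a, b)]  # crossover
            if random.random() < 0.3:
                child = mutate(child)
            pop.append(child)
    best = max(pop, key=fitness)
    print(best, fitness(best))
    ```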