2 research outputs found

    Optimizing horizontal scalability in cloud computing using simulated annealing for Internet of Things

    No full text
    The Internet of Things (IoT) connects sensor devices to the Internet to enable smarter communication. Today, many industries deploy IoT devices to build smart, intelligent environments. However, the sudden growth in demand has created a major challenge for IoT connections, known as scalability. Scalability refers to expanding the number of Internet-connected devices for a specific application. To address this issue, we propose simulated annealing-based horizontal scaling to accommodate IoT devices faster and more efficiently. We explore different horizontal scaling methods and propose a Markov chain process to model the scaling. We then use simulated annealing to optimize the scaling modeled by the Markov chain process. Our goal is to exploit the flexible nature of horizontal scalability for adding various IoT devices and resources as needed. We compare our proposed horizontal scalability optimization with vertical scalability, which has built-in elasticity. Evaluating several parameters, such as cost, service rate, and transfer rate, we find that our proposal outperforms existing methods.
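
    As a rough sketch of the idea described in this abstract, the example below uses simulated annealing over a one-step add/remove-instance move (a simple Markov-chain-style neighbor) to choose a horizontal scaling level. The arrival rate, service rate, cost constants, and the M/M/1-style delay proxy are illustrative assumptions, not the paper's actual cost model.

```python
# Minimal sketch: simulated annealing for a horizontal scaling decision.
# All numeric parameters and the cost model are illustrative assumptions.
import math
import random

ARRIVAL_RATE = 500.0      # assumed IoT requests per second
SERVICE_RATE = 120.0      # assumed requests per second handled by one instance
COST_PER_INSTANCE = 0.05  # assumed cost per instance per unit time

def cost(n_instances: int) -> float:
    """Penalize over-provisioning (instance cost) and under-provisioning (queueing delay)."""
    utilization = ARRIVAL_RATE / (n_instances * SERVICE_RATE)
    if utilization >= 1.0:                               # unstable system: heavy penalty
        return 1e6
    delay_penalty = utilization / (1.0 - utilization)    # M/M/1-style delay proxy
    return COST_PER_INSTANCE * n_instances + delay_penalty

def simulated_annealing(n_start=1, n_max=50, t_start=10.0, t_min=1e-3, alpha=0.95):
    current = best = n_start
    t = t_start
    while t > t_min:
        # Neighbor move: add or remove one instance (a one-step Markov-chain transition).
        candidate = min(n_max, max(1, current + random.choice((-1, 1))))
        delta = cost(candidate) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = candidate
            if cost(current) < cost(best):
                best = current
        t *= alpha  # geometric cooling schedule
    return best

if __name__ == "__main__":
    print("suggested number of instances:", simulated_annealing())
```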

    A Feature Extraction Using Probabilistic Neural Network and BTFSC-Net Model with Deep Learning for Brain Tumor Classification

    No full text
    Background and Objectives: Brain Tumor Fusion-based Segments and Classification-Non-enhancing tumor (BTFSC-Net) is a hybrid system for classifying brain tumors that combines medical image fusion, segmentation, feature extraction, and classification procedures. Materials and Methods: To reduce noise in medical images, the hybrid probabilistic Wiener filter (HPWF) is first applied as a preprocessing step. Then, to combine robust edge analysis (REA) properties of magnetic resonance imaging (MRI) and computed tomography (CT) medical images, a fusion network based on deep learning convolutional neural networks (DLCNN) is developed. Here, the slopes and borders of the brain images are detected using REA. To separate the diseased region from the color image, adaptive fuzzy c-means integrated k-means (HFCMIK) clustering is then applied. To extract hybrid features from the fused image, low-level features based on the redundant discrete wavelet transform (RDWT), empirical color features, and texture characteristics based on the gray-level co-occurrence matrix (GLCM) are used. Finally, to distinguish between benign and malignant tumors, a deep learning probabilistic neural network (DLPNN) is deployed. Results: According to the findings, the proposed BTFSC-Net model performed better than more traditional preprocessing, fusion, segmentation, and classification techniques. Additionally, 99.21% segmentation accuracy and 99.46% classification accuracy were achieved with the proposed BTFSC-Net model. Conclusions: Earlier approaches have not performed as well as the presented method for image fusion, segmentation, feature extraction, and brain tumor classification. These results show that the designed approach is more effective in terms of quantitative evaluation, with better accuracy as well as visual performance.
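
    To make two of the stages mentioned above concrete, the sketch below computes GLCM texture features with scikit-image and classifies them with a minimal Parzen-window probabilistic neural network. It is not the BTFSC-Net pipeline: the synthetic images, the chosen GLCM properties, and the kernel width sigma are assumptions for illustration only.

```python
# Illustrative sketch: GLCM texture features plus a basic PNN classifier.
# Synthetic data and all parameters are assumptions, not the paper's setup.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_image: np.ndarray) -> np.ndarray:
    """Contrast, homogeneity, energy, and correlation from a uint8 grayscale image."""
    glcm = graycomatrix(gray_image, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

class SimplePNN:
    """Parzen-window PNN: one Gaussian kernel per training sample, averaged per class."""
    def __init__(self, sigma: float = 2.0):
        self.sigma = sigma

    def fit(self, X: np.ndarray, y: np.ndarray):
        self.X, self.y = X, y
        self.classes = np.unique(y)
        return self

    def predict(self, X: np.ndarray) -> np.ndarray:
        preds = []
        for x in X:
            d2 = np.sum((self.X - x) ** 2, axis=1)          # squared distances to training samples
            k = np.exp(-d2 / (2 * self.sigma ** 2))          # Gaussian kernel responses
            scores = [k[self.y == c].mean() for c in self.classes]
            preds.append(self.classes[int(np.argmax(scores))])
        return np.array(preds)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-ins for fused slices (label 0 = benign, 1 = malignant).
    images = rng.integers(0, 256, size=(20, 64, 64), dtype=np.uint8)
    labels = rng.integers(0, 2, size=20)
    features = np.array([glcm_features(img) for img in images])
    features = (features - features.mean(0)) / (features.std(0) + 1e-9)  # standardize
    clf = SimplePNN(sigma=2.0).fit(features, labels)
    print(clf.predict(features[:5]))
```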