
    A New Modified Binary Differential Evolution Algorithm and its Applications

    This paper proposes a novel binary version of the Differential Evolution algorithm (NBDE) for solving combinatorial optimization problems with binary variables. A new binary mutation rule is introduced, derived from the table of the basic DE mutation strategy with the scaling factor F set to 1; the eight possible combinations of the three randomly selected binary-encoded individuals are deduced from this table. The developed mutation operator enables NBDE to explore and exploit the search space efficiently and effectively, as verified in applications to discrete optimization problems. Numerical experiments and comparisons on the One-Max problem and on Knapsack problems of two different sizes demonstrate that NBDE outperforms existing algorithms in terms of final solution quality, search efficiency, and robustness.
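    The abstract does not reproduce the paper's binary mutation table, so the following is a minimal sketch of one plausible reading: with F = 1, the DE/rand/1 rule v = x_r1 + (x_r2 - x_r3) taken modulo 2 reduces to XOR logic over the three randomly selected bit strings. The crossover, selection, and One-Max fitness shown here are standard DE components, not the paper's exact procedure.

```python
import numpy as np

def binary_mutation(pop, i, rng):
    """One plausible binary analogue of DE/rand/1 with F = 1:
    v = x_r1 + (x_r2 - x_r3) taken modulo 2, i.e. XOR logic over bits.
    (Illustrative only; the paper derives its own rule from a table.)"""
    n = len(pop)
    r1, r2, r3 = rng.choice([k for k in range(n) if k != i], size=3, replace=False)
    return np.bitwise_xor(pop[r1], np.bitwise_xor(pop[r2], pop[r3]))

def onemax(x):
    """One-Max fitness: the number of ones in the bit string."""
    return int(x.sum())

def nbde_like(dim=30, pop_size=20, cr=0.8, generations=100, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.integers(0, 2, size=(pop_size, dim))
    fit = np.array([onemax(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            v = binary_mutation(pop, i, rng)
            mask = rng.random(dim) < cr          # binomial crossover
            mask[rng.integers(dim)] = True       # keep at least one mutant gene
            trial = np.where(mask, v, pop[i])
            f = onemax(trial)
            if f >= fit[i]:                      # greedy selection
                pop[i], fit[i] = trial, f
    return pop[fit.argmax()], int(fit.max())

best, score = nbde_like()
print(score)  # approaches 30 (the optimum) on One-Max
```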

    A Generalized National Planning Approach for Admission Capacity in Higher Education: A Nonlinear Integer Goal Programming Model with a Novel Differential Evolution Algorithm

    This paper proposes a nonlinear integer goal programming model (NIGPM) for solving the general problem of admission capacity planning for a country as a whole. The work aims to satisfy the key national objectives related to the enrollment problem in higher education. The general outlines of the system are developed, along with a solution methodology applicable over the time horizon of a given plan. Up-to-date data for Saudi Arabia are used as a case study, and a novel evolutionary algorithm based on a modified differential evolution (DE) algorithm is used to handle the complexity of the NIGPM instances generated for different goal priorities. The experimental results presented in this paper show the effectiveness of the approach in planning higher-education admission capacity in terms of final solution quality and robustness.
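    The concrete goals, constraints, and modified DE operators used for the Saudi case study are not given in this abstract; the sketch below is a hypothetical, much-simplified illustration of the overall pipeline only: a weighted goal-programming objective over integer admission variables, minimized with a plain DE/rand/1/bin loop. All targets, weights, and bounds are made-up numbers.

```python
import numpy as np

# Hypothetical illustration: two admission variables (e.g. two study tracks),
# each with a goal target; weighted deviations from the goals are minimized.
GOALS = np.array([12000, 8000])      # target enrolments per track (made-up)
WEIGHTS = np.array([2.0, 1.0])       # goal priorities expressed as weights
CAPACITY = 18000                     # total seats available (made-up)

def goal_objective(x):
    """Weighted sum of deviations from the goals plus a capacity penalty."""
    x = np.round(x)                              # integer decision variables
    deviation = np.abs(x - GOALS)
    penalty = max(0.0, x.sum() - CAPACITY) * 10.0
    return float((WEIGHTS * deviation).sum() + penalty)

def de_minimize(obj, bounds, pop_size=30, f=0.7, cr=0.9, iters=300, seed=1):
    """Plain DE/rand/1/bin; the paper's modified DE is not reproduced here."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    fit = np.array([obj(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            r1, r2, r3 = rng.choice([k for k in range(pop_size) if k != i], 3, replace=False)
            v = np.clip(pop[r1] + f * (pop[r2] - pop[r3]), lo, hi)
            mask = rng.random(len(lo)) < cr
            mask[rng.integers(len(lo))] = True
            trial = np.where(mask, v, pop[i])
            ft = obj(trial)
            if ft <= fit[i]:
                pop[i], fit[i] = trial, ft
    best = fit.argmin()
    return np.round(pop[best]), fit[best]

bounds = np.array([[0, 20000], [0, 20000]], dtype=float)
print(de_minimize(goal_objective, bounds))   # near-goal integer capacities
```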

    MTEDS: Multivariant Time Series-Based Encoder-Decoder System for Anomaly Detection

    Intrusion detection systems examine a computer or network for potential security vulnerabilities. Time series data are real-valued, and the nature of the data influences the type of anomaly detection required. Network anomalies are operations that deviate from the norm; they can cause a wide range of device malfunctions, overloads, and network intrusions, disrupting the network's normal operation and services. This paper proposes a new multivariate time series-based encoder-decoder system for dealing with anomalies in time series data with multiple variables. A radical loss function is defined to update the network weights via backpropagation, and anomaly scores are used to evaluate performance. According to the findings, the anomaly score is more stable and traceable, with fewer false positives and false negatives. The proposed system's efficiency is compared with three existing approaches: the Multi-Scale Convolutional Recurrent Encoder-Decoder, the Autoregressive Moving Average model, and the Long Short-Term Memory Encoder-Decoder. The results show that the proposed technique achieves the highest precision of 1 at a noise level of 0.2, maintains greater precision at noise factors of 0.25, 0.3, 0.35, and 0.4, and thus demonstrates its effectiveness.
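    The abstract does not specify the encoder-decoder architecture or the "radical" loss function, so the following is a minimal sketch under common assumptions: an LSTM encoder-decoder reconstructs multivariate windows, MSE stands in for the paper's loss, and the anomaly score is the per-window reconstruction error.

```python
import torch
import torch.nn as nn

class SeqEncoderDecoder(nn.Module):
    """Minimal multivariate encoder-decoder; LSTM layers and MSE are stand-ins
    for the paper's unspecified architecture and loss."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):                      # x: (batch, time, features)
        _, (h, _) = self.encoder(x)
        # repeat the final hidden state across the time axis and decode
        z = h[-1].unsqueeze(1).repeat(1, x.size(1), 1)
        dec, _ = self.decoder(z)
        return self.out(dec)

def anomaly_scores(model, x):
    """Anomaly score = mean reconstruction error over time steps and variables."""
    with torch.no_grad():
        recon = model(x)
        return ((recon - x) ** 2).mean(dim=(1, 2))

# toy usage: 128 windows, 30 time steps, 5 variables of synthetic "normal" data
x = torch.randn(128, 30, 5)
model = SeqEncoderDecoder(n_features=5)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):                             # brief reconstruction training
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), x)
    loss.backward()
    opt.step()
print(anomaly_scores(model, x)[:5])            # higher score = more anomalous
```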

    Interpretable Deep Learning for Discriminating Pneumonia from Lung Ultrasounds

    Lung ultrasound imaging has shown great promise as an operative point-of-care test for the diagnosis of COVID-19 because the procedure is easy to perform, requires negligible personal protective equipment, and needs only simple disinfection. Deep learning (DL) is a robust tool for modeling infection patterns from medical images; however, existing COVID-19 detection models are complex and therefore hard to deploy on the mobile platforms frequently used in point-of-care testing. Moreover, most DL-based COVID-19 detection models in the existing literature are implemented as black boxes, so they are hard for the healthcare community to interpret or trust. This paper presents a novel interpretable DL framework that discriminates COVID-19 infection from other pneumonia cases and normal cases using patient ultrasound data. In the proposed framework, novel transformer modules are introduced to model the pathological information in ultrasound frames using an improved window-based multi-head self-attention layer. A convolutional patching module is introduced to transform input frames into a latent space rather than partitioning the input into patches, and a weighted pooling module scores the embeddings of the disease representations obtained from the transformer modules so that the information most valuable for the screening decision is attended to. Experimental analysis on the public three-class lung ultrasound dataset (PCUS dataset) demonstrates the discriminative power of the proposed solution (accuracy: 93.4%, F1-score: 93.1%, AUC: 97.5%), which outperforms competing approaches while maintaining low complexity. The proposed model obtained very promising results in comparison with rival models; more importantly, it produces explainable outputs and can therefore serve as a candidate tool for empowering the sustainable diagnosis of COVID-19-like diseases in smart healthcare.
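    As a rough illustration of two of the named components, the sketch below shows a convolutional patching module (convolutions produce the token sequence instead of hard patch splitting) and a weighted pooling module that scores token embeddings before classification. The layer sizes are guesses, and a standard TransformerEncoder stands in for the paper's improved window-based multi-head self-attention.

```python
import torch
import torch.nn as nn

class ConvPatching(nn.Module):
    """Map an ultrasound frame to a latent token sequence with convolutions
    instead of hard patch splitting (channel counts and strides are guesses)."""
    def __init__(self, embed_dim=96):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, embed_dim // 2, 3, stride=2, padding=1), nn.GELU(),
            nn.Conv2d(embed_dim // 2, embed_dim, 3, stride=2, padding=1),
        )

    def forward(self, x):                      # x: (batch, 1, H, W)
        z = self.conv(x)                       # (batch, C, H/4, W/4)
        return z.flatten(2).transpose(1, 2)    # token sequence (batch, N, C)

class WeightedPooling(nn.Module):
    """Score each token embedding and take a weighted sum, so the classifier
    attends to the most informative regions of the frame."""
    def __init__(self, embed_dim=96):
        super().__init__()
        self.score = nn.Linear(embed_dim, 1)

    def forward(self, tokens):                 # tokens: (batch, N, C)
        w = torch.softmax(self.score(tokens), dim=1)
        return (w * tokens).sum(dim=1)         # (batch, C)

# toy usage: a plain transformer encoder stands in for window-based MHSA
patcher, pooler = ConvPatching(), WeightedPooling()
encoder = nn.TransformerEncoder(nn.TransformerEncoderLayer(96, nhead=4, batch_first=True), 2)
head = nn.Linear(96, 3)                        # COVID-19 / other pneumonia / normal
frames = torch.randn(8, 1, 64, 64)
logits = head(pooler(encoder(patcher(frames))))
print(logits.shape)                            # torch.Size([8, 3])
```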

    An efficient algorithm for data parallelism based on stochastic optimization

    Deep neural network models can achieve greater performance in numerous machine learning tasks by increasing the depth of the model and the amount of training data. However, these measures proportionally raise the cost of training. Accelerating the training of deep neural network models in a distributed computing environment has therefore become the most commonly used strategy for coping with the huge training overhead. Stochastic gradient descent (SGD) is currently one of the most widely used techniques for training network models, but it is prone to gradient obsolescence (staleness) during parallelization, which harms overall convergence. The majority of existing solutions are geared toward clusters of high-performance nodes with only minor performance differences between them; few studies have considered high-performance computing (HPC) cluster environments in which the performance of each node varies substantially. To address these difficulties, a dynamic batch size stochastic gradient descent approach based on performance awareness (DBS-SGD) is proposed. By assessing the processing capacity of each node, this method dynamically allocates each node's minibatch, guaranteeing that the per-iteration update time is essentially the same across nodes and reducing the average gradient staleness. The proposed approach can effectively mitigate the stale-gradient problem of the asynchronous update strategy. MNIST and CIFAR-10, two widely used image classification benchmarks, are employed as training data sets, and the approach is compared with the asynchronous stochastic gradient descent (ASGD) technique. The experimental findings demonstrate that the proposed algorithm performs better than existing algorithms.
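    The core performance-aware idea can be sketched in a few lines (the timing model and numbers below are assumptions, not the paper's): measure each node's throughput, split the global minibatch in proportion to it, and the per-iteration times roughly equalize.

```python
def allocate_batches(samples_per_sec, global_batch):
    """Split a global minibatch across nodes in proportion to measured throughput."""
    total = sum(samples_per_sec)
    shares = [rate / total for rate in samples_per_sec]
    batches = [max(1, round(global_batch * s)) for s in shares]
    # fix rounding drift so the allocations still sum to the global batch
    batches[batches.index(max(batches))] += global_batch - sum(batches)
    return batches

def iteration_times(samples_per_sec, batches):
    """Expected per-node iteration time; ideally near-equal after reallocation."""
    return [b / r for b, r in zip(batches, samples_per_sec)]

# toy cluster: one fast node, two mid nodes, one slow node (made-up rates)
rates = [400.0, 250.0, 240.0, 90.0]            # samples processed per second
static = [256] * 4                              # uniform split of a 1024 batch
dynamic = allocate_batches(rates, 1024)
print("static  times:", [round(t, 2) for t in iteration_times(rates, static)])
print("dynamic times:", [round(t, 2) for t in iteration_times(rates, dynamic)])
print("dynamic batches:", dynamic)
```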

    Dark Web Data Classification Using Neural Network

    Dark Web structural pattern mining faces several issues, including large amounts of redundant and irrelevant information, which hampers the tracking of the numerous types of cybercrime conducted there, such as illegal trade, illicit forums, terrorist activity, and illegal online shopping. Understanding online criminal behavior is challenging because the data are available in vast amounts. An approach is needed for learning criminal behavior and for refining labeled data into user profiles, because Dark Web structural pattern mining on multidimensional data sets gives uncertain results, and uncertain classification results make user behavior impossible to predict. Since multidimensional data contain mixed features, classification is adversely affected, and the sheer volume of Dark Web data has so far restricted solutions appropriate to this need. In this research, a fusion NN (neural network)-S3VM model for criminal network activity prediction is proposed; built on the neural network, NN-S3VM can improve the prediction.
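    The abstract gives no details of the NN-S3VM fusion, so the sketch below is only a loose approximation under stated substitutions: an MLP learns an embedding from the small labeled portion, and scikit-learn's SelfTrainingClassifier wrapped around an SVM stands in for a true S3VM (which scikit-learn does not provide) to exploit the unlabeled portion. The data is synthetic, not dark-web traffic.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

# synthetic stand-in data; 90% of labels are hidden to mimic scarce ground truth
X, y = make_classification(n_samples=2000, n_features=40, n_informative=10, random_state=0)
rng = np.random.default_rng(0)
unlabeled = rng.random(len(y)) < 0.9
y_semi = y.copy()
y_semi[unlabeled] = -1                       # -1 marks "unlabeled" samples

# stage 1: neural network trained on the labeled subset only
mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
mlp.fit(X[~unlabeled], y[~unlabeled])

def embed(mlp, X):
    """Activations of the last hidden layer, recomputed from the learned weights."""
    h = X
    for w, b in zip(mlp.coefs_[:-1], mlp.intercepts_[:-1]):
        h = np.maximum(h @ w + b, 0.0)       # ReLU hidden layers
    return h

# stage 2: semi-supervised SVM (self-training proxy for S3VM) on the NN embedding
s3vm_like = SelfTrainingClassifier(SVC(probability=True), threshold=0.9)
s3vm_like.fit(embed(mlp, X), y_semi)

print("labeled-only MLP accuracy:", round(mlp.score(X, y), 3))
print("NN + semi-supervised SVM accuracy:", round(s3vm_like.score(embed(mlp, X), y), 3))
```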

    Opportunities of IoT in Fog Computing for High Fault Tolerance and Sustainable Energy Optimization

    Today, the importance of enhanced quality of service and energy optimization has promoted research into sensor applications such as pervasive health monitoring and distributed computing. In general, the resulting sensor data are stored on a cloud server for future processing. For this purpose, fog computing has recently emerged in real-world deployments, utilizing end-user nodes and neighboring edge devices to perform computation and communication. This paper aims to develop a quality-of-service-based energy optimization (QoS-EO) scheme for wireless sensor environments deployed in fog computing. Fog nodes deployed in specific geographical areas cover the sensor activity performed in those areas and report the logical status of the entire system. The implemented techniques enable services in a fog-collaborated WSN environment; thus, the proposed scheme performs quality-of-service placement and optimizes the network energy. The results show a maximum turnaround time of 8 ms, a minimum turnaround time of 1 ms, and an average turnaround time of 3 ms. The calculated costs indicate that the path cost value decreases as the number of iterations increases, demonstrating the efficacy of the proposed technique. The CPU execution delay was reduced to a minimum of 0.06 s. In comparison, the proposed QoS-EO scheme has a lower network usage of 611,643.3 and a lower execution cost of 83,142.2. Thus, the results show the best cost estimation, reliability, and performance of data transfer in a short time, with a high level of network availability, throughput, and performance guarantee.
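    The abstract reports turnaround-time and cost figures but not the placement algorithm itself; the toy sketch below, with entirely made-up node parameters and a simple latency/energy model, only illustrates the general QoS-aware idea of placing each task on the cheapest-energy fog node that still meets its turnaround-time requirement.

```python
# Hypothetical fog/cloud nodes: compute speed, energy cost per million instructions,
# and network latency are all made-up numbers for illustration.
FOG_NODES = [
    {"name": "fog-A", "mips": 4000, "energy_per_mi": 0.008, "latency_ms": 2},
    {"name": "fog-B", "mips": 2500, "energy_per_mi": 0.005, "latency_ms": 4},
    {"name": "cloud", "mips": 20000, "energy_per_mi": 0.012, "latency_ms": 40},
]

def turnaround_ms(task_mi, node):
    """Network latency plus compute time for a task of task_mi million instructions."""
    return node["latency_ms"] + 1000.0 * task_mi / node["mips"]

def place(task_mi, deadline_ms):
    """Pick the feasible node with the lowest energy; fall back to the fastest node."""
    feasible = [n for n in FOG_NODES if turnaround_ms(task_mi, n) <= deadline_ms]
    if not feasible:
        return min(FOG_NODES, key=lambda n: turnaround_ms(task_mi, n))
    return min(feasible, key=lambda n: task_mi * n["energy_per_mi"])

for task_mi, deadline in [(5, 8), (20, 8), (200, 50)]:
    node = place(task_mi, deadline)
    print(f"{task_mi} MI -> {node['name']} "
          f"({turnaround_ms(task_mi, node):.1f} ms, "
          f"energy {task_mi * node['energy_per_mi']:.2f} units)")
```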

    NIPUNA: A Novel Optimizer Activation Function for Deep Neural Networks

    In recent years, deep neural networks with different learning paradigms have been widely employed in applications including medical diagnosis, image analysis, self-driving vehicles, and others. The activation functions employed in a deep neural network have a huge impact on training and on the reliability of the model. The Rectified Linear Unit (ReLU) has emerged as the most popular and extensively utilized activation function, but it has some flaws: it is active only when the units are positive during back-propagation and zero otherwise, which causes neurons to die (the dying-ReLU problem) and a shift in bias. Unlike ReLU, the Swish activation function is non-monotonic: it does not remain flat or move in a single direction. This research proposes a new activation function named NIPUNA for deep neural networks. We test this activation function by training customized convolutional neural networks (CCNNs). On benchmark datasets (the Fashion-MNIST images of clothing and the MNIST handwritten digits), the contributions are examined and compared with various activation functions. The proposed activation function can outperform traditional activation functions.
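    The NIPUNA formula itself is not given in this abstract, so the sketch below only shows how a custom activation module is swapped into a small CNN sized for MNIST/Fashion-MNIST inputs; a learnable Swish-style function is used purely as a placeholder body, not as the proposed activation.

```python
import torch
import torch.nn as nn

class CustomActivation(nn.Module):
    """Placeholder activation: Swish (x * sigmoid(beta * x)) with a learnable beta.
    The actual NIPUNA formula is not given in the abstract; this only demonstrates
    how a new activation module is dropped into a network."""
    def __init__(self):
        super().__init__()
        self.beta = nn.Parameter(torch.ones(1))

    def forward(self, x):
        return x * torch.sigmoid(self.beta * x)

def small_cnn(act_cls):
    """Tiny CNN for 28x28 grayscale images (MNIST / Fashion-MNIST sized)."""
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), act_cls(),
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), act_cls(),
        nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(32 * 7 * 7, 10),
    )

# compare activations on the same architecture (training loop omitted)
for act in (nn.ReLU, CustomActivation):
    model = small_cnn(act)
    x = torch.randn(4, 1, 28, 28)
    print(act.__name__, model(x).shape)        # torch.Size([4, 10])
```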