Evaluation of Sustainable Waste Valorization using TreeSoft Set with Neutrosophic Sets
This study proposes a neutrosophic set framework combined with the TreeSoft Set for selecting sustainable waste valorization options. The neutrosophic set is used to handle uncertainty and vague information in the evaluation process; it has three membership degrees: truth, indeterminacy, and falsity. A multi-criteria decision-making (MCDM) methodology is applied to handle the various criteria used to evaluate waste valorization, and the VIKOR method is used to rank the alternatives. A numerical example was constructed with 12 criteria and 10 alternatives, and three decision-makers and experts were invited to evaluate the criteria and alternatives. Bipolar neutrosophic numbers were used to represent the opinions of the experts.
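As a rough illustration of the ranking step described above, the sketch below scores neutrosophic judgements and orders alternatives with VIKOR. It is a minimal sketch, not the authors' implementation: it uses single-valued (truth, indeterminacy, falsity) triples rather than bipolar neutrosophic numbers, the score function, toy decision matrix, and criteria weights are illustrative assumptions, and the compromise-solution acceptability checks of full VIKOR are omitted.

```python
# Minimal sketch (not the authors' code): scoring single-valued neutrosophic
# judgements and ranking alternatives with VIKOR. The criteria weights, the
# benefit/cost split, and the score function s = (2 + T - I - F) / 3 are
# illustrative assumptions.
import numpy as np

def svnn_score(t, i, f):
    """Map a (truth, indeterminacy, falsity) triple to a crisp score in [0, 1]."""
    return (2.0 + t - i - f) / 3.0

def vikor(scores, weights, benefit, v=0.5):
    """Rank alternatives (rows) over criteria (columns) by the VIKOR Q index."""
    best = np.where(benefit, scores.max(axis=0), scores.min(axis=0))
    worst = np.where(benefit, scores.min(axis=0), scores.max(axis=0))
    norm = np.abs(best - scores) / np.where(best == worst, 1.0, np.abs(best - worst))
    S = (weights * norm).sum(axis=1)          # group utility
    R = (weights * norm).max(axis=1)          # individual regret
    Q = v * (S - S.min()) / (S.max() - S.min() + 1e-12) \
        + (1 - v) * (R - R.min()) / (R.max() - R.min() + 1e-12)
    return np.argsort(Q)                      # lower Q ranks first

# Toy decision matrix: 3 alternatives x 2 criteria, each cell a (T, I, F) triple.
matrix = np.array([[[0.8, 0.2, 0.1], [0.6, 0.3, 0.3]],
                   [[0.5, 0.4, 0.4], [0.9, 0.1, 0.1]],
                   [[0.7, 0.2, 0.2], [0.7, 0.2, 0.2]]])
crisp = svnn_score(matrix[..., 0], matrix[..., 1], matrix[..., 2])
ranking = vikor(crisp, weights=np.array([0.6, 0.4]), benefit=np.array([True, True]))
print("alternatives ordered best to worst:", ranking)
```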
A TreeSoft Set with Interval Valued Neutrosophic Set in the era of Industry 4.0
The introduction of Industry 4.0 has brought about a significant shift in the manufacturing and supply chain management sectors, requiring supplier selection procedures to be adjusted to this rapidly changing technical environment. This study aims to improve supplier selection under Industry 4.0. Because supplier selection involves multiple criteria, a multi-criteria decision-making (MCDM) approach is used to handle them. Interval-valued neutrosophic sets (IVNSs) are used to deal with uncertainty in the evaluation process and are integrated with the TreeSoft Set. The TOPSIS method is then used to rank the alternatives. The results show that the economic criterion is the most important and that supplier 7 is the best alternative.
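To make the evaluation pipeline concrete, the sketch below reduces interval-valued neutrosophic judgements to crisp scores and ranks suppliers with TOPSIS. It is a minimal sketch under stated assumptions, not the paper's implementation: the score function, the toy data, and the weights are illustrative, and the TreeSoft Set structuring of criteria is omitted.

```python
# Minimal sketch (not the paper's implementation): reducing interval-valued
# neutrosophic judgements to crisp scores and ranking suppliers with TOPSIS.
# The score function s = (4 + TL + TU - IL - IU - FL - FU) / 6 and the
# weights below are illustrative assumptions.
import numpy as np

def ivnn_score(x):
    """x = (TL, TU, IL, IU, FL, FU) -> crisp score in [0, 1]."""
    tl, tu, il, iu, fl, fu = x
    return (4.0 + tl + tu - il - iu - fl - fu) / 6.0

def topsis(decision, weights, benefit):
    """Closeness coefficient of each alternative (row) to the ideal solution."""
    norm = decision / np.linalg.norm(decision, axis=0)      # vector normalization
    v = norm * weights                                       # weighted matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)                           # higher is better

# Toy example: 3 suppliers x 2 criteria, each cell an IVNN tuple.
cells = [[(0.6, 0.8, 0.1, 0.2, 0.1, 0.2), (0.5, 0.7, 0.2, 0.3, 0.2, 0.3)],
         [(0.4, 0.6, 0.2, 0.4, 0.3, 0.4), (0.7, 0.9, 0.0, 0.1, 0.1, 0.2)],
         [(0.5, 0.7, 0.1, 0.3, 0.2, 0.3), (0.6, 0.8, 0.1, 0.2, 0.1, 0.2)]]
crisp = np.array([[ivnn_score(c) for c in row] for row in cells])
cc = topsis(crisp, weights=np.array([0.55, 0.45]), benefit=np.array([True, True]))
print("best supplier index:", int(np.argmax(cc)))
```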
An efficient algorithm for data parallelism based on stochastic optimization
Deep neural network models can achieve greater performance in numerous machine learning tasks by increasing the depth of the model and the amount of training data. However, these essential steps proportionally raise the cost of training deep neural network models. Accelerating the training of deep neural network models in a distributed computing environment has therefore become the most commonly used strategy for coping with the huge training overhead. Stochastic gradient descent (SGD) is one of the most widely used techniques for training such network models, but it is prone to gradient staleness during parallelization, which harms overall convergence. Most existing solutions target high-performance nodes with only minor performance differences; few studies have considered cluster environments in high-performance computing (HPC), where the performance of each node varies substantially. To address these difficulties, a dynamic batch size stochastic gradient descent approach based on performance-aware technology (DBS-SGD) is proposed. By assessing the processing capacity of each node, this method dynamically allocates the mini-batch of each node, guaranteeing that the per-iteration update time across nodes is essentially the same and lowering the average gradient staleness of the nodes. The proposed approach can effectively mitigate the outdated-gradient problem of the asynchronous update strategy. MNIST and CIFAR-10, two widely used image classification benchmarks, are employed as training data sets, and the approach is compared with the asynchronous stochastic gradient descent (ASGD) technique. The experimental findings demonstrate that the proposed algorithm performs better than existing algorithms.
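The core idea, allocating mini-batch sizes in proportion to each node's measured throughput so all workers finish an iteration at about the same time, can be sketched in a few lines. This is a minimal illustration of that allocation step only, not the authors' DBS-SGD code; the throughput figures and global batch size are assumed values.

```python
# Minimal sketch (not the authors' DBS-SGD code): allocating per-node mini-batch
# sizes in proportion to measured node throughput, so each worker finishes an
# iteration in roughly the same wall-clock time. The throughput numbers and the
# global batch size are illustrative assumptions.
def allocate_batches(throughputs, global_batch):
    """throughputs: samples/sec per node; returns per-node mini-batch sizes."""
    total = sum(throughputs)
    sizes = [max(1, round(global_batch * t / total)) for t in throughputs]
    # Fix rounding drift so the sizes still sum to the global batch.
    sizes[-1] += global_batch - sum(sizes)
    return sizes

# Example: a heterogeneous HPC cluster with fast and slow nodes.
throughputs = [1200.0, 800.0, 400.0]      # measured samples/sec per node
print(allocate_batches(throughputs, global_batch=512))
# -> [256, 171, 85]: the slow node gets a smaller mini-batch, so all nodes
#    deliver their gradients at about the same time and staleness drops.
```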
Evolutionary framework with reinforcement learning-based mutation adaptation
Although several multi-operator and multi-method approaches for solving optimization problems have been proposed, their performance is not consistent across a wide range of optimization problems. Moreover, ensuring an appropriate selection of algorithms and operators can be inefficient, since their designs are undertaken mainly through trial and error. This research proposes an improved optimization framework that combines the benefits of multiple algorithms, namely a multi-operator differential evolution algorithm and a covariance matrix adaptation evolution strategy. In the former, reinforcement learning is used to automatically choose the best differential evolution operator. To judge the performance of the proposed framework, three benchmark sets of bound-constrained optimization problems (73 problems) with 10, 30 and 50 dimensions are solved. The proposed algorithm is further tested on 100-dimensional optimization problems taken from the CEC2014 and CEC2017 benchmark sets, as well as on a real-world application data set. Several experiments are designed to analyze the effects of different components of the proposed framework, and the best variant is compared with a number of state-of-the-art algorithms. The experimental results show that the proposed algorithm is able to outperform all the others considered.
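One way to picture learning-based operator selection is to treat the choice of DE mutation operator as a multi-armed bandit. The sketch below is not the paper's algorithm: the operator set, the epsilon-greedy rule, the success-based reward, and the omission of crossover and bound handling are all simplifying assumptions made for brevity.

```python
# Minimal sketch of the idea, not the paper's framework: selecting among DE
# mutation operators with an epsilon-greedy rule, rewarding an operator each
# time its trial vector improves on the parent. Crossover is omitted to keep
# the sketch short; the sphere function stands in for a real benchmark.
import random

import numpy as np

OPERATORS = ["rand/1", "best/1", "current-to-best/1"]

def mutate(pop, best, i, op, F=0.5):
    r1, r2, r3 = np.random.choice([k for k in range(len(pop)) if k != i], 3, replace=False)
    if op == "rand/1":
        return pop[r1] + F * (pop[r2] - pop[r3])
    if op == "best/1":
        return best + F * (pop[r1] - pop[r2])
    return pop[i] + F * (best - pop[i]) + F * (pop[r1] - pop[r2])   # current-to-best/1

def sphere(x):
    return float(np.sum(x ** 2))

rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, size=(20, 10))
fit = np.array([sphere(x) for x in pop])
q = {op: 0.0 for op in OPERATORS}          # running reward per operator
n = {op: 1 for op in OPERATORS}
eps = 0.1

for gen in range(200):
    best = pop[np.argmin(fit)].copy()
    for i in range(len(pop)):
        op = random.choice(OPERATORS) if random.random() < eps \
             else max(OPERATORS, key=lambda o: q[o])
        trial = mutate(pop, best, i, op)
        f_trial = sphere(trial)
        reward = 1.0 if f_trial < fit[i] else 0.0
        n[op] += 1
        q[op] += (reward - q[op]) / n[op]  # incremental mean of the reward
        if f_trial < fit[i]:
            pop[i], fit[i] = trial, f_trial

print("best fitness:", fit.min(), "| operator values:", q)
```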
Task Scheduling Approach in Cloud Computing Environment Using Hybrid Differential Evolution
Task scheduling is one of the most significant challenges in the cloud computing environment and has attracted the attention of many researchers over the last decades seeking cost-effective execution and improved resource utilization. Task scheduling is categorized as a nondeterministic polynomial time (NP)-hard problem, which cannot be tackled with classical methods due to their inability to find a near-optimal solution within a reasonable time. Metaheuristic algorithms have therefore recently been employed to overcome this problem, but they still suffer from falling into local minima and from low convergence speed. In this study, a new task scheduler, known as hybrid differential evolution (HDE), is presented as a solution to the task scheduling challenge in the cloud computing environment. This scheduler is based on two proposed enhancements to traditional differential evolution. The first improvement makes the scaling factor dynamic, generating its numerical values from the current iteration in order to strengthen both the exploration and exploitation operators; the second improvement targets the exploitation operator of classical DE, in order to achieve better results in fewer iterations. Multiple tests using randomly generated datasets and the CloudSim simulator were conducted to demonstrate the efficacy of HDE. In addition, HDE was compared to a variety of heuristic and metaheuristic algorithms, including the slime mold algorithm (SMA), equilibrium optimizer (EO), sine cosine algorithm (SCA), whale optimization algorithm (WOA), grey wolf optimizer (GWO), classical DE, first come first served (FCFS), round robin (RR), and shortest job first (SJF) schedulers. During the trials, makespan and total execution time values were acquired for various task sizes ranging from 100 to 3000. Compared to the other metaheuristic and heuristic algorithms considered, HDE produced superior outcomes and was found to be the most efficient metaheuristic scheduling algorithm among the methods studied.
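The first enhancement, a scaling factor that depends on the current iteration, can be sketched as follows. This is a minimal illustration of the general idea rather than the HDE formula: the linear decay from 0.9 to 0.1 and the per-generation random draw are assumptions.

```python
# Minimal sketch of an iteration-dependent scaling factor for DE, not the HDE
# code: F is re-drawn each generation from a range whose upper bound shrinks
# with the iteration counter, favouring exploration early and exploitation
# late. The decay schedule and bounds are illustrative assumptions.
import random

def dynamic_scaling_factor(t, t_max, f_min=0.1, f_max=0.9):
    """Return a scaling factor F for generation t of t_max generations."""
    upper = f_max - (f_max - f_min) * (t / t_max)   # upper bound decays linearly
    return random.uniform(f_min, upper)

# Usage inside a DE loop: F changes every generation instead of staying fixed.
t_max = 1000
for t in range(t_max):
    F = dynamic_scaling_factor(t, t_max)
    # ... apply DE mutation, e.g. v = x_r1 + F * (x_r2 - x_r3), then crossover
    #     and greedy selection against the makespan objective ...
```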
An Optimization Model for Appraising Intrusion-Detection Systems for Network Security Communications: Applications, Challenges, and Solutions
Cyber-attacks are becoming increasingly complex, and as a result the functional concerns of intrusion-detection systems (IDSs) are becoming increasingly difficult to resolve. The credibility of security services, such as privacy preservation, authenticity, and accessibility, may be jeopardized if breaches are not detected. Different organizations currently employ a variety of tactics, strategies, and technologies to protect their systems' credibility and combat these dangers. Safeguarding approaches include establishing rules and procedures, developing user awareness, deploying firewall and verification systems, regulating system access, and forming computer-issue management groups. However, the effectiveness of intrusion-detection systems is not sufficiently assessed. IDSs are used in organizations to examine potentially harmful activity occurring in their technological environments. Determining an effective IDS is a complex task that requires organizations to consider many key criteria and their sub-aspects. To deal with these multiple and interrelated criteria and their sub-aspects, a multi-criteria decision-making (MCDM) approach was applied. Because these criteria and sub-aspects can also involve ambiguity and uncertainty, they were treated using q-rung orthopair fuzzy sets (q-ROFSs) and q-rung orthopair fuzzy numbers (q-ROFNs). Additionally, the problem of combining expert and specialist opinions was handled with the q-rung orthopair fuzzy weighted geometric (q-ROFWG) operator. First, the entropy method was applied to assess the priorities of the key criteria and their sub-aspects. Then, the combined compromise solution (CoCoSo) method was applied to evaluate six IDSs according to their effectiveness and reliability. Finally, comparative and sensitivity analyses were performed to confirm the stability, reliability, and performance of the proposed approach. The findings indicate that most of the IDSs appear to be systems with high potential. According to the results, Suricata is the best IDS, owing largely to its multi-threading performance.
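The entropy-weighting and CoCoSo steps mentioned above can be illustrated on a crisp decision matrix. The sketch below is not the paper's q-ROF pipeline: the fuzzy aggregation step is omitted, all criteria are treated as benefit criteria, and the sample numbers are invented for illustration only.

```python
# Minimal sketch (not the paper's q-ROF pipeline): Shannon-entropy criterion
# weights followed by CoCoSo ranking on an already-defuzzified decision matrix.
# Treating every criterion as a benefit criterion and the toy data are
# illustrative assumptions.
import numpy as np

def entropy_weights(x):
    """Entropy-based weights for a decision matrix x (alternatives x criteria)."""
    p = x / x.sum(axis=0)
    e = -(p * np.log(p + 1e-12)).sum(axis=0) / np.log(len(x))
    d = 1.0 - e                               # degree of divergence per criterion
    return d / d.sum()

def cocoso(x, w, lam=0.5):
    """Combined Compromise Solution scores; higher means a better alternative."""
    r = (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0))   # benefit normalization
    S = (r * w).sum(axis=1)                   # weighted sum measure
    P = (r ** w).sum(axis=1)                  # weighted power measure
    ka = (S + P) / (S + P).sum()
    kb = S / S.min() + P / P.min()
    kc = (lam * S + (1 - lam) * P) / (lam * S.max() + (1 - lam) * P.max())
    return (ka * kb * kc) ** (1 / 3) + (ka + kb + kc) / 3

# Toy example: 4 IDS alternatives scored against 3 criteria.
x = np.array([[0.7, 0.6, 0.8],
              [0.5, 0.9, 0.6],
              [0.8, 0.7, 0.7],
              [0.6, 0.5, 0.9]])
w = entropy_weights(x)
scores = cocoso(x, w)
print("criterion weights:", np.round(w, 3), "| best IDS index:", int(np.argmax(scores)))
```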
- …