69 research outputs found

    Robust adaptive synchronization of a class of uncertain chaotic systems with unknown time-delay

    The pavement is a complex structure that is influenced by various environmental and loading conditions. The regular assessment of pavement performance is essential for road network maintenance. The international roughness index (IRI) and the pavement condition index (PCI) are well-known indices used for smoothness and surface condition assessment, respectively. Machine learning techniques have recently made significant advancements in pavement engineering. This paper presents a novel roughness-distress study using random forest (RF). After determining the PCI and IRI values for the sample units, PCI prediction is carried out using RF and a random forest trained with a genetic algorithm (RF-GA). The models are validated using the correlation coefficient (CC), scatter index (SI), and Willmott's index of agreement (WI) criteria. For the RF method, the values of these three parameters were −0.177, 0.296, and 0.281, respectively, whereas the RF-GA method obtained values of −0.031, 0.238, and 0.297. This paper aims to address the gaps identified in the literature and to help pavement engineers overcome the challenges of conventional pavement maintenance systems.
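    As a rough illustration of the modeling and validation pipeline described above, the following sketch fits a random forest regressor and scores it with the three criteria named in the abstract (CC, SI, WI). The features, hyperparameters, and data are illustrative placeholders rather than the paper's, and scikit-learn's RandomForestRegressor stands in for both RF variants (the GA-tuned hyperparameter search is not reproduced).

```python
# Minimal sketch: RF regression for a PCI-like target, scored with CC, SI and
# Willmott's WI. Data and features are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0.5, 8.0, size=(200, 3))        # assumed roughness/distress features
y = 100 - 8 * X[:, 0] + rng.normal(0, 5, 200)   # synthetic PCI-like target

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:150], y[:150])
pred, obs = model.predict(X[150:]), y[150:]

cc = np.corrcoef(obs, pred)[0, 1]                          # correlation coefficient
si = np.sqrt(np.mean((pred - obs) ** 2)) / np.mean(obs)    # scatter index = RMSE / mean(obs)
wi = 1 - np.sum((pred - obs) ** 2) / np.sum(
    (np.abs(pred - obs.mean()) + np.abs(obs - obs.mean())) ** 2)  # Willmott's index
print(cc, si, wi)
```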

    Optimal type-3 fuzzy system for solving singular multi-pantograph equations

    In this study, a new machine learning technique is presented to solve singular multi-pantograph differential equations (SMDEs). A new type-3 fuzzy logic system (T3-FLS), optimized by an unscented Kalman filter (UKF), is proposed for solution estimation. The convergence and stability of the presented algorithm are ensured by the suggested Lyapunov analysis. The effectiveness and applicability of the suggested method are demonstrated on two SMDEs. The statistical analysis shows that the suggested method results in accurate and robust performance and that the estimated solution converges well to the exact solution. The proposed algorithm is simple and can be applied to various SMDEs with variable coefficients.
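    The T3-FLS/UKF solver itself is not reproducible from the abstract, but the sketch below shows the kind of problem it targets and how a candidate solution can be checked: a standard multi-pantograph test equation (assumed here, not taken from the paper) whose residual should vanish for a good estimate.

```python
# Residual check for a multi-pantograph equation (illustrative test problem):
#   y'(t) = -y(t) + 0.5*y(t/2) - 0.5*exp(-t/2),  y(0) = 1,  exact solution y(t) = exp(-t)
import numpy as np

def candidate(t):
    return np.exp(-t)                      # stand-in for the fuzzy-system estimate

t = np.linspace(1e-3, 1.0, 200)
dy = np.gradient(candidate(t), t)          # numerical derivative of the candidate
rhs = -candidate(t) + 0.5 * candidate(t / 2) - 0.5 * np.exp(-t / 2)
print("max residual:", np.max(np.abs(dy - rhs)))   # near zero for a good solution
```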

    Modeling of carbon dioxide solubility in ionic liquids based on group method of data handling

    Due to industrial development, the volume of carbon dioxide (CO2) is rapidly increasing. Several techniques have been used to eliminate CO2 from output gas mixtures; one of these is CO2 capture by ionic liquids (ILs). Computational models for estimating CO2 solubility in ILs are therefore of utmost importance. In this research, a white-box model in the form of a mathematical correlation, built on the largest data bank in the literature, is presented using the group method of data handling (GMDH). This research investigates the application of the GMDH intelligent method as a powerful computational approach for predicting CO2 solubility in different ionic liquids at temperatures below and above 324 K. In this regard, 4726 data points covering the solubility of CO2 in 60 ILs were used for model development. Moreover, seven different ionic liquids were selected to perform the external test. To evaluate the validity and efficiency of the suggested model, regression analysis was performed on the actual and estimated target values. As a result, a proper fit between the experimental and predicted data was obtained and presented through various figures and statistical parameters. It is also worth noting that negative values predicted by the proposed models are treated as zero. The results of the established correlation were also compared to other models proposed in the ionic liquid literature. The final forms of the models suggested by the GMDH approach, obtained as a function of temperature, are two simple mathematical correlations with input parameters of temperature (T), pressure (P), critical temperature (Tc), critical pressure (Pc), and acentric factor (ω), which do not suffer from the black-box nature of other neural network algorithms. The model suggested in this work is a promising one that can act as an efficient predictor of CO2 solubility in ILs and is suitable for use in different industries.
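    The fitted correlations themselves are not given in the abstract, but the sketch below shows the general shape of such a white-box GMDH output: an explicit polynomial in T, P, Tc, Pc, and ω whose negative predictions are treated as zero, as stated above. The functional form and coefficients are hypothetical placeholders.

```python
# Hypothetical GMDH-style explicit correlation for CO2 solubility in an ionic
# liquid; coefficients and form are illustrative only, not the fitted model.
def co2_solubility(T, P, Tc, Pc, omega):
    c = [0.05, -0.30, 0.45, 0.10, -0.08, 0.02]   # placeholder coefficients
    Tr, Pr = T / Tc, P / Pc                      # reduced temperature and pressure
    x = c[0] + c[1]*Tr + c[2]*Pr + c[3]*omega + c[4]*Tr*Pr + c[5]*Pr**2
    return max(x, 0.0)                           # negative predictions treated as zero

# Example call with made-up conditions for a generic IL:
print(co2_solubility(T=310.0, P=2.0e6, Tc=800.0, Pc=3.0e6, omega=0.3))
```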

    Social capital contributions to food security: A comprehensive literature review

    Social capital creates a synergy that benefits all members of a community. This review examines how social capital contributes to the food security of communities. A systematic literature review, based on PRISMA, is designed to provide a state-of-the-art review of the capacity of social capital in this realm. This method yielded 39 related articles. Studying these articles illustrates that social capital improves food security through two mechanisms: knowledge sharing and product sharing (i.e., sharing food products). It reveals that social capital affects food security by improving the food security pillars (i.e., food availability, food accessibility, food utilization, and food system stability). In other words, interaction among community members results in sharing food products and information, which facilitates food availability and access to food. There is ample evidence in the literature that sharing food and food products among community members decreases household food insecurity, provides healthy nutrition to vulnerable families, and improves the food utilization pillar of food security. It is also shown that belonging to social networks increases community members' resilience and decreases the community's vulnerability, which subsequently strengthens the stability of the food system. This study contributes to the literature on food security and social capital by providing a conceptual model based on the literature. In addition to researchers, policymakers can use this study's findings to devise solutions to food insecurity problems.

    Observer-Based Control for a New Stochastic Maximum Power Point Tracking for Photovoltaic Systems With Networked Control System

    This study discusses a new stochastic maximum power point tracking (MPPT) control approach for photovoltaic cells (PCs). The PC generator is isolated from the grid, resulting in a direct-current (DC) microgrid that can supply changing loads. For nonlinear systems with time-varying delays, we propose a networked control system (NCS) under an event-triggered scheme based on a fuzzy system. In this scenario, we examine how random, variable loads affect the PC generator's stability and efficiency. The basic premise of this article is that the load changes, and the values they take, follow a Markov chain. PC generators are complicated nonlinear systems that pose a modeling problem; transforming the nonlinear PC generator model into a Takagi–Sugeno (T–S) fuzzy model addresses this. The T–S fuzzy model is presented in a unified framework in which 1) a fuzzy observer based on the premise variables can be used to approximate the unmeasured states of the system, 2) a fuzzy observer-based controller can be created using the same premise variables as the observer, and 3) an event-triggered method can be investigated to reduce the transmission burden. Simulating the PC generator model on real-time climate data obtained in China demonstrates the importance of our method. In addition, by using a new Lyapunov–Krasovskii functional (LKF) combined with allowed weighting matrices incorporating mode-dependent integral terms, the developed model is shown to be stochastically stable and to achieve the required performance. Based on the T-P transformation, a new depiction of the nonlinear system is derived in two separate steps, in which an adequate controller input is guaranteed in the first step and an adequate vertex polytope is ensured in the second. To demonstrate the potential of our proposed method, we simulate it for PC generators.
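    The full Markov-jump, event-triggered observer design cannot be reconstructed from the abstract, but the sketch below illustrates the Takagi–Sugeno idea it builds on: the nonlinear PC generator dynamics are approximated by a membership-weighted blend of local linear models. The two local models and the membership functions are illustrative assumptions.

```python
# Minimal Takagi-Sugeno blending sketch: x_dot = sum_i h_i(z) * A_i x,
# with two assumed local linear models and a scalar premise variable z.
import numpy as np

A1 = np.array([[-1.0, 0.5], [0.0, -2.0]])   # assumed local model, regime 1
A2 = np.array([[-0.5, 0.2], [0.1, -1.5]])   # assumed local model, regime 2

def memberships(z, z_min=0.0, z_max=1.0):
    h2 = np.clip((z - z_min) / (z_max - z_min), 0.0, 1.0)
    return 1.0 - h2, h2                      # normalized membership grades

def ts_dynamics(x, z):
    h1, h2 = memberships(z)
    return (h1 * A1 + h2 * A2) @ x           # blended dynamics

print(ts_dynamics(np.array([1.0, 0.5]), z=0.3))
```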

    Recurrent neural network and reinforcement learning model for COVID-19 prediction.

    Detection and prediction of the novel coronavirus present new challenges for the medical research community due to its spread across the globe. Methods driven by artificial intelligence can help predict specific parameters, hazards, and outcomes of such a pandemic. Recently, deep learning-based approaches have offered a novel opportunity to address various difficulties in prediction. In this work, two learning algorithms, namely deep learning and reinforcement learning, were developed to forecast COVID-19. This article constructs a model using recurrent neural networks (RNN), particularly the Modified Long Short-Term Memory (MLSTM) model, to forecast the count of newly affected individuals, deaths, and recoveries in the following few days. This study also applies reinforcement learning to optimize COVID-19's predictive outcome based on symptoms. Real-world data were utilized to analyze the success of the suggested system. The findings show that the established approach is promising for prognosticating outcomes concerning the current COVID-19 pandemic, and it outperformed the Long Short-Term Memory (LSTM) model and the machine learning model logistic regression (LR) in terms of error rate.
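    As a baseline illustration of the recurrent forecasting setup described above, the sketch below trains a plain LSTM on a sliding window of past daily counts (synthetic here) and predicts the next value. The paper's Modified LSTM and the reinforcement-learning component are not reproduced; the Keras layer sizes and window length are assumptions.

```python
# Plain LSTM baseline for next-day forecasting from a 14-day window (synthetic data).
import numpy as np
import tensorflow as tf

series = np.cumsum(np.random.default_rng(0).poisson(50, size=300)).astype("float32")
series /= series.max()                                   # simple scaling

window = 14
X = np.stack([series[i:i + window] for i in range(len(series) - window)])[..., None]
y = series[window:]

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(window, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, verbose=0)

print(model.predict(series[-window:].reshape(1, window, 1), verbose=0))
```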

    DDSLA-RPL: Dynamic Decision System Based on Learning Automata in the RPL Protocol for Achieving QoS

    The internet of things is a worldwide technological development in communications. Low-Power and Lossy Networks (LLNs) are a fundamental part of the internet of things, with numerous monitoring and control applications. This type of network also faces many challenges due to limited hardware and communication resources, which cause problems in applications such as routing, connections, data transfer, and aggregation between nodes. The IETF has provided a routing model for LLNs, the IPv6-based Routing Protocol for Low-Power and Lossy Networks (RPL). The proposed decision system, DDSLA-RPL, creates a list of up to k optimal parents based on qualitatively effective parameters such as hop count, link quality, SNR, ETX, and energy consumption, by informing child nodes about their connection links to available parents. In the routing section, a decision system based on learning automata is proposed to dynamically determine and update the weights of the influential routing parameters. The effective parameters in the routing phase of DDSLA-RPL include the battery depletion index, connection delay, node queuing, and throughput. The simulation results show that the proposed method outperforms other methods by about 30, 17, 20, 18, and 24 percent in mean longevity and energy efficiency, graph sustainability, operational power and latency, packet delivery rate, and number of control messages, respectively.
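    The sketch below illustrates the learning-automata mechanism the routing phase relies on: each node keeps a probability vector over candidate parents and reinforces choices that lead to good feedback (a linear reward-inaction update). The feedback model and parameters are synthetic assumptions, not the DDSLA-RPL implementation.

```python
# Linear reward-inaction learning automaton over candidate parents (synthetic feedback).
import numpy as np

rng = np.random.default_rng(1)
parents = ["P1", "P2", "P3"]
p = np.full(len(parents), 1.0 / len(parents))    # action probabilities
alpha = 0.1                                      # reward step size
link_quality = np.array([0.3, 0.8, 0.5])         # assumed hidden link quality

for _ in range(500):
    a = rng.choice(len(parents), p=p)
    if rng.random() < link_quality[a]:           # e.g. packet delivered in time
        p = (1 - alpha) * p                      # reward-inaction: update only on reward
        p[a] += alpha
print(dict(zip(parents, np.round(p, 3))))
```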

    Time series-based groundwater level forecasting using gated recurrent unit deep neural networks

    In this research, the mean monthly groundwater level, with a range of 3.78 m, in the Qoşaçay plain, Iran, is forecast. Deep learning-based neural network models are developed from three different gated recurrent unit (GRU) layer structures and a hybrid of variational mode decomposition with a gated recurrent unit (VMD-GRU). A general network model with a single long short-term memory layer is developed as the base model for performance comparison. In all models, a sequence-to-one configuration is used because of the lack of meteorological variables recorded in the study area. For modeling, 216 monthly records of the mean monthly water table depth from 33 different monitoring piezometers over the period April 2002–March 2020 are utilized. To boost the performance of the models and reduce overfitting, an algorithm tuning process using different types of hyperparameters, accompanied by a trial-and-error procedure, is applied. Based on the performance evaluation metrics, the total number of learnable parameters, and especially the model grading process, the new double-GRU model coupled with a multiplication layer (×) (GRU2× model) is chosen as the best model. Under the optimal hyperparameters, the GRU2× model results in an R² of 0.86, a root mean square error (RMSE) of 0.18 m, a corrected Akaike's information criterion (AICc) of −280.75, a model training time of 87 s, and a total grade (TG) of 6.21 in the validation stage; the hybrid VMD-GRU model yields an RMSE of 0.16 m, an R² of 0.92, an AICc of −310.52, a running time of 185 s, and a TG of 3.34. © 2022 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group.
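    The GRU2× architecture is only described in outline above; the sketch below shows one plausible reading of a sequence-to-one "double GRU coupled with a multiplication layer" in Keras. Layer widths, the window length, and the exact wiring are assumptions, not the paper's specification.

```python
# One possible GRU2x-style wiring: two parallel GRU branches combined by an
# element-wise multiplication layer, then a dense output (sequence-to-one).
import tensorflow as tf

window = 12                                        # assumed months per input sample
inp = tf.keras.Input(shape=(window, 1))
g1 = tf.keras.layers.GRU(16)(inp)
g2 = tf.keras.layers.GRU(16)(inp)
merged = tf.keras.layers.Multiply()([g1, g2])      # the coupling multiplication layer
out = tf.keras.layers.Dense(1)(merged)

model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="mse")
model.summary()
```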

    Optimization of performance and emission of compression ignition engine fueled with propylene glycol and biodiesel–diesel blends using artificial intelligence method of ANN-GA-RSM

    The present study proposes the hybrid machine learning algorithm of artificial neural network-genetic algorithm-response surface methodology (ANN-GA-RSM) to model the performance and emissions of a single-cylinder diesel engine fueled by biodiesel–diesel blends with a propylene glycol additive. The evaluations are performed using the correlation coefficient (CC) and root mean square error (RMSE). The best model for predicting the dependent variables is reported to be ANN-GA, with RMSE values of 0.0398, 0.0368, 0.0529, 0.0354, 0.0509, and 0.0409 and CC values of 0.988, 0.987, 0.977, 0.994, 0.984, and 0.990 for brake-specific fuel consumption (BSFC), brake thermal efficiency (BTE), CO, CO2, NOx, and SO2, respectively. The proposed hybrid model reduces BSFC, NOx, and CO by 30.82%, 21.32%, and 11.32%, respectively. The model also increases engine efficiency and CO2 emission by 17.29% and 31.05%, respectively, compared to a single RSM at the optimized levels of the independent variables (69% of biodiesel's oxygen content and 32% of the oxygen content of propylene glycol).
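    The abstract does not detail the ANN-GA coupling, but the sketch below shows the general idea: a small genetic algorithm searches ANN hyperparameters (here just hidden-layer width and learning rate) to minimize validation RMSE. The data, search ranges, and GA settings are illustrative placeholders.

```python
# Toy GA over ANN hyperparameters, minimizing validation RMSE on synthetic data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(size=(300, 4))                               # stand-in engine inputs
y = X @ np.array([0.5, -1.0, 0.8, 0.2]) + 0.1 * rng.normal(size=300)  # synthetic response
Xtr, Xva, ytr, yva = X[:200], X[200:], y[:200], y[200:]

def fitness(genes):                                          # lower RMSE = fitter
    width, lr = int(genes[0]), float(genes[1])
    net = MLPRegressor(hidden_layer_sizes=(width,), learning_rate_init=lr,
                       max_iter=500, random_state=0).fit(Xtr, ytr)
    return np.sqrt(mean_squared_error(yva, net.predict(Xva)))

pop = [np.array([rng.integers(4, 32), rng.uniform(1e-3, 1e-1)]) for _ in range(8)]
for _ in range(5):                                           # a few GA generations
    pop.sort(key=fitness)                                    # selection: keep fittest half
    children = []
    for _ in range(4):
        a, b = rng.choice(4, size=2, replace=False)
        child = (pop[a] + pop[b]) / 2                        # arithmetic crossover
        child *= rng.normal(1.0, 0.1, size=2)                # small multiplicative mutation
        children.append(np.clip(child, [4, 1e-4], [64, 0.2]))
    pop = pop[:4] + children
print("best validation RMSE:", fitness(min(pop, key=fitness)))
```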

    Enhancing Data Security for Cloud Computing Applications through Distributed Blockchain-based SDN Architecture in IoT Networks

    Blockchain (BC) and Software Defined Networking (SDN) are among the most prominent emerging technologies in recent research. These technologies provide security, integrity, and confidentiality in their respective applications. Cloud computing has also been a popular, comprehensive technology for several years. Confidential information is often shared with the cloud infrastructure to give customers access to remote resources, such as computation and storage operations. However, cloud computing also presents substantial security threats, issues, and challenges. Therefore, to overcome these difficulties, we propose integrating Blockchain and SDN in the cloud computing platform. In this research, we introduce an architecture to better secure clouds. Moreover, we leverage a distributed Blockchain approach to provide security, confidentiality, privacy, integrity, adaptability, and scalability in the proposed architecture. BC provides a distributed, decentralized, and efficient environment for users. We also present an SDN approach to improve the reliability, stability, and load-balancing capabilities of the cloud infrastructure. Finally, we provide an experimental evaluation of the performance of our SDN- and BC-based implementation using different parameters, while also monitoring some attacks on the system and demonstrating its efficacy.
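    As a generic illustration of the integrity guarantee the Blockchain layer contributes to this architecture (not the paper's actual BC/SDN implementation), the sketch below hash-chains records, for example SDN flow rules, so that tampering with any stored entry breaks verification.

```python
# Minimal hash-chained ledger: tampering with any block invalidates the chain.
import hashlib, json, time

def make_block(data, prev_hash):
    block = {"time": time.time(), "data": data, "prev": prev_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def verify(chain):
    for prev, cur in zip(chain, chain[1:]):
        body = {k: cur[k] for k in ("time", "data", "prev")}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if cur["prev"] != prev["hash"] or cur["hash"] != recomputed:
            return False
    return True

chain = [make_block("genesis", "0")]
chain.append(make_block({"flow_rule": "allow 10.0.0.0/24"}, chain[-1]["hash"]))
print(verify(chain))               # True
chain[1]["data"] = "tampered"
print(verify(chain))               # False
```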