3,191 research outputs found

    Volumetric Techniques for Product Routing and Loading Optimisation in Industry 4.0: A Review

    Industry 4.0 has become a crucial part of the majority of industrial processes, components, and related modelling and predictive tools, enabling a more efficient, automated and sustainable approach to industry. The availability of large quantities of data, together with advances in IoT, AI, and data-driven frameworks, has enhanced data gathering, assessment, and the extraction of actionable information, resulting in better decision-making. Product picking and its subsequent packing is an important area that has drawn increasing attention from the research community. However, depending on the context, the related approaches tend to be either highly mathematical or applied only to a specific setting. This article provides a survey of the main methods, techniques, and frameworks relevant to product packing and highlights the main properties and features that should be further investigated to ensure a more efficient and optimised approach.
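As a minimal illustration of the kind of heuristic such loading and packing systems build on, the sketch below implements first-fit-decreasing bin packing; the item volumes and bin capacity are invented for the example and are not taken from the survey.

```python
# First-fit-decreasing (FFD): sort items by volume, place each into the
# first open bin with room, otherwise open a new bin.
def first_fit_decreasing(volumes, capacity):
    bins = []  # each bin is a list of item volumes
    for v in sorted(volumes, reverse=True):
        for b in bins:
            if sum(b) + v <= capacity:
                b.append(v)
                break
        else:
            bins.append([v])  # no existing bin fits: open a new one
    return bins

# Illustrative volumes in fractions of one bin
packed = first_fit_decreasing([0.6, 0.4, 0.5, 0.3, 0.2], capacity=1.0)
```

FFD is a simple baseline; the routing and loading approaches surveyed in the article layer far richer volumetric and sequencing constraints on top of this basic idea.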

    An Integrated Deep Learning Model with Genetic Algorithm (GA) for Optimal Syngas Production Using Dry Reforming of Methane (DRM)

    The dry reforming of methane, a chemical process transforming two primary greenhouse gases, carbon dioxide (CO2) and methane (CH4), into syngas, a versatile industrial precursor, has gained significant attention over the past decades. Nonetheless, commercial development of this eco-friendly process faces barriers such as catalyst deactivation and high energy demand. Artificial intelligence (AI), specifically deep learning, accelerates the development of this process by providing advanced analytics. However, deep learning requires substantial training samples, and collecting data at bench scale encounters cost and physical constraints. This study fills this research gap by employing a pretraining approach, which is invaluable for small datasets. It introduces a deep-learning-powered software sensor for regression (SSR) to estimate the quality parameters of the process. Moreover, combining the SSR with a genetic algorithm offers a prescriptive analysis, suggesting optimal thermodynamic parameters to improve process efficiency.
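The prescriptive step described above, a genetic algorithm searching a learned model for good operating parameters, can be sketched as follows. The quadratic `surrogate` below is only a stand-in for the trained SSR, and all parameter names and numbers are invented for illustration.

```python
import random

random.seed(0)

# Stand-in for the trained soft sensor (SSR): maps two thermodynamic
# parameters (e.g. temperature and feed ratio, both scaled to [0, 1])
# to a predicted process-quality score. Invented for this sketch.
def surrogate(x):
    t, r = x
    return 1.0 - (t - 0.7) ** 2 - (r - 0.4) ** 2

def ga_maximize(fitness, pop_size=30, generations=40, mut=0.1):
    pop = [[random.random(), random.random()] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # selection: keep top half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            child = [(ai + bi) / 2 for ai, bi in zip(a, b)]        # crossover
            child = [min(1.0, max(0.0, c + random.gauss(0, mut)))  # mutation
                     for c in child]
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = ga_maximize(surrogate)  # GA-suggested operating point
```

The real study evaluates candidate thermodynamic settings through the deep-learning sensor in exactly this outer-loop fashion, with the GA proposing settings and the model scoring them.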

    Sentiment review of coastal assessment using neural network and naïve Bayes

    An assessment of a place gives other people an overview of whether it is worth visiting. Assessments of coastal destinations therefore give potential visitors a basis for deciding whether to visit. This article proposes a model using neural network (NN) and naïve Bayes (NB) methods to classify sentiment in coastal assessments. The proposed NN and NB models are optimised using information gain (IG) for feature selection and feature weighting via particle swarm optimization (PSO) and a genetic algorithm (GA), carried out to increase classification accuracy. In the experiments, the best accuracy for classifying coastal assessments, 87.11%, is achieved by the NB IG+PSO model. This model can serve as decision support for potential beach visitors deciding whether to visit.
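Information gain, used here for feature selection, can be computed directly from label counts: IG(feature) = H(labels) − Σ_v p(v) · H(labels | feature = v). The sketch below scores a single binary text feature against sentiment labels; the toy data are invented, and the paper's actual features and PSO/GA weighting are not reproduced.

```python
from math import log2
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """IG = H(labels) - sum over feature values v of p(v) * H(labels | v)."""
    n = len(labels)
    cond = 0.0
    for v in set(feature_values):
        subset = [l for f, l in zip(feature_values, labels) if f == v]
        cond += len(subset) / n * entropy(subset)
    return entropy(labels) - cond

# Toy sentiment data: does a review mention the word "clean"? (invented)
contains_clean = [1, 1, 1, 0, 0, 0]
sentiment      = ["pos", "pos", "pos", "neg", "neg", "pos"]
ig = information_gain(contains_clean, sentiment)  # ~0.459 bits
```

Features are ranked by this score and the low-IG ones dropped before the NB or NN classifier is trained; PSO or GA then tunes the weights of the surviving features.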

    Integrated self-consistent macro-micro traffic flow modeling and calibration framework based on trajectory data

    Calibrating microscopic car-following (CF) models is crucial in traffic flow theory, as it allows accurate reproduction and investigation of traffic behavior and phenomena. Typically, the calibration procedure is a complicated, non-convex optimization problem. When traffic is in equilibrium, the macroscopic flow model can be derived analytically from the corresponding CF model. In contrast to the microscopic CF model, which is calibrated on trajectory data, the macroscopic representation of the fundamental diagram (FD) is primarily calibrated on loop-detector data. These different calibration approaches at the macroscopic and microscopic levels can leave parameters with identical physical meanings misaligned between the macro- and micro-traffic models. Hence, this study proposes an integrated multiresolution traffic flow modeling framework that uses the same trajectory data for parameter calibration at both levels, based on the self-consistency concept. The framework incorporates multiple objective functions in the macro- and micro-dimensions. To execute the framework expeditiously, an improved metaheuristic multi-objective optimization algorithm employing multiple enhancement strategies is presented. Additionally, a deep learning technique based on attention mechanisms is used to extract stationary-state traffic data for the macroscopic calibration process, instead of directly using the entire aggregated data. We conducted experiments using real-world and synthetic trajectory data to validate our self-consistent calibration framework.
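The self-consistency idea, deriving the macroscopic fundamental diagram from the very parameters calibrated at the microscopic level, can be illustrated with a deliberately simple Newell-type equilibrium model. The trajectory samples, the grid search, and the triangular FD below are invented stand-ins for the paper's richer models and multi-objective metaheuristic.

```python
# Newell-type equilibrium car-following model: v = max(0, (s - s_jam) / tau),
# where s is spacing, s_jam the jam spacing, tau the reaction time.
# (spacing m, speed m/s) pairs; synthetic "trajectory" samples.
samples = [(10.0, 5.0), (20.0, 15.0), (30.0, 25.0), (8.0, 3.0)]

best = None
for tau10 in range(5, 30):        # tau in [0.5, 3.0) s, step 0.1
    for sjam in range(2, 12):     # jam spacing in metres
        tau = tau10 / 10.0
        err = sum((v - max(0.0, (s - sjam) / tau)) ** 2 for s, v in samples)
        if best is None or err < best[0]:
            best = (err, tau, sjam)

err, tau, s_jam = best

# The macroscopic FD follows from the SAME calibrated parameters instead of
# a separate loop-detector fit: with density k = 1/s, flow q = k * v gives
# the (congested branch of a) triangular fundamental diagram.
def flow(k):
    return max(0.0, (1.0 - k * s_jam) / tau)
```

The micro-calibrated pair (tau, s_jam) fully determines the macroscopic curve, which is the consistency the paper's framework enforces with formal multi-objective machinery.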

    A Learnheuristic Approach to A Constrained Multi-Objective Portfolio Optimisation Problem

    Multi-objective portfolio optimisation is a critical problem researched across various fields of study, as it seeks to maximise the expected return of a given portfolio while simultaneously minimising its risk. However, many studies fail to include realistic constraints in the model, which limits practical trading strategies. This study introduces realistic constraints, such as transaction and holding costs, into an optimisation model. Due to the non-convex nature of this problem, metaheuristic algorithms such as NSGA-II, R-NSGA-II, NSGA-III and U-NSGA-III play a vital role in solving it. Furthermore, a learnheuristic approach is taken, in which surrogate models enhance the metaheuristics employed. These algorithms are then compared to the baseline metaheuristics, which solve the constrained, multi-objective optimisation problem without learnheuristics. The results show that, despite taking significantly longer to run to completion, the learnheuristic algorithms outperform the baseline algorithms in terms of hypervolume and rate of convergence. Furthermore, the backtesting results indicate that using learnheuristics to generate weights for asset allocation leads to a lower risk percentage, a higher expected return and a higher Sharpe ratio than backtesting without them. We therefore conclude that using learnheuristics to solve a constrained, multi-objective portfolio optimisation problem produces superior results to solving the problem without them.
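The underlying bi-objective trade-off, expected return net of transaction costs versus risk, can be sketched with plain random sampling and a non-dominated filter standing in for the NSGA-family algorithms and surrogates used in the study. All asset figures below are invented.

```python
import random

random.seed(1)

# Toy 3-asset universe: expected returns, volatilities (assumed
# uncorrelated for simplicity), and a proportional transaction cost.
mu    = [0.08, 0.12, 0.15]
sigma = [0.10, 0.18, 0.25]
cost  = 0.002            # cost per unit of weight traded
w_old = [1/3, 1/3, 1/3]  # current holdings

def objectives(w):
    ret  = sum(wi * mi for wi, mi in zip(w, mu))
    ret -= cost * sum(abs(wi - oi) for wi, oi in zip(w, w_old))  # net of costs
    risk = sum((wi * si) ** 2 for wi, si in zip(w, sigma)) ** 0.5
    return ret, risk

def random_weights(n=3):
    x = [random.random() for _ in range(n)]
    s = sum(x)
    return [xi / s for xi in x]  # long-only, fully invested

# Keep the non-dominated set: no other candidate has both higher (or equal)
# net return and lower (or equal) risk.
candidates = [random_weights() for _ in range(500)]
front = [w for w in candidates
         if not any(objectives(v)[0] >= objectives(w)[0] and
                    objectives(v)[1] <= objectives(w)[1] and
                    objectives(v) != objectives(w)
                    for v in candidates)]
```

NSGA-II and its variants replace the blind sampling with evolutionary search over this same dominance relation, and the learnheuristic layer replaces expensive objective evaluations with surrogate predictions.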

    Energy Efficiency and Throughput Optimization in 5G Heterogeneous Networks

    Device-to-device communication is a promising technology for 5G networks, aiming to enhance data rates, reduce latency and cost, and improve energy efficiency, among other desired features. A 5G heterogeneous network (5GHN) with a decoupled downlink (DL) and uplink (UL) association strategy is a promising solution to challenges faced in 4G heterogeneous networks (4GHN). The research presented in this paper evaluates the performance of a 4GHN with a coupled DL and UL (DU-CP) access scheme against a 5GHN with a decoupled UL and DL (DU-DCP) access scheme, in terms of energy efficiency and network throughput in 4-tier heterogeneous networks. Energy and throughput are optimized for both scenarios, i.e. DU-CP and DU-DCP, and the results are compared. A detailed performance analysis of the DU-CP and DU-DCP access schemes is carried out by comparing results obtained with a genetic algorithm (GA) and particle swarm optimization (PSO); both algorithms are suited to the nonlinear problem under investigation, where the search space is large. Simulation results show that the DU-DCP access scheme outperforms the DU-CP scheme in a 4-tier heterogeneous network in terms of network throughput and energy efficiency. PSO achieves an energy efficiency of 12 Mbits/joule for DU-CP and 42 Mbits/joule for DU-DCP, whereas GA yields 28 Mbits/joule for DU-CP and 55 Mbits/joule for DU-DCP. The performance of the proposed method is also compared with that of three other schemes; the results show that the DU-DCP scheme using GA outperforms the compared methods.
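A particle swarm optimizer of the kind used in this comparison can be sketched as follows. The "energy efficiency" surface is an invented quadratic stand-in for the paper's 4-tier network model; the decision variables (a transmit-power level and a bandwidth share, both scaled to [0, 1]) are likewise assumptions for the sketch.

```python
import random

random.seed(2)

# Invented stand-in for the network's energy-efficiency objective (Mbits/J).
def energy_efficiency(x):
    p, b = x
    return 60.0 - 40.0 * (p - 0.3) ** 2 - 30.0 * (b - 0.6) ** 2

def pso_maximize(f, n=20, iters=60, w=0.7, c1=1.5, c2=1.5):
    pos   = [[random.random(), random.random()] for _ in range(n)]
    vel   = [[0.0, 0.0] for _ in range(n)]
    pbest = [p[:] for p in pos]           # each particle's best position
    gbest = max(pbest, key=f)[:]          # swarm's best position
    for _ in range(iters):
        for i in range(n):
            for d in range(2):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(1.0, max(0.0, pos[i][d] + vel[i][d]))
            if f(pos[i]) > f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) > f(gbest):
                    gbest = pos[i][:]
    return gbest

best = pso_maximize(energy_efficiency)
```

In the paper, GA and PSO search the same kind of large nonlinear space, with the network simulation supplying the objective value instead of a closed-form surface.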

    A Literature Review of Fault Diagnosis Based on Ensemble Learning

    The accuracy of fault diagnosis is an important indicator of the reliability of key equipment systems. Ensemble learning integrates different weak learners to obtain a stronger learner and has achieved remarkable results in the field of fault diagnosis. This paper reviews recent research on ensemble learning from both technical and field-application perspectives. It covers a total of 209 papers from 87 journals indexed in recent Web of Science and other academic resources, and summarizes 78 different ensemble-learning-based fault diagnosis methods involving 18 public datasets and more than 20 different equipment systems. In detail, the paper summarizes the accuracy rates, fault classification types, fault datasets, data signals used, learners (traditional machine learning or deep-learning-based), and ensemble methods (bagging, boosting, stacking and other ensemble models) of these fault diagnosis models. Diagnostic accuracy is used as the main evaluation metric, supplemented by generalization and imbalanced-data handling ability, to evaluate the performance of these ensemble learning methods. The discussion and evaluation of these methods provide valuable references for identifying and developing appropriate intelligent fault diagnosis models for various equipment. The paper also discusses the technical challenges, lessons learned from the review, and future development directions in the field of ensemble-learning-based fault diagnosis and intelligent maintenance.
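Bagging, one of the ensemble families the review covers, can be illustrated with threshold "stumps" trained on bootstrap resamples of the data and combined by majority vote. The one-dimensional sensor data below are synthetic (readings above 0.5 labelled as a fault), and a real diagnosis pipeline would of course use far richer learners and signals.

```python
import random

random.seed(3)

# Synthetic sensor readings: label 1 (fault) when the reading exceeds 0.5.
xs = [random.random() for _ in range(200)]
data = [(x, 1 if x > 0.5 else 0) for x in xs]

def train_stump(sample):
    """Pick the decision threshold minimising training error on a grid."""
    best_t, best_err = 0.0, len(sample) + 1
    for t in [i / 20 for i in range(21)]:
        err = sum((1 if x > t else 0) != y for x, y in sample)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def bagged_predict(stumps, x):
    votes = sum(1 if x > t else 0 for t in stumps)
    return 1 if votes * 2 > len(stumps) else 0  # majority vote

# Bagging: each weak learner sees its own bootstrap resample.
stumps = [train_stump([random.choice(data) for _ in data]) for _ in range(9)]
acc = sum(bagged_predict(stumps, x) == y for x, y in data) / len(data)
```

Boosting and stacking, also surveyed in the paper, differ in how the weak learners are trained (reweighting hard examples) and combined (a meta-learner instead of a vote).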

    Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5

    This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different fields of application and in mathematics, and is available in open access. The collected contributions of this volume have either been published or presented after the dissemination of the fourth volume in 2015, in international conferences, seminars, workshops and journals, or they are new. The contributions of each part of this volume are ordered chronologically. The first part presents theoretical advances on DSmT, dealing mainly with modified Proportional Conflict Redistribution (PCR) rules of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of (quasi-)vacuous belief assignments in the fusion of sources of evidence, with their Matlab codes.
Because more applications of DSmT have emerged in the years since the appearance of the fourth DSmT book in 2015, the second part of this volume covers selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision-making, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and networks for ship classification. Finally, the third part presents contributions related to belief functions in general, published or presented over the years since 2015. These contributions concern decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of Bayes' theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, negators of belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions.
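The PCR5 rule featured throughout the volume redistributes each partial conflict back to the two elements that generated it, proportionally to the masses involved. A minimal sketch for two sources on a two-element frame {A, B}, with mass tuples (m(A), m(B), m(A∪B)) and invented numbers:

```python
# PCR5 combination of two basic belief assignments over the frame {A, B}.
# Conflicting products m1(A)*m2(B) and m1(B)*m2(A) are split back onto A
# and B in proportion to the masses that caused each conflict.
def pcr5(m1, m2):
    A1, B1, AB1 = m1
    A2, B2, AB2 = m2
    # conjunctive part (non-conflicting intersections)
    a  = A1 * A2 + A1 * AB2 + AB1 * A2
    b  = B1 * B2 + B1 * AB2 + AB1 * B2
    ab = AB1 * AB2
    # PCR5 redistribution of the partial conflict m1(A)*m2(B)
    if A1 * B2 > 0:
        a += A1 ** 2 * B2 / (A1 + B2)
        b += B2 ** 2 * A1 / (A1 + B2)
    # ...and of the partial conflict m1(B)*m2(A)
    if B1 * A2 > 0:
        a += A2 ** 2 * B1 / (A2 + B1)
        b += B1 ** 2 * A2 / (A2 + B1)
    return (a, b, ab)

fused = pcr5((0.6, 0.3, 0.1), (0.2, 0.5, 0.3))
```

Unlike Dempster's rule, no mass is discarded by normalisation: the redistributed shares sum exactly to each partial conflict, so the fused masses still sum to one.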