
    An Improved Artificial Intelligence Based on Gray Wolf Optimization and Cultural Algorithm to Predict Demand for Dairy Products: A Case Study

    This paper provides an integrated framework based on statistical tests, a time series neural network, and a multi-layer perceptron neural network (MLP) improved with novel meta-heuristic algorithms, in order to obtain the best prediction of dairy product demand (DPD) in Iran. First, a series of economic and social indicators that appeared to influence demand for dairy products is identified. The ineffective indicators are then eliminated using the Pearson correlation coefficient, leaving the statistically significant variables. Next, the MLP is improved with the help of novel meta-heuristic algorithms, namely gray wolf optimization and the cultural algorithm. The designed hybrid method is used to predict the DPD in Iran using data from 2013 to 2017. The results show that the MLP achieves a coefficient of determination of 71.9%, which is better than the other two methods when no improvement is applied.
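    The indicator-screening step described above can be sketched in a few lines. The helper names, the 0.5 cutoff, and the toy indicator series below are illustrative assumptions, not the paper's actual data or threshold:

    ```python
    import math

    def pearson_r(x, y):
        """Pearson correlation coefficient between two equal-length series."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in x))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)

    def filter_indicators(indicators, demand, threshold=0.5):
        """Keep only indicators whose |r| with demand meets the threshold."""
        return {name: series for name, series in indicators.items()
                if abs(pearson_r(series, demand)) >= threshold}

    demand = [10.0, 12.0, 13.0, 15.0, 18.0]
    indicators = {
        "income":      [2.0, 2.4, 2.6, 3.0, 3.6],   # tracks demand closely
        "noise_index": [5.0, 1.0, 4.0, 2.0, 3.0],   # little relation to demand
    }
    kept = filter_indicators(indicators, demand)   # only "income" survives
    ```

    The surviving indicators would then serve as the MLP's input features.
    
    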

    Soft Computing Techniques for the Protein Folding Problem on High Performance Computing Architectures

    The protein-folding problem has been extensively studied during the last fifty years. Understanding the dynamics of the global shape of a protein and its influence on biological function can help us discover new and more effective drugs for diseases of pharmacological relevance. Different computational approaches have been developed by different researchers to foresee the three-dimensional arrangement of a protein's atoms from its sequence. However, the computational complexity of this problem makes mandatory the search for new models, novel algorithmic strategies, and hardware platforms that provide solutions in a reasonable time frame. In this review we present past and current trends in protein folding simulation from both perspectives: hardware and software. Of particular interest to us are the use of inexact solutions to this computationally hard problem as well as the hardware platforms that have been used for running this kind of soft computing technique. This work is jointly supported by the Fundación Séneca (Agencia Regional de Ciencia y Tecnología, Región de Murcia) under grants 15290/PI/2010 and 18946/JLI/13, by the Spanish MEC and European Commission FEDER under grants TEC2012-37945-C02-02 and TIN2012-31345, and by the Nils Coordinated Mobility under grant 012-ABEL-CM-2014A, in part financed by the European Regional Development Fund (ERDF). We also thank NVIDIA for hardware donation within the UCAM GPU educational and research centers.

    Super learner implementation in corrosion rate prediction

    This thesis proposes a new machine learning model for predicting the corrosion rate of 3C steel in seawater. The corrosion rate of a material depends not just on the nature of the material but also on its environmental conditions. The proposed machine learning model comes with a selection framework, based on a hyperparameter optimization method and a performance evaluation metric, to determine the models that qualify for inclusion in the proposed model's ensemble architecture. The major aim of the selection framework is to select the smallest number of models that fit efficiently (once hyperparameter-optimized) into the architecture of the proposed model. Subsequently, the proposed predictive model is fitted on a portion of a dataset generated from an experiment on corrosion rate in five different seawater conditions. The remaining portion of this dataset is used to estimate the corrosion rate. Furthermore, the performance of the proposed model's predictions was evaluated using three major performance evaluation metrics. These metrics were also used to evaluate two hyperparameter-optimized models, the Smart Firefly Algorithm with Least Squares Support Vector Regression (SFA-LSSVR) and Support Vector Regression integrating Leave-One-Out Cross-Validation (SVR-LOOCV), to facilitate their comparison with the proposed predictive model and its constituent models. The test results show that the proposed model performs slightly below the SFA-LSSVR model and above the SVR-LOOCV model, with an RMSE score difference of 0.305 and an RMSE score of 0.792, respectively. Despite trailing the SFA-LSSVR model, the super learner model outperforms both hyperparameter-optimized models in memory utilization and computation time (presented graphically in this thesis).
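    The super learner idea, combining several base models through a meta-learner fitted on held-out predictions, can be sketched minimally. The thesis uses hyperparameter-optimized base models; this sketch substitutes two fixed base-prediction vectors and a grid-search meta-learner over a convex combination weight, all purely for illustration:

    ```python
    import math

    def rmse(pred, actual):
        """Root mean squared error between predictions and observations."""
        return math.sqrt(sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(actual))

    def fit_super_learner(base_preds, actual, steps=100):
        """Find the convex combination weight of two base models' held-out
        predictions that minimises RMSE (a grid-search meta-learner)."""
        p1, p2 = base_preds
        return min((i / steps for i in range(steps + 1)),
                   key=lambda a: rmse([a * x + (1 - a) * y
                                       for x, y in zip(p1, p2)], actual))

    actual = [1.0, 2.0, 3.0, 4.0]
    base = ([2.0, 3.0, 4.0, 5.0],    # base model biased high by 1
            [0.0, 1.0, 2.0, 3.0])    # base model biased low by 1
    alpha = fit_super_learner(base, actual)   # equal weighting cancels the bias
    ```

    A real super learner would fit the meta-learner on out-of-fold predictions from cross-validation rather than a single holdout, but the combining step has the same shape.
    
    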

    State-of-the-Art Using Bibliometric Analysis of Wind-Speed and -Power Forecasting Methods Applied in Power Systems

    The integration of wind energy into power systems has intensified as a result of the urgency of the global energy transition. This requires more accurate forecasting techniques that can capture the variability of the wind resource to achieve better operative performance of power systems. This paper presents an exhaustive review of the state of the art of wind-speed and -power forecasting models for wind turbines located in different segments of power systems, i.e., in large wind farms, distributed generation, microgrids, and micro-wind turbines installed in residences and buildings. The review covers forecasting models based on statistical, physical, artificial intelligence, and hybrid methods, with deterministic or probabilistic approaches. The literature review is carried out through a bibliometric analysis using the VOSviewer and Pajek software. The results are discussed with the forecast time horizon of the models as the main lens for identifying their applications. The trends indicate a predominance of hybrid forecast models for the analysis of power systems, especially those with high penetration of wind power. Finally, most of the papers analyzed belong to the very-short-term horizon, which indicates that researchers' interest lies in this time horizon.

    Perception architecture exploration for automotive cyber-physical systems

    Spring 2022. Includes bibliographical references. In emerging autonomous and semi-autonomous vehicles, accurate environmental perception by automotive cyber-physical platforms is critical for achieving safety and driving performance goals. An efficient perception solution capable of high-fidelity environment modeling can improve Advanced Driver Assistance System (ADAS) performance and reduce the number of lives lost to traffic accidents caused by human driving errors. Enabling robust perception for vehicles with ADAS requires solving multiple complex problems related to the selection and placement of sensors, object detection, and sensor fusion. Current methods address these problems in isolation, which leads to inefficient solutions. For instance, there is an inherent accuracy-versus-latency trade-off between one-stage and two-stage object detectors, which makes selecting an enhanced object detector from a diverse range of choices difficult. Further, even if a perception architecture were equipped with an ideal object detector performing high-accuracy, low-latency inference, the relative position and orientation of the selected sensors (e.g., cameras, radars, lidars) determine whether static or dynamic targets are inside the field of view of each sensor or in the combined field of view of the sensor configuration. If the combined field of view is too small or contains redundant overlap between individual sensors, important events and obstacles can go undetected. Conversely, if the combined field of view is too large, the number of false positive detections will be high in real time, and appropriate sensor fusion algorithms are required for filtering. Sensor fusion algorithms also enable tracking of non-ego vehicles in situations where traffic is highly dynamic or there are many obstacles on the road.
    Position and velocity estimation using sensor fusion algorithms has a lower margin for error when the trajectories of other vehicles in traffic are in the vicinity of the ego vehicle, as incorrect measurements can cause accidents. Due to the various complex inter-dependencies between design decisions, constraints, and optimization goals, a framework capable of synthesizing perception solutions for automotive cyber-physical platforms is not trivial. We present a novel perception architecture exploration framework for automotive cyber-physical platforms capable of global co-optimization of deep learning and sensing infrastructure. The framework is capable of exploring the synthesis of heterogeneous sensor configurations towards achieving vehicle autonomy goals. As our first contribution, we propose a novel optimization framework called VESPA that explores the design space of sensor placement locations and orientations to find the optimal sensor configuration for a vehicle. We demonstrate how our framework can obtain optimal sensor configurations for heterogeneous sensors deployed across two contemporary real vehicles. We then utilize VESPA to create a comprehensive perception architecture synthesis framework called PASTA. This framework enables robust perception for vehicles with ADAS, requiring solutions to multiple complex problems related not only to the selection and placement of sensors but also to object detection and sensor fusion. Experimental results with the Audi-TT and BMW Minicooper vehicles show how PASTA can intelligently traverse the perception design space to find robust, vehicle-specific solutions.
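    The coverage-versus-redundancy tension in sensor placement described above can be illustrated with a deliberately simplified model: sensors reduced to angular sectors around the vehicle, with coverage discretized to one-degree bins. This is not VESPA's actual formulation (which optimizes 3-D positions and orientations); the function and parameters are assumptions for illustration only:

    ```python
    def combined_coverage(sensors, resolution=360):
        """Fraction of the 360-degree surround covered by at least one sensor,
        and fraction covered redundantly (by two or more sensors).
        Each sensor is a (mount_angle_deg, fov_deg) pair."""
        counts = [0] * resolution
        for mount, fov in sensors:
            start = mount - fov / 2          # sector centered on the mount angle
            for step in range(int(fov)):
                counts[int(start + step) % resolution] += 1
        covered = sum(1 for c in counts if c >= 1) / resolution
        redundant = sum(1 for c in counts if c >= 2) / resolution
        return covered, redundant

    # Two 120-degree sensors mounted at 0 and 90 degrees: their sectors
    # overlap between 30 and 60 degrees.
    cov, red = combined_coverage([(0, 120), (90, 120)])
    ```

    An optimizer over placements would score candidate configurations by trading off `covered` (gaps risk missed obstacles) against `redundant` (overlap wastes field of view, though some is desirable for fusion).
    
    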

    Hybrid data intelligent models and applications for water level prediction

    Artificial intelligence (AI) models have been successfully applied in modeling engineering problems, including civil, water resources, electrical, and structural engineering. The originality of the presented chapter is to investigate a non-tuned machine learning algorithm, the self-adaptive evolutionary extreme learning machine (SaE-ELM), to formulate an expert prediction model. The targeted application of the SaE-ELM is the prediction of river water level. Developing such water level prediction and monitoring models is a crucial optimization task in water resources management and flood prediction. The aims of this chapter are (1) to conduct a comprehensive survey of AI models in water level modeling, (2) to apply a relatively new ML algorithm (i.e., SaE-ELM) for modeling water level, (3) to examine two different time scales (i.e., daily and monthly), and (4) to compare the inspected model with the extreme learning machine (ELM) model for validation. In conclusion, the current chapter produced an expert and highly optimized predictive model that can yield high prediction accuracy.
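    SaE-ELM self-adapts the hidden-layer parameters with differential evolution; the base ELM idea underneath it, a randomly initialized hidden layer whose output weights are solved analytically by (ridge-regularized) least squares, can be sketched directly. The class below is a pure-Python illustration for a single scalar input (e.g., a lagged water level), not the chapter's implementation:

    ```python
    import math, random

    def solve(A, b):
        """Gaussian elimination with partial pivoting for a small system Ax = b."""
        n = len(A)
        M = [row[:] + [bv] for row, bv in zip(A, b)]
        for i in range(n):
            p = max(range(i, n), key=lambda r: abs(M[r][i]))
            M[i], M[p] = M[p], M[i]
            for r in range(i + 1, n):
                f = M[r][i] / M[i][i]
                for c in range(i, n + 1):
                    M[r][c] -= f * M[i][c]
        x = [0.0] * n
        for i in range(n - 1, -1, -1):
            x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
        return x

    class ELM:
        """Extreme learning machine: random hidden layer, analytic output weights."""
        def __init__(self, hidden=5, seed=0):
            rng = random.Random(seed)
            # hidden weights and biases are random and never trained
            self.w = [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(hidden)]

        def _hidden(self, x):
            return [math.tanh(w * x + b) for w, b in self.w]

        def fit(self, xs, ys, ridge=1e-6):
            H = [self._hidden(x) for x in xs]
            n = len(self.w)
            # normal equations: (H^T H + ridge * I) beta = H^T y
            A = [[sum(H[k][i] * H[k][j] for k in range(len(H)))
                  + (ridge if i == j else 0.0) for j in range(n)] for i in range(n)]
            b = [sum(H[k][i] * ys[k] for k in range(len(H))) for i in range(n)]
            self.beta = solve(A, b)
            return self

        def predict(self, x):
            return sum(h * b for h, b in zip(self._hidden(x), self.beta))
    ```

    Because only the output weights are fitted, and in closed form, training is fast; SaE-ELM's contribution is replacing the purely random hidden parameters with ones evolved by differential evolution.
    
    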

    Soft Computing Approaches to Stock Forecasting: A Survey

    Soft computing techniques have been effectively applied in the business, engineering, and medical domains to solve problems in the past decade. This paper focuses on surveying the application of soft computing techniques to stock market prediction over the last decade (2010 to date). Over a hundred published articles on stock price prediction were reviewed. The survey is done by grouping these published articles by: the stock market surveyed, the choice of input variables, the modelling technique applied, comparative studies, and the performance measures used. The survey shows that soft computing techniques are widely used and have demonstrated wide acceptance for accurately predicting stock price and stock index behavior worldwide.

    Optimizing complexity weight parameter of use case points estimation using particle swarm optimization

    Among algorithmic-based frameworks for software development effort estimation, Use Case Points is one of the most used. Use Case Points is a well-known estimation framework designed mainly for object-oriented projects. Use Case Points uses the use case complexity weight as its essential parameter, calculated from the number of actors and transactions of the use case. Nevertheless, the use case complexity weight is discontinuous, which can sometimes result in inaccurate measurements and abrupt classification of the use case. The objective of this work is to investigate the potential of integrating particle swarm optimization (PSO) with the Use Case Points framework. The optimizer algorithm is utilized to optimize the modified use case complexity weight parameter. We designed and conducted an experiment based on real-life data sets from three software houses. The proposed model's accuracy and performance are compared with other published results using the following evaluation metrics: standardized accuracy, effect size, mean balanced residual error, mean inverted balanced residual error, and mean absolute error. The existing models used as benchmarks are polynomial regression, multiple linear regression, weighted case-based reasoning with PSO, fuzzy use case points, and standard Use Case Points. Experimental results show that the proposed model generates the best standardized accuracy, 99.27%, and an effect size of 1.15 over the benchmark models. These results are promising for researchers and practitioners because the proposed model is actually estimating, not guessing, and generates meaningful estimates that are statistically and practically significant.
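    The core optimization step, tuning a continuous complexity-weight parameter with PSO, can be sketched generically. The quadratic objective and its optimum at weight 10 below are a hypothetical stand-in for the paper's estimation-error surface, and all coefficients (inertia 0.7, acceleration 1.5) are conventional illustrative choices:

    ```python
    import random

    def pso(objective, bounds, particles=20, iters=60, seed=1):
        """Minimal particle swarm optimiser for a single bounded parameter."""
        rng = random.Random(seed)
        lo, hi = bounds
        pos = [rng.uniform(lo, hi) for _ in range(particles)]
        vel = [0.0] * particles
        pbest = pos[:]                                  # per-particle best position
        pbest_val = [objective(p) for p in pos]
        g = min(range(particles), key=lambda i: pbest_val[i])
        gbest, gbest_val = pbest[g], pbest_val[g]       # swarm-wide best
        for _ in range(iters):
            for i in range(particles):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull (own best) + social pull (swarm best)
                vel[i] = (0.7 * vel[i]
                          + 1.5 * r1 * (pbest[i] - pos[i])
                          + 1.5 * r2 * (gbest - pos[i]))
                pos[i] = min(max(pos[i] + vel[i], lo), hi)
                v = objective(pos[i])
                if v < pbest_val[i]:
                    pbest[i], pbest_val[i] = pos[i], v
                    if v < gbest_val:
                        gbest, gbest_val = pos[i], v
        return gbest, gbest_val

    # Hypothetical stand-in for estimation error as a function of the
    # use case complexity weight, minimised at weight = 10.
    best_w, best_err = pso(lambda w: (w - 10.0) ** 2, bounds=(5.0, 15.0))
    ```

    In the paper's setting, the objective would instead evaluate an accuracy metric such as mean absolute error of the Use Case Points estimate over the software-house data set for a candidate weight.
    
    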