
    Modeling Color Fading Ozonation of Textile Using Artificial Intelligence

    Textile products with a faded effect achieved via ozonation have become increasingly popular. In this study, the effects of ozonation conditions, namely pH, temperature, water pickup, treatment time and applied color, on the color fading performance of reactive-dyed cotton are modeled using Extreme Learning Machine (ELM), Support Vector Regression (SVR) and Random Forest Regression (RF). RF and SVR are found to perform better than ELM on this task, but SVR is recommended for real applications because of its balanced predictive performance and shorter training time.
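To illustrate one of the model families compared above, here is a minimal NumPy sketch of an Extreme Learning Machine regressor. The synthetic data and all hyperparameters (50 tanh hidden units, ridge regularization) are illustrative assumptions, not the study's actual settings or data.

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, reg=1e-3, seed=0):
    """Fit a minimal ELM: random hidden layer, ridge-solved output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights, never trained
    b = rng.normal(size=n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    # Only the output weights are learned, via regularized least squares
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Synthetic stand-in for the process inputs (pH, temperature, water pickup, time, color)
rng = np.random.default_rng(1)
X = rng.uniform(size=(200, 5))
y = np.sin(X @ np.array([1.0, 2.0, -1.0, 0.5, 1.5]))
W, b, beta = elm_fit(X, y)
pred = elm_predict(X, W, b, beta)
```

The appeal of the ELM is speed: only the output layer is solved for, in closed form. Its drawback, consistent with the finding above, is that the random hidden layer can make predictions less stable than SVR or RF.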

    A comparison study between different kernel functions in the least square support vector regression model for penicillin fermentation process

    Soft sensors are becoming increasingly important as tools for inferring difficult-to-measure process variables to achieve good operational performance and economic benefits. Recent advances in machine learning provide an opportunity to integrate machine learning models into soft sensing applications; Least Square Support Vector Regression (LSSVR), for example, copes well with nonlinear process data. However, the LSSVR model usually relies on the radial basis function (RBF) kernel, which has demonstrated its usefulness in numerous applications. This study therefore extends the use of non-conventional kernel functions in the LSSVR model, with a comparative study against the widely used partial least squares (PLS) and principal component regression (PCR) models, using root mean square error (RMSE), mean absolute error (MAE) and error of approximation (Ea) as performance benchmarks. Based on the empirical results from the case study of the penicillin fermentation process, the Ea of the multiquadric (MQ) kernel is 63.44% lower than that of the RBF kernel for the prediction of penicillin concentration; the MQ-kernel LSSVR thus outperformed the RBF-kernel LSSVR. The study serves as empirical evidence of LSSVR performance as a machine learning model in soft sensing applications and as reference material for the further development of non-conventional kernels in LSSVR-based models, since many other functions could be used in the hope of increasing prediction accuracy.
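The RBF-versus-multiquadric comparison can be sketched with a minimal LSSVR solved in its dual form. The toy 1-D data, kernel parameters, and regularization value below are assumptions for illustration, not the study's penicillin process setup.

```python
import numpy as np

def rbf_kernel(A, B, sigma=0.3):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mq_kernel(A, B, c=1.0):
    # Multiquadric kernel: sqrt(||a - b||^2 + c^2)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.sqrt(d2 + c ** 2)

def lssvr_fit(X, y, kernel, gamma=100.0):
    """Solve the LSSVR dual system [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = kernel(X, X) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]  # bias b, dual coefficients alpha

def lssvr_predict(Xq, Xtrain, kernel, b, alpha):
    return kernel(Xq, Xtrain) @ alpha + b

# Toy 1-D regression to compare the two kernels on training fit
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(80, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.normal(size=80)
rmse = {}
for name, kern in [("rbf", rbf_kernel), ("mq", mq_kernel)]:
    b, alpha = lssvr_fit(X, y, kern)
    pred = lssvr_predict(X, X, kern, b, alpha)
    rmse[name] = float(np.sqrt(np.mean((pred - y) ** 2)))
```

Swapping kernels only changes the matrix K in the dual system, which is why comparative kernel studies of this kind are cheap to run.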

    Probability density function of bubble size based reagent dosage predictive control for copper roughing flotation

    As an effective measurement indicator of bubble stability, bubble size structure is believed to be closely related to flotation performance in copper roughing flotation, and reagent dosage has a strong influence on bubble size structure. In this paper, a novel reagent dosage predictive control method based on the probability density function (PDF) of bubble size is proposed to achieve the target indices of the roughing circuit. First, the froth images captured in copper roughing flotation are segmented using a two-pass watershed algorithm. To characterize the non-Gaussian bubble size structure, an entropy-based B-spline estimator is investigated to depict the PDF of bubble size. Since the B-spline weights are interrelated and depend on the reagent dosage, a multi-output least square support vector machine (MLS-SVM) is applied to model the dynamic relationship between the weights and the reagent dosage. Finally, an entropy-based optimization algorithm is proposed to determine the reagent dosage that achieves tracking control of the output bubble size PDF. Experimental results demonstrate the effectiveness of the proposed method.
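A greatly simplified version of the B-spline PDF step can be sketched with degree-1 B-spline (hat) basis functions fitted to synthetic bubble sizes. The lognormal data, the number of basis functions, and the plain least-squares fit are illustrative assumptions; the paper's estimator is entropy-based, which this sketch does not reproduce.

```python
import numpy as np

def hat(x, c, h):
    # Degree-1 B-spline (hat) basis function centered at c, half-width h
    return np.clip(1.0 - np.abs(x - c) / h, 0.0, None)

def bspline_pdf(sizes, n_basis=12, grid_pts=400):
    """Approximate a bubble-size PDF as a nonnegative combination of hat functions."""
    lo, hi = sizes.min(), sizes.max()
    centers = np.linspace(lo, hi, n_basis)
    h = centers[1] - centers[0]
    # Fit the basis weights to a fine histogram by least squares
    edges = np.linspace(lo, hi, 60)
    dens, _ = np.histogram(sizes, bins=edges, density=True)
    mid = 0.5 * (edges[:-1] + edges[1:])
    B = np.stack([hat(mid, c, h) for c in centers], axis=1)
    w, *_ = np.linalg.lstsq(B, dens, rcond=None)
    w = np.clip(w, 0.0, None)  # a PDF cannot go negative
    grid = np.linspace(lo, hi, grid_pts)
    pdf = np.stack([hat(grid, c, h) for c in centers], axis=1) @ w
    area = np.sum(0.5 * (pdf[1:] + pdf[:-1]) * np.diff(grid))
    return grid, pdf / area  # renormalize to unit area

rng = np.random.default_rng(0)
sizes = rng.lognormal(mean=0.0, sigma=0.4, size=2000)  # synthetic bubble diameters
grid, pdf = bspline_pdf(sizes)
```

The key idea carried over from the paper is that the PDF is summarized by a small weight vector, which a model such as the MLS-SVM can then relate to reagent dosage.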

    Using deep learning for multivariate mapping of soil with quantified uncertainty

    Digital soil mapping (DSM) techniques are widely employed to generate soil maps. Soil properties are typically predicted individually, ignoring the interrelations between them. Models for predicting multiple properties exist, but they are computationally demanding and often fail to provide an accurate description of the associated uncertainty. In this paper, a convolutional neural network (CNN) model is described to predict several soil properties with quantified uncertainty. A CNN has the advantage of incorporating spatial contextual information from the environmental covariates surrounding an observation, and a single CNN model can be trained to predict multiple soil properties simultaneously. I further propose a two-step approach to estimate the prediction uncertainty when mapping with a neural network model. The methodology is tested by mapping six soil properties over the French metropolitan territory using measurements from the LUCAS dataset and a large set of environmental covariates portraying the factors of soil formation. Results indicate that the multivariate CNN model produces accurate maps, as shown by the coefficient of determination and the concordance correlation coefficient, compared to a conventional machine learning technique. For this country-extent mapping, the maps predicted by the CNN show a detailed pattern with significant spatial variation. Evaluation of the uncertainty maps using the median of the standardized squared prediction error and accuracy plots suggests that the uncertainty was accurately quantified, albeit slightly underestimated. Tests conducted with different window sizes of input covariates indicate that the CNN benefits from using local contextual information within a radius of 4.5 km. I conclude that a CNN is an effective model for predicting several soil properties and that the associated uncertainty can be accurately quantified with the proposed approach.
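The general shape of a two-step uncertainty scheme (fit a mean model, then fit a second model to the squared residuals) can be sketched with plain ridge regression standing in for the CNN. The heteroscedastic synthetic data and both model choices are assumptions for illustration, not the paper's method.

```python
import numpy as np

def ridge_fit(X, y, reg=1e-2):
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
    return np.linalg.solve(Xb.T @ Xb + reg * np.eye(Xb.shape[1]), Xb.T @ y)

def ridge_predict(X, w):
    return np.hstack([X, np.ones((len(X), 1))]) @ w

# Synthetic heteroscedastic data: noise grows with |x0|
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                # stand-in for environmental covariates
noise_sd = 0.2 + 0.5 * np.abs(X[:, 0])
y = X @ np.array([1.0, -0.5, 0.3, 0.0]) + noise_sd * rng.normal(size=500)

# Step 1: fit the mean model and collect squared residuals
w_mean = ridge_fit(X, y)
resid2 = (y - ridge_predict(X, w_mean)) ** 2
# Step 2: fit a second model to the squared residuals to get a predictive variance
w_var = ridge_fit(np.abs(X), resid2)
var_hat = np.clip(ridge_predict(np.abs(X), w_var), 1e-6, None)
```

The second model learns where the first one is unreliable, giving a per-location variance estimate that can be mapped alongside the prediction itself.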

    DENCAST: distributed density-based clustering for multi-target regression

    Recent developments in sensor networks and mobile computing have led to a huge increase in generated data that need to be processed and analyzed efficiently. In this context, many distributed data mining algorithms have recently been proposed. Following this line of research, we propose the DENCAST system, a novel distributed algorithm implemented in Apache Spark, which performs density-based clustering and exploits the identified clusters to solve both single- and multi-target regression tasks (and thus solves complex tasks such as time series prediction). Contrary to existing distributed methods, DENCAST does not require a final merging step (usually performed on a single machine) and is able to handle large-scale, high-dimensional data by taking advantage of locality sensitive hashing. Experiments show that DENCAST performs clustering more efficiently than a state-of-the-art distributed clustering algorithm, especially when the number of objects increases significantly. The quality of the extracted clusters is confirmed by the predictive capabilities of DENCAST on several datasets: it significantly outperforms (p-value < 0.05) state-of-the-art distributed regression methods in both single- and multi-target settings.
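The locality sensitive hashing ingredient can be illustrated with random-hyperplane signatures, where vectors pointing in similar directions receive similar bit strings and can therefore be bucketed without all-pairs distance computations. This standalone sketch is not DENCAST's implementation; the vector sizes and number of hyperplanes are arbitrary.

```python
import numpy as np

def lsh_signatures(X, n_planes=16, seed=0):
    """Random-hyperplane LSH: similar directions receive similar bit signatures."""
    rng = np.random.default_rng(seed)
    planes = rng.normal(size=(X.shape[1], n_planes))
    return (X @ planes > 0).astype(np.uint8)  # one bit per hyperplane

def hamming(a, b):
    return int(np.sum(a != b))

rng = np.random.default_rng(1)
base = rng.normal(size=8)
near = base + 0.05 * rng.normal(size=8)  # almost the same direction as base
far = -base                              # exactly the opposite direction
sig = lsh_signatures(np.stack([base, near, far]))
```

Points whose signatures agree on most bits land in the same hash buckets, so candidate neighbors for density estimation can be found by comparing short bit strings instead of full vectors.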

    Hybrid bootstrap-based approach with binary artificial bee colony and particle swarm optimization in Taguchi's T-Method

    Taguchi's T-Method is one of the Mahalanobis Taguchi System (MTS)-based prediction techniques, established specifically, though not exclusively, for small multivariate sample data. When evaluating data with a system such as Taguchi's T-Method, bias issues often appear due to inconsistencies induced by model complexity, variation between parameters that are not thoroughly configured, and generalization aspects. In Taguchi's T-Method, the unit space determination relies too heavily on the characteristics of the dependent variables, with no appropriate selection procedure defined. Similarly, the least-squares proportional coefficient is well known not to be robust to outliers, which indirectly affects the accuracy of the SNR-based weighting that depends on model-fit accuracy. Even a small number of outliers may influence the overall performance of the predictive model unless further development is incorporated into the current framework. In this research, an improved unit space determination mechanism was explicitly designed by implementing minimum-based error with the leave-one-out method, further enhanced by strategies that minimize the impact of variance within each parameter estimator using the leave-one-out bootstrap (LOOB) and 0.632-estimate approaches. The complexity of the prediction model was reduced by removing features that did not provide valuable information for the overall prediction. To accomplish this, an Orthogonal Array (OA) matrix was used within the existing Taguchi's T-Method. However, OA's fixed-scheme matrix, as well as its difficulty in coping with high dimensionality, leads to sub-optimal solutions. On the other hand, the use of the signal-to-noise ratio (SNR), in decibels (dB), as the objective function proved to be a reliable measure.
    The architecture of a Hybrid Binary Artificial Bee Colony and Particle Swarm Optimization (Hybrid Binary ABC-PSO), comprising the Binary Bitwise ABC (BitABC) and Probability Binary PSO (PBPSO), was developed as a novel search engine that addresses the limitations of OA. The SNR (dB) and the mean absolute error (MAE) were the main performance measures used in this research. Generalization was a fundamental consideration incorporated to control the effect of overfitting in the analysis. The proposed enhanced parameter estimators with feature selection optimization were tested on 10 case studies and improved predictive accuracy by an average of 46.21%, depending on the case. The average standard deviation of the MAE, which describes the variability impact of the optimized method across all 10 case studies, showed an improved trend relative to Taguchi's T-Method. Standardization and a robust approach to outliers are recommended for future research. This study shows that the developed Hybrid Binary ABC-PSO architecture, with bootstrap and minimum-based error using leave-one-out as the enhanced parameter estimators, effectively improves the prediction accuracy of Taguchi's T-Method.
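One component of the hybrid search engine, a sigmoid-based binary PSO used for feature selection, can be sketched as follows. The toy objective, the swarm size, and all coefficients are illustrative assumptions, not the thesis's PBPSO configuration.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def binary_pso(fitness, n_bits, n_particles=20, iters=60, seed=0):
    """Minimal binary PSO: velocities pass through a sigmoid to give bit probabilities."""
    rng = np.random.default_rng(seed)
    pos = rng.integers(0, 2, size=(n_particles, n_bits))
    vel = np.zeros((n_particles, n_bits))
    pbest = pos.copy()
    pbest_fit = np.array([fitness(p) for p in pos])
    gbest = pbest[np.argmax(pbest_fit)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, n_bits))
        # Standard velocity update with inertia and personal/global attraction
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        # Sample new bit strings: sigmoid(velocity) is the probability of a 1
        pos = (rng.random((n_particles, n_bits)) < sigmoid(vel)).astype(int)
        fit = np.array([fitness(p) for p in pos])
        better = fit > pbest_fit
        pbest[better] = pos[better]
        pbest_fit[better] = fit[better]
        gbest = pbest[np.argmax(pbest_fit)].copy()
    return gbest, int(pbest_fit.max())

# Toy feature-selection objective: the first four bits are the "informative" features
target = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
best, score = binary_pso(lambda bits: -int(np.sum(bits != target)), n_bits=10)
```

In the thesis's setting, the bit string would encode which features enter Taguchi's T-Method and the fitness would be an SNR (dB) or MAE-based measure rather than this toy distance.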