292 research outputs found

    End-point prediction of basic oxygen furnace (BOF) steelmaking based on improved twin support vector regression

    In this paper, a novel end-point prediction method for low-carbon steel is proposed based on an improved twin support vector regression algorithm. 300 qualified samples were collected via sublance measurements at a real plant. The simulation results show that the prediction models achieve a hit rate of 96 % for carbon content within an error bound of 0.005 % and 94 % for temperature within an error bound of 15 °C. The double hit rate reaches 90 %. This indicates that the proposed method can serve as a significant reference for real BOF applications and can be extended to prediction tasks in other metallurgical processes.
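The hit-rate metrics quoted above (the fraction of heats predicted within a fixed error bound, and the "double hit" where both targets are within bounds) can be sketched as follows. The data and the error model here are invented stand-ins, not the 300 plant samples or the improved twin SVR from the paper.

```python
import numpy as np

def hit_rate(y_true, y_pred, tol):
    """Fraction of predictions whose absolute error is within tol."""
    return np.mean(np.abs(y_true - y_pred) <= tol)

def double_hit_rate(c_true, c_pred, t_true, t_pred, c_tol, t_tol):
    """Fraction of heats where BOTH carbon and temperature are within bounds."""
    hit_c = np.abs(c_true - c_pred) <= c_tol
    hit_t = np.abs(t_true - t_pred) <= t_tol
    return np.mean(hit_c & hit_t)

# Toy example with synthetic end-point values (not the real plant data).
rng = np.random.default_rng(0)
c_true = rng.uniform(0.02, 0.08, 300)        # carbon content, %
t_true = rng.uniform(1600, 1700, 300)        # end-point temperature, degC
c_pred = c_true + rng.normal(0, 0.002, 300)  # pretend model errors
t_pred = t_true + rng.normal(0, 6.0, 300)

print(hit_rate(c_true, c_pred, 0.005))
print(hit_rate(t_true, t_pred, 15.0))
print(double_hit_rate(c_true, c_pred, t_true, t_pred, 0.005, 15.0))
```

Note the double hit rate is always at most the smaller of the two individual hit rates, which matches the 96 % / 94 % / 90 % figures reported above.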

    Prediction of alloy addition in ladle furnace (LF) based on LWOA-SCN

    The amount of alloy added during the LF refining process affects the hit rate of steel composition control. Improving the accuracy of the alloy addition amount can therefore improve efficiency and reduce production costs. To address the problem of inaccurate alloy addition in the refining process, the group established an alloy addition prediction model based on an improved whale optimization algorithm and a stochastic configuration network (LWOA-SCN), trained on historical smelting data from a steel mill. The model effectively improves prediction accuracy and convergence speed. The research results show that the model is advantageous in improving the hit rate of alloy addition, providing theoretical guidance for practical production.
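A minimal sketch of the stochastic configuration network (SCN) half of the approach is given below: hidden nodes are added greedily from random candidates, and the output weights are refit by least squares after each addition. This is a heavily simplified illustration with invented data; the LWOA tuning of the random-weight scope, and the paper's actual smelting features, are omitted.

```python
import numpy as np

def scn_fit(X, y, max_nodes=25, candidates=30, scope=1.0, tol=1e-3, seed=0):
    """Greedy stochastic configuration: at each step draw several random
    hidden nodes, keep the one that best reduces the current residual,
    then refit all output weights by least squares."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    H = np.empty((n, 0))          # hidden-layer outputs accepted so far
    W, b = [], []                 # accepted random weights / biases
    residual = y.copy()
    for _ in range(max_nodes):
        best, best_score = None, 0.0
        for _ in range(candidates):
            w = rng.uniform(-scope, scope, d)
            c = rng.uniform(-scope, scope)
            h = np.tanh(X @ w + c)
            score = (h @ residual) ** 2 / (h @ h)   # residual reduction
            if score > best_score:
                best, best_score = (w, c, h), score
        w, c, h = best
        W.append(w); b.append(c)
        H = np.column_stack([H, h])
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)
        residual = y - H @ beta
        if np.linalg.norm(residual) < tol:
            break
    return np.array(W), np.array(b), beta

def scn_predict(X, W, b, beta):
    return np.tanh(X @ W.T + b) @ beta

# Toy regression stand-in for alloy-addition data.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (200, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]
W, b, beta = scn_fit(X, y)
print(np.mean((scn_predict(X, W, b, beta) - y) ** 2))
```

In the full LWOA-SCN model, the scope and other hyperparameters of this construction would themselves be optimized by the improved whale optimization algorithm.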

    A multi-class classification model with parametrized target outputs for randomized-based feedforward neural networks

    Randomized-based Feedforward Neural Networks approach regression and classification (binary and multi-class) problems by minimizing the same optimization problem. Specifically, the model parameters are determined through the ridge regression estimator of the patterns projected in the hidden layer space (randomly generated in its neural network version) for models without direct links, and of the patterns projected in the hidden layer space along with the original input data for models with direct links. For the multi-class classification problem, the targets are encoded according to the 1-of-J encoding (J being the number of classes), which implies that the model parameters are estimated to project all the patterns belonging to a given class to one and the remaining patterns to zero. This approach has several drawbacks, which motivated us to propose an alternative optimization model for the framework. In the proposed optimization model, model parameters are estimated for each class so that its patterns are projected to a reference point (also optimized during the process), whereas the remaining patterns (not belonging to that class) are projected as far away as possible from the reference point. The problem is finally formulated as a generalized eigenvalue problem. Four models are then presented: the neural network version of the algorithm and its corresponding kernel version, for the neural network models with and without direct links. In addition, the optimization model has also been implemented in randomization-based multi-layer (deep) neural networks. The empirical results obtained by the proposed models were compared to those reported by state-of-the-art models in terms of the correct classification rate and a separability index (which measures, per class, how well the projections separate the patterns of that class from those of the others).
The proposed methods show very competitive performance in the separability index and prediction accuracy compared to the neural network versions of the comparison methods (with and without direct links). Remarkably, the model provides significantly superior performance in deep models with direct links compared to its deep model counterpart.
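The baseline that this paper improves on, i.e. a randomized feedforward network whose output weights are the closed-form ridge regression solution over 1-of-J encoded targets, with or without direct links, can be sketched as follows. All names and the toy data are illustrative; this is the standard baseline, not the paper's eigenvalue-based reformulation.

```python
import numpy as np

def rvfl_fit(X, y, n_hidden=50, ridge=1e-2, direct_links=True, seed=0):
    """Randomized feedforward net: fixed random hidden layer, closed-form
    ridge-regression output weights on 1-of-J encoded targets."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    D = np.hstack([H, X]) if direct_links else H   # direct links append inputs
    J = y.max() + 1
    T = np.eye(J)[y]                               # 1-of-J target encoding
    beta = np.linalg.solve(D.T @ D + ridge * np.eye(D.shape[1]), D.T @ T)
    return W, b, beta, direct_links

def rvfl_predict(X, W, b, beta, direct_links):
    H = np.tanh(X @ W + b)
    D = np.hstack([H, X]) if direct_links else H
    return np.argmax(D @ beta, axis=1)

# Toy 3-class problem with well-separated clusters.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(c, 0.3, (40, 2)) for c in (0.0, 2.0, 4.0)])
y = np.repeat([0, 1, 2], 40)
model = rvfl_fit(X, y)
acc = np.mean(rvfl_predict(X, *model) == y)
print(acc)
```

The drawback the abstract refers to is visible here: the 1-of-J targets force every class toward the fixed points 0 and 1, whereas the proposed model lets each class's reference point be optimized as part of a generalized eigenvalue problem.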

    Data-Driven and Hybrid Methods for Naval Applications

    The goal of this PhD thesis is to study, design and develop data analysis methods for naval applications. Data analysis is improving our ways of understanding complex phenomena by profitably taking advantage of the information lying behind a collection of data. In fact, by adopting algorithms from statistics and machine learning it is possible to extract valuable information without requiring specific domain knowledge of the system generating the data. The application of such methods to marine contexts opens new research scenarios, since typical naval problems can now be solved with higher accuracy than with more classical techniques based on the physical equations governing the naval system. During this study, several major naval problems have been addressed using state-of-the-art and novel data analysis techniques: condition-based maintenance, consisting of asset monitoring, maintenance planning, and real-time anomaly detection; energy and consumption monitoring, in order to reduce vessel consumption and gas emissions; system safety for maneuvering control and collision avoidance; and component design, in order to detect possible defects at the design stage. A review of the state of the art of data analysis and machine learning techniques, together with preliminary results from applying such methods to the aforementioned problems, shows a growing interest in these research topics and that effective data-driven solutions can be applied in the naval context. Moreover, for some applications, data-driven models have been used in conjunction with domain-dependent methods modelling physical phenomena, in order to exploit both mechanistic knowledge of the system and the available measurements.
These hybrid methods proved to provide more accurate and interpretable results than either the purely physical or the purely data-driven approaches taken individually, showing that in the naval context it is possible to offer valuable new methodologies by either providing novel statistical methods or improving state-of-the-art ones.
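One common hybrid pattern of the kind described above is a mechanistic model corrected by a data-driven residual term. The sketch below is a minimal illustration under invented assumptions: the cubic speed-to-fuel law and the synthetic voyage data are stand-ins, not the thesis's actual models or measurements.

```python
import numpy as np

def physical_model(speed):
    """Toy mechanistic estimate: fuel consumption proportional to speed cubed
    (a standard first-order approximation, used here only as a stand-in)."""
    return 0.002 * speed ** 3

def fit_residual(speed, fuel_measured, degree=2):
    """Learn a polynomial correction for whatever the physics misses."""
    residual = fuel_measured - physical_model(speed)
    return np.polyfit(speed, residual, degree)

def hybrid_predict(speed, coeffs):
    """Hybrid = mechanistic prediction + learned residual correction."""
    return physical_model(speed) + np.polyval(coeffs, speed)

# Synthetic voyage data: true consumption deviates from the cubic law.
rng = np.random.default_rng(3)
speed = rng.uniform(8, 20, 100)
fuel = 0.002 * speed ** 3 + 0.05 * speed + rng.normal(0, 0.05, 100)

coeffs = fit_residual(speed, fuel)
err_physical = np.mean((fuel - physical_model(speed)) ** 2)
err_hybrid = np.mean((fuel - hybrid_predict(speed, coeffs)) ** 2)
print(err_physical, err_hybrid)
```

The design choice matches the interpretability claim above: the mechanistic term stays inspectable, and the learned component only explains the discrepancy between physics and measurements.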

    Graph learning and its applications: a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Computer Science, Massey University, Albany, Auckland, New Zealand

    Since graph features consider the correlations between two data points to provide high-order information, i.e., more complex correlations than the low-order information contained in individual data points, they have attracted much attention in real applications. The key to graph feature extraction is graph construction. Previous studies have demonstrated that the quality of the graph usually determines the effectiveness of the graph feature. However, the graph is usually constructed from the original data, which often contain noise and redundancy. To address this issue, graph learning is designed to iteratively adjust the graph and the model parameters, thereby improving the quality of the graph and outputting optimal model parameters. As a result, graph learning has become a very popular research topic in both traditional machine learning and deep learning. Although previous graph learning methods have been applied in many fields by adding a graph regularization term to the objective function, they still have issues to be addressed. This thesis focuses on graph learning, aiming to overcome the drawbacks of previous methods for different applications. The proposed methods are listed as follows.
    • We propose a traditional graph learning method under supervised learning that considers the robustness and interpretability of graph learning. Specifically, we propose utilizing self-paced learning to assign large weights to important samples, conducting feature selection to remove redundant features, and learning a graph matrix from a low-dimensional representation of the original data to preserve the local structure of the data. As a consequence, both important samples and useful features are used to select support vectors in the SVM framework.
    • We propose a traditional graph learning method under semi-supervised learning that explores parameter-free fusion for graph learning. Specifically, we first employ the discrete wavelet transform and the Pearson correlation coefficient to obtain multiple fully connected Functional Connectivity brain Networks (FCNs) for every subject, and then learn a sparsely connected FCN for every subject. Finally, the ℓ1-SVM is employed to learn the important features and conduct disease diagnosis.
    • We propose a deep graph learning method that considers graph fusion. Specifically, we first employ the Simple Linear Iterative Clustering (SLIC) method to obtain multi-scale features for every image, and then design a new graph fusion method to fine-tune the features of every scale. As a result, multi-scale feature fine-tuning, graph learning, and feature learning are embedded into a unified framework.
    All proposed methods are evaluated on real-world data sets against state-of-the-art methods. Experimental results demonstrate that our methods outperform all comparison methods.
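The graph-construction step at the core of the abstract above, i.e. building an affinity graph from data and using its Laplacian to measure how smoothly a signal varies over the graph, can be sketched as follows. The k-nearest-neighbour construction and the toy two-cluster data are illustrative choices, not the thesis's specific learned-graph formulations.

```python
import numpy as np

def knn_graph(X, k=5, sigma=1.0):
    """Symmetric k-nearest-neighbour affinity graph with Gaussian weights."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]                # skip self
        W[i, nbrs] = np.exp(-d2[i, nbrs] / (2 * sigma ** 2))
    return np.maximum(W, W.T)                            # symmetrize

def laplacian_smoothness(W, f):
    """f^T L f = 0.5 * sum_ij W_ij (f_i - f_j)^2; small => f smooth on graph.
    This is the quantity a graph regularization term penalizes."""
    L = np.diag(W.sum(1)) - W
    return f @ L @ f

# Two clusters: a label signal that follows the clusters is smooth on the
# graph, a randomly shuffled signal is not.
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 0.2, (30, 2)), rng.normal(3, 0.2, (30, 2))])
W = knn_graph(X)
f_clustered = np.repeat([0.0, 1.0], 30)
f_random = rng.permutation(f_clustered)
print(laplacian_smoothness(W, f_clustered), laplacian_smoothness(W, f_random))
```

Graph learning, as described above, goes one step further: instead of fixing W from the raw (possibly noisy) data once, it alternates between updating W and the model parameters.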

    Multi-task Sparse Structure Learning With Gaussian Copula Models

    Multi-task learning (MTL) aims to improve generalization performance by learning multiple related tasks simultaneously. While sometimes the underlying task relationship structure is known, often the structure needs to be estimated from the data at hand. In this paper, we present a novel family of models for MTL, applicable to regression and classification problems, capable of learning the structure of the task relationships. In particular, we consider a joint estimation problem of the task relationship structure and the individual task parameters, which is solved using alternating minimization. The task relationship revealed by structure learning is founded on recent advances in Gaussian graphical models endowed with sparse estimators of the precision (inverse covariance) matrix. An extension to flexible Gaussian copula models that relaxes the Gaussian marginal assumption is also proposed. We illustrate the effectiveness of the proposed model on a variety of synthetic and benchmark data sets for regression and classification. We also consider the problem of combining Earth System Model (ESM) outputs for better projections of future climate, with a focus on projections of temperature obtained by combining ESMs in South and North America, and show that the proposed model outperforms several existing methods for this problem.
    Funding: Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq, Brazil); NSF [IIS-1029711, IIS-0916750, IIS-0953274, CNS-1314560, IIS-1422557, CCF-1451986, IIS-1447566]; NASA [NNX12AQ39A]; IBM; Yahoo.
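The alternating-minimization scheme described above, i.e. update each task's parameters given the task-precision matrix, then re-estimate that matrix from the parameters, can be sketched as follows. This is a simplified stand-in: the precision update here is a crudely shrunken inverse covariance rather than a sparse graphical-lasso estimator, the Gaussian copula relaxation is omitted, and the data are synthetic.

```python
import numpy as np

def mtl_fit(Xs, ys, lam=0.1, eps=0.1, iters=10):
    """Alternating minimization for sum_t ||X_t w_t - y_t||^2
    + lam * tr(W^T Omega W), with Omega the task-precision matrix:
    (1) given Omega, solve the coupled linear system for each task's weights;
    (2) given the weights W, re-estimate Omega from their covariance
        (a crude stand-in for a sparse precision-matrix estimator)."""
    T, d = len(Xs), Xs[0].shape[1]
    W = np.zeros((T, d))
    Omega = np.eye(T)
    for _ in range(iters):
        for t in range(T):
            A = Xs[t].T @ Xs[t] + lam * Omega[t, t] * np.eye(d)
            coupling = sum(Omega[t, s] * W[s] for s in range(T) if s != t)
            W[t] = np.linalg.solve(A, Xs[t].T @ ys[t] - lam * coupling)
        cov = W @ W.T / d + eps * np.eye(T)   # shrinkage keeps cov invertible
        Omega = np.linalg.inv(cov)
    return W, Omega

# Three related toy regression tasks sharing a common weight vector.
rng = np.random.default_rng(5)
w_true = rng.normal(size=8)
Xs = [rng.normal(size=(50, 8)) for _ in range(3)]
ys = [X @ (w_true + rng.normal(0, 0.1, 8)) for X in Xs]
W, Omega = mtl_fit(Xs, ys)
print(np.round(Omega, 2))
```

In the paper's full model, step (2) would be a sparse precision estimator so that the recovered Omega directly exposes which tasks are conditionally related.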