
    Towards automatic composition of multicomponent predictive systems.

    Automatic composition and parametrisation of multicomponent predictive systems (MCPSs), consisting of chains of data transformation steps, is a challenging task. In this paper we propose and describe an extension to the Auto-WEKA software which now makes it possible to compose and optimise such flexible MCPSs as sequences of WEKA methods. The experimental analysis focuses on how significantly extending the search space, by incorporating additional hyperparameters of the models, affects the quality of the solutions found. In a range of extensive experiments, three different optimisation strategies are used to automatically compose MCPSs on 21 publicly available datasets. A comparison with previous work indicates that extending the search space improves classification accuracy in the majority of cases. The diversity of the MCPSs found also indicates that fully and automatically exploiting different combinations of data cleaning and preprocessing techniques is possible and highly beneficial for different predictive models. This can have a considerable impact on the development, maintenance and scalability of high-quality predictive models in modern application and deployment scenarios.
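The kind of joint search described above, choosing preprocessing components and model hyperparameters together, can be sketched as follows. This is a minimal illustration using scikit-learn stand-ins, not Auto-WEKA's actual WEKA methods; the particular components and search space are assumptions chosen for demonstration only.

```python
# Hedged sketch: composing a small multicomponent pipeline (scaler + classifier)
# and optimising structural choices and hyperparameters jointly, analogous to
# the enlarged MCPS search space. scikit-learn stands in for WEKA here.
from sklearn.datasets import load_iris
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_iris(return_X_y=True)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("clf", DecisionTreeClassifier(random_state=0)),
])

# The search space mixes structural choices (which scaler, or none at all)
# with model hyperparameters (tree depth), mirroring the extended search space.
space = {
    "scale": [StandardScaler(), MinMaxScaler(), "passthrough"],
    "clf__max_depth": [1, 2, 3, 5, None],
}

search = RandomizedSearchCV(pipe, space, n_iter=10, cv=3, random_state=0)
search.fit(X, y)
print(round(search.best_score_, 3))
```

Enlarging the space in this way trades a longer search for potentially better pipelines, which is the trade-off examined in the paper.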

    Adapting Multicomponent Predictive Systems using Hybrid Adaptation Strategies with Auto-WEKA in Process Industry

    Automating the composition and optimisation of multicomponent predictive systems (MCPSs), made up of a number of preprocessing steps and predictive models, is a challenging problem that has been addressed in recent works. However, one of the current challenges is how to adapt these systems in dynamic environments where data change over time. In this work we propose a hybrid approach combining different adaptation strategies with Bayesian optimisation techniques for parametric, structural and hyperparameter optimisation of entire MCPSs. Experiments comparing different adaptation strategies have been performed on 7 datasets from real chemical production processes. The experimental analysis shows that optimisation of entire MCPSs as a method of adapting to changing environments is feasible, and that hybrid strategies perform better in most of the analysed cases.

    Effects of change propagation resulting from adaptive preprocessing in multicomponent predictive systems

    Predictive modelling is a complex process that requires a number of steps to transform raw data into predictions. Preprocessing of the input data is a key step in such a process, and selecting appropriate preprocessing methods is often a labour-intensive task. Such methods are usually trained offline, and their parameters remain fixed throughout the model's deployment lifetime. However, preprocessing of non-stationary data streams is more challenging, since a lack of adaptation in the preprocessing methods may degrade system performance. In addition, dependencies between the components of a predictive system make the adaptation process more challenging. In this paper we discuss the effects of change propagation resulting from using adaptive preprocessing in a Multicomponent Predictive System (MCPS). To highlight the various issues, we present four scenarios with different levels of adaptation. A number of experiments have been performed with a range of datasets to compare the prediction error in all four scenarios. Results show that well-managed adaptation considerably improves prediction performance. However, the model can become inconsistent if adaptation in one component is not correctly propagated to the rest of the system's components. Sometimes such inconsistency does not cause an obvious deterioration in system performance and is therefore difficult to detect; in other cases it may even lead to system failure, as was observed in our experiments.
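The core effect, that an adaptive preprocessing component can keep a downstream model consistent on a drifting stream, can be illustrated with a toy simulation. This is a hypothetical sketch, not the paper's experimental setup: the drifting Gaussian stream, the running-mean scaler, and the sign-test "model" are all stand-ins.

```python
# Toy illustration of adaptive preprocessing on a non-stationary stream:
# the downstream model assumes centred input, so it stays accurate only if
# the scaler's adaptation tracks the drift.
import numpy as np

rng = np.random.default_rng(0)

class AdaptiveScaler:
    """Running estimate of the stream mean; the preprocessing component."""
    def __init__(self):
        self.mean = 0.0
    def update(self, batch):
        # exponential forgetting so the estimate follows the drifting stream
        self.mean = 0.5 * self.mean + 0.5 * batch.mean()
    def transform(self, batch):
        return batch - self.mean

def run(adapt):
    scaler = AdaptiveScaler()
    correct = total = 0
    true_mean = 0.0
    for _ in range(50):
        true_mean += 0.2                       # gradual concept drift
        batch = rng.normal(true_mean, 1.0, 200)
        labels = batch > true_mean             # ground truth for this batch
        if adapt:
            scaler.update(batch)               # adaptive preprocessing
        preds = scaler.transform(batch) > 0.0  # model assumes centred input
        correct += int((preds == labels).sum())
        total += labels.size
    return correct / total

static_acc, adaptive_acc = run(adapt=False), run(adapt=True)
print(round(static_acc, 3), round(adaptive_acc, 3))
```

With a fixed (offline-trained) scaler the accuracy decays toward chance as the stream drifts away; with adaptation the pipeline remains consistent, echoing the paper's point that adaptation must be propagated through dependent components.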

    Structural Material Property Tailoring Using Deep Neural Networks

    Advances in robotics, artificial intelligence, and machine learning are ushering in a new age of automation, as machines match or surpass human performance. Machine intelligence can enable businesses to improve performance by reducing errors, improving sensitivity, quality and speed, and in some cases achieving outcomes that go beyond current resource capabilities. Relevant applications include new product architecture design, rapid material characterization, and life-cycle management tied to a digital strategy that will enable efficient development of products from cradle to grave. There are also challenges to overcome, which must be addressed through a major, sustained research effort based solidly on both inferential and computational principles applied to the design tailoring of functionally optimized structures. Current applications of structural materials in the aerospace industry demand the highest quality control of material microstructure, especially for advanced rotational turbomachinery in aircraft engines, in order to achieve the best-tailored material properties. In this paper, deep convolutional neural networks were developed to accurately predict processing-structure-property relations from material microstructure images, surpassing current best practices and modeling efforts. The models automatically learn critical features, without the need for manual specification and/or subjective and expensive image analysis. Further, in combination with generative deep learning models, a framework is proposed to enable rapid exploration of the material design space and property identification and optimization. The implementation must take account of real-time decision cycles and the trade-offs between speed and accuracy.
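The image-to-property mapping described above can be sketched as a small convolutional regressor. This is not the authors' architecture; the layer sizes, the single-channel input, and the scalar-property output head below are illustrative assumptions only (PyTorch is used as a generic stand-in framework).

```python
# Hypothetical sketch of a CNN mapping a microstructure image to a scalar
# material property (the regression form of a structure-property relation).
import torch
import torch.nn as nn

class PropertyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),           # global pooling -> 16 features
        )
        self.head = nn.Linear(16, 1)           # scalar property prediction

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = PropertyCNN()
images = torch.randn(4, 1, 64, 64)  # batch of synthetic "microstructure" images
print(model(images).shape)
```

The convolutional stack plays the role the abstract assigns to automatic feature learning: no hand-crafted image descriptors are specified anywhere in the model.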

    Interfacial tension of reservoir fluids : an integrated experimental and modelling investigation

    Interfacial tension (IFT) is a property of paramount importance in many technical areas, as it governs the forces acting at the interface whenever two immiscible or partially miscible phases are in contact. With respect to petroleum engineering operations, it influences most, if not all, multiphase processes associated with the extraction and refining of oil and gas, from the optimisation of reservoir engineering strategies to the design of petrochemical facilities. This property is also of key importance for the development of successful and economical CO2 geological storage projects, as it controls, to a large extent, the amount of CO2 that can be safely stored in a target reservoir. Therefore, accurate knowledge of the IFT of reservoir fluids is needed. Aiming to fill the experimental gap found in the literature and to extend the measurement of this property to reservoir conditions, the present work contributes fundamental IFT data for binary and multicomponent synthetic reservoir fluids. Two new setups have been developed, validated and used to study the impact of high pressures (up to 69 MPa) and high temperatures (up to 469 K) on the IFT of hydrocarbon systems including n-alkanes and main gas components such as CH4, CO2 and N2, as well as the effect of sparingly soluble gaseous impurities and NaCl on the IFT of water and CO2 systems. Saturated density data for the phases, required to determine pertinent IFT values, have also been measured with a vibrating U-tube densitometer. Results indicated a strong dependence of the IFT values on temperature, pressure, phase density and salt concentration, whereas changes in the IFT due to the presence of up to 10 mole% gaseous impurities (sparingly soluble in water) lay very close to the experimental uncertainties.
    Additionally, the predictive capabilities of classical methods for computing IFT values have been compared with a more robust theoretical approach, the Density Gradient Theory (DGT), as well as with experimental data measured in this work and collected from the literature. Results demonstrated the superior capabilities of the DGT for accurately predicting the IFT of synthetic hydrocarbon mixtures, and of a real petroleum fluid, with no further adjustable parameters for mixtures. In the case of aqueous systems, a single binary interaction coefficient, estimated from one experimental data point, allowed the DGT to correctly describe the IFT of binary and multicomponent systems in both two- and three-phase equilibrium conditions, as well as the impact of salts.
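One of the classical IFT methods the DGT is typically benchmarked against is the Macleod-Sugden (parachor) correlation, sigma^(1/4) = sum_i P_i (rho_L x_i - rho_V y_i). The sketch below shows its mechanics; all compositions, densities and parachor values are illustrative placeholders, not data from this work.

```python
# Hedged sketch of the Macleod-Sugden (parachor) correlation, a classical
# method for estimating IFT of mixtures. All numbers below are illustrative
# placeholders, not measured values.
def parachor_ift(parachors, x_liq, y_vap, rho_liq, rho_vap):
    """sigma in mN/m; molar phase densities in mol/cm^3;
    parachors in the usual (mN/m)^(1/4) * cm^3/mol units."""
    s = sum(P * (rho_liq * x - rho_vap * y)
            for P, x, y in zip(parachors, x_liq, y_vap))
    return max(s, 0.0) ** 4  # sigma^(1/4) is the weighted density difference

# Illustrative light-gas / n-alkane binary (placeholder numbers):
sigma = parachor_ift(
    parachors=[77.0, 433.5],
    x_liq=[0.3, 0.7], y_vap=[0.95, 0.05],
    rho_liq=0.0045, rho_vap=0.0005,
)
print(round(sigma, 2))  # mN/m
```

Because the correlation collapses all interfacial physics into the density difference and fixed parachors, it degrades at near-critical and aqueous conditions, which is where a density-profile theory such as the DGT retains its accuracy.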

    Modelling Multi-Component Predictive Systems as Petri Nets

    Building reliable data-driven predictive systems requires a considerable amount of human effort, especially in the data preparation and cleaning phase. In many application domains, multiple preprocessing steps need to be applied in sequence, constituting a ‘workflow’ and facilitating reproducibility. The concatenation of such a workflow with a predictive model forms a Multi-Component Predictive System (MCPS). Automatic MCPS composition can speed up this process by taking the human out of the loop, at the cost of model transparency. In this paper, we adopt and suitably re-define Well-handled with Regular Iterations Work Flow (WRI-WF) Petri nets to represent MCPSs. The use of such WRI-WF nets helps to increase the transparency of MCPSs required in industrial applications and makes it possible to automatically verify the composed workflows. We also present our experience of, and results from, applying this representation to model soft sensors in chemical production plants.
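The verification idea can be illustrated with a toy Petri net for a linear MCPS workflow. This is a deliberately simplified sketch, not the WRI-WF formalism itself: places hold tokens, a transition fires when all its input places are marked, and we check mechanically that the workflow terminates with exactly one token in the final place.

```python
# Toy Petri-net encoding of a linear MCPS workflow (load -> preprocess ->
# predict) and a naive soundness check: run the net to completion and verify
# that only the final place remains marked.
from collections import Counter

# Each transition: (name, input places, output places)
transitions = [
    ("load",       ["start"],  ["loaded"]),
    ("preprocess", ["loaded"], ["clean"]),
    ("predict",    ["clean"],  ["done"]),
]

def run(initial_marking):
    marking = Counter(initial_marking)
    fired = []
    progressed = True
    while progressed:                          # fire until no transition is enabled
        progressed = False
        for name, ins, outs in transitions:
            if all(marking[p] >= 1 for p in ins):
                for p in ins:
                    marking[p] -= 1            # consume input tokens
                for p in outs:
                    marking[p] += 1            # produce output tokens
                fired.append(name)
                progressed = True
    return fired, +marking                     # +Counter drops zero counts

fired, final = run(["start"])
print(fired, dict(final))
```

A full WRI-WF analysis checks stronger structural properties than this token replay, but the principle is the same: the net representation makes the composed workflow amenable to automatic verification.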

    Theory and Application of Nonlinear Wave Propagation Phenomena in Combined Reaction/Separation Processes

    Keywords: reaction/separation processes, reactive distillation, chromatographic reactors, equilibrium theory, nonlinear waves, process control, observer design, asymptotically exact input/output linearisation. Magdeburg, Univ., Faculty of Electrical Engineering and Information Technology, Dissertation, 2007, by Stefan Grüne

    Reprocessing of the 2 Hz P-P data from the Blackfoot 3C-2D survey with special reference to multicomponent seismic processing

    The 3C-2D multicomponent seismic survey was acquired in the Blackfoot Field, located 15 km southeast of Strathmore, Alberta. The 2 Hz vertical-component data were reprocessed to better image the target reservoir, the Glauconitic sand formation at approximately 1150 ms. The zone of the major reflector at about 1550 ms was also taken into consideration when selecting the better seismic section for further processing. The main characteristics of converted-wave processing, such as common conversion point (CCP) asymptotic binning, gamma ratios, receiver statics, rotation of the data, positive/negative offsets, and anisotropy, are also summarized in this study. The processing flow was determined by testing a number of parameters and methods. Different types of deconvolution, static corrections, and migrations were applied to the seismic data to better image the target zone. The noise attenuation steps reduced the ground-roll noise and increased the signal-to-noise ratio. Two-dimensional Kirchhoff prestack and poststack time migration were used to image the subsurface. Since lithology discrimination between the target reservoir and the neighbouring shales is the main issue in the study area, spectral whitening was applied to the migrated sections to increase the resolution and the visibility of the reservoir. In this context, the results suggest that prestack time migration produced stronger amplitudes in the target zone, and the channel cuts were imaged more clearly than in the poststack migration --Abstract, page iii
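The spectral whitening step mentioned above can be sketched in a few lines: flatten the amplitude spectrum of a trace while preserving its phase, then transform back. This is a minimal single-trace illustration on synthetic data, not the processing flow used in the study (real implementations whiten within a signal band and trace-by-trace with smoothing).

```python
# Minimal sketch of trace spectral whitening: divide the spectrum by its own
# (stabilised) amplitude so the result has a near-flat amplitude spectrum but
# the original phase, boosting apparent temporal resolution.
import numpy as np

rng = np.random.default_rng(1)
trace = rng.normal(size=512)
trace = np.convolve(trace, np.ones(9) / 9.0, mode="same")  # band-limit it first

spec = np.fft.rfft(trace)
amp = np.abs(spec)
eps = 1e-3 * amp.max()        # stabiliser: avoids blowing up near-zero bins
whitened = np.fft.irfft(spec / (amp + eps), n=trace.size)

def flatness(x):
    a = np.abs(np.fft.rfft(x))
    return a.std() / a.mean()  # lower value = flatter (whiter) spectrum

print(flatness(trace), flatness(whitened))
```

The stabiliser eps controls the trade-off the abstract alludes to: stronger whitening sharpens thin features such as channel cuts, but also amplifies noise where the signal amplitude is low.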