Data-driven Soft Sensors in the Process Industry
In the last two decades, Soft Sensors have established themselves as a valuable alternative to traditional means for the acquisition of critical process variables, process monitoring, and other tasks related to process control. This paper discusses the characteristics of process industry data which are critical for the development of data-driven Soft Sensors. These characteristics are common to a large number of process industry fields, such as the chemical, bioprocess, and steel industries. The focus of this work is on data-driven Soft Sensors because of their growing popularity, demonstrated usefulness, and huge, though not yet fully realised, potential. The main contributions of this work are a comprehensive selection of case studies covering the three most important Soft Sensor application fields, a general introduction to the most popular Soft Sensor modelling techniques, and a discussion of some open issues in Soft Sensor development and maintenance together with their possible solutions.
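A data-driven soft sensor of the kind surveyed above is, at its core, a regression model that infers a hard-to-measure process variable from cheaper online measurements. A minimal sketch, using synthetic data and ridge regression purely as an illustrative modelling choice (all variable names, coefficients, and noise levels below are made up for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic process data (illustrative only): three easy-to-measure inputs
# (e.g. temperature, pressure, flow) and one hard-to-measure quality variable.
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=200)

# Ridge regression as a minimal data-driven soft sensor:
# w = (X'X + alpha*I)^-1 X'y
alpha = 1e-2
w = np.linalg.solve(X.T @ X + alpha * np.eye(3), X.T @ y)

# The fitted model now infers the quality variable from the cheap measurements.
y_hat = X @ w
print(np.round(w, 2))
```

In practice the model family varies (PLS, neural networks, etc.), but the input/output structure of the estimation problem is the same.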
Application of Computational Intelligence Techniques to Process Industry Problems
In the last two decades there has been great progress in the field of computational intelligence research. The fruits of this research effort are powerful techniques for pattern recognition, data mining, data modelling, etc. These techniques achieve high performance on traditional data sets such as those in the UCI machine learning repository. Unfortunately, such data sources usually represent clean data, free of the outliers, missing values, feature collinearity, and similar problems common to real-life industrial data. The presence of faulty data samples can have very harmful effects on the models: if presented during training, they can either cause sub-optimal performance of the trained model or, in the worst case, destroy the knowledge the model has learnt so far. For these reasons, the application of current modelling techniques to industrial problems has developed into a research field of its own. Based on a discussion of the properties and issues of process industry data and the state-of-the-art modelling techniques in the field, this paper presents a novel unified approach to the development of predictive models in the process industry.
Nature-Inspired Adaptive Architecture for Soft Sensor Modelling
This paper gives a general overview of the challenges in the research field of Soft Sensor building and proposes a novel architecture for building Soft Sensors which copes with the identified challenges. The architecture is inspired by, and makes use of, nature-related computational intelligence techniques. Another aspect addressed by the proposed architecture is the set of identified characteristics of process industry data. Data recorded in the process industry usually contain a certain amount of missing values, as well as samples exceeding the meaningful range of the measurements, called data outliers. Other properties of process industry data which cause problems for modelling are collinearity, drifting data, and the different sampling rates of the individual hardware sensors. It is these characteristics that create the need for adaptive behaviour in Soft Sensors. The architecture reflects this need and provides mechanisms for the adaptation and evolution of the Soft Sensor at different levels. The adaptation capabilities are provided by maintaining a variety of rather simple models. These individual models, called paths in terms of the architecture, can, for example, focus on different partitions of the input data space or provide different adaptation speeds to changes in the data. The actual modelling techniques involved in the architecture are data-driven computational learning approaches such as artificial neural networks, principal component regression, etc.
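Principal component regression, one of the modelling techniques the abstract names, directly addresses the collinearity problem it describes: the inputs are projected onto their leading principal components before regression. A minimal sketch on synthetic collinear data (all values are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative collinear process data: x2 is nearly a copy of x1, the kind of
# redundancy that is common among hardware sensors in the process industry.
x1 = rng.normal(size=300)
x2 = x1 + rng.normal(scale=0.01, size=300)
x3 = rng.normal(size=300)
X = np.column_stack([x1, x2, x3])
y = x1 + 2.0 * x3 + rng.normal(scale=0.1, size=300)

# Principal component regression: project onto the top-k principal components,
# then do ordinary least squares in that decorrelated, lower-dimensional space.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2                                # drops the redundant collinear direction
T = Xc @ Vt[:k].T                    # component scores
b, *_ = np.linalg.lstsq(T, y - y.mean(), rcond=None)
y_hat = T @ b + y.mean()
print(round(float(np.corrcoef(y, y_hat)[0, 1]), 3))
```

Discarding the near-zero-variance direction stabilises the regression where ordinary least squares on the raw collinear inputs would be ill-conditioned.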
The Hierarchic treatment of marine ecological information from spatial networks of benthic platforms
Measuring biodiversity simultaneously in different locations, at different temporal scales, and over wide spatial scales is of strategic importance for the improvement of our understanding of the functioning of marine ecosystems and for the conservation of their biodiversity. Monitoring networks of cabled observatories, along with other docked autonomous systems (e.g., Remotely Operated Vehicles [ROVs], Autonomous Underwater Vehicles [AUVs], and crawlers), are being conceived and established at a spatial scale capable of tracking energy fluxes across benthic and pelagic compartments, as well as across geographic ecotones. At the same time, optoacoustic imaging is sustaining an unprecedented expansion in marine ecological monitoring, enabling the acquisition of new biological and environmental data at an appropriate spatiotemporal scale. At this stage, one of the main problems for an effective application of these technologies is the processing, storage, and treatment of the acquired complex ecological information. Here, we provide a conceptual overview of the technological developments in the multiparametric generation, storage, and automated hierarchic treatment of biological and environmental information required to capture the spatiotemporal complexity of a marine ecosystem. In doing so, we present a pipeline of ecological data acquisition and processing as a sequence of steps amenable to automation. We also give an example of the computation of population biomass, community richness, and biodiversity data (as indicators of ecosystem functionality) with an Internet Operated Vehicle (a mobile crawler). Finally, we discuss the software requirements for that automated data processing at the level of cyber-infrastructures with sensor calibration and control, data banking, and ingestion into large data portals.
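The community richness and biodiversity indicators mentioned in the abstract are simple to compute once species have been counted from the imagery. A sketch using the Shannon diversity index as one common choice (the species names and counts below are invented for illustration; the paper does not specify which index or taxa it uses):

```python
import math
from collections import Counter

# Invented species counts, standing in for annotations from a crawler transect.
counts = Counter({"crab": 12, "sea_star": 5, "anemone": 3})

richness = len(counts)               # community richness: number of species seen
n = sum(counts.values())

# Shannon diversity index: H' = -sum(p_i * ln(p_i)) over relative abundances p_i
shannon = -sum((c / n) * math.log(c / n) for c in counts.values())
print(richness, round(shannon, 3))
```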
AI and OR in management of operations: history and trends
The last decade has seen considerable growth in the use of Artificial Intelligence (AI) for operations management with the aim of finding solutions to problems that are increasing in complexity and scale. This paper begins by setting the context for the survey through a historical perspective of OR and AI. An extensive survey of applications of AI techniques for operations management, covering a total of over 1200 papers published from 1995 to 2004, is then presented. The survey uses Elsevier's ScienceDirect database as its source. Hence, the survey may not cover all the relevant journals, but it includes a sufficiently wide range of publications to be representative of the research in the field. The papers are categorized into four areas of operations management: (a) design, (b) scheduling, (c) process planning and control, and (d) quality, maintenance and fault diagnosis. Each of the four areas is further categorized in terms of the AI techniques used: genetic algorithms, case-based reasoning, knowledge-based systems, fuzzy logic and hybrid techniques. The trends over the last decade are identified and discussed with respect to expected trends, and directions for future work are suggested.
Overview of Remaining Useful Life prediction techniques in Through-life Engineering Services
Through-life Engineering Services (TES) are essential in the manufacture and servicing of complex engineering products. TES improves support services by providing on-demand run-to-failure and time-to-failure prognosis data for better decision making. The concept of Remaining Useful Life (RUL) is used to predict the life-span of components (of a service system) with the purpose of minimising catastrophic failure events in both the manufacturing and service sectors. The purpose of this paper is to identify failure mechanisms and emphasise the failure-event prediction approaches that can effectively reduce uncertainties. It demonstrates a classification of the techniques used in RUL prediction for optimising the future use of products, based on products currently in service, with regard to predictability, availability and reliability. It presents a mapping of degradation mechanisms against techniques for knowledge acquisition, with the objective of presenting to designers and manufacturers ways to improve the life-span of components.
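Among the classes of RUL prediction techniques such an overview covers, the simplest is trend extrapolation of a degradation signal: fit the observed health indicator, then extrapolate the time at which it crosses a failure threshold. A minimal sketch with a synthetic linear degradation (the signal, slope, and threshold below are invented for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented degradation signal: a health indicator drifting linearly toward
# a failure threshold, observed over the first 50 operating hours.
t = np.arange(50.0)
health = 100.0 - 0.8 * t + rng.normal(scale=1.0, size=50)
threshold = 20.0                     # failure is declared below this level

# Trend-based RUL estimate: fit a linear model and extrapolate the crossing time.
slope, intercept = np.polyfit(t, health, 1)
t_fail = (threshold - intercept) / slope
rul = t_fail - t[-1]
print(round(float(rul), 1))
```

Real degradation is rarely linear, which is why the literature also covers stochastic, physics-based, and machine-learning RUL models; the extrapolation structure, however, is the common core.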
On the role of pre and post-processing in environmental data mining
The quality of discovered knowledge depends heavily on data quality. Unfortunately, real data usually contain noise, uncertainty, errors, redundancies, or even irrelevant information. The more complex the reality to be analyzed, the higher the risk of obtaining low-quality data. Knowledge Discovery from Databases (KDD) offers a global framework for preparing data in the right form to perform correct analyses. On the other hand, the quality of decisions taken upon KDD results depends not only on the quality of the results themselves, but also on the capacity of the system to communicate those results in an understandable form. Environmental systems are particularly complex, and environmental users particularly require clarity in their results. In this paper some details about how this can be achieved are provided, and the role of pre- and post-processing in the whole process of Knowledge Discovery in environmental systems is discussed.
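Two of the pre-processing steps this kind of work addresses, outlier handling and missing-value imputation, can be sketched concisely. The example below uses a robust median/MAD rule and linear interpolation as one common choice; the sensor values, thresholds, and gap positions are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented raw sensor series exhibiting the two problems discussed above:
# missing values (NaN) and outliers far outside the meaningful range.
x = rng.normal(loc=50.0, scale=2.0, size=100)
x[[10, 40]] = np.nan                 # dropped samples
x[[5, 60]] = [500.0, -300.0]         # gross outliers

# Step 1: flag outliers with a robust 3-sigma rule based on the median and the
# median absolute deviation (MAD), then treat them as missing as well.
med = np.nanmedian(x)
mad = np.nanmedian(np.abs(x - med))
x[np.abs(x - med) > 3 * 1.4826 * mad] = np.nan

# Step 2: impute the remaining gaps by linear interpolation over sample index.
idx = np.arange(len(x))
good = ~np.isnan(x)
x_clean = np.interp(idx, idx[good], x[good])
```

Using the median and MAD instead of the mean and standard deviation matters here: a single 500.0 reading would otherwise inflate the threshold enough to mask itself.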