4,786 research outputs found

    Data-driven Soft Sensors in the Process Industry

    In the last two decades, Soft Sensors have established themselves as a valuable alternative to traditional means for acquiring critical process variables, for process monitoring, and for other tasks related to process control. This paper discusses characteristics of process industry data which are critical for the development of data-driven Soft Sensors. These characteristics are common to a large number of process industry fields, such as the chemical, bioprocess, and steel industries. The focus of this work is on data-driven Soft Sensors because of their growing popularity, demonstrated usefulness, and huge, though not yet fully realised, potential. The main contributions of this work are a comprehensive selection of case studies covering the three most important Soft Sensor application fields, a general introduction to the most popular Soft Sensor modelling techniques, and a discussion of open issues in Soft Sensor development and maintenance together with their possible solutions.
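    As a concrete illustration (not taken from the paper), a minimal data-driven soft sensor is a regression model that infers a hard-to-measure quality variable from routinely measured process variables. The sketch below uses partial least squares on synthetic data; all variable names and data are illustrative assumptions.

```python
# A minimal soft-sensor sketch: PLS regression inferring a hard-to-measure
# quality variable from easy-to-measure process variables (synthetic data).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))                 # easy measurements (T, p, flows, ...)
y = X[:, :3] @ np.array([0.8, -0.5, 0.3]) \
    + 0.1 * rng.normal(size=500)               # hidden quality variable

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
sensor = PLSRegression(n_components=3).fit(X_train, y_train)
print("R^2 on held-out data:", sensor.score(X_test, y_test))
```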

    Distributed video coding for wireless video sensor networks: a review of the state-of-the-art architectures

    Distributed video coding (DVC) is a relatively new video coding architecture that originated from two fundamental theorems, namely the Slepian–Wolf and Wyner–Ziv theorems. Recent research developments have made DVC attractive for applications in the emerging domain of wireless video sensor networks (WVSNs). This paper reviews state-of-the-art DVC architectures, with a focus on understanding their opportunities and gaps in addressing the operational requirements and application needs of WVSNs.
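    For reference, the Slepian–Wolf theorem, the first of the two results the abstract names, states that two correlated sources can be encoded separately and still decoded jointly without loss, provided the rates satisfy:

```latex
% Slepian–Wolf rate region for separately encoded, jointly decoded
% correlated sources X and Y:
\begin{align}
  R_X &\ge H(X \mid Y), \\
  R_Y &\ge H(Y \mid X), \\
  R_X + R_Y &\ge H(X, Y).
\end{align}
% Wyner–Ziv extends this to lossy coding of X with side information Y
% available only at the decoder.
```

    This is what makes DVC attractive for WVSNs: the expensive exploitation of correlation moves from the battery-constrained encoder to the decoder.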

    Drift Correction Methods for Gas Chemical Sensors in Artificial Olfaction Systems: Techniques and Challenges

    In this chapter, the authors introduce the main challenges faced when developing drift correction techniques and give an in-depth overview of the state-of-the-art methodologies proposed in the scientific literature, underlining the pros and cons of these techniques and focusing on challenges that are still open and awaiting solutions.
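    One widely cited drift-correction idea, component correction, estimates a dominant drift direction from repeated measurements of a reference gas and projects it out of all sensor-array responses. The sketch below is a minimal illustration of that idea under simplifying assumptions; the data are synthetic and not from the chapter.

```python
# Component-correction sketch: remove the first principal component of the
# reference-gas responses (the drift direction) from the full dataset.
import numpy as np

def component_correction(X, X_ref):
    """Project out the dominant drift direction estimated from X_ref."""
    X_ref_c = X_ref - X_ref.mean(axis=0)
    # First right-singular vector = direction of largest variance (drift).
    _, _, Vt = np.linalg.svd(X_ref_c, full_matrices=False)
    p = Vt[0]                        # unit drift direction in sensor space
    return X - np.outer(X @ p, p)    # remove that component from every sample

rng = np.random.default_rng(1)
drift = np.linspace(0, 1, 200)[:, None] * rng.normal(size=(1, 8))
X = rng.normal(size=(200, 8)) + drift        # slowly drifting 8-sensor array
X_corrected = component_correction(X, X_ref=X[:50])
```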

    Design Issues and Challenges of File Systems for Flash Memories

    This chapter discusses how to properly address the issues of using NAND flash memories as mass-memory devices from the native file system standpoint. We hope that the ideas and solutions proposed in this chapter will be a valuable starting point for designers of NAND flash-based mass-memory devices.
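    A toy sketch of the mechanism underlying most NAND flash file systems may help frame the chapter's issues: because a NAND page cannot be overwritten without erasing its whole block, writes go out-of-place through a logical-to-physical page map. The class below is an illustrative assumption, not the chapter's design; garbage collection, block erase, and wear levelling are omitted.

```python
# Out-of-place update sketch: every write goes to a fresh physical page and
# the logical-to-physical map is redirected; the old copy becomes stale
# garbage (reclaimed by GC and block erase in a real design).
class FlashTranslationSketch:
    def __init__(self, n_pages):
        self.l2p = {}                     # logical page -> physical page
        self.free = list(range(n_pages))  # pool of erased pages
        self.store = {}                   # physical page -> data

    def write(self, lpage, data):
        ppage = self.free.pop(0)          # never rewrite in place
        if lpage in self.l2p:
            self.store.pop(self.l2p[lpage])  # old copy is now stale
        self.l2p[lpage] = ppage
        self.store[ppage] = data

    def read(self, lpage):
        return self.store[self.l2p[lpage]]
```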

    Signal and data processing for machine olfaction and chemical sensing: A review

    Signal and data processing are essential elements in electronic noses as well as in most chemical sensing instruments. The multivariate responses obtained by chemical sensor arrays require signal and data processing to carry out the fundamental tasks of odor identification (classification), concentration estimation (regression), and grouping of similar odors (clustering). In the last decade, important advances have shown that proper processing can improve the robustness of the instruments against diverse perturbations such as environmental variables, background changes, and drift. This article reviews the advances made in recent years in signal and data processing for machine olfaction and chemical sensing.
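    As a compact illustration of the three tasks the review names, the sketch below runs classification, regression, and clustering on a synthetic sensor-array dataset; the data generation and model choices are assumptions for demonstration only.

```python
# The three fundamental tasks on synthetic 6-sensor array responses:
# odor identification, concentration estimation, and grouping.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(2)
odor = rng.integers(0, 3, size=300)           # 3 odor classes
conc = rng.uniform(1, 10, size=300)           # concentrations
X = np.eye(3)[odor] * conc[:, None] + 0.3 * rng.normal(size=(300, 3))
X = np.hstack([X, 0.5 * X + 0.1 * rng.normal(size=(300, 3))])

clf = KNeighborsClassifier().fit(X, odor)     # classification
reg = Ridge().fit(X, conc)                    # regression
groups = KMeans(n_clusters=3, n_init=10).fit_predict(X)  # clustering
```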

    Incipient fault detection and isolation of sensors and field devices

    The purpose of this research is to develop a robust fault detection and isolation method for detecting faults in process sensors, actuators, controllers, and other field devices. The approach to the solution of this problem is summarized below. A novel approach for the validation of control system components and sensors was developed in this research. The process consists of detecting a system anomaly, isolating the faulty component (such as a sensor, actuator, or controller), computing its deviation from the expected value for the system's normal condition, and finally reconstructing its output when applicable. A variant of the Group Method of Data Handling (GMDH) was developed in this research for generating analytical redundancy from relationships among different system components. A rational function approximation was used for the data-driven modeling scheme. This analytical redundancy is necessary for detecting system anomalies and isolating faulty components. A rule-based expert system was developed in order to isolate the faulty component. The rule base was established from model-simulated data. A fuzzy-logic estimator was implemented to compute the magnitude of the loop component fault so that the operator or the controller might take corrective actions. This latter engine allows the system to be operated in a normal condition until the next scheduled shutdown, even if a critical component is detected as degrading. The effectiveness of the method developed in this research was demonstrated through simulation and by implementation in an experimental control loop. The test loop consisted of a level control system; flow, pressure, level, and temperature measuring sensors; motor-operated valves; and a pump. Commonly observed device faults were imposed on different system components such as pressure transmitters, pumps, and motor-operated valves. This research has resulted in a framework for system component failure detection and isolation that allows easy implementation of the method in any process control system (power plants, the chemical industry, and other manufacturing industries). The technique would also aid plant personnel in defining the minimal number of sensors to be installed in a process system, necessary for reliable component validation.
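    A minimal sketch may clarify the analytical-redundancy idea: predict each sensor from the others with a model fitted on normal-condition data, then flag any sensor whose residual exceeds a threshold. The linear models and 4-sigma thresholds below are simplifying assumptions; the dissertation itself uses a GMDH variant with rational function approximations.

```python
# Analytical-redundancy sketch: cross-predict each sensor from the others
# and isolate the one that disagrees with the rest of the loop.
import numpy as np

def fit_redundancy_models(X_normal):
    """For each sensor i, fit a least-squares predictor from the others."""
    models = []
    for i in range(X_normal.shape[1]):
        others = np.delete(X_normal, i, axis=1)
        coef, *_ = np.linalg.lstsq(others, X_normal[:, i], rcond=None)
        resid = X_normal[:, i] - others @ coef
        models.append((coef, 4.0 * resid.std()))   # 4-sigma fault threshold
    return models

def isolate_faults(x, models):
    """Return one flag per sensor: True = inconsistent with the others."""
    return [abs(x[i] - np.delete(x, i) @ coef) > thr
            for i, (coef, thr) in enumerate(models)]
```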

    Computational methods for metabolomic data analysis of ion mobility spectrometry data-reviewing the state of the art

    Ion mobility spectrometry combined with multi-capillary columns (MCC/IMS) is a well-known technology for detecting volatile organic compounds (VOCs). MCC/IMS may be used, for example, to scan human exhaled air, bacterial colonies, or cell lines, thereby gaining information about human health status or infection threats. It may also be used to study the metabolic response of living cells to external perturbations. The instrument is comparatively cheap, robust, and easy to use in everyday practice. However, the potential of the MCC/IMS methodology depends on the successful application of computational approaches for analyzing the huge emerging data sets. Here, we review the state of the art and highlight existing challenges. First, we address methods for raw data handling, data storage, and visualization. Afterwards, we introduce de-noising, peak picking, and other pre-processing approaches. We discuss statistical methods for analyzing correlations between peaks and diseases or medical treatment. Finally, we study up-to-date machine learning techniques for identifying robust biomarker molecules that allow patients to be classified into healthy and diseased groups. We conclude that MCC/IMS coupled with sophisticated computational methods has the potential to successfully address a broad range of biomedical questions. While most of the data pre-processing steps can be solved satisfactorily, some computational challenges with statistical learning and model validation remain.
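    As an illustration of the pre-processing steps the review covers, the sketch below de-noises a synthetic one-dimensional spectrum and picks its peaks. Real MCC/IMS data are two-dimensional (retention time by drift time), so the one-dimensional signal here is a deliberate simplifying assumption.

```python
# De-noising and peak picking on a synthetic drift-time spectrum.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import find_peaks

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 2000)                        # drift-time axis
spectrum = (np.exp(-((t - 0.3) / 0.01) ** 2)       # two analyte peaks
            + 0.6 * np.exp(-((t - 0.7) / 0.01) ** 2)
            + 0.05 * rng.normal(size=t.size))      # measurement noise

smoothed = gaussian_filter1d(spectrum, sigma=5)    # de-noising
peaks, _ = find_peaks(smoothed, height=0.2, prominence=0.1)  # peak picking
print("peak positions:", t[peaks])
```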

    On the Slow Drift of Solstices: Milankovic Cycles and Mean Global Temperature

    The Earth's revolution is modified by changes in the inclination of its rotation axis. Although the gravity field is central, the Earth's trajectory is not closed and the equinoxes drift. Milankovic (1920) argued that the shortest precession period of the solstices is 20.7 kyr: the Summer solstice in one hemisphere takes place alternately every 11 kyr at perihelion and at aphelion. We have submitted the time series for the Earth's pole of rotation, global mean surface temperature, and ephemerides to iterative Singular Spectrum Analysis (iSSA). iSSA extracts from each a trend, a 1-yr component, and a 60-yr component. Both the apparent drift of the solstices of the Earth around the Sun and the global mean temperature exhibit a strong 60-yr oscillation: the "fixed dates" of the solstices actually drift. Comparing the time evolution of the Winter and Summer solstice positions of the rotation pole with the first iSSA component (trend) of the temperature allows one to recognize some common features. A basic equation from Milankovic links the derivative of the heat received at a given location on Earth to the solar insolation, known functions of the location's coordinates, and the solar declination and hour angle, with an inverse-square dependence on the Sun-Earth distance. We have translated the drift of the solstices as a function of distance to the Sun into the geometrical insolation theory of Milankovic. Shifting the inverse square of the 60-yr iSSA drift of the solstices by 15 years, which is exactly a quadrature in time, with respect to the first derivative of the 60-yr iSSA trend of temperature puts the two curves in quasi-exact superimposition. The probability of a chance coincidence appears very low. Correlation does not imply causality when there is no accompanying model, but here Milankovic's equation can be considered a widely accepted model. This paper identifies a case of agreement between observations and a mathematical formulation.
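    The 15-year shift being "exactly a quadrature in time" follows from simple arithmetic on the 60-yr period:

```latex
% A 15-yr shift of a 60-yr oscillation is a quarter period,
% i.e. a 90-degree phase shift (quadrature):
\[
  \Delta\varphi \;=\; 2\pi\,\frac{\Delta t}{T}
               \;=\; 2\pi\,\frac{15\ \mathrm{yr}}{60\ \mathrm{yr}}
               \;=\; \frac{\pi}{2}.
\]
% Consistently, differentiation advances a sinusoid by the same quarter
% period: \frac{d}{dt}\sin(\omega t) = \omega\cos(\omega t).
```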

    On-Line Learning and Wavelet-Based Feature Extraction Methodology for Process Monitoring using High-Dimensional Functional Data

    The recent advances in information technology, such as automatic data acquisition systems and sensor systems, have created tremendous opportunities for collecting valuable process data, yet the timely processing of such data into meaningful information remains a challenge. In this research, several data mining methodologies that aid the information streaming of high-dimensional functional data were developed. For on-line implementations, two weighting functions for updating support vector regression parameters were developed. The functions use parameters that can easily be set a priori with minimal knowledge of the data involved, and they have provision for lower and upper bounds on the parameters. The functions are applicable to time series predictions, on-line predictions, and batch predictions. In order to apply these functions to on-line predictions, a new on-line support vector regression algorithm that uses adaptive weighting parameters was presented. The new algorithm uses a varying rather than a fixed regularization constant and accuracy parameter. The developed algorithm is more robust to the volume of data available for on-line training as well as to the relative position of the available data in the training sequence. The algorithm improves prediction accuracy by reducing the uncertainty of using fixed values for the regression parameters, and by reducing the uncertainty of using regression values based on experts' knowledge rather than on the characteristics of the incoming training data. The developed functions and algorithm were applied to feedwater flow rate data and two benchmark time series data sets. The results show that using adaptive regression parameters performs better than using fixed regression parameters. In order to reduce the dimension of data with several hundreds or thousands of predictors and enhance prediction accuracy, a wavelet-based feature extraction procedure, called the step-down thresholding procedure, was developed for identifying and extracting the significant features of a single curve. The procedure involves transforming the original spectra into wavelet coefficients. It is based on a multiple hypothesis testing approach, and it controls the family-wise error rate in order to guard against selecting insignificant features, without any concern about the amount of noise that may be present in the data. The procedure is therefore applicable to data reduction and/or data de-noising. It was compared to six other data-reduction and data-denoising methods from the literature and was found to consistently perform better than most of the popular methods and at the same level as the others. Many real-world data sets with high-dimensional explanatory variables also have multiple response variables; the selection of the fewest explanatory variables that show high sensitivity to the response variable(s) and low sensitivity to the noise in the data is therefore important for better performance and reduced computational burden. In order to select the fewest explanatory variables that can predict each of the response variables well, a two-stage wavelet-based feature extraction procedure is proposed. The first stage uses the step-down procedure to extract significant features from each of the curves; representative features are then selected out of the extracted features for all curves using a voting selection strategy. Other selection strategies, such as union and intersection, were also described and implemented. The essence of the first stage is to reduce the dimension of the data without any consideration of whether or not the features can predict the response variables accurately. The second stage uses a Bayesian decision theory approach to select those extracted wavelet coefficients that can predict each of the response variables accurately. The two-stage procedure was implemented on near-infrared spectroscopy data and shaft misalignment data. The results show that the second stage further reduces the dimension, and the prediction results are encouraging.
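    A minimal sketch may make the thresholding idea concrete: transform a curve into wavelet coefficients and keep only those that survive a multiplicity-controlled hypothesis test against noise. The Bonferroni-style cut below is a stand-in assumption for the dissertation's step-down procedure; pywt and the robust noise estimate are implementation choices, not the author's.

```python
# Wavelet-based feature extraction for a single curve: keep only the
# coefficients that pass a family-wise-error-controlled significance test.
import numpy as np
import pywt
from scipy.stats import norm

def extract_features(curve, wavelet="db4", level=4, alpha=0.05):
    coeffs = pywt.wavedec(curve, wavelet, level=level)
    flat = np.concatenate(coeffs)
    # Robust noise estimate from the finest-scale detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    # Bonferroni threshold controlling the family-wise error rate at alpha.
    thr = sigma * norm.ppf(1 - alpha / (2 * flat.size))
    mask = np.abs(flat) > thr
    return flat[mask], mask

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 512)
curve = np.sin(8 * np.pi * t) + 0.2 * rng.normal(size=t.size)
features, mask = extract_features(curve)
print(f"kept {mask.sum()} of {mask.size} coefficients")
```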