Diesel engine fuel injection monitoring using acoustic measurements and independent component analysis
Air-borne acoustic condition monitoring is a promising technique because of its non-intrusive nature and the rich information contained within acoustic signals, which carry contributions from all engine sources. However, background noise contamination, interference, and the number of internal combustion engine (ICE) vibro-acoustic sources hinder the extraction of condition information using this technique: lower-energy events, such as fuel injection, are buried within higher-energy events and/or corrupted by background noise.
This work first investigates the characteristics of diesel engine air-borne acoustic signals and the benefits of joint time-frequency domain analysis. Second, the air-borne acoustic signals in the vicinity of the injector head were recorded using three microphones placed around the fuel injector (120° apart), and an Independent Component Analysis (ICA) based scheme was developed to decompose these signals. The characteristics of the fuel injection process were thus revealed in the time-frequency domain using the Wigner-Ville distribution (WVD). The energy levels around the injection period, between 11 and 5 degrees before top dead center and within the 9-15 kHz frequency band, were then calculated. The technique was validated using simulated signals and empirical measurements at injection pressure settings from 250 down to 210 bar in steps of 10 bar; the recovered energy levels under the tested conditions were found to vary with the injector pressure setting.
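The band-energy step described above can be sketched in a few lines of numpy. The sampling rate, the synthetic signal, and the reference band below are invented for illustration, and the paper's ICA decomposition and WVD stages are omitted; only the idea of measuring energy in the 9-15 kHz band is shown:

```python
import numpy as np

fs = 96_000                      # sampling rate in Hz (assumed)
t = np.arange(0, 0.05, 1 / fs)

# Toy "acoustic" signal: low-frequency engine noise plus a 12 kHz
# burst standing in for the fuel-injection event.
signal = 0.5 * np.sin(2 * np.pi * 500 * t)
burst = (t > 0.020) & (t < 0.025)
signal[burst] += np.sin(2 * np.pi * 12_000 * t[burst])

def band_energy(x, fs, f_lo, f_hi):
    """Energy of x restricted to the [f_lo, f_hi] frequency band."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return float(np.sum(np.abs(spectrum[mask]) ** 2) / len(x))

# Energy in the paper's 9-15 kHz injection band vs. a quiet reference band.
e_inj = band_energy(signal, fs, 9_000, 15_000)
e_ref = band_energy(signal, fs, 20_000, 26_000)
```

On this toy signal the 9-15 kHz band captures the injected 12 kHz burst, so its energy clearly exceeds that of the reference band.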
A self-validating control system based approach to plant fault detection and diagnosis
An approach is proposed in which fault detection and diagnosis (FDD) tasks are distributed to separate FDD modules associated with each control system located throughout a plant. Intended specifically for those control systems that inherently eliminate steady-state error, the approach is modular, steady-state based, and requires very little process-specific information, and should therefore be attractive to control systems implementers who seek economies of scale. It is applicable to virtually all types of process plant, whether or not they are open-loop stable, whether or not they have a type or class number of zero, and so on. Based on qualitative reasoning, the approach is founded on the application of control systems theory to single and cascade control systems with integral action. This results in the derivation of cause-effect knowledge and fault isolation procedures that take into account factors such as interactions between control systems and the availability of non-control-loop-based sensors.
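The qualitative steady-state reasoning described above can be illustrated with a toy rule set. In this sketch (the function name, tolerance, and fault messages are all invented), a loop with integral action is checked at steady state: a persistent offset with an unsaturated actuator points away from the actuator, while saturation indicates a capacity problem:

```python
def diagnose(setpoint, measurement, controller_output, output_limits,
             tol=1e-3):
    """Qualitative steady-state fault isolation for a single loop with
    integral action (rules and messages are illustrative only)."""
    lo, hi = output_limits
    error = setpoint - measurement
    if abs(error) < tol:
        # Integral action has removed the offset: nothing to report.
        return "no fault indicated"
    if controller_output >= hi:
        return "actuator saturated high: demand exceeds capacity"
    if controller_output <= lo:
        return "actuator saturated low: demand below controllable range"
    # Offset persists although the actuator still has authority.
    return "sensor, controller or process fault suspected"
```

For example, `diagnose(50.0, 45.0, 100.0, (0.0, 100.0))` flags high saturation, whereas the same offset with an unsaturated output implicates the sensor, controller, or process instead.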
Online coherency identification and stability condition for large interconnected power systems using an unsupervised data mining technique
Identification of coherent generators and determination of the stability condition of a large interconnected power system are key steps in applying control strategies that avoid a partial or complete blackout. However, oscillatory trends, the large amount of data available, and the non-linear dynamic behaviour of the frequency measurements often obscure the actual coherent groups, making wide-area coherency monitoring a challenging task. This paper presents a novel online unsupervised data mining technique to identify coherent groups, detect power system disturbance events, and determine the stability condition of the system. The innovative part of the proposed approach lies in combining well-established algorithms, such as singular value decomposition (SVD) and K-means clustering, with a new concept based on clustering slopes. The proposed combination adds value to other applications relying on similar algorithms available in the literature. To validate the effectiveness of the proposed method, two case studies are presented in which data are extracted from the large and comprehensive initial dynamic model of ENTSO-E and the results are compared to alternative methods from the literature.
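A minimal sketch of the SVD-plus-K-means step on synthetic frequency measurements is shown below. The signals, noise levels, and hand-rolled one-dimensional K-means are invented for illustration, and the paper's clustering-slopes concept is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy wide-area frequency measurements: 6 generators, 200 samples.
# Generators 0-2 swing together; generators 3-5 swing in anti-phase.
t = np.linspace(0, 2, 200)
mode = np.sin(2 * np.pi * 0.5 * t)
X = np.vstack([ mode + 0.05 * rng.standard_normal(200) for _ in range(3)] +
              [-mode + 0.05 * rng.standard_normal(200) for _ in range(3)])

# SVD: one coordinate per generator along the dominant oscillation mode.
U, s, Vt = np.linalg.svd(X - X.mean(axis=1, keepdims=True),
                         full_matrices=False)
coords = U[:, :1] * s[0]

def kmeans_1d(points, k=2, iters=50):
    """Minimal K-means on the 1-D mode coordinates."""
    centers = points[:k].copy()
    for _ in range(iters):
        labels = np.argmin(np.abs(points - centers.T), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean()
    return labels

groups = kmeans_1d(coords)   # coherent groups: {0,1,2} vs {3,4,5}
```

Generators that swing together end up close in the dominant-mode coordinate, so K-means recovers the two coherent groups.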
Exploiting Evolution for an Adaptive Drift-Robust Classifier in Chemical Sensing
Gas chemical sensors are strongly affected by drift, i.e., changes in the sensors' response over time, which may render the statistical models commonly used for classification useless after a period of time. This paper presents a new classifier that embeds an adaptive stage able to reduce drift effects. The proposed system exploits a state-of-the-art evolutionary strategy to iteratively tweak the coefficients of a linear transformation that transparently corrects raw measurements, mitigating the negative effects of drift. The system operates continuously. The optimal correction strategy is learnt without a priori models or other hypotheses on the behavior of the physical-chemical sensors. Experimental results demonstrate the efficacy of the approach on a real problem.
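The adaptive stage can be sketched as a (1+1) evolution strategy that tunes a drift-correction term. In this toy version the correction is a simple additive offset rather than a full linear transformation, and the data, centroids, and hyperparameters are all invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Reference (pre-drift) class centroids of a baseline classifier.
centroids = np.array([[0.0, 0.0], [4.0, 4.0]])

# Drifted measurements: an unknown additive offset corrupts every sample.
drift = np.array([1.5, -1.0])
labels = rng.integers(0, 2, 60)
X = centroids[labels] + 0.2 * rng.standard_normal((60, 2)) + drift

def fitness(correction):
    """Mean distance of corrected samples to their nearest centroid."""
    Z = X + correction
    d = np.linalg.norm(Z[:, None, :] - centroids[None, :, :], axis=2)
    return d.min(axis=1).mean()

# (1+1) evolution strategy: mutate the correction, keep improvements.
best = np.zeros(2)
best_f = fitness(best)
for _ in range(300):
    candidate = best + 0.5 * rng.standard_normal(2)
    f = fitness(candidate)
    if f < best_f:
        best, best_f = candidate, f
```

After the loop `best` approximates `-drift`, so `X + best` can be handed to the unchanged downstream classifier, which is the essence of the transparent correction idea.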
Intelligent Fault Analysis in Electrical Power Grids
Power grids are one of the most important components of infrastructure in today's world. Every nation depends on the security and stability of its power grid to provide electricity to households and industries. A malfunction of even a small part of a power grid can cause loss of productivity, revenue and, in some cases, life. It is therefore imperative to design a system that can assess the health of the power grid and take protective measures before a serious anomaly takes place. To achieve this objective, we have set out to create an artificially intelligent system that can analyze grid information at any given time and determine the health of the grid through the use of sophisticated formal models and novel machine learning techniques such as recurrent neural networks. Our system simulates grid conditions, including stimuli such as faults, generator output fluctuations, and load fluctuations, using Siemens PSS/E software; the resulting data are used to train various classifiers such as SVM and LSTM, which are subsequently tested. The results are excellent, with our methods giving very high accuracy on the data. This model can easily be scaled to handle larger and more complex grid architectures.
Comment: In proceedings of the 29th IEEE International Conference on Tools with Artificial Intelligence (ICTAI) 2017 (full paper); 6 pages; 13 figures
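As a stand-in for the SVM/LSTM classifiers trained on PSS/E simulation data, the sketch below fits a logistic-regression classifier to invented per-unit bus-voltage snapshots, where a fault appears as a voltage dip on one bus. All features, magnitudes, and the classifier choice are illustrative, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic bus-voltage snapshots (per unit): healthy buses sit near
# 1.0 p.u.; a fault shows up as a 0.5 p.u. dip on one random bus.
healthy = 1.0 + 0.01 * rng.standard_normal((100, 5))
faulted = 1.0 + 0.01 * rng.standard_normal((100, 5))
faulted[np.arange(100), rng.integers(0, 5, 100)] -= 0.5

X = np.vstack([healthy, faulted])
y = np.array([0] * 100 + [1] * 100)      # 0 = healthy, 1 = fault
Xc = X - 1.0                             # center around nominal 1.0 p.u.

# Full-batch gradient descent on the logistic loss.
w, b = np.zeros(5), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(Xc @ w + b)))  # predicted fault probability
    w -= 2.0 * (Xc.T @ (p - y)) / len(y)
    b -= 2.0 * np.mean(p - y)

accuracy = np.mean((Xc @ w + b > 0) == y)
```

Because the fault signature (a dip on some bus) is linearly separable from the healthy snapshots, even this simple model reaches high training accuracy, mirroring the high-accuracy claim for the richer classifiers.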
Damage identification in structural health monitoring: a brief review from its implementation to the Use of data-driven applications
The damage identification process provides relevant information about the current state of a structure under inspection, and it can be approached from two different points of view. The first approach uses data-driven algorithms, which are usually associated with the collection of data using sensors; the data are subsequently processed and analyzed. The second approach uses models to analyze information about the structure. In the latter case, the overall performance of the approach depends on the accuracy of the model and the information used to define it. Although both approaches are widely used, data-driven algorithms are preferred in most cases because they can analyze data acquired from sensors and provide a real-time solution for decision making; however, they require high-performance processors due to their high computational cost. As a contribution to researchers working with data-driven algorithms and applications, this work presents a brief review of data-driven algorithms for damage identification in structural health monitoring applications. The review covers damage detection, localization, classification, extension, and prognosis, as well as the development of smart structures. The literature is systematically reviewed according to the natural steps of a structural health monitoring system. The review also includes information on the types of sensors used and on the development of data-driven algorithms for damage identification.
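A common data-driven damage-detection baseline of the kind surveyed here is novelty detection via PCA reconstruction error: learn the healthy correlation pattern among sensors, then flag readings that break it. The sketch below is illustrative only; the sensor model, noise levels, and threshold rule are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# Healthy training snapshots: 4 correlated sensors driven by one mode.
latent = rng.standard_normal((200, 1))
pattern = np.array([[1.0, 0.8, 0.6, 0.4]])
healthy = latent @ pattern + 0.05 * rng.standard_normal((200, 4))

mean = healthy.mean(axis=0)
_, _, Vt = np.linalg.svd(healthy - mean, full_matrices=False)
basis = Vt[:1]                      # dominant healthy mode (PCA subspace)

def residual(x):
    """Reconstruction error of a snapshot against the healthy subspace."""
    z = x - mean
    return float(np.linalg.norm(z - (z @ basis.T) @ basis))

threshold = max(residual(x) for x in healthy)   # crude baseline threshold

# A "damaged" snapshot breaks the inter-sensor correlation pattern.
damaged = np.array([1.0, -0.8, 0.6, 0.4])
```

A snapshot whose residual exceeds the baseline threshold is flagged as potential damage; localization and classification, also covered by the review, require richer features than this single score.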
Towards continuous biomanufacturing: a computational approach for the intensification of monoclonal antibody production
Current industrial trends encourage the development of sustainable, environmentally friendly processes with reduced energy and raw material consumption. Meanwhile, increasing market demand, as well as tight regulations on product quality, necessitates efficient operating procedures that guarantee products of high purity. In this direction, process intensification via continuous operation paves the way for the development of novel, eco-friendly processes characterized by higher productivity than batch operation (Nicoud, 2014). The shift towards continuous operation could advance the market for high-value biologics, such as monoclonal antibodies (mAbs), as it would lead to shorter production times, decreased costs, and significantly lower energy consumption (Konstantinov and Cooney, 2015; Xenopoulos, 2015). mAb production comprises two main steps: the culturing of the cells (upstream) and the purification of the targeted product (downstream). Both processes are highly complex, and their performance depends on numerous parameters. The efficiency of the upstream step depends strongly on cell growth and the longevity of the culture, while product quality can be jeopardized if the culture is not terminated in time. Similarly, downstream processing, whose main step is chromatographic separation, depends strongly on the setup configuration as well as on the composition of the upstream mixture. It is therefore necessary to understand and optimize both processes prior to their integration. In this direction, the design of intelligent computational tools becomes essential. Such tools can form a solid basis for: (i) the execution of cost-free comparisons of various operating strategies; (ii) the design of optimal operating profiles; and (iii) the development of advanced, intelligent control systems that maintain the process under optimal operation while rejecting disturbances.
In this context, this work focuses on the development of advanced computational tools for improving the performance of (a) chromatographic separation processes and (b) cell culture systems, following the systematic PAROC framework and software platform (Pistikopoulos et al., 2015). In particular, we develop model-based controllers for single- and multi-column chromatographic setups based on the operating principles of an industrially relevant separation process. The presented strategies are immunized against variations in the feed stream and can successfully compensate for time delays caused by the column residence time. Issues regarding the points of integration in multi-column systems are also discussed. Moreover, we design and test in silico model-based control strategies for a cell culture system, aiming to increase culture productivity and drive the system towards continuous operation. Challenges and potential solutions for the seamless integration of the examined bioprocess are also investigated at the end of this thesis.
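The model-based controllers above are developed within the PAROC framework; as a vastly simplified stand-in, the sketch below runs a receding-horizon (MPC-style) controller on an invented first-order linear process, choosing at each step the constant input that minimizes a tracking-plus-effort cost over a short horizon:

```python
import numpy as np

# Invented first-order process x[k+1] = a*x[k] + b*u[k], standing in for
# a (greatly simplified) culture or chromatography dynamic.
a, b = 0.9, 0.5
setpoint = 2.0

def mpc_step(x, horizon=5, candidates=np.linspace(-1.0, 1.0, 41)):
    """Pick the constant input minimising tracking + effort cost over a
    short prediction horizon (brute-force search, for clarity)."""
    best_u, best_cost = 0.0, float("inf")
    for u in candidates:
        xk, cost = x, 0.0
        for _ in range(horizon):
            xk = a * xk + b * u
            cost += (xk - setpoint) ** 2 + 0.01 * u ** 2
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

# Closed loop: apply the chosen move, re-measure, re-optimise.
x = 0.0
for _ in range(30):
    x = a * x + b * mpc_step(x)
```

The re-optimisation at every step is what gives model-based control its robustness to disturbances; the real multi-column and cell-culture controllers in the thesis replace this toy model with detailed process models and explicit (multi-parametric) solutions.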