72 research outputs found

    Artificial neural network RCE

    This thesis focuses on the RCE artificial neural network, in particular its topology, properties, and learning algorithm. It describes the program uTeachRCE, developed for training the RCE network, and the program RCEin3D, created to visualize the learning process of the RCE network in 3D space. The RCE network is compared with a multilayer artificial neural network trained by the backpropagation algorithm in a practical letter-recognition application. The letters are described by moments invariant to rotation, translation, and scaling of the image.
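The rotation-, translation- and scale-invariant moments mentioned above are classically Hu's seven moment invariants. A minimal NumPy sketch (a generic illustration, not the thesis's actual implementation) might look like this:

```python
import numpy as np

def hu_moments(img):
    """Compute Hu's seven moment invariants of a 2-D intensity image.

    Central moments make the result translation-invariant, the
    normalisation makes it scale-invariant, and the seven Hu
    combinations are additionally rotation-invariant.
    """
    y, x = np.mgrid[:img.shape[0], :img.shape[1]].astype(float)
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00

    def mu(p, q):                      # central moment mu_pq
        return ((x - xc) ** p * (y - yc) ** q * img).sum()

    def eta(p, q):                     # scale-normalised central moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2)

    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    return np.array([
        n20 + n02,
        (n20 - n02) ** 2 + 4 * n11 ** 2,
        (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2,
        (n30 + n12) ** 2 + (n21 + n03) ** 2,
        (n30 - 3 * n12) * (n30 + n12)
            * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
            + (3 * n21 - n03) * (n21 + n03)
            * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2),
        (n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
            + 4 * n11 * (n30 + n12) * (n21 + n03),
        (3 * n21 - n03) * (n30 + n12)
            * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
            - (n30 - 3 * n12) * (n21 + n03)
            * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2),
    ])
```

Because the invariants are built from scale-normalised central moments, translating or rotating a glyph leaves the feature vector essentially unchanged, which is what makes such moments usable as letter descriptors.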

    Joint Acoustic Propagation Experiment (JAPE-91) Workshop

    The Joint Acoustic Propagation Experiment (JAPE) was conducted at the White Sands Missile Range, New Mexico, USA, during the period 11-28 Jul. 1991. JAPE consisted of various short- and long-range propagation experiments using various acoustic sources, including speakers, propane cannons, helicopters, a 155 mm howitzer, and static high explosives. Of primary importance to the performance of these tests was the extensive characterization of the atmosphere, which included turbulence measurements. A workshop to disseminate the results of JAPE-91 was held in Hampton, VA, on 28 Apr. 1993. This report is a compilation of the presentations made at the workshop, along with a list of attendees and the agenda.

    Human Resource Management in Emergency Situations

    The dissertation examines the issues related to human resource management in emergency situations and introduces measures that help to solve these issues. The prime aim is a comprehensive analysis of human resource management and of the built environment resilience management life cycle and its stages, for the purpose of creating an effective Human Resource Management in Emergency Situations Model and Intelligent System. This would help to accelerate resilience at every stage, manage personal stress and reduce disaster-related losses. The dissertation consists of an Introduction, three Chapters, the Conclusions, References, a List of Author's Publications and nine Appendices. The Introduction discusses the research problem and its relevance, outlines the research object, states the research aim and objectives, overviews the research methodology and the original contribution of the research, presents the practical value of the research results, and lists the defended propositions. The Introduction concludes with an overview of the author's publications and conference presentations on the topic of the dissertation. Chapter 1 introduces best practice in the field of disaster and resilience management in the built environment. It also analyses the disaster and resilience management life cycle and its stages, reviews different intelligent decision support systems, and investigates research on the application of physiological parameters and their dependence on stress. The chapter ends with conclusions and the explicit objectives of the dissertation. Chapter 2 introduces the conceptual model of human resource management in emergency situations. To implement a multiple criteria analysis of the research object, methods of multiple criteria analysis and mathematics are proposed; they should be integrated with intelligent technologies.
    In Chapter 3 the model developed by the author and the methods of multiple criteria analysis are applied in developing the Intelligent Decision Support System for Human Resource Management in Emergency Situations, consisting of four subsystems: a Physiological Advisory Subsystem to Analyse a User's Post-Disaster Stress Management; a Text Analytics Subsystem; a Recommender Thermometer for Measuring the Preparedness for Resilience; and a Subsystem of Integrated Virtual and Intelligent Technologies. The main statements of the thesis were published in eleven scientific articles: two in journals listed in the Thomson Reuters ISI Web of Science, one in a peer-reviewed scientific journal, four in peer-reviewed conference proceedings referenced in the Thomson Reuters ISI database, and three in peer-reviewed conference proceedings in Lithuania. Five presentations were given on the topic of the dissertation at conferences in Lithuania and other countries.
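A minimal sketch of the kind of multiple criteria analysis mentioned above is the simple additive weighting method; the alternatives, criteria, weights and scores below are invented for illustration and are not taken from the dissertation:

```python
import numpy as np

# Rows: candidate emergency-management alternatives (hypothetical).
# Columns: criteria, e.g. cost (minimise), preparedness, response speed.
scores = np.array([
    [7.0, 8.0, 6.0],
    [4.0, 6.0, 9.0],
    [9.0, 5.0, 7.0],
])
weights = np.array([0.3, 0.4, 0.3])      # expert-assigned, sum to 1
benefit = np.array([False, True, True])  # the cost criterion is minimised

# Normalise each criterion to [0, 1]; invert cost-type criteria so
# that larger is always better.
norm = (scores - scores.min(0)) / (scores.max(0) - scores.min(0))
norm[:, ~benefit] = 1.0 - norm[:, ~benefit]

priority = norm @ weights                # weighted-sum utility
ranking = np.argsort(priority)[::-1]     # best alternative first
```

Intelligent decision support systems of the kind described typically wrap such a scoring step with data collection, sensitivity analysis and a recommendation interface.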

    Temporal and spectral pattern recognition for detection and combined network and array waveform coherence analysis for location of seismic events

    The reliable automatic detection, location and classification of seismic events still poses great challenges if only a few sensors record an event and/or the signal-to-noise ratio is very low. This study first examines, compares and evaluates the most widely used algorithms for automatic processing on a diverse set of seismic datasets (e.g. from induced seismicity and nuclear-test-ban verification experiments). A synthesis of state-of-the-art algorithms is given. Several single-station event detection and phase picking algorithms are tested, followed by a comparison of single-station waveform cross-correlation and spectral pattern recognition. Coincidence analysis is then investigated to demonstrate to what extent false alarms can be ruled out in sensor networks of multiple stations. It is then shown how the use of seismic (mini) arrays in diverse configurations can improve these results considerably through the use of waveform coherence. In a second step, two concepts are presented which combine the previously analysed algorithmic building blocks in a new way. The first concept is seismic event signal clustering by unsupervised learning, which allows event identification with only one sensor. The study serves as a base-level investigation to explore the limits of elementary seismic monitoring with a single vertical-component seismic sensor and shows the level of information which can be extracted from a single station. It is investigated how single-station event signal similarity clusters relate to geographic hypocenter regions and common source processes. Typical applications arise in local seismic networks where reliable ground truth from a dense temporal network precedes or follows a sparse (permanent) installation. The test dataset comprises a three-month subset from a field campaign to map subduction below northern Chile, the Project for the Seismological Investigation of the Western Cordillera (PISCO).
Due to favourable ground noise conditions in the Atacama Desert, the dataset contains an abundance of shallow and deep earthquakes, and many quarry explosions. Event signatures often overlap, posing a challenge to any signal-processing scheme. Pattern recognition must work on reduced seismograms to restrict the parameter space. Continuous parameter extraction based on noise-adapted spectrograms was chosen instead of a discrete representation by, e.g., amplitudes, onset times, or spectral ratios, to ensure consideration of potentially hidden features. This also made it possible to visualize the derived feature vectors for human inspection and template matching algorithms. Because event classes should comprise earthquake regions regardless of magnitude, signal clustering based on amplitudes is prevented by proper normalization of the feature vectors. Principal component analysis (PCA) is applied to further reduce the number of features used to train a self-organizing map (SOM). The SOM arranges prototypes of each event class topologically in a 2D map. Overcoming the restrictions of this black-box approach, the arranged prototypes can be transformed back to spectrograms to allow for visualization and interpretation of the event classes. The final step relates prototypes to ground-truth information, confirming the potential of automated, coarse-grain hypocenter clustering based on single-station seismograms. The approach was tested by two-fold cross-validation, whereby multiple sets of feature vectors from half the events are compared by a one-nearest-neighbour classifier in combination with a Euclidean distance measure, resulting in an overall correct geographic separation rate of 95.1% for coarse clusters and 80.5% for finer clusters (86.3% for a more central station). The second concept shows a new method to combine seismic networks of single stations and arrays for automatic seismic event location.
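The two-fold cross-validation with a one-nearest-neighbour classifier and Euclidean distance described above can be sketched as follows; the feature vectors here are synthetic stand-ins for the spectrogram-derived features:

```python
import numpy as np

def one_nn_predict(train_x, train_y, test_x):
    """Classify each test vector by the label of its nearest
    training vector under the Euclidean distance."""
    # Pairwise distances, shape (n_test, n_train).
    d = np.linalg.norm(test_x[:, None, :] - train_x[None, :, :], axis=2)
    return train_y[np.argmin(d, axis=1)]

def two_fold_accuracy(x, y):
    """Two-fold cross-validation: train on one half, test on the
    other, swap the roles, and average the correct rate."""
    half = len(x) // 2
    a, b = slice(None, half), slice(half, None)
    acc1 = np.mean(one_nn_predict(x[a], y[a], x[b]) == y[b])
    acc2 = np.mean(one_nn_predict(x[b], y[b], x[a]) == y[a])
    return (acc1 + acc2) / 2
```

With well-separated feature clusters this yields a correct-separation rate of 1.0; the rates quoted in the abstract reflect the overlap present in real single-station features.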
After exploring the capabilities of single-station algorithms, this section explores the capabilities of algorithms for small local seismic networks. Traffic light systems for induced seismicity monitoring in particular rely on the real-time automated location of weak events. These events suffer from low signal-to-noise ratios and noise spikes due to the industrial setting. Conventional location methods rely on independent picking of first arrivals from seismic wave onsets recorded at single stations. Picking is done separately and without feedback from the actual location algorithm. With low signal-to-noise ratios and local events, the association of onsets becomes error-prone, especially for S-phase onsets, which are overlaid by coda from previous phases. If the recording network is small or only a few phases can be associated, single wrong associations can lead to large errors in hypocenter location and magnitude. Event location by source scanning, established over the last two decades, can provide more robust results. Source scanning uses maxima from a travel-time-corrected stack of a characteristic function of the full waveforms on a predefined location grid. This study investigates how source scanning can be extended and improved by integrating information from seismic arrays, i.e. waveform stacking and the Fisher ratio. These array methods rely on the coherency of the raw filtered waveforms, while traditional source scanning uses a characteristic function to obtain coherency from otherwise incoherent waveforms between distant stations. The short-term-average to long-term-average ratio (STA/LTA) serves as the characteristic function, with single-station vertical-component traces used for P-phases and radial and transverse components for S-phases. For array stations, the STA/LTA of the stacked vertical seismogram, further weighted by the STA/LTA of the Fisher ratio as a function of back azimuth and slowness, is utilized for P-phases.
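The STA/LTA characteristic function mentioned above can be sketched in NumPy as follows; this is a simple windowed variant, and the window lengths are illustrative:

```python
import numpy as np

def sta_lta(trace, n_sta, n_lta):
    """Windowed STA/LTA characteristic function.

    The short-term average (STA) of the signal energy is divided by
    the long-term average (LTA); the ratio rises sharply at an
    impulsive phase onset.
    """
    energy = trace.astype(float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(energy)))
    sta = np.zeros_like(energy)
    lta = np.zeros_like(energy)
    for i in range(n_lta, len(energy)):
        sta[i] = (csum[i + 1] - csum[i + 1 - n_sta]) / n_sta
        lta[i] = (csum[i + 1] - csum[i + 1 - n_lta]) / n_lta
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(lta > 0, sta / lta, 0.0)
    return ratio
```

In source scanning, this function (rather than the raw waveform) is shifted by predicted travel times and stacked over the grid of candidate hypocenters, so that incoherent waveforms at distant stations still stack constructively at the true source position.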
    In the chosen example, the extension by array-processing techniques reduces the mean error relative to manually determined hypocenters by up to a factor of 2.9, resolves ambiguities and further constrains the location.

    Role of independent component analysis in intelligent ECG signal processing

    The electrocardiogram (ECG) reflects the activities and attributes of the human heart and reveals very important hidden information in its structure. This information is extracted by means of ECG signal analysis to gain insights that are crucial in explaining and identifying various pathological conditions. The feature extraction process can be accomplished directly by an expert through visual inspection of ECGs printed on paper or displayed on a screen. However, the complexity of the ECG signals and the time taken to inspect and analyse them manually make this a very tedious task that yields only limited descriptions. In addition, manual ECG analysis is always prone to human error. ECG signal processing has therefore become a prevalent and effective tool for research and clinical practice. A typical computer-based ECG analysis system includes signal preprocessing, beat detection and feature extraction stages, followed by classification. Automatic identification of arrhythmias from the ECG is one important biomedical application of pattern recognition. This thesis focuses on ECG signal processing using Independent Component Analysis (ICA), which has received increasing attention as a signal conditioning and feature extraction technique for biomedical applications. Long-term ECG monitoring is often required to reliably identify an arrhythmia. Motion-induced artefacts are particularly common in ambulatory and Holter recordings, and are difficult to remove with conventional filters due to their similarity in shape to ectopic beats. Feature selection has always been an important step towards more accurate, reliable and speedy pattern recognition, and better feature spaces are also sought in ECG pattern recognition applications.
Two new algorithms are proposed, developed and validated in this thesis: one for removing non-trivial noise in ECGs using ICA, and one that deploys ICA-extracted features to improve the recognition of arrhythmias. Firstly, independent component analysis was studied and found effective in this PhD project for separating out motion-induced artefacts in ECGs; the independent component corresponding to noise is then removed from the ECG according to kurtosis and correlation measurements. The second algorithm was developed for ECG feature extraction, in which independent component analysis is used to obtain a set of features, or basis functions, of the ECG signals generated hypothetically by different parts of the heart during the normal and arrhythmic cardiac cycle. ECGs are then classified based on the basis functions along with other time-domain features. The selection of an appropriate feature set for the classifier was found to be important for better performance and quicker response. Artificial-neural-network-based pattern recognition engines perform the final classification to measure the performance of the ICA-extracted features and the effectiveness of the ICA-based artefact reduction algorithm. The motion artefacts are effectively removed from the ECG signal, as shown by beat detection on noisy and cleaned ECG signals after ICA processing. Using the ICA-extracted feature sets, classification of ECG arrhythmias into eight classes is achieved with fewer independent components and very high classification accuracy.
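The kurtosis-based rejection step can be illustrated as follows. This sketch assumes the ICA unmixing has already been performed (the components and mixing matrix below are synthetic), and uses the common heuristic that spiky artefact components show unusually high kurtosis; the thesis combines kurtosis with correlation measurements.

```python
import numpy as np

def excess_kurtosis(x):
    """Sample excess kurtosis: spiky artefact components score high,
    smoother physiological components score low."""
    x = x - x.mean()
    return np.mean(x ** 4) / np.mean(x ** 2) ** 2 - 3.0

def remove_artifact_components(components, mixing, threshold=5.0):
    """Zero out independent components whose excess kurtosis exceeds
    a threshold, then remix the remaining components back into the
    observation space.  `components` has shape (n_comp, n_samples)
    and `mixing` shape (n_chan, n_comp), as produced by a preceding
    ICA step."""
    keep = np.array([excess_kurtosis(c) <= threshold for c in components])
    cleaned = components * keep[:, None]
    return mixing @ cleaned
```

The threshold and the direction of the heuristic are assumptions for this sketch; in practice they would be tuned against annotated recordings, with correlation to a reference channel as a second criterion.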

    Design of an embedded-system-based electrocardiogram Holter device and arrhythmia detection with an artificial neural network and genetic algorithm hybrid model

    The full text was made openly accessible in accordance with the law published in Official Gazette no. 30352 of 06.03.2018 and the directive of 18.06.2018 on the electronic collection, organization and dissemination of graduate theses. The gradual aging of the population in Turkey and worldwide, and the rising incidence of heart disease, create a need to keep this vital organ under continuous monitoring and to observe all effects before and during treatment. The main goal of this study is to realize an embedded-system-based portable electrocardiography (ECG) Holter device, to produce adaptive solutions from the results of applied signal-processing methods, and to detect atrial fibrillation arrhythmia by building a hybrid model of an artificial neural network and a genetic algorithm. The study consists of three stages. In the first stage, the subject's ECG signals are acquired with electrodes and amplified 251-fold by the bioinstrumentation amplifier circuit we designed. To suppress environmental interference and mains noise, a 50 Hz notch filter and a band-pass filter with a 0.01-132 Hz passband are applied. The filtered ECG signal is digitized with analog-to-digital converters (ADC) and connected to the embedded system board over the SPI communication protocol. The first user interface was developed in Python because of its extensive library support; however, since Python was not fast enough for real-time signal processing and on-screen plotting, the software was reimplemented in C++. The algorithms were run on Raspberry Pi, Odroid and Beaglebone Black embedded system boards, and their performance was analysed and compared. The Beaglebone Black board cannot exceed a 40 Hz sampling rate and is therefore unsuitable for an ECG Holter device. The Raspberry Pi reaches a sampling rate of about 80 Hz and was found usable for pulse measurement and simple signal-processing algorithms. The Odroid board reaches sampling rates of up to 260 Hz and was determined to be the most suitable microcomputer for ECG analysis, signal-processing algorithms and artificial intelligence applications. In the second stage, the digitized ECG signal is analysed with the Fourier transform and a frequency analysis is performed. Signal-processing methods for enhancing the R peak and for setting an adaptive threshold in pulse-rate calculation are applied and their results compared; the signal-energy method proved to be the most suitable R-detection algorithm. A wavelet transform is applied to the ECG signal, features are extracted using different Daubechies (db) wavelet families, and the results of the artificial intelligence models are tabulated and compared. In the third stage, statistical feature extraction forms the input layer of the artificial neural network, the network is trained, and its weight coefficients are optimized with a genetic algorithm to reduce the error rate.
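The idea of reducing a neural network's error by optimising its weight coefficients with a genetic algorithm can be sketched as follows; the toy data, single-neuron architecture and GA parameters are invented for illustration and are not taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data standing in for ECG feature vectors (hypothetical):
# class 1 if the sum of features is positive, else 0.
X = rng.standard_normal((80, 4))
y = (X.sum(axis=1) > 0).astype(float)

def forward(w, X):
    """Single-neuron network: sigmoid(X @ w[:4] + w[4])."""
    z = np.clip(X @ w[:4] + w[4], -60.0, 60.0)
    return 1.0 / (1.0 + np.exp(-z))

def loss(w):
    return np.mean((forward(w, X) - y) ** 2)

# Genetic algorithm: truncation selection, arithmetic crossover,
# Gaussian mutation.  Keeping the parents unchanged (elitism) means
# the best individual can never get worse between generations.
pop = rng.standard_normal((40, 5))
init_best = min(loss(w) for w in pop)
for generation in range(60):
    fitness = np.array([loss(w) for w in pop])
    parents = pop[np.argsort(fitness)[:10]]                # 10 best survive
    mothers = parents[rng.integers(0, 10, 30)]
    fathers = parents[rng.integers(0, 10, 30)]
    children = 0.5 * (mothers + fathers)                   # crossover
    children += 0.1 * rng.standard_normal(children.shape)  # mutation
    pop = np.vstack([parents, children])

best = pop[np.argmin([loss(w) for w in pop])]
```

In the hybrid model described above, the same principle is applied to the full weight set of a trained network, with the GA searching around the backpropagation solution to lower the residual error.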

    An investigation on automatic systems for fault diagnosis in chemical processes

    Plant safety is the most important concern of the chemical industries. Process faults can cause economic losses as well as human and environmental damage. Most operational faults are normally considered in the process design phase by applying methodologies such as Hazard and Operability Analysis (HAZOP). However, it must be expected that failures will occur in an operating plant. For this reason, it is of paramount importance that plant operators can promptly detect and diagnose such faults in order to take the appropriate corrective actions. In addition, preventive maintenance needs to be considered in order to increase plant safety. Fault diagnosis has been addressed with both analytical and data-based models, using several techniques and algorithms. However, there is not yet a general fault diagnosis framework that joins the detection and diagnosis of faults, whether or not they are registered in historical records. Moreover, little effort has been devoted to automating the reported approaches and implementing them in real practice. Against this background, this thesis proposes a general framework for data-driven Fault Detection and Diagnosis (FDD), applicable and amenable to automation in any industrial scenario in order to maintain plant safety. The main requirement for constructing this system is the existence of historical process data. In this sense, promising methods imported from the Machine Learning field are introduced as fault diagnosis methods. The learning algorithms used as diagnosis methods have proved capable of diagnosing not only the modeled faults but also novel faults. Furthermore, Risk-Based Maintenance (RBM) techniques, widely used in the petrochemical industry, are proposed for application as part of preventive maintenance in all industry sectors. The proposed FDD system, together with an appropriate preventive maintenance program, would represent a potential plant safety program to be implemented.
    Chapter one presents a general introduction to the thesis topic, as well as the motivation and scope. Chapter two reviews the state of the art of the related fields: the fault detection and diagnosis methods found in the literature are reviewed, and a taxonomy that joins the classifications of both Artificial Intelligence (AI) and Process Systems Engineering (PSE) is proposed. The assessment of fault diagnosis with performance indices is also reviewed, as is the state of the art of Risk Analysis (RA) as a tool for corrective actions against faults and of Maintenance Management for preventive actions. Finally, the benchmark case studies against which FDD research is commonly validated are examined in this chapter. The second part of the thesis, comprising chapters three to six, addresses the methods applied during the research work: chapter three deals with data pre-processing, chapter four with the feature processing stage, and chapter five with the diagnosis algorithms, while chapter six introduces the Risk-Based Maintenance techniques for addressing preventive plant maintenance. The third part includes chapter seven, which constitutes the core of the thesis. In this chapter the proposed general FDD system is outlined, divided into three steps: diagnosis model construction, model validation and on-line application. This scheme includes a fault detection module and an Anomaly Detection (AD) methodology for the detection of novel faults. Furthermore, several approaches are derived from this general scheme for continuous and batch processes. The fourth part of the thesis presents the validation of the approaches: chapter eight presents the validation of the proposed approaches in continuous processes and chapter nine the validation of the batch-process approaches. Chapter ten demonstrates the AD methodology on real batch processes.
    First, the methodology is applied to a laboratory heat exchanger and then to a Photo-Fenton pilot plant, which corroborates its potential and success in real practice. Finally, the fifth part, comprising chapter eleven, is dedicated to the final conclusions and the main contributions of the thesis. The scientific production achieved during the research period is also listed, and prospects for further work are outlined.
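The Anomaly Detection methodology itself is not detailed in this abstract; a common data-driven baseline for flagging novel faults from historical normal-operation data is PCA with a reconstruction-error (Q/SPE) threshold, sketched here under that assumption:

```python
import numpy as np

class PCAAnomalyDetector:
    """Flags observations whose reconstruction error (Q / SPE
    statistic) under a PCA model of normal operation exceeds a
    percentile-based threshold.  A generic baseline, not the
    thesis's exact methodology."""

    def fit(self, X, n_components=2, percentile=99.0):
        self.mean_ = X.mean(axis=0)
        Xc = X - self.mean_
        # Principal directions from the SVD of the centred data.
        _, _, vt = np.linalg.svd(Xc, full_matrices=False)
        self.components_ = vt[:n_components]
        self.threshold_ = np.percentile(self._spe(X), percentile)
        return self

    def _spe(self, X):
        Xc = X - self.mean_
        recon = (Xc @ self.components_.T) @ self.components_
        return np.sum((Xc - recon) ** 2, axis=1)

    def predict(self, X):
        """True where an observation is anomalous (a candidate novel fault)."""
        return self._spe(X) > self.threshold_
```

Such a detector complements the supervised diagnosis models: observations the classifier cannot attribute to a known fault class, but which exceed the SPE threshold, are treated as novel faults.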