12 research outputs found

    Dealing with abnormalities and deviations to enhance resilience in engineering Assets: A critical review from human factors and decision-making perspectives under complex operational contexts

    With the growing scale of industrial demands, and the complexities and uncertainties surrounding asset engineering and operations due to advanced technology utilization, digitalization, sustainability, new operating models, etc., the role of abnormalities and deviations in human safety, system security, and the reliability and resilience of engineering assets and industrial systems is becoming even more significant for modern industrial sectors and for society in general. In these contexts, the ability of operators to capture and make sense of early signals emerging from engineering assets and systems deserves more attention, since it enables them to enhance critical situation awareness (SA) during complex operations. This calls for proactive solutions that integrate core data with operator knowledge through suitable logical approaches, particularly at a time of growing recognition that asset data can strongly support engineering and operational decisions in demanding contexts. Based on an ongoing research project, this paper sheds light on abnormalities and deviations, two specific attributes that should be better understood. The purpose is to explore how to capitalize on them at very early sense-making stages to enhance the situation awareness, and thus the resilience, of dynamic and complex engineering assets and systems. Through a critical review of the current state of knowledge, together with industrial observations, this paper studies these core concepts in detail, with due attention to the critical need for a priori contextual knowledge and hybrid contextual decision solutions. This R&D work explores proactive possibilities for mitigating the inherent potential for unwanted events and incidents, to enhance resilience in the era of digital twins and cyber-physical systems, where complex technologies and operational demands generate new conditions for asset performance.
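The early-signal capture the paper argues for can be illustrated with a simple deviation detector. The sketch below is a minimal illustration, assuming an exponentially weighted moving average (EWMA) baseline with a k-sigma band; the function name, smoothing factor and threshold are illustrative choices, not values or methods from the paper.

```python
import numpy as np

def ewma_deviation_flags(signal, alpha=0.1, k=3.0):
    """Flag samples that deviate more than k running standard
    deviations from an EWMA baseline (illustrative parameters)."""
    mean = signal[0]
    var = 0.0
    flags = []
    for x in signal:
        resid = x - mean
        # test the sample against the current baseline before updating it
        flags.append(abs(resid) > k * np.sqrt(var) if var > 0 else False)
        mean += alpha * resid
        var = (1 - alpha) * (var + alpha * resid ** 2)
    return np.array(flags)
```

In practice such a detector would feed flagged samples to an operator-facing dashboard, where contextual knowledge decides whether a deviation is an early warning or benign noise.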

    Recent Developments in Smart Healthcare

    Medicine is undergoing a sector-wide transformation thanks to advances in computing and networking technologies. Healthcare is changing from reactive and hospital-centered to preventive and personalized, and from disease-focused to well-being-centered. In essence, healthcare systems, as well as fundamental medical research, are becoming smarter. We anticipate significant improvements in areas ranging from molecular genomics and proteomics, to decision support for healthcare professionals through big data analytics, to support for behavior change through technology-enabled self-management and social and motivational support. Furthermore, with smart technologies, healthcare delivery could also be made more efficient, of higher quality, and at lower cost. For this special issue, we received a total of 45 submissions and accepted 19 outstanding papers that span several interesting topics in smart healthcare, including public health, health information technology (Health IT), and smart medicine.

    Artificial Intelligence and Cognitive Computing

    Artificial intelligence (AI) is a subject garnering increasing attention in both academia and industry today. The understanding is that AI-enhanced methods and techniques create a variety of opportunities for improving basic and advanced business functions, including production processes, logistics, financial management and others. As this collection demonstrates, AI-enhanced tools and methods tend to offer more precise results in fields such as engineering, financial accounting, tourism and air-pollution management, among many others. The objective of this collection is to bring these topics together and offer the reader a useful primer on how AI-enhanced tools and applications can be of use in today’s world. In the context of the frequently fearful, skeptical and emotion-laden debates on AI and its added value, this volume promotes a positive perspective on AI and its impact on society. AI is part of a broader ecosystem of sophisticated tools, techniques and technologies, and is therefore not immune to developments in that ecosystem. It is thus imperative that inter- and multidisciplinary research on AI and its ecosystem be encouraged. This collection contributes to that end.

    Safety and Reliability - Safe Societies in a Changing World

    The contributions cover a wide range of methodologies and application areas for safety and reliability that contribute to safe societies in a changing world. These methodologies and applications include:
    - foundations of risk and reliability assessment and management
    - mathematical methods in reliability and safety
    - risk assessment
    - risk management
    - system reliability
    - uncertainty analysis
    - digitalization and big data
    - prognostics and system health management
    - occupational safety
    - accident and incident modeling
    - maintenance modeling and applications
    - simulation for safety and reliability analysis
    - dynamic risk and barrier management
    - organizational factors and safety culture
    - human factors and human reliability
    - resilience engineering
    - structural reliability
    - natural hazards
    - security
    - economic analysis in risk management

    Predictive Maintenance of an External Gear Pump using Machine Learning Algorithms

    Predictive Maintenance is critical for engineering industries such as manufacturing, aerospace and energy. Unexpected failures cause unpredictable downtime, which can be disruptive and costly due to reduced productivity. This forces industries to ensure the reliability of their equipment. To increase equipment reliability, maintenance actions such as repairs, replacements, equipment updates and corrective actions are employed. These actions affect flexibility, quality of operation and manufacturing time. It is therefore essential to plan maintenance before failure occurs. Traditional maintenance techniques rely on checks conducted routinely, based on the running hours of the machine. The drawback of this approach is that maintenance is sometimes performed before it is required. Conducting maintenance based on the actual condition of the equipment is therefore the optimal solution. This requires collecting real-time data on the condition of the equipment using sensors, which detect events and send information to a processor. Predictive Maintenance uses these techniques and analytics to inform about the current and future state of the equipment. In the last decade, with the introduction of the Internet of Things (IoT), Machine Learning (ML), cloud computing and Big Data analytics, the manufacturing industry has moved towards implementing Predictive Maintenance, resulting in increased uptime and quality control, optimisation of maintenance routes, improved worker safety and greater productivity. The present thesis describes a novel computational strategy for Predictive Maintenance (fault diagnosis and fault prognosis) with ML and Deep Learning applications for an FG304 series external gear pump, also known as a domino pump.
In the absence of a comprehensive set of experimental data, synthetic data generation techniques are implemented by perturbing the frequency content of time series generated using high-fidelity computational techniques. In addition, various feature extraction methods are considered to extract the most discriminatory information from the data. For fault diagnosis, three ML classification algorithms are employed: Multilayer Perceptron (MLP), Support Vector Machine (SVM) and Naive Bayes (NB). For prognosis, ML regression algorithms, such as MLP and SVM, are utilised. Although significant work has been reported by previous authors, it remains difficult to optimise the choice of hyper-parameters (parameters whose values control the learning process) for each specific ML algorithm, for instance the type of SVM kernel function, or the selection of the MLP activation function and the optimum number of hidden layers and neurons. It is widely understood that the reliability of ML algorithms depends strongly on the existence of a sufficiently large quantity of high-quality training data. In the present thesis, due to the unavailability of experimental data, a novel high-fidelity in-silico dataset is generated via a Computational Fluid Dynamics (CFD) model and used to train the underlying ML metamodel. In addition, a large number of scenarios are recreated, ranging from healthy to faulty ones (e.g. clogging, radial gap variations, axial gap variations, viscosity variations, speed variations). Furthermore, the high-fidelity dataset is re-enacted using degradation functions to predict the remaining useful life (fault prognosis) of the external gear pump. The thesis explores and compares the performance of the MLP, SVM and NB algorithms for fault diagnosis, and of MLP and SVM for fault prognosis.
To enable fast training and reliable testing of the MLP algorithm, predefined network architectures, such as 2n neurons per hidden layer, are used to speed up the identification of the precise number of neurons (shown to be useful when the sample dataset is sufficiently large). Finally, a series of benchmark tests are presented, leading to the conclusion that, for fault diagnosis, the combination of wavelet features and an MLP algorithm provides the best accuracy, and that the MLP algorithm provides the best prediction results for fault prognosis. In addition, benchmark examples are simulated to demonstrate mesh convergence for the CFD model, while quantification analysis and the influence of noise on the training data are examined for the ML algorithms.
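The synthetic data generation step described above can be sketched as follows. This is a minimal illustration of the general idea of perturbing a time series in the frequency domain; the specific jitter model (multiplicative Gaussian noise on the FFT magnitudes) and the function name are assumptions, not the exact scheme used in the thesis.

```python
import numpy as np

def perturb_frequency_content(x, magnitude_jitter=0.05, seed=None):
    """Generate a synthetic variant of time series `x` by jittering
    its spectrum (illustrative jitter model, not the thesis's exact one)."""
    rng = np.random.default_rng(seed)
    spectrum = np.fft.rfft(x)
    # scale each bin's magnitude by (1 + small Gaussian noise), keep phases
    jitter = 1.0 + magnitude_jitter * rng.standard_normal(spectrum.shape)
    return np.fft.irfft(spectrum * jitter, n=len(x))
```

Repeating this with different random seeds yields a family of plausible signal variants from one high-fidelity simulation, enlarging the training set for the ML classifiers.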

    Combining SOA and BPM Technologies for Cross-System Process Automation

    This paper summarizes the results of an industry case study that introduced a cross-system business process automation solution based on a combination of SOA and BPM standard technologies (i.e., BPMN, BPEL, WSDL). Besides discussing major weaknesses of the existing custom-built solution and comparing them against experiences with the developed prototype, the paper presents a course of action for transforming the current solution into the proposed one. This includes a general approach, consisting of four distinct steps, as well as specific action items to be performed at every step. The discussion also covers language and tool support, and the challenges arising from the transformation.

    Analysis and redesign of the FloripaSat interconnection framework to mitigate faults in the physical and data link layers

    Master's dissertation - Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Engenharia Elétrica, Florianópolis, 2021.
    Initially, nanosatellites (satellites with a mass between 1 and 10 kg) were used mainly for educational activities in the space field and for in-orbit validation of technologies. In recent years, with the growth of the field, this type of satellite has also come to be used for scientific and commercial applications. As a result, in addition to reduced cost, these systems must also exhibit high reliability during a space mission. Reliable transfer of information from a satellite is therefore essential, and the on-board communication protocol of a nanosatellite must be as reliable as its other systems. This dissertation discusses a study of communication protocols used in embedded nanosatellite systems. The study evaluated the best way to transfer messages between the subsystems of the FloripaSat platform, targeting embedded systems based on both FPGAs and microcontrollers. Information obtained from the FloripaSat-1 mission was used as a starting point. To validate the results, mathematical analyses and the analysis of a simulated bus model were performed and compared against data from a practical bus model based on the three main FloripaSat modules: the OBDH, the EPS and the TT&C. As the final result of the study, the CAN protocol was defined and implemented for use on the FloripaSat platform, improving the reliability of data transfer between the satellite subsystems.
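Part of what makes CAN attractive for a reliability-critical bus is its built-in error detection: every classic CAN frame carries a 15-bit CRC computed with the generator polynomial 0x4599. The sketch below illustrates that check; it operates on an unstuffed bit sequence (real CAN controllers also apply bit stuffing, which is omitted here), and the example frame bits are arbitrary.

```python
def can_crc15(bits):
    """Compute the 15-bit CRC of classic CAN (generator polynomial 0x4599)
    over a sequence of 0/1 bits. Bit stuffing is intentionally omitted."""
    crc = 0
    for bit in bits:
        # feed the next bit into the shift register, MSB first
        feedback = bit ^ ((crc >> 14) & 1)
        crc = (crc << 1) & 0x7FFF
        if feedback:
            crc ^= 0x4599
    return crc
```

Because the polynomial has more than one term, any single-bit corruption of the frame is guaranteed to change the CRC, which is the property exploited on the FloripaSat bus.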

    Optimisation of vibration monitoring nodes in wireless sensor networks

    This PhD research focuses on developing a wireless vibration condition monitoring (CM) node that allows optimal implementation of advanced signal processing algorithms. Such a node should also meet practical requirements, including high robustness and low investment cost, to make predictive maintenance viable. A number of wireless protocols can be used to establish a wireless sensor network (WSN). Protocols like WiFi HaLow, Bluetooth Low Energy (BLE), ZigBee and Thread are more suitable for long-term, non-critical, battery-powered CM nodes, as they provide inherent merits such as low cost, self-organising networking and low power consumption. WirelessHART and ISA100.11a provide more reliable and robust performance, but their solutions are usually more expensive, making them more suitable for strict industrial control applications. Distributed computation uses the limited bandwidth of a wireless network and the battery life of sensor nodes more wisely, and has become increasingly popular in wireless CM with the rapid development of electronics and wireless technologies in recent years. Distributed computation is therefore the primary focus of this research, with the aim of developing an advanced sensor node for wireless networks that allow high-performance CM at minimal network traffic and economic cost. On this basis, a ZigBee-based vibration monitoring node is designed for the evaluation of embedded signal processing algorithms. A state-of-the-art Cortex-M4F processor, optimised for implementing complex signal processing algorithms at low power consumption, is employed as the core of the wireless sensor node. Envelope analysis is chosen as the main intelligent technique embedded on the node, since it is the most effective and general method for characterising impulsive and modulating signatures.
Such signatures are commonly found in the faulty signals generated by key machinery components such as bearings, gears, turbines and valves. Through a preliminary optimisation of the implementation of envelope analysis based on the fast Fourier transform (FFT), an envelope spectrum of 2048 points is achieved on a processor with a memory usage of 32 kB. Experimental results show that simulated bearing faults can be clearly identified from the calculated envelope spectrum, while the data throughput requirement is reduced by more than 95% in comparison with raw data transmission. To optimise the performance of the vibration monitoring node, three main techniques have been developed and validated: 1) a new data processing scheme combining three successive techniques, namely down-sampling, data frame overlapping and cascading, with which a frequency resolution of 0.61 Hz in the envelope spectrum is achieved on the same processor; 2) a scheme for selecting the optimal band-pass filter for envelope analysis, in which the computationally demanding fast kurtogram is implemented on the host computer to select the filter, while real-time envelope analysis runs on the wireless sensor to extract bearing fault features; moreover, a frequency band of 16 kHz is analysed, allowing features to be extracted over a wide frequency band that covers a broad range of industrial applications; and 3) two new analysis methods, short-time RMS and spectral correlation algorithms, proposed for bearing fault diagnosis; they reduce CPU usage by more than a factor of two and consequently achieve much lower power consumption.
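The envelope analysis at the heart of this node can be sketched in a few lines. The version below is a desktop illustration of the FFT-based principle, not the embedded fixed-point implementation: it builds the analytic signal with a NumPy-only Hilbert transform and omits the windowing and kurtogram-selected band-pass pre-filtering that the real node would apply.

```python
import numpy as np

def envelope_spectrum(x, fs):
    """Return (freqs, spectrum) of the envelope of real signal x,
    sampled at fs Hz, via an FFT-based Hilbert transform."""
    n = len(x)
    spectrum = np.fft.fft(x)
    # build the analytic signal: zero negative freqs, double positive ones
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    envelope = np.abs(np.fft.ifft(spectrum * h))
    envelope -= envelope.mean()  # drop the DC component
    env_spec = np.abs(np.fft.rfft(envelope)) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, env_spec
```

For a carrier amplitude-modulated at a bearing fault frequency, the envelope spectrum shows a peak at the modulation frequency rather than at the carrier, which is why only the compact spectrum, not the raw waveform, needs to be transmitted over the radio.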