
    The Application of Data Analytics Technologies for the Predictive Maintenance of Industrial Facilities in Internet of Things (IoT) Environments

    In industrial production environments, the maintenance of equipment has a decisive influence on costs and on the planning of production capacities. In particular, unplanned failures during production cause high costs, unplanned downtimes and possibly additional collateral damage. Predictive Maintenance addresses this by trying to predict a possible failure and its cause early enough that its prevention can be prepared and carried out in time. In order to predict malfunctions and failures, the industrial plant, with its characteristics as well as its wear and ageing processes, must be modelled. Such modelling can be done by replicating the plant's physical properties. However, this is very complex and requires enormous expert knowledge about the plant and about the wear and ageing processes of each individual component. Neural networks and machine learning make it possible to train such models from data and offer an alternative, especially when very complex and non-linear behaviour is evident. For models to make predictions, as much data as possible about the condition of a plant, its environment and production planning is needed. In Industrial Internet of Things (IIoT) environments, the amount of available data is constantly increasing. Intelligent sensors and highly interconnected production facilities produce a steady stream of data. The sheer volume of data, but also the steady stream in which it is transmitted, place high demands on data processing systems. If a participating system wants to perform live analyses on incoming data streams, it must be able to process the incoming data at least as fast as the continuous data stream delivers it. If this is not the case, the system falls further and further behind in processing and thus in its analyses. This also applies to Predictive Maintenance systems, especially if they use complex and computationally intensive machine learning models.
If sufficiently scalable hardware resources are available, this may not be a problem at first. However, if this is not the case, or if processing takes place on decentralised units with limited hardware resources (e.g. edge devices), the runtime behaviour and resource requirements of the type of neural network used can become an important criterion. This thesis addresses Predictive Maintenance systems in IIoT environments using neural networks and Deep Learning, where runtime behaviour and resource requirements are relevant. The question is whether it is possible to achieve better runtimes with similar result quality using a new type of neural network. The focus is on reducing the complexity of the network and improving its parallelisability. Inspired by projects in which complexity was distributed to less complex neural subnetworks by upstream measures, two hypotheses presented in this thesis emerged: a) the distribution of complexity into simpler subnetworks leads to faster processing overall, despite the overhead this creates, and b) if a neural cell has a deeper internal structure, this leads to a less complex network. Within the framework of a qualitative study, an overall impression of Predictive Maintenance applications in IIoT environments using neural networks was developed. Based on the findings, a novel model layout named Sliced Long Short-Term Memory Neural Network (SlicedLSTM) was developed. The SlicedLSTM implements the assumptions made in the aforementioned hypotheses in its inner model architecture. Within the framework of a quantitative study, the runtime behaviour of the SlicedLSTM was compared with that of a reference model in laboratory tests. The study uses synthetically generated data from a NASA project to predict failures of modules of aircraft gas turbines. The dataset contains 1,414 multivariate time series with 104,897 samples of test data and 160,360 samples of training data.
As a result, it could be proven for the specific application and the data used that the SlicedLSTM delivers faster processing times with similar result accuracy and thus clearly outperforms the reference model in this respect. The hypotheses about the influence of complexity in the internal structure of the neural cells were confirmed by the study carried out in the context of this thesis.
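The first hypothesis, that splitting complexity into simpler subnetworks can pay off despite the overhead, can be illustrated with a back-of-the-envelope parameter count. The sketch below is purely illustrative and is not the thesis's actual SlicedLSTM architecture; it only shows why k parallel sub-LSTMs over sliced inputs carry far fewer weights than one monolithic cell, since the recurrent weight matrix grows quadratically with the hidden size.

```python
# Hypothetical illustration (not the thesis code): parameter counts for one
# monolithic LSTM layer versus k independent "slices", each a smaller LSTM
# over a fraction of the input features.

def lstm_params(n_input, n_hidden):
    # 4 gates, each with input weights, recurrent weights and a bias.
    return 4 * (n_input * n_hidden + n_hidden * n_hidden + n_hidden)

def sliced_lstm_params(n_input, n_hidden, k):
    # k parallel sub-LSTMs, each seeing n_input/k features and producing
    # n_hidden/k hidden units (assumes both divide evenly by k).
    return k * lstm_params(n_input // k, n_hidden // k)

full = lstm_params(24, 128)              # one big cell
sliced = sliced_lstm_params(24, 128, 4)  # four parallel slices
print(full, sliced)  # → 78336 19968
```

Because the dominant n_hidden² recurrent term shrinks by a factor of k per slice, the sliced variant needs roughly a quarter of the weights here, and its slices can in principle be evaluated in parallel.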

    Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5

    This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different fields of applications and in mathematics, and is available in open access. The collected contributions of this volume have either been published or presented after the dissemination of the fourth volume in 2015 in international conferences, seminars, workshops and journals, or they are new. The contributions of each part of this volume are chronologically ordered. The first part of this book presents some theoretical advances on DSmT, dealing mainly with modified Proportional Conflict Redistribution (PCR) rules of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of (quasi-)vacuous belief assignment in the fusion of sources of evidence, with their Matlab codes.
Because more applications of DSmT have emerged in the years since the appearance of the fourth volume in 2015, the second part of this volume is about selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and networks for ship classification. Finally, the third part presents interesting contributions related to belief functions in general, published or presented over the years since 2015. These contributions relate to decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of Bayes' theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, negators of belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions as well.
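As a concrete illustration of the PCR family of combination rules discussed throughout the volume, the following minimal sketch implements the classical PCR5 fusion of two basic belief assignments, with focal elements represented as frozensets. It is a didactic reimplementation under standard definitions, not the Matlab code distributed with the book.

```python
# Minimal PCR5 sketch (didactic, not the book's reference implementation).
from itertools import product

def pcr5(m1, m2):
    """Combine two basic belief assignments (BBAs) with the PCR5 rule.

    Keys are focal elements as frozensets; values are masses summing to 1.
    Each partial conflict a*b is redistributed back to the two focal
    elements that generated it, proportionally to their masses a and b."""
    out = {}
    for (x, a), (y, b) in product(m1.items(), m2.items()):
        inter = x & y
        if inter:
            out[inter] = out.get(inter, 0.0) + a * b        # conjunctive part
        elif a + b > 0:
            out[x] = out.get(x, 0.0) + a * a * b / (a + b)  # share for x
            out[y] = out.get(y, 0.0) + a * b * b / (a + b)  # share for y
    return out

A, B = frozenset({"A"}), frozenset({"B"})
m = pcr5({A: 0.6, B: 0.4}, {A: 0.3, B: 0.7})
print(m[A], m[B])  # the combined masses still sum to 1; no mass is lost
```

Unlike Dempster's rule, no conflicting mass is discarded or renormalized away, which is what preserves the neutrality properties the improved PCR5/PCR6 variants above build on.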

    A Changing Landscape: On Safety & Open Source in Automated and Connected Driving


    A novel dynamic maximum demand reduction controller of battery energy storage system for educational buildings in Malaysia

    Maximum Demand (MD) management is essential to help businesses and electricity companies save on electricity bills and operation costs. Among different MD reduction techniques, demand response with battery energy storage systems (BESS) provides the most flexible peak reduction solution for various markets. One of the major challenges is the optimization of the demand threshold that controls the charging and discharging powers of the BESS. To increase their tolerance to day-ahead prediction errors, state-of-the-art controllers utilize complex prediction models and rigid parameters that are determined from long-term historical data. However, long-term historical data may be unavailable at implementation, and rigid parameters leave controllers unable to adapt to evolving load patterns. Hence, this research work proposes a novel incremental DB-SOINN-R prediction model and a novel dynamic two-stage MD reduction controller. The incremental learning capability of the novel DB-SOINN-R allows the model to be deployed as soon as possible and improves its prediction accuracy as time progresses. The proposed DB-SOINN-R is compared with five models: a feedforward neural network, a deep neural network with long short-term memory, support vector regression, ESOINN, and k-nearest neighbour (kNN) regression. They are tested on day-ahead and one-hour-ahead load predictions using two different datasets. The proposed DB-SOINN-R has the highest prediction accuracy among all models with incremental learning on both datasets. The novel dynamic two-stage maximum demand reduction controller of the BESS incorporates one-hour-ahead load profiles to refine, if necessary, the threshold found based on day-ahead load profiles, preventing peak reduction failure with no rigid parameters required.
Compared to conventional fixed-threshold, single-stage, and fuzzy controllers, the proposed two-stage controller achieves up to 6.82% higher average maximum demand reduction and up to 306.23% higher total maximum demand charge savings on two different datasets. The proposed controller also achieves a 0% peak demand reduction failure rate in both datasets. The real-world performance of the proposed two-stage MD reduction controller, which includes the proposed DB-SOINN-R models, is validated in a scaled-down experimental setup. Results show negligible differences of 0.5% in daily PDRP and MAPE between experimental and simulation results. Therefore, it fulfils the aim of this research work, which is to develop a controller that is easy to implement, requires minimal historical data to begin operation, and has reliable MD reduction performance.
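The two-stage idea, a day-ahead threshold refined by a one-hour-ahead forecast to avoid peak-shaving failure, can be sketched as follows. This is a simplified illustration under assumed names and toy numbers (`stage1_threshold`, a half-hourly step, tiny forecast vectors), not the authors' controller, which couples the thresholds with DB-SOINN-R predictions and a full BESS dispatch model.

```python
# Toy two-stage MD threshold sketch (illustrative names and numbers).

def stage1_threshold(day_ahead_kw, bess_energy_kwh, dt_h=0.5):
    """Lowest demand threshold whose energy-above-threshold fits the BESS."""
    for t in sorted(set(day_ahead_kw)):
        excess = sum(max(p - t, 0.0) * dt_h for p in day_ahead_kw)
        if excess <= bess_energy_kwh:
            return t
    return max(day_ahead_kw)

def stage2_refine(threshold_kw, hour_ahead_kw, bess_energy_kwh, dt_h=0.5):
    """Raise the threshold if the fresher forecast would deplete the BESS,
    preventing a peak-reduction failure; otherwise keep the day-ahead value."""
    excess = sum(max(p - threshold_kw, 0.0) * dt_h for p in hour_ahead_kw)
    if excess <= bess_energy_kwh:
        return threshold_kw
    return stage1_threshold(hour_ahead_kw, bess_energy_kwh, dt_h)

day_ahead = [300, 400, 500, 400]   # kW, half-hourly day-ahead forecast
hour_ahead = [300, 450, 550, 450]  # fresher forecast predicts a higher peak
t1 = stage1_threshold(day_ahead, bess_energy_kwh=60)
t2 = stage2_refine(t1, hour_ahead, bess_energy_kwh=60)
print(t1, t2)  # → 400 450
```

With the day-ahead threshold of 400 kW, the fresher forecast implies 125 kWh of shaving against only 60 kWh of storage, so stage two raises the threshold to 450 kW rather than let the battery run dry mid-peak.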

    Ensemble Machine Learning Model Generalizability and its Application to Indirect Tool Condition Monitoring

    A practical, accurate, robust, and generalizable system for monitoring tool condition during a machining process would enable advancements in manufacturing process automation, cost reduction, and efficiency improvement. Previously proposed systems using various individual machine learning (ML) models and other analysis techniques have struggled with low generalizability to new machining and environmental conditions, as well as a common reliance on expensive or intrusive sensory equipment which hinders their industry adoption. While ensemble ML techniques offer significant advantages over individual models in terms of performance, overfitting reduction, and generalizability improvement, they have only begun to see limited applications within the field of tool condition monitoring (TCM). To address the research gaps which currently surround TCM system generalizability and optimal ensemble model configuration for this application, nine ML model types, including five heterogeneous and homogeneous ensemble models, are employed for tool wear classification. Sound, spindle power, and axial load signals are utilized through the sensor fusion of practical external and internal machine sensors. This original experimental process data is collected through tool wear experiments using a variety of machining conditions. Four feature selection methods and multiple tool wear classification resolution values are compared for this application, and the performance of the ML models is compared across metrics including k-fold cross validation and leave-one-group-out cross validation. The generalizability of the models to data from unseen experiments and machining conditions is evaluated, and a method of improving the generalizability levels using noisy training data is examined. T-tests are used to measure the significance of model performance differences. 
The extra-trees ensemble ML method, which had never before been applied to signal-based TCM, shows the best performance of the nine models.
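The leave-one-group-out evaluation described above can be sketched with scikit-learn's extra-trees implementation, treating each machining experiment as one group so that every fold tests on an entirely unseen experiment. The data below is synthetic and the features are stand-ins; the actual study used sound, spindle power, and axial load signals.

```python
# Illustrative sketch (not the thesis code): extra-trees classification
# with leave-one-group-out CV, one "group" per machining experiment.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n = 240
X = rng.normal(size=(n, 6))                 # stand-ins for sensor features
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=n) > 0).astype(int)
groups = np.repeat(np.arange(6), n // 6)    # 6 simulated experiments

clf = ExtraTreesClassifier(n_estimators=100, random_state=0)
# Each of the 6 folds trains on 5 experiments and tests on the held-out one,
# which is what probes generalizability to unseen machining conditions.
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print(scores.mean())
```

Extra-trees randomizes both the feature and the split threshold at each node, which is why it tends to overfit less than bagged trees on noisy sensor data.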

    2023- The Twenty-seventh Annual Symposium of Student Scholars

    The full program book from the Twenty-seventh Annual Symposium of Student Scholars, held on April 18-21, 2023. Includes abstracts from the presentations and posters.

    Security Technologies and Methods for Advanced Cyber Threat Intelligence, Detection and Mitigation

    The rapid growth of Internet interconnectivity and the complexity of communication systems have led to significant growth in cyberattacks globally, often with severe and disastrous consequences. The swift development of more innovative and effective (cyber)security solutions and approaches that can detect, mitigate and prevent these serious consequences is vital. Cybersecurity is gaining momentum and is scaling up in very many areas. This book builds on the experience of the Cyber-Trust EU project's methods, use cases, technology development, testing and validation, and extends into broader science, leading IT industry markets and applied research with practical cases. It offers new perspectives on advanced (cyber)security innovation (eco)systems covering key different perspectives. The book provides insights on new security technologies and methods for advanced cyber threat intelligence, detection and mitigation. We cover topics such as cybersecurity and AI, cyber-threat intelligence, digital forensics, moving target defense, intrusion detection systems, post-quantum security, privacy and data protection, security visualization, smart contracts security, software security, blockchain, security architectures, system and data integrity, trust management systems, distributed systems security, dynamic risk management, and privacy and ethics.

    Modelling, Dimensioning and Optimization of 5G Communication Networks, Resources and Services

    This reprint aims to collect state-of-the-art research contributions that address challenges in emerging 5G network design, dimensioning and optimization. The design, dimensioning and optimization of communication network resources and services have been an inseparable part of telecom network development. Such networks must convey a large volume of traffic, providing service to traffic streams with highly differentiated requirements in terms of bit-rate and service time, as well as required quality of service and quality of experience parameters. Such a communication infrastructure presents many important challenges, such as the study of necessary multi-layer cooperation, new protocols, performance evaluation of different network parts, low layer network design, network management and security issues, and new technologies in general, which will be discussed in this book.
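One of the oldest building blocks of the dimensioning problems this reprint addresses is the Erlang B formula, which gives the blocking probability of a loss system offered a given traffic load. The sketch below is illustrative, not taken from the book; it uses the standard numerically stable recursion B(n) = A·B(n−1) / (n + A·B(n−1)) to find the smallest number of channels meeting a blocking target.

```python
# Classic dimensioning sketch: Erlang B blocking via its stable recursion.

def erlang_b(traffic_erl, channels):
    """Blocking probability for `traffic_erl` Erlangs on `channels` servers."""
    b = 1.0
    for n in range(1, channels + 1):
        b = traffic_erl * b / (n + traffic_erl * b)
    return b

def dimension(traffic_erl, target_blocking):
    """Smallest channel count whose blocking meets the target."""
    n = 1
    while erlang_b(traffic_erl, n) > target_blocking:
        n += 1
    return n

print(dimension(20.0, 0.01))  # → 30 channels for 20 Erl at 1% blocking
```

Multi-service 5G dimensioning generalizes this single-service model to streams with differentiated bit-rate requirements, but the same offered-load-versus-blocking trade-off underlies it.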

    Risk Analysis for Smart Cities Urban Planners: Safety and Security in Public Spaces

    Christopher Alexander, in his famous writings "The Timeless Way of Building" and "A Pattern Language", defined a formal language for the description of a city. Alexander developed a generative grammar able to formally describe complex and articulated concepts of architecture and urban planning, in order to define a common language that would facilitate both the participation of ordinary citizens and the collaboration between professionals in architecture and urban planning. In this research, a similar approach has been applied to let two domains communicate, although they are very far apart in terms of lexicon, methodologies and objectives. These domains are urban planning, urban design and architecture, seen as the first domain both in terms of time and in terms of completeness of vision, and the world of engineering, comprising innumerable disciplines. In practice, there is a domain that defines the requirements and the overall vision (the first) and a domain (the second) which implements them with real infrastructures and systems. To put these two worlds seamlessly into communication, allowing the concepts of the first to be translated into those of the second, Christopher Alexander's idea has been followed by defining a common language. By applying Essence, the formal descriptive theory of software engineering, with its customization rules, to the concept of a Smart City, a common language to completely trace the requirements at all levels has been defined. Since the focus was on risk analysis for safety and security in public spaces, existing risk models have been considered, revealing a further gap within the engineering world itself. Depending on the area being considered, risk management models have different and siloed approaches which ignore the interactions of one type of risk with the others. To allow effective communication between the two domains and within the engineering domain, a unified risk analysis framework has been developed.
Then a framework (an ontology) capable of describing all the elements of a Smart City has been developed and combined with the common language to trace the requirements. Following the philosophy of the Vienna Circle, a creative process called Aufbau has then been defined to allow the generation of a detailed description of the Smart City, at any level, using the common language and the ontology defined above. Then, the risk analysis methodology has been applied to the city model produced by Aufbau. The research developed tools to apply these results to the entire life cycle of the Smart City. With these tools, it is possible to understand to what extent a given architectural, urban planning or urban design requirement is operational at a given moment. In this way, the narration can accurately describe to what extent the initial requirements set by architects, planners and urban designers and, above all, the values required by stakeholders, are satisfied at any time. The impact of this research on urban planning is the ability to create a single model between the two worlds, leaving everyone free to express creativity and expertise in the appropriate forms but, at the same time, allowing both to fill the communication gap existing today. This new way of planning requires adequate IT tools and takes the form, on the engineering side, of a harmonization of techniques already in use and greater clarity of objectives. On the side of architecture, urban planning and urban design, it is instead a powerful decision support tool, both in the planning and operational phases. This decision support tool for urban planning, based on the research results, is the starting point for the development of a meta-heuristic process using an evolutionary approach.
Consequently, risk management, from Architecture/Urban Planning/Urban Design up to Engineering, in any phase of the Smart City's life cycle, is seen as an "organism" that evolves.

    An Internet of Things (IoT) based wide-area Wireless Sensor Network (WSN) platform with mobility support.

    Wide-area remote monitoring applications use cellular networks or satellite links to transfer sensor data to central storage. Remote monitoring applications use Wireless Sensor Networks (WSNs) to accommodate more Sensor Nodes (SNs) and for better management. An Internet of Things (IoT) network connects the WSN with the data storage and other application-specific services using the existing internet infrastructure. Neither cellular networks, such as Narrow-Band IoT (NB-IoT), nor satellite links are suitable for point-to-point connections of the SNs, due to their lack of coverage, high cost, and energy requirements. A Low Power Wide Area Network (LPWAN) is used to interconnect all the SNs and accumulate the data at a single point, called the Gateway, before sending it to the IoT network. The WSN implements clustering of the SNs to increase the network coverage and utilizes multiple wireless links between the repeater nodes (called hops) to reach the gateway at a longer distance. A clustered WSN can cover up to a few km using LPWAN technologies such as Zigbee with multiple hops. Each Zigbee link can be from 200 m to 500 m long. Other LPWAN technologies, such as LoRa, can facilitate an extended range from 1 km to 15 km. However, LoRa is not suitable for the clustered WSN due to its long Time on Air (TOA), which introduces data transmission delay that becomes severe as the hop count increases. Besides, a sensor node would need an increased antenna height to achieve the long-range benefit of LoRa using a single link (hop) instead of using multiple hops to cover the same range. With the increased WSN coverage area, remote monitoring applications such as smart farming may require mobile sensor nodes. This research focuses on the challenges of overcoming LoRa's limitations (long TOA and antenna height) and of accommodating mobility in a high-density, wide-area WSN for future remote monitoring applications.
Hence, this research proposes lightweight communication protocols and networking algorithms using LoRa to achieve mobility, energy efficiency and wider coverage of up to a few hundred km for the WSN. This thesis is divided into four parts. It presents two data transmission protocols for LoRa to achieve a higher data rate and wider network coverage, one networking algorithm for the wide-area WSN, and a channel synchronization algorithm to improve the data rate of LoRa links. Part one presents a lightweight data transmission protocol for LoRa using a mobile data accumulator (called a data sink) to increase the monitoring coverage area and data transmission energy efficiency. The proposed Lightweight Dynamic Auto Reconfigurable Protocol (LDAP) utilizes direct or single-hop transmission from the SNs, using one of them as the repeater node. Wide-area remote monitoring applications such as Water Quality Monitoring (WQM) can acquire data from geographically distributed water resources using LDAP and a mobile Data Sink (DS) mounted on an Unmanned Aerial Vehicle (UAV). The proposed LDAP can acquire data from a minimum of 147 SNs covering 128 km in one direction, reducing the DS requirement to 5% of that of comparable WSNs using Zigbee with static DSs for the same coverage area. Applications like smart farming and environmental monitoring may require mobile sensor nodes and data sinks. The WSNs for these applications will require real-time network management algorithms and routing protocols for a dynamic WSN with mobility, which is not feasible using static WSN technologies. This part proposes a lightweight clustering algorithm for the dynamic WSN (with mobility) utilizing the proposed LDAP to form clusters in real time during data accumulation by the mobile DS. The proposed Lightweight Dynamic Clustering Algorithm (LDCA) can form real-time clusters consisting of mobile or stationary SNs using a mobile DS or static GW.
A WSN using LoRa and LDCA increases network capacity and coverage area while reducing the required number of DSs. It also reduces clustering energy to 33% and shows clustering efficiency of up to 98% for single-hop clustering covering 100 SNs. LoRa is not suitable for a clustered WSN with multiple hops due to its long TOA, which depends on the LoRa link configuration (bandwidth and spreading factor). This research proposes a channel synchronization algorithm to improve the data rate of the LoRa link by combining multiple LoRa radio channels into a single logical channel. This increased data rate enhances the capacity of the clusters in the WSN, supporting faster clustering with mobile sensor nodes and data sinks. Along with the LDCA, the proposed Lightweight Synchronization Algorithm for Quasi-orthogonal LoRa channels (LSAQ) facilitates multi-hop data transfer, increasing WSN capacity and coverage area. This research investigates the quasi-orthogonality features of LoRa in terms of radio channel frequency, spreading factor (SF) and bandwidth. It derives mathematical models to obtain the optimal LoRa parameters for parallel data transmission using multiple SFs and develops a synchronization algorithm for LSAQ. The proposed LSAQ achieves up to a 46% improvement in network capacity and 58% in data rate compared with a WSN using the traditional LoRa Medium Access Control (MAC) layer protocols. Besides the high-density clustered WSN, remote monitoring applications like plant phenotyping may require transferring images or high-volume data over LoRa links. Wireless data transmission protocols that send high-volume data over a low-data-rate link (like LoRa) require multiple packets and create a significant amount of packet overhead.
Besides, the reliability of these data transmission protocols is highly dependent on acknowledgement (ACK) messages, creating extra load on the overall data transmission and hence reducing the application-specific effective data rate (goodput). This research proposes an application layer protocol to improve the goodput while transferring an image or sequential data over the LoRa links in the WSN. It uses a dynamic acknowledgement (DACK) protocol for the LoRa physical layer to reduce the ACK message overhead. DACK uses end-of-transmission ACK messaging and transmits multiple packets as a block. It retransmits missing packets after receiving the ACK message at the end of multiple blocks. The goodput depends on the block size and the number of lost packets that need to be retransmitted. It is shown that DACK LoRa can reduce the total ACK time 10 to 30 times compared with a stop-and-wait protocol and ten times compared with a multi-packet ACK protocol. The wide-area WSN with mobility considered here requires different metrics to be evaluated. The performance evaluation metrics used for static WSNs do not consider mobility and related parameters, such as clustering efficiency in the network, and hence cannot evaluate the performance of the proposed wide-area WSN platform supporting mobility. Therefore, new and modified performance metrics are proposed to measure dynamic performance. They can measure the real-time clustering performance using the mobile data sink and sensor nodes, the cluster size, the coverage area of the WSN and more. All required hardware and software design, dimensioning, and performance evaluation models are also presented.
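The block-acknowledgement idea behind DACK can be sketched as follows: the sender transmits a whole block, the receiver answers with a single end-of-block ACK listing what is missing, and only those packets are retransmitted. This is a toy lossy-channel simulation with assumed names (`send_block`, a 10% loss rate), not the thesis's LoRa implementation.

```python
# Toy simulation of block acknowledgement versus per-packet stop-and-wait.
import random

def send_block(packets, loss_rate, rng):
    """One DACK-style transfer: returns (delivered set, ACK messages used)."""
    received = {p for p in packets if rng.random() > loss_rate}
    acks = 1                                   # single end-of-block ACK
    missing = [p for p in packets if p not in received]
    while missing:                             # retransmit only what is lost
        received |= {p for p in missing if rng.random() > loss_rate}
        missing = [p for p in packets if p not in received]
        acks += 1                              # one ACK per retransmit round
    return received, acks

rng = random.Random(42)
packets = list(range(100))
received, dack_acks = send_block(packets, loss_rate=0.1, rng=rng)
stop_and_wait_acks = len(packets)              # one ACK per packet, at best
print(dack_acks, stop_and_wait_acks)
```

At 10% loss the block transfer typically completes in a handful of ACK rounds instead of one hundred ACK messages, which is the goodput gain the paragraph above describes.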