
    The Application of Data Analytics Technologies for the Predictive Maintenance of Industrial Facilities in Internet of Things (IoT) Environments

    In industrial production environments, equipment maintenance has a decisive influence on costs and on the reliable planning of production capacities. Unplanned failures during production time are particularly expensive, causing unplanned downtime and possibly additional collateral damage. Predictive Maintenance addresses this problem: it tries to predict a possible failure and its cause early enough that preventive action can be prepared and carried out in time. To predict malfunctions and failures, the industrial plant, with its characteristics as well as its wear and ageing processes, must be modelled. Such modelling can be done by replicating the plant's physical properties; however, this is very complex and requires enormous expert knowledge about the plant and about the wear and ageing processes of each individual component. Neural networks and machine learning make it possible to train such models from data and offer an alternative, especially when the behaviour is very complex and non-linear. For models to make predictions, as much data as possible about the condition of a plant, its environment, and production planning is needed. In Industrial Internet of Things (IIoT) environments, the amount of available data is constantly increasing: intelligent sensors and highly interconnected production facilities produce a steady stream of data. The sheer volume of data, but also the steady stream in which it is transmitted, place high demands on data processing systems. If a participating system wants to perform live analyses on incoming data streams, it must be able to process the incoming data at least as fast as the continuous stream delivers it; otherwise, the system falls further and further behind in its processing and thus in its analyses. This also applies to Predictive Maintenance systems, especially those using complex and computationally intensive machine learning models. If sufficiently scalable hardware resources are available, this may not be a problem at first. However, if they are not, or if processing takes place on decentralised units with limited hardware resources (e.g. edge devices), the runtime behaviour and resource requirements of the type of neural network used become an important criterion. This thesis addresses Predictive Maintenance systems in IIoT environments using neural networks and Deep Learning, where runtime behaviour and resource requirements are relevant. The question is whether a new type of neural network can achieve better runtimes with similar result quality. The focus is on reducing the complexity of the network and improving its parallelisability. Inspired by projects in which complexity was distributed to less complex neural subnetworks by upstream measures, two hypotheses emerged and are presented in this thesis: a) distributing complexity into simpler subnetworks leads to faster processing overall, despite the overhead this creates, and b) a neural cell with a deeper internal structure leads to a less complex network. Within the framework of a qualitative study, an overall impression of Predictive Maintenance applications in IIoT environments using neural networks was developed. Based on the findings, a novel model layout named Sliced Long Short-Term Memory Neural Network (SlicedLSTM) was developed; it implements the assumptions of the two hypotheses in its internal model architecture. Within the framework of a quantitative study, the runtime behaviour of the SlicedLSTM was compared with that of a reference model in laboratory tests. The study uses synthetically generated data from a NASA project for predicting failures of aircraft gas-turbine modules. The dataset contains 1,414 multivariate time series with 104,897 samples of test data and 160,360 samples of training data. For this specific application and the data used, the SlicedLSTM was shown to deliver faster processing times with similar result accuracy, clearly outperforming the reference model in this respect. The hypotheses about the influence of complexity in the internal structure of the neural cells were confirmed by the study carried out in the context of this thesis.
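The abstract does not specify the internal architecture of the SlicedLSTM, so no faithful implementation can be given here. As a point of reference, the sketch below shows the kind of plain LSTM baseline such a runtime comparison is typically made against: a multivariate sensor window goes in, a failure probability comes out. The window length, feature count, and layer sizes are illustrative assumptions, not values from the study.

```python
import torch
import torch.nn as nn

class ReferenceLSTM(nn.Module):
    """Plain LSTM failure classifier of the kind used as a runtime baseline.
    Feature count, hidden size, and depth are illustrative guesses."""
    def __init__(self, n_features=24, hidden_size=64, n_layers=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size,
                            num_layers=n_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                             # x: (batch, time, features)
        out, _ = self.lstm(x)                         # process the full sensor window
        return torch.sigmoid(self.head(out[:, -1]))  # probability of imminent failure

model = ReferenceLSTM()
window = torch.randn(8, 30, 24)   # 8 windows of 30 time steps from 24 sensors
print(model(window).shape)        # torch.Size([8, 1])
```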

    Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5

    This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different fields of application and in mathematics, and is available in open access. The contributions collected in this volume have either been published or presented in international conferences, seminars, workshops, and journals after the dissemination of the fourth volume in 2015, or they are new. The contributions in each part of this volume are ordered chronologically. The first part of this book presents theoretical advances on DSmT, dealing mainly with modified Proportional Conflict Redistribution (PCR) rules of combination with degree of intersection, coarsening techniques, interval calculus for PCR based on set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of (quasi-)vacuous belief assignments in the fusion of sources of evidence, with their Matlab codes. Because more applications of DSmT have emerged since the appearance of the fourth volume in 2015, the second part of this volume covers selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and networks for ship classification. Finally, the third part presents contributions related to belief functions in general, published or presented over the years since 2015. These contributions concern decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of Bayes' theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, negators of belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions.
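Several of the listed contributions build on the Proportional Conflict Redistribution rules. As a rough illustration of the principle, here is a minimal sketch of PCR5 for two sources over a dichotomous frame {A, B}; the frame, the mass values, and the restriction to two sources are illustrative assumptions, not code from the volume (the book's own Matlab and RUST implementations cover the general case).

```python
def pcr5(m1, m2):
    """PCR5 fusion of two belief assignments over the frame {A, B}.
    Each input maps 'A', 'B', 'AB' (the ignorance A ∪ B) to masses summing to 1.
    Dichotomous-frame sketch; the general rule iterates over all focal pairs."""
    # Conjunctive consensus (products of non-conflicting focal elements).
    m = {
        'A':  m1['A'] * m2['A'] + m1['A'] * m2['AB'] + m1['AB'] * m2['A'],
        'B':  m1['B'] * m2['B'] + m1['B'] * m2['AB'] + m1['AB'] * m2['B'],
        'AB': m1['AB'] * m2['AB'],
    }
    # Redistribute each partial conflict m_i(A)·m_j(B) back to A and B,
    # proportionally to the masses that produced it (the PCR5 principle).
    for mi, mj in ((m1, m2), (m2, m1)):
        if mi['A'] + mj['B'] > 0:
            c = mi['A'] * mj['B']
            m['A'] += c * mi['A'] / (mi['A'] + mj['B'])
            m['B'] += c * mj['B'] / (mi['A'] + mj['B'])
    return m

fused = pcr5({'A': 0.6, 'B': 0.3, 'AB': 0.1}, {'A': 0.2, 'B': 0.7, 'AB': 0.1})
print(fused, sum(fused.values()))  # redistributed masses still sum to 1
```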

    Cognitive Decay And Memory Recall During Long Duration Spaceflight

    This dissertation aims to advance the efficacy of Long-Duration Space Flight (LDSF) pre-flight and in-flight training programs, acknowledging existing knowledge gaps in NASA's methodologies. The research objective is to optimize the cognitive workload of LDSF crew members, enhance their neurocognitive functionality, and provide more meaningful work experiences, particularly for Mars missions. The study addresses identified shortcomings in current training and learning strategies and simulation-based training systems, focusing on areas requiring quantitative measures of astronaut proficiency and training effectiveness. The project centers on understanding cognitive decay and memory loss under LDSF-related stressors, seeking to establish when such cognitive decline exceeds acceptable performance levels throughout mission phases. The research acknowledges the limitations of creating a near-orbit environment due to resource constraints and the need to develop engaging tasks for test subjects. Nevertheless, it underscores the potential impact on future space mission training and other high-risk professions. The study further explores astronaut training complexities, the challenges encountered in LDSF missions, and the cognitive processes involved in such demanding environments. The research employs various cognitive and memory testing events, integrating neuroimaging techniques to understand the neural mechanisms of cognition and memory. It also explores Rasmussen's S-R-K behaviors and the potential of Brain Network Theory (BNT) for measuring forgetting and cognition and for predicting training needs. The multidisciplinary approach of the study reinforces the importance of integrating insights from cognitive psychology, behavior analysis, and brain connectivity research. Research experiments were conducted at the University of North Dakota's Integrated Lunar Mars Analog Habitat (ILMAH), gathering data from selected subjects via cognitive neuroscience tools and electroencephalography (EEG) recordings to evaluate neurocognitive performance. The data analysis aimed to assess brain network activations during mentally demanding activities and to compare EEG power spectra across various frequencies, latencies, and scalp locations. Despite facing certain challenges, including inadequacies of the current adapter boards leading to analysis failure, the study provides crucial lessons for future research endeavors. It highlights the need for swift adaptation, continual process refinement, and innovative solutions, such as the redesign of adapter boards for environments with high radio-frequency noise, for the collection of high-quality EEG data. In conclusion, while the research did not reveal statistically significant differences between the experimental and control groups, it furnished valuable insights and underscored the need to optimize astronaut performance, well-being, and mission success. The study contributes to the ongoing evolution of training methodologies, with implications for future space exploration endeavors.
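As a rough illustration of the kind of spectral comparison described (EEG power compared across frequency bands), here is a minimal sketch using Welch's method; the sampling rate, band edges, and synthetic signal are assumptions for demonstration, not details of the study's ILMAH recordings.

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs, band):
    """Average spectral power of one EEG channel within a frequency band,
    estimated with Welch's method."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].mean()

fs = 256                              # assumed sampling rate in Hz
eeg = np.random.randn(fs * 60)        # one minute of synthetic single-channel EEG
bands = {'theta': (4, 8), 'alpha': (8, 13), 'beta': (13, 30)}
print({name: band_power(eeg, fs, b) for name, b in bands.items()})
# The same computation, repeated per channel and condition, supports the
# kind of across-band / across-location comparison the study describes.
```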

    Musiktheorie als interdisziplinäres Fach: 8. Kongress der Gesellschaft für Musiktheorie Graz 2008

    The 8th congress of the Gesellschaft für Musiktheorie (GMTH) took place in October 2008 at the University for Music and Dramatic Arts Graz (KUG) on the topic »Music Theory and Interdisciplinarity«. The collected contributions characterize music theory as a multi-faceted scholarly discipline at the intersection of theory/practice, art/science, and history/system. The six chapters explore commonalities with music history, music aesthetics, musical performance, compositional practice in twentieth- and twenty-first-century music, ethnomusicology, and systematic musicology. A total of 45 essays (28 in German, 17 in English) and the documentation of a panel discussion form a vital discourse informed by contemporary issues of research in a broad number of fields, providing a unique overview of music theory today. A comprehensive English summary appears at the beginning of each contribution.

    Optimising multimodal fusion for biometric identification systems

    Biometric systems are automatic means of imitating the human brain's ability to identify and verify other humans by their behavioural and physiological characteristics. A system which uses more than one biometric modality at the same time is known as a multimodal system. Multimodal biometric systems consolidate the evidence presented by multiple biometric sources and typically provide better recognition performance than systems based on a single biometric modality. This thesis addresses some issues related to the implementation of multimodal biometric identity verification systems. It assesses the feasibility of using commercial off-the-shelf products to construct a deployable multimodal biometric system. It also identifies multimodal biometric fusion as a challenging optimisation problem given the presence of several configurations and settings, in particular the verification threshold adopted by each biometric device and the decision fusion algorithm implemented for a particular configuration. The thesis proposes a novel approach to the optimisation of multimodal biometric systems, based on the use of genetic algorithms, for solving some of the problems associated with these settings. The proposed optimisation method also addresses some of the problems associated with score normalisation. In addition, the thesis presents an analysis of the performance of different fusion rules when the system users are characterised as sheep, goats, lambs, and wolves. The results indicate that the proposed optimisation method can be used to solve the problems associated with threshold settings; this demonstrates a valuable strategy for setting the thresholds of the different biometric devices a priori, before deployment. The proposed optimisation architecture also addresses the problem of score normalisation, which enables an effective "plug-and-play" approach to system implementation. The results further indicate that the optimisation approach can be used to determine effectively the weight settings, which are used in many applications to vary the relative importance of the different performance parameters.
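As an illustration of the kind of optimisation described, here is a minimal sketch in which a simple genetic algorithm searches for score-fusion weights and a decision threshold that minimise the sum of false accepts and false rejects. The score distributions, GA parameters, and fitness function are illustrative assumptions, not the thesis's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy normalised match scores from two matchers: genuine vs impostor attempts.
genuine  = rng.normal(0.7, 0.1, (200, 2))
impostor = rng.normal(0.4, 0.1, (200, 2))

def error_rate(chrom):
    """Total error of a weighted-sum fusion rule; the chromosome encodes
    one weight per matcher plus a global decision threshold."""
    w, thr = chrom[:2], chrom[2]
    false_reject = np.mean(genuine @ w < thr)
    false_accept = np.mean(impostor @ w >= thr)
    return false_reject + false_accept

# Minimal GA: truncation selection plus Gaussian mutation.
pop = rng.uniform(0, 1, (50, 3))
for _ in range(100):
    fitness = np.array([error_rate(c) for c in pop])
    parents = pop[np.argsort(fitness)[:10]]          # keep the 10 best
    children = np.repeat(parents, 5, axis=0)
    children += rng.normal(0, 0.05, children.shape)  # mutate
    pop = np.clip(children, 0, 1)

best = min(pop, key=error_rate)
print('weights:', best[:2], 'threshold:', best[2], 'error:', error_rate(best))
```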

    Improved Human Face Recognition by Introducing a New Cnn Arrangement and Hierarchical Method

    Human face recognition has become one of the most attractive topics in the field of biometrics due to its wide range of applications. The face is the part of the body that carries the most identifying information in human interactions. Features such as the composition of facial components, skin tone, the face's central axis, the distances between the eyes, and many more are used unconsciously by the brain, alongside other biometrics, to distinguish a person. Indeed, analyzing facial features may be the first method humans use to identify a person. As one of the main biometric measures, face recognition has been utilized in various commercial applications over the past two decades, from banking to smart advertising and from border security to mobile applications; these few examples show how far the technology has come. We can confidently say that face recognition techniques have reached a level of accuracy acceptable for some real-life applications, while other applications could still benefit from improvement. The increasing demand for the topic, and the fact that nowadays almost all the necessary infrastructure is in place, make face recognition an appealing research area. When evaluating the quality of a face recognition method, the main benchmarks to consider are accuracy, speed, and complexity. Other aspects of an algorithm can of course be measured, such as size, precision, and cost, but eventually each of those parameters contributes to one or more of these three properties. Then again, although existing algorithms show a significant level of accuracy, there is still much room for improvement in speed and complexity. In addition, the accuracy of these methods depends heavily on the properties of the face images: uncontrolled conditions and variables such as head pose, occlusion, lighting, and image noise can affect the results dramatically. Human face recognition systems are used for either identification or verification. In identification, the system must determine which enrolled identity an input belongs to; in verification, the system's main goal is to check whether an input matches a pre-determined tag or a person's claimed ID. Almost every face recognition system consists of four major steps: pre-processing, face detection, feature extraction, and classification. Improvement in each of these steps leads to an overall enhancement of the system. In this work, the main objective is to propose new, improved, and enhanced methods for each of these steps, to evaluate the results by comparing them with other existing techniques, and to investigate the outcome of the proposed system.
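The four-step pipeline named above can be sketched as follows; the Haar-cascade detector, the flattened-pixel features, and the scikit-learn-style classifier are generic stand-ins for illustration, not the methods proposed in this work.

```python
import cv2

def recognize(image_bgr, classifier, size=(112, 112)):
    """Skeleton of the four-stage face recognition pipeline."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)  # 1. pre-processing
    gray = cv2.equalizeHist(gray)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
    faces = cascade.detectMultiScale(gray, 1.1, 5)      # 2. face detection
    identities = []
    for (x, y, w, h) in faces:
        crop = cv2.resize(gray[y:y + h, x:x + w], size)
        features = crop.flatten() / 255.0               # 3. feature extraction
        identities.append(classifier.predict([features])[0])  # 4. classification
    return identities
```

Improving any single stage (for example, swapping the placeholder pixel features for a learned embedding) improves the system as a whole, which is the premise of the work.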

    Machine Learning Algorithm for the Scansion of Old Saxon Poetry

    Several scholars have designed tools to perform the automatic scansion of poetry in many languages, but none of these tools deal with Old Saxon or Old English. This project aims to be a first attempt at creating a tool for these languages. We implemented a Bidirectional Long Short-Term Memory (BiLSTM) model to perform the automatic scansion of Old Saxon and Old English poems. Since this model uses supervised learning, we manually annotated the Heliand manuscript and used the resulting corpus as a labeled dataset to train the model. The evaluation of the algorithm's performance showed 97% accuracy and a 99% weighted average for precision, recall, and F1 score. In addition, we tested the model with some verses from the Old Saxon Genesis and some from The Battle of Brunanburh, and we observed that the model predicted almost all Old Saxon metrical patterns correctly but misclassified the majority of the Old English input verses.
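As an illustration of the model class described, here is a minimal BiLSTM sequence labeller in PyTorch that assigns a metrical label to each syllable token of a verse; the vocabulary size, label set, and layer sizes are illustrative assumptions, not the project's actual configuration.

```python
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    """BiLSTM sequence labeller: each syllable token receives a metrical
    label (e.g. lift / dip). All sizes here are illustrative."""
    def __init__(self, vocab=2000, emb=64, hidden=128, n_labels=4):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_labels)

    def forward(self, tokens):             # tokens: (batch, seq_len) integer ids
        h, _ = self.lstm(self.embed(tokens))
        return self.out(h)                 # per-token label scores

model = BiLSTMTagger()
verse = torch.randint(0, 2000, (1, 12))    # one verse of 12 syllable ids
print(model(verse).shape)                  # torch.Size([1, 12, 4])
```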

    Mixture-Based Clustering and Hidden Markov Models for Energy Management and Human Activity Recognition: Novel Approaches and Explainable Applications

    In recent times, the rapid growth of data in various fields of life has created an immense need for powerful tools to extract useful information from data. This has motivated researchers to explore and devise new ideas and methods in the field of machine learning. Mixture models have gained substantial attention due to their ability to handle high-dimensional data efficiently and effectively. However, when adopting mixture models in such spaces, several crucial issues must be addressed: the selection of probability density functions, the estimation of mixture parameters, the automatic determination of the number of components, the identification of the features that best discriminate the different components, and the handling of temporal information. The primary objective of this thesis is to propose a unified model that addresses these interrelated problems. Moreover, this thesis proposes a novel approach that incorporates explainability. It presents innovative mixture-based modelling approaches tailored for diverse applications, such as household energy consumption characterization, energy demand management, fault detection and diagnosis, and human activity recognition. The primary contributions of this thesis encompass the following aspects. First, we propose an unsupervised feature selection approach embedded within a finite bounded asymmetric generalized Gaussian mixture model. This model is adept at handling synthetic and real-life smart meter data, utilizing three distinct feature extraction methods. By employing the expectation-maximization algorithm in conjunction with the minimum message length criterion, we are able to concurrently estimate the model parameters, perform model selection, and execute feature selection. This unified optimization process facilitates the identification of household electricity consumption profiles along with the optimal subset of attributes defining each profile. Furthermore, we investigate the impact of household characteristics on electricity usage patterns in order to pinpoint households that are ideal candidates for demand reduction initiatives. Second, we introduce a semi-supervised learning approach for a mixture of mixtures of bounded asymmetric generalized Gaussian and uniform distributions. The integration of the uniform distribution within the inner mixture bolsters the model's resilience to outliers. In the unsupervised learning approach, the minimum message length criterion is utilized to ascertain the optimal number of mixture components. The proposed models are validated through a range of applications, including chiller fault detection and diagnosis, occupancy estimation, and energy consumption characterization. Additionally, we incorporate explainability into our models and establish a moderate trade-off between prediction accuracy and interpretability. Finally, we devise four novel models for human activity recognition (HAR): a bounded asymmetric generalized Gaussian mixture-based hidden Markov model with feature selection (BAGGM-FSHMM), a bounded asymmetric generalized Gaussian mixture-based hidden Markov model (BAGGM-HMM), an asymmetric generalized Gaussian mixture-based hidden Markov model with feature selection (AGGM-FSHMM), and an asymmetric generalized Gaussian mixture-based hidden Markov model (AGGM-HMM). We develop an innovative method for the simultaneous estimation of feature saliencies and model parameters in the BAGGM-FSHMM and AGGM-FSHMM, integrating the bounded asymmetric generalized Gaussian distribution (BAGGD) and the asymmetric generalized Gaussian distribution (AGGD) in the BAGGM-HMM and AGGM-HMM respectively. The proposed models are validated on video-based and sensor-based HAR applications, showcasing their superiority over several mixture-based hidden Markov models (HMMs) across various performance metrics. We demonstrate that incorporating feature selection and a bounded-support distribution independently into a HAR system each yields benefits; combining both concepts simultaneously results in the most effective of the proposed models.
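As a rough illustration of the estimation machinery underlying these models, here is a minimal EM loop for a one-dimensional Gaussian mixture; the thesis's models replace the Gaussian density below with (bounded) asymmetric generalized Gaussians and add MML-driven model selection and feature saliency on top of this skeleton, none of which is shown here.

```python
import numpy as np
from scipy.stats import norm

def em_gmm(x, k=2, iters=100):
    """EM skeleton for a 1-D mixture with k components."""
    rng = np.random.default_rng(0)
    w = np.full(k, 1 / k)          # mixing weights
    mu = rng.choice(x, k)          # component means
    sd = np.full(k, x.std())       # component scales
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        r = w * norm.pdf(x[:, None], mu, sd)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from responsibility-weighted data.
        n = r.sum(axis=0)
        w, mu = n / len(x), (r * x[:, None]).sum(axis=0) / n
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n)
    return w, mu, sd

x = np.concatenate([np.random.normal(0, 1, 500), np.random.normal(5, 1, 500)])
print(em_gmm(x))  # recovers two components near 0 and 5
```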