21 research outputs found

    Health assessment and fault diagnosis for centrifugal pumps using Softmax regression

    Real-time health monitoring of industrial components and systems that can detect, classify, and predict impending faults is critical to reducing operating and maintenance costs. This paper presents a softmax regression-based prognostic method for on-line health assessment and fault diagnosis. System conditions are evaluated by processing the information gathered from controllers or sensors mounted at different points in the system, and maintenance is performed only when the prognosis indicates a failure or malfunction. Wavelet packet decomposition and fast Fourier transform techniques are used to extract features from non-stationary vibration signals. Wavelet packet energies and the fundamental-frequency amplitude are used as features, and principal component analysis is used for feature reduction. The reduced features are fed into softmax regression models to assess machine health and identify possible failure modes. The gradient descent method is used to determine the parameters of the softmax regression models. The effectiveness and feasibility of the proposed method are illustrated by applying it to a real application.
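    Below is a minimal, illustrative sketch (not the authors' code) of the core diagnostic stage described above: PCA-reduced features fed into a softmax regression classifier trained with gradient descent. The feature values, dimensions, and number of health states are placeholders standing in for the wavelet packet and FFT features.

```python
# Minimal sketch: softmax regression on reduced vibration features, trained
# with batch gradient descent. Feature extraction (wavelet packet energies,
# FFT amplitude, PCA) is replaced by random placeholder data.
import numpy as np

rng = np.random.default_rng(0)

n, d, k = 300, 5, 4                                 # samples, reduced features, health states
X = rng.normal(size=(n, d))                         # placeholder reduced features
y = rng.integers(0, k, size=n)                      # placeholder health-state labels

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)            # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

W = np.zeros((d, k))
b = np.zeros(k)
Y = np.eye(k)[y]                                    # one-hot targets
lr = 0.1

for _ in range(500):                                # batch gradient descent
    P = softmax(X @ W + b)                          # class probabilities
    W -= lr * X.T @ (P - Y) / n                     # cross-entropy gradient w.r.t. W
    b -= lr * (P - Y).mean(axis=0)                  # cross-entropy gradient w.r.t. b

pred = softmax(X @ W + b).argmax(axis=1)            # diagnosed failure mode
print("training accuracy:", (pred == y).mean())
```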

    Vertical Federated Learning

    Vertical Federated Learning (VFL) is a federated learning setting in which multiple parties holding different features about the same set of users jointly train machine learning models without exposing their raw data or model parameters. Motivated by the rapid growth in VFL research and real-world applications, we provide a comprehensive review of the concept and algorithms of VFL, as well as current advances and challenges in various aspects, including effectiveness, efficiency, and privacy. We provide an exhaustive categorization of VFL settings and privacy-preserving protocols and comprehensively analyze the privacy attacks and defense strategies for each protocol. We then propose a unified framework, termed VFLow, which considers the VFL problem under communication, computation, privacy, and effectiveness constraints. Finally, we review the most recent advances in industrial applications, highlighting open challenges and future directions for VFL.
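    As a rough illustration of the basic VFL pattern surveyed above, the sketch below trains a two-party vertical logistic regression in which each party holds different feature columns for the same users and exchanges only partial logits and gradient signals, never raw features. It omits the privacy-preserving protocols (encryption, secure aggregation) that the survey analyses, and all data and names are hypothetical.

```python
# Illustrative two-party vertical logistic regression (no privacy protocol).
# Party A and Party B hold disjoint feature columns for the same users.
import numpy as np

rng = np.random.default_rng(0)
n = 200
X_a = rng.normal(size=(n, 3))                    # Party A's features
X_b = rng.normal(size=(n, 2))                    # Party B's features
y = (X_a[:, 0] + X_b[:, 1] > 0).astype(float)    # synthetic labels held by Party A

w_a = np.zeros(3)
w_b = np.zeros(2)
lr = 0.1

for _ in range(300):
    z = X_a @ w_a + X_b @ w_b        # partial logits combined by a coordinator
    p = 1.0 / (1.0 + np.exp(-z))
    residual = p - y                 # shared gradient signal, not raw features
    w_a -= lr * X_a.T @ residual / n # each party updates its own weights locally
    w_b -= lr * X_b.T @ residual / n

pred = (X_a @ w_a + X_b @ w_b) > 0
print("training accuracy:", (pred == y).mean())
```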

    Feature Selection and Non-Euclidean Dimensionality Reduction: Application to Electrocardiology.

    Heart disease has been the leading cause of human death for decades. To improve the treatment of heart disease, algorithms that perform reliable computer diagnosis using electrocardiogram (ECG) data have become an area of active research. This thesis utilizes well-established methods from cluster analysis, classification, and localization to cluster and classify ECG data, and aims to help clinicians diagnose and treat heart diseases. The power of these methods is enhanced by state-of-the-art feature selection and dimensionality reduction. The specific contributions of this thesis are as follows. First, a unique combination of ECG feature selection and mixture-model clustering is introduced to classify the sites of origin of ventricular tachycardias. Second, we apply a restricted Boltzmann machine (RBM) to learn sparse representations of ECG signals and to build an enriched classifier from patient data. Third, a novel manifold learning algorithm, called Quaternion Laplacian Information Maps (QLIM), is introduced and applied to visualize high-dimensional ECG signals. These methods are applied to the design of an automated supervised classification algorithm that helps a physician identify the origin of ventricular arrhythmias (VA) from a patient's ECG data. The algorithm is trained on a large database of ECGs and catheter positions collected during electrophysiology (EP) pace-mapping procedures. The proposed algorithm is demonstrated to have a correct classification rate of over 80% for the difficult task of classifying VAs as having epicardial or endocardial origins.
    PhD, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/113303/1/dyjung_1.pd
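    The thesis's QLIM algorithm is not reproduced here; as a generic stand-in, the sketch below applies standard Laplacian Eigenmaps to synthetic signal vectors to illustrate the kind of manifold-based dimensionality reduction used to visualize high-dimensional ECG data. Sizes and parameters are placeholders.

```python
# Generic manifold-learning sketch (standard Laplacian Eigenmaps, not QLIM):
# embeds high-dimensional "ECG-like" vectors into 2-D for visualisation.
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 40))                 # 150 signals, 40 features each

D = cdist(X, X)                                # pairwise Euclidean distances
k, sigma = 10, np.median(D)
W = np.exp(-(D ** 2) / (2 * sigma ** 2))       # heat-kernel affinities

# keep only each point's k nearest neighbours, then symmetrise the graph
far = np.argsort(D, axis=1)[:, k + 1:]
for i, cols in enumerate(far):
    W[i, cols] = 0.0
W = np.maximum(W, W.T)

deg = W.sum(axis=1)
L = np.diag(deg) - W                           # unnormalised graph Laplacian

# generalised eigenproblem L v = lambda D v; skip the trivial first eigenvector
vals, vecs = eigh(L, np.diag(deg))
embedding = vecs[:, 1:3]                       # 2-D coordinates for plotting
print(embedding.shape)
```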

    An enhanced gated recurrent unit with auto-encoder for solving text classification problems

    Classification has become an important task for automatically categorizing documents into their respective groups. The Gated Recurrent Unit (GRU) is a type of Recurrent Neural Network (RNN) and a deep learning architecture that contains an update gate and a reset gate. It is considered one of the most efficient text classification techniques, specifically on sequential datasets. However, GRU suffers from three major issues when applied to text classification problems. The first drawback is the failure to reduce data dimensionality, which leads to low-quality solutions for the classification problems. Secondly, GRU is still difficult to train because of the redundancy between the update and reset gates; the reset gate adds complexity and requires high processing time. Thirdly, GRU loses informative features in each recurrence during the training phase and has a high computational cost. The reason behind this failure is the random selection of features from datasets (or previous outputs) when GRU is applied in its standard form. Therefore, in this research, a new model named Encoder Simplified GRU (ES-GRU) is proposed, which reduces the dimension of the data using an Auto-Encoder (AE). Accordingly, the reset gate is replaced with an update gate in order to reduce the redundancy and complexity of the standard GRU. Finally, a Batch Normalization method is incorporated into the GRU and AE to improve the performance of the proposed ES-GRU model. The proposed model has been evaluated on seven benchmark text datasets and compared with six well-known baseline multiclass text classification approaches, including the standard GRU, AE, Long Short-Term Memory, Convolutional Neural Network, Support Vector Machine, and Naïve Bayes. Across various performance evaluation measures, the proposed model shows a considerable improvement over these standard classification techniques, demonstrating the effectiveness and efficiency of the developed model.
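    To make the gate terminology concrete, the sketch below implements a single step of a standard GRU cell, showing the update and reset gates the abstract refers to. This is the textbook formulation, not the proposed ES-GRU variant, and all dimensions and weights are placeholders.

```python
# Minimal standard GRU cell (textbook formulation, not ES-GRU):
# shows the update gate z_t and reset gate r_t.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, params):
    W_z, U_z, W_r, U_r, W_h, U_h = params
    z = sigmoid(x_t @ W_z + h_prev @ U_z)               # update gate
    r = sigmoid(x_t @ W_r + h_prev @ U_r)               # reset gate
    h_tilde = np.tanh(x_t @ W_h + (r * h_prev) @ U_h)   # candidate state
    return (1 - z) * h_prev + z * h_tilde               # new hidden state

rng = np.random.default_rng(0)
d_in, d_hid = 8, 16
params = [rng.normal(scale=0.1, size=s)
          for s in [(d_in, d_hid), (d_hid, d_hid)] * 3]  # W_z,U_z,W_r,U_r,W_h,U_h

h = np.zeros(d_hid)
for x_t in rng.normal(size=(5, d_in)):                   # a toy 5-step sequence
    h = gru_step(x_t, h, params)
print(h.shape)
```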

    Artificial Intelligence Tools for Facial Expression Analysis.

    Inner emotions show visibly upon the human face and are understood as a basic guide to an individual's inner world. It is, therefore, possible to determine a person's attitudes and the effects of others' behaviour on their deeper feelings by examining facial expressions. In real-world applications, machines that interact with people need strong facial expression recognition. This recognition holds advantages for varied applications in affective computing, advanced human-computer interaction, security, stress and depression analysis, robotic systems, and machine learning. This thesis starts by proposing a benchmark of dynamic versus static methods for facial Action Unit (AU) detection. AU activation refers to a set of local, individual facial muscle actions that occur in unison to constitute a natural facial expression event. Detecting AUs automatically can provide explicit benefits since it considers both static and dynamic facial features. For this research, AU occurrence detection was conducted by extracting both static and dynamic features, from hand-crafted as well as deep learning representations, from each static image of a video. This confirmed that a pretrained model delivers a clear leap in performance. Next, temporal modelling was investigated to detect the underlying temporal variation phases in dynamic sequences using supervised and unsupervised methods. During this process, stacking dynamic features on top of static ones proved important for encoding deep features that capture temporal information when the spatial and temporal schemes are combined. The study also found that fusing spatial and temporal features provides more long-term temporal pattern information. Moreover, we hypothesised that using an unsupervised method would enable the learning of invariant information from dynamic textures. Recently, cutting-edge advances have been achieved by approaches based on Generative Adversarial Networks (GANs). In the second part of this thesis, we propose a model that adopts an unsupervised DCGAN for facial feature extraction and classification to achieve the following: the creation of facial expression images under different arbitrary poses (frontal, multi-view, and in the wild), and the recognition of emotion categories and AUs, in an attempt to resolve the problem of recognising the seven static emotion classes in the wild. Thorough cross-database experimentation demonstrates that this approach can improve generalisation. Additionally, we showed that the features learnt by the DCGAN process are poorly suited to encoding facial expressions when observed under multiple views, or when trained from a limited number of positive examples. Finally, this research focuses on disentangling identity from expression for facial expression recognition. A novel technique was implemented for emotion recognition from a single monocular image. A large-scale dataset (Face vid) was created from facial videos rich in variation in facial dynamics, appearance, identities, expressions, and 3D poses. This dataset was used to train a DCNN (ResNet) to regress the expression parameters of a 3D Morphable Model jointly with a back-end classifier.
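    As a rough illustration of reusing an unsupervised DCGAN's discriminator as a facial feature extractor for expression recognition, the sketch below (PyTorch) pairs a small DCGAN-style discriminator with a linear emotion classifier. The architecture, image size, and class count are placeholders and do not reproduce the thesis's model or its adversarial training loop.

```python
# Illustrative sketch: a small DCGAN-style discriminator whose penultimate
# features are reused by a linear classifier for expression recognition.
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        self.features = nn.Sequential(               # expects 64x64 grayscale face crops
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.BatchNorm2d(64), nn.LeakyReLU(0.2),
            nn.Conv2d(64, feat_dim, 4, stride=2, padding=1), nn.BatchNorm2d(feat_dim), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.real_fake = nn.Linear(feat_dim, 1)       # GAN head used during adversarial training

    def forward(self, x):
        f = self.features(x)
        return self.real_fake(f), f                   # real/fake logit + reusable features

disc = Discriminator()
classifier = nn.Linear(128, 7)                        # e.g. 7 basic emotion classes

faces = torch.randn(4, 1, 64, 64)                     # placeholder batch of face crops
_, feats = disc(faces)
emotion_logits = classifier(feats)
print(emotion_logits.shape)                           # torch.Size([4, 7])
```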

    3D Face Modelling, Analysis and Synthesis

    Human faces have always been of special interest to researchers in the computer vision and graphics areas. There has been an explosion in the number of studies on accurately modelling, analysing and synthesising realistic faces for various applications. The importance of human faces emerges from the fact that they are invaluable means of effective communication, recognition, behaviour analysis, conveying emotions, etc. Therefore, addressing the automatic visual perception of human faces efficiently could open up many influential applications in various domains, e.g. virtual/augmented reality, computer-aided surgeries, security and surveillance, entertainment, and many more. However, the vast variability in the geometry and appearance of human faces captured in unconstrained videos and images renders their automatic analysis and understanding very challenging even today. The primary objective of this thesis is to develop novel methodologies of 3D computer vision for human faces that go beyond the state of the art and achieve unprecedented quality and robustness. In more detail, this thesis advances the state of the art in 3D facial shape reconstruction and tracking, fine-grained 3D facial motion estimation, expression recognition and facial synthesis with the aid of 3D face modelling. We pay special attention to the case where the input comes from monocular imagery captured under uncontrolled, in-the-wild settings. Such data are available in abundance on the internet nowadays. Analysing these data pushes the boundaries of currently available computer vision algorithms and opens up many new crucial applications in the industry. We define the four targeted vision problems (3D facial reconstruction & tracking, fine-grained 3D facial motion estimation, expression recognition, facial synthesis) in this thesis as the four 3D-based essential systems for automatic facial behaviour understanding and show how they rely on each other. Finally, to aid the research conducted in this thesis, we collect and annotate a large-scale video dataset of monocular facial performances. All of our proposed methods demonstrate very promising quantitative and qualitative results when compared to state-of-the-art methods.
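    For context on the kind of 3D face modelling discussed above, the sketch below shows a generic linear statistical face model in which a mesh is expressed as a mean shape plus identity and expression deformations, the representation typically fitted to monocular images in 3D facial reconstruction. The bases here are random placeholders, not the thesis's model.

```python
# Generic linear statistical 3D face model (illustrative placeholders only):
# mesh = mean shape + identity deformation + expression deformation.
import numpy as np

rng = np.random.default_rng(0)
n_vertices = 5000
mean_shape = rng.normal(size=(3 * n_vertices,))            # placeholder mean mesh
id_basis = rng.normal(size=(3 * n_vertices, 40)) * 0.01    # identity components
expr_basis = rng.normal(size=(3 * n_vertices, 20)) * 0.01  # expression components

def reconstruct(alpha_id, alpha_expr):
    """Return an (n_vertices, 3) mesh for the given coefficient vectors."""
    shape = mean_shape + id_basis @ alpha_id + expr_basis @ alpha_expr
    return shape.reshape(n_vertices, 3)

mesh = reconstruct(rng.normal(size=40), rng.normal(size=20))
print(mesh.shape)   # (5000, 3)
```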

    Texture and Colour in Image Analysis

    Research in colour and texture has experienced major changes in the last few years. This book presents some recent advances in the field, specifically in the theory and applications of colour texture analysis. The volume also features benchmarks, comparative evaluations, and reviews.

    BIG DATA and Advanced Analytics: conference proceedings

    The proceedings publish the results of research and development in the field of BIG DATA and Advanced Analytics for optimizing IT and business solutions, as well as case studies in medicine, education, and ecology.