145 research outputs found

    An Efficient 1 Iteration Learning Algorithm for Gaussian Mixture Model And Gaussian Mixture Embedding For Neural Network

    We propose a Gaussian Mixture Model (GMM) learning algorithm based on our previous work on the GMM expansion idea. The new algorithm is more robust and simpler than the classic Expectation-Maximization (EM) algorithm, improves accuracy, and takes only one iteration to learn. We theoretically prove that the new algorithm is guaranteed to converge regardless of the parameter initialisation. Comparing our GMM expansion method with classic probability layers in neural networks demonstrates a better capability to overcome data uncertainty and inverse problems. Finally, we test a GMM-based generator, which shows potential for building further applications that utilize distributional random sampling for stochastic variation as well as variation control.
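The abstract does not detail the algorithm itself; as a loose illustration of how a single-pass, initialisation-free GMM fit can work, the hypothetical sketch below fixes component means on an even grid over the data range and estimates only the mixture weights in one soft-assignment pass (the grid placement and bandwidth heuristic are assumptions for illustration, not the published method):

```python
import numpy as np

def one_pass_gmm(data, n_components=10, sigma=None):
    """Single-pass GMM estimate: fix component means on an even grid
    over the data range and set the weights from one soft assignment.
    Illustrative only -- the published algorithm may differ."""
    lo, hi = data.min(), data.max()
    means = np.linspace(lo, hi, n_components)
    if sigma is None:
        sigma = (hi - lo) / n_components  # heuristic common bandwidth
    # Soft-assign each sample to the components via Gaussian kernels.
    resp = np.exp(-0.5 * ((data[:, None] - means[None, :]) / sigma) ** 2)
    resp /= resp.sum(axis=1, keepdims=True)
    weights = resp.mean(axis=0)  # one pass, no iterative refinement
    return means, sigma, weights

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-3, 0.5, 500), rng.normal(2, 1.0, 500)])
means, sigma, weights = one_pass_gmm(data)
```

Because nothing is refined iteratively, the procedure trivially terminates after one pass for any input, which is the flavour of guarantee the abstract alludes to.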

    A comparison of the CAR and DAGAR spatial random effects models with an application to diabetics rate estimation in Belgium

    When hierarchically modelling an epidemiological phenomenon on a finite collection of sites in space, one must take a latent spatial effect into account in order to capture the correlation structure that links the phenomenon to the territory. In this work, we compare two autoregressive spatial models that can be used for this purpose: the classical CAR model and the more recent DAGAR model. Unlike the former, the latter has a desirable property: its ρ parameter can be naturally interpreted as the average neighbor pair correlation and, in addition, can be directly estimated when the effect is modelled using a DAGAR rather than a CAR structure. As an application, we model the diabetics rate in Belgium in 2014 and show the adequacy of these models in predicting the response variable when no covariates are available.
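For concreteness, the proper CAR model specifies the latent spatial effect as a zero-mean Gaussian with precision matrix Q = τ(D − ρW), where W is the adjacency matrix of the sites and D its degree diagonal; a minimal sketch of building and checking that precision matrix (the function name and toy graph are illustrative):

```python
import numpy as np

def car_precision(W, rho, tau=1.0):
    """Precision matrix of a proper CAR model: Q = tau * (D - rho * W),
    with W a symmetric 0/1 adjacency matrix and D its degree diagonal.
    Q is positive definite for |rho| < 1, giving a valid Gaussian prior."""
    D = np.diag(W.sum(axis=1))
    return tau * (D - rho * W)

# Toy map: 4 sites on a line, neighbor pairs 1-2, 2-3, 3-4.
W = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
Q = car_precision(W, rho=0.9)
eigvals = np.linalg.eigvalsh(Q)  # all positive -> proper joint distribution
```

In the CAR parameterization ρ has no direct interpretation as an average neighbor correlation, which is exactly the interpretability advantage the DAGAR model is credited with above.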

    A Statistical Approach to the Alignment of fMRI Data

    Multi-subject functional Magnetic Resonance Imaging (fMRI) studies are critical: anatomical and functional structure varies across subjects, so image alignment is necessary. We define a probabilistic model to describe functional alignment. By imposing a prior distribution, namely the matrix von Mises-Fisher distribution, on the orthogonal transformation parameter, anatomical information is embedded in the estimation of the parameters, i.e., combinations of spatially distant voxels are penalized. Real applications show an improvement in classification and interpretability of the results compared to various functional alignment methods.
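Without the prior, the maximum-likelihood estimate of the orthogonal transformation reduces to the classical orthogonal Procrustes problem, solvable in closed form via an SVD; a minimal sketch of that unpenalized baseline (the von Mises-Fisher penalty itself is omitted here):

```python
import numpy as np

def procrustes_align(X, Y):
    """Orthogonal transform R minimizing ||X @ R - Y||_F -- the
    maximum-likelihood functional alignment without the prior.
    Adding the matrix von Mises-Fisher prior would shift R toward
    anatomically plausible transformations."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))            # subject 1: time x voxels
R_true, _ = np.linalg.qr(rng.normal(size=(5, 5)))
Y = X @ R_true                           # subject 2: rotated responses
R = procrustes_align(X, Y)               # recovers the orthogonal map
```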

    Video Foreground Localization from Traditional Methods to Deep Learning

    These days, detection of Visual Attention Regions (VAR), such as moving objects, has become an integral part of many Computer Vision applications, viz. pattern recognition, object detection and classification, video surveillance, autonomous driving, human-machine interaction (HMI), and so forth. Moving object identification using bounding boxes has matured to the level of localizing objects along their rigid borders, a process called foreground localization (FGL). Over the decades, many image segmentation methodologies have been well studied, devised, and extended to suit video FGL. Despite that, the problem of video foreground (FG) segmentation remains an intriguing yet appealing task due to its ill-posed nature and myriad of applications. Maintaining spatial and temporal coherence, particularly at object boundaries, remains challenging and computationally burdensome. It gets even harder when the background is dynamic, like swaying tree branches or a shimmering water body, when there are illumination variations or shadows cast by the moving objects, or when the video sequences have jittery frames caused by vibrating or unstable camera mounts on a surveillance post or moving robot. At the same time, in the analysis of traffic flow or human activity, the performance of an intelligent system substantially depends on its robustness in localizing the VAR, i.e., the FG. To this end, the natural question arises: what is the best way to deal with these challenges? Thus, the goal of this thesis is to investigate plausible real-time performant implementations, from traditional approaches to modern-day deep learning (DL) models, for FGL that can be applicable to many video content-aware applications (VCAA). It focuses mainly on improving existing methodologies by harnessing multimodal spatial and temporal cues for a delineated FGL.
The first part of the dissertation is dedicated to enhancing conventional sample-based and Gaussian mixture model (GMM)-based video FGL using the probability mass function (PMF), temporal median filtering, fusing CIEDE2000 color similarity, color distortion, and illumination measures, and picking an appropriate adaptive threshold to extract the FG pixels. Subjective and objective evaluations are done to show the improvements over a number of similar conventional methods. The second part of the thesis focuses on exploiting and improving deep convolutional neural networks (DCNN) for the problem mentioned earlier. Consequently, three models akin to encoder-decoder (EnDec) networks are implemented with various innovative strategies to improve the quality of the FG segmentation. The strategies include double-encoding slow-decoding feature learning, multi-view receptive field feature fusion, and incorporating spatiotemporal cues through long short-term memory (LSTM) units in both the subsampling and upsampling subnetworks. Experimental studies are carried out thoroughly on all conditions, from baselines to challenging video sequences, to prove the effectiveness of the proposed DCNNs. The analysis demonstrates the architectural efficiency over other methods, while quantitative and qualitative experiments show the competitive performance of the proposed models compared to the state-of-the-art.
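As a rough idea of the per-pixel statistical modeling underlying GMM-based FG extraction, the sketch below maintains a single running Gaussian per pixel with an adaptive threshold, a deliberately stripped-down stand-in for the multi-component, color-similarity-fused pipeline described above (all parameter values are illustrative):

```python
import numpy as np

class RunningGaussianBG:
    """Per-pixel single-Gaussian background model: pixels more than k
    standard deviations from the background mean are foreground.
    A simplified stand-in for the GMM-based subtractors above."""
    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = first_frame.astype(float)
        self.var = np.full_like(self.mean, 15.0 ** 2)  # initial variance
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        frame = frame.astype(float)
        d2 = (frame - self.mean) ** 2
        fg = d2 > (self.k ** 2) * self.var        # adaptive threshold
        bg = ~fg
        # Update the model only where the pixel looks like background.
        self.mean[bg] += self.alpha * (frame - self.mean)[bg]
        self.var[bg] += self.alpha * (d2 - self.var)[bg]
        return fg

frames = np.zeros((5, 8, 8)) + 100.0    # static gray background
frames[-1, 2:4, 2:4] = 200.0            # bright object in the last frame
bg = RunningGaussianBG(frames[0])
for f in frames:
    mask = bg.apply(f)                  # boolean FG mask per frame
```

Selective (background-only) updating is what keeps a slow-moving object from being absorbed into the background model, which is the same concern the adaptive-threshold design above addresses.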

    Process fault prediction and prognosis based on a hybrid technique

    The present study introduces a novel hybrid methodology for fault detection and diagnosis (FDD) and fault prediction and prognosis (FPP). The hybrid methodology combines both data-driven and process-knowledge-driven techniques. The Hidden Markov Model (HMM) and the auxiliary codes detect and predict abnormalities based on process history, while the Bayesian Network (BN) diagnoses the root cause of the fault based on process knowledge. In the first step, system performance is evaluated for fault detection and diagnosis, and in the second step, prediction and prognosis are evaluated. In both cases, an HMM trained with Normal Operating Condition data is used to determine the log-likelihood (LL) of each process history data string. It is then used to develop the Conditional Probability Tables of the BN, while the structure of the BN is developed from process knowledge. Abnormal behaviour of the system is identified through the HMM. The time of detection of an abnormality, the respective LL value, and the probabilities of being in each process condition at the time of detection are used to generate the likelihood evidence for the BN. The updated BN is then used to diagnose the root cause by considering the respective changes in the probabilities. Performance of the new technique is validated with published data from the Tennessee Eastman Process. Eight of the ten selected faults were successfully detected and diagnosed, and the same set of faults was predicted and prognosed accurately at different levels of maximum added noise.
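The detection step can be illustrated with a toy Gaussian HMM: a model trained on normal operating data assigns a log-likelihood to each observation string via the forward algorithm, and strings from abnormal operation score markedly lower. The sketch below (all parameters illustrative, not taken from the study) computes that LL in log space:

```python
import numpy as np

def gauss_logpdf(x, mu, sigma):
    return -0.5 * np.log(2 * np.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def hmm_loglik(obs, pi, A, mus, sigmas):
    """Log-likelihood of a 1-D observation string under a Gaussian HMM,
    via the forward algorithm in log space (avoids numeric underflow)."""
    logB = np.array([gauss_logpdf(obs, m, s) for m, s in zip(mus, sigmas)])
    logalpha = np.log(pi) + logB[:, 0]
    for t in range(1, len(obs)):
        logalpha = logB[:, t] + np.logaddexp.reduce(
            logalpha[:, None] + np.log(A), axis=0)
    return np.logaddexp.reduce(logalpha)

# Two-state "normal operation" model (parameters illustrative).
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
mus, sigmas = np.array([0.0, 1.0]), np.array([0.3, 0.3])

normal = np.array([0.0, 0.1, 1.0, 0.9, 0.0])   # consistent with the model
faulty = np.array([3.0, 3.2, 2.9, 3.1, 3.0])   # drifted sensor readings
ll_normal = hmm_loglik(normal, pi, A, mus, sigmas)
ll_faulty = hmm_loglik(faulty, pi, A, mus, sigmas)
```

In the hybrid scheme above, such LL values (and the state probabilities at detection time) are what get converted into likelihood evidence for the BN.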

    Unsupervised Methods for Condition-Based Maintenance in Non-Stationary Operating Conditions

    Maintenance and operation of modern dynamic engineering systems require robust maintenance strategies that are reliable under uncertainty. One such strategy is condition-based maintenance (CBM), in which maintenance actions are determined based on the current health of the system. The CBM framework integrates fault detection and forecasting in the form of degradation modeling to provide real-time reliability, as well as valuable insight into the future health of the system. Coupled with a modern information platform such as the Internet of Things (IoT), CBM can deliver these critical functionalities at scale. The increasingly complex design and operation of engineering systems have introduced novel problems to CBM. Characteristics of these systems, such as the unavailability of historical data or highly dynamic operating behaviour, have rendered many existing solutions infeasible. These problems have motivated the development of new and self-sufficient, in other words unsupervised, CBM solutions. The issue, however, is that many of the methods required by such frameworks have yet to be proposed in the literature. Key gaps pertaining to the lack of suitable unsupervised approaches for the pre-processing of non-stationary vibration signals, parameter estimation for fault detection, and degradation threshold estimation need to be addressed in order to achieve an effective implementation. The main objective of this thesis is to propose a set of three novel approaches, one for each of the aforementioned knowledge gaps. A non-parametric pre-processing and spectral analysis approach, termed spectral mean shift clustering (S-MSC), which applies mean shift clustering (MSC) to the short-time Fourier transform (STFT) power spectrum for simultaneous de-noising and extraction of time-varying harmonic components, is proposed for the autonomous analysis of non-stationary vibration signals.
A second pre-processing approach, termed Gaussian mixture model operating state decomposition (GMM-OSD), which uses GMMs to cluster multi-modal vibration signals by their respective, unknown operating states, is proposed to address multi-modal non-stationarity. Applied in conjunction with S-MSC, these two approaches form a robust, unsupervised pre-processing framework tailored to the types of signals found in modern engineering systems. The final approach proposed in this thesis is a degradation detection and fault prediction framework, termed the Bayesian one-class support vector machine (B-OCSVM), which tackles the key knowledge gaps pertaining to unsupervised parameter and degradation threshold estimation by re-framing the traditional fault detection and degradation modeling problem as a degradation detection and fault prediction problem. Validation of the three approaches is performed across a wide range of machinery vibration data sets and applications, including data obtained from two full-scale field pilots at Toronto Pearson International Airport: the first on the gearbox of the LINK Automated People Mover (APM) train, and the second on a subset of passenger boarding tunnel pre-conditioned air (PCA) units in Terminal 1. Validation showed that the proposed pre-processing approaches and the combined pre-processing framework provide a robust and computationally efficient methodology for the analysis of non-stationary vibration signals in unsupervised CBM. Validation of the B-OCSVM framework showed that the proposed parameter estimation approaches enable earlier detection of the degradation process than existing approaches, and that the proposed degradation threshold provides a reasonable estimate of the fault manifestation point.
Holistically, the approaches proposed in this thesis provide a crucial step forward towards the effective implementation of unsupervised CBM in complex, modern engineering systems.
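As a small illustration of the mode-seeking idea behind S-MSC, the sketch below runs 1-D Gaussian-kernel mean shift on scattered spectral peak frequencies so that each point climbs to its nearest density mode; the thesis applies the clustering to the full STFT power spectrum, so this is only a simplified analogue with illustrative values:

```python
import numpy as np

def mean_shift_1d(points, bandwidth=1.0, iters=50):
    """1-D mean shift with a Gaussian kernel: each point iteratively
    moves to the kernel-weighted mean of all points, converging to the
    nearest density mode. Applied to spectral peaks, the modes trace
    harmonic components without fixing the number of clusters."""
    x = points.astype(float).copy()
    for _ in range(iters):
        w = np.exp(-0.5 * ((x[:, None] - points[None, :]) / bandwidth) ** 2)
        x = (w * points[None, :]).sum(axis=1) / w.sum(axis=1)
    return x

# Spectral peaks scattered around two harmonics near 50 Hz and 120 Hz.
rng = np.random.default_rng(2)
peaks = np.concatenate([50 + rng.normal(0, 1, 30),
                        120 + rng.normal(0, 1, 30)])
modes = mean_shift_1d(peaks, bandwidth=3.0)  # collapses onto two modes
```

Being non-parametric, the method needs no preset component count, which is what makes it attractive for the unsupervised setting described above.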