9 research outputs found

    An Extended Occlusion Detection Approach for Video Processing

    Occlusions become conspicuous as failure regions in video processing when integrated over time, because violations of the brightness-constancy constraint accumulate and evolve in occluded regions. Accuracy at the boundaries of moving objects remains one of the challenging areas that requires further exploration and research. This paper presents a work-in-progress approach that detects occlusion regions using pixel-wise coherence, segment-wise confidence, and an interpolation technique. Our method obtains the same result as conventional methods by solving only one Partial Differential Equation (PDE) problem; it is superior to existing methods because it is faster and provides better coverage rates for occlusion regions than variational techniques when tested on a variety of benchmark datasets. With these improved results, our approach can be applied and extended to a wider range of computer vision applications, such as background subtraction, tracking, 3D reconstruction, video surveillance, and video compression.
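
    The core cue here, accumulated brightness-constancy violations, can be illustrated in a few lines. The sketch below is only a minimal stand-in: it accumulates warping residuals from dense optical flow and thresholds them, and does not reproduce the paper's pixel-wise coherence, segment-wise confidence, or interpolation steps; the function name and threshold are illustrative choices of our own.

```python
# Minimal sketch: flag occlusion candidates as pixels where the
# brightness-constancy residual, accumulated over time, stays high.
import cv2
import numpy as np

def occlusion_mask(frames, thresh=25.0):
    """frames: list of grayscale uint8 images; returns a binary mask."""
    acc = np.zeros(frames[0].shape, np.float32)
    for prev, nxt in zip(frames, frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        h, w = prev.shape
        grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
        map_x = (grid_x + flow[..., 0]).astype(np.float32)
        map_y = (grid_y + flow[..., 1]).astype(np.float32)
        # brightness constancy says prev(x) should match nxt(x + flow)
        warped = cv2.remap(nxt, map_x, map_y, cv2.INTER_LINEAR)
        acc += np.abs(prev.astype(np.float32) - warped.astype(np.float32))
    acc /= max(len(frames) - 1, 1)
    return (acc > thresh).astype(np.uint8) * 255
```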

    Exploiting Context-Aware Event Data for Fault Analysis

    Fault analysis in communication networks and distributed systems is a difficult process that depends heavily on system administrators' experience and supporting tools. It usually requires analytic techniques and several types of event data, including log events, debug messages, and traces obtained from these systems, to investigate the root cause of faults. This paper introduces an approach that exploits context-aware data and a classification technique to improve this process. The approach uses both event data and context-aware data, including CPU load, memory usage, running processes, temperature, and status, to train a decision tree, and then applies the tree to assess suspected events. We have implemented and evaluated the approach on the OpenStack cloud computing system with the Hadoop computing service and the MELA event collection system. The experimental results show that the accuracy of the approach reaches 85% on average. The paper also includes a detailed analysis of the results.
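
    As a rough sketch of the classification step, the following trains a scikit-learn decision tree on context-aware feature vectors. The feature layout and the toy samples are illustrative assumptions, not the paper's dataset or its MELA pipeline.

```python
# Minimal sketch: classify suspected fault events from context metrics
# with a decision tree (toy data; real features come from monitoring).
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# each row: [cpu_load, mem_used_mb, num_processes, temperature_c]
X = np.array([[0.92, 7200, 310, 78],   # sampled near a fault
              [0.15, 2100, 120, 45],   # normal operation
              [0.88, 6900, 290, 75],
              [0.20, 2500, 130, 47]])
y = np.array([1, 0, 1, 0])             # 1 = suspected fault event

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0, stratify=y)
clf = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
print(accuracy_score(y_test, clf.predict(X_test)))
```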

    Robust Reflection Detection and Removal in Rainy Conditions using LAB and HSV Color Spaces

    In traffic monitoring systems, shadows are a main cause of errors in computer vision-based vehicle detection and classification. A great deal of research has been carried out to detect and remove shadows. However, these works have focused only on shadow problems in daytime traffic scenes. Up to now, far too little attention has been paid to the problems caused by vehicles' reflections in rainy conditions. Unlike daytime shadows, which are homogeneous gray shades, reflections are inhomogeneous regions of different colors. This characteristic makes reflections harder to detect and remove. In this paper, we therefore develop a reflection detection and removal method for single images and video. Reflections are detected by combining the L and B channels from the LAB color space with the H channel from the HSV color space. Reflection removal is performed by determining the optimal intensity of reflected areas so that they match neighboring regions. The advantage of our method is that all reflected areas are removed without affecting vehicles' textures or details.
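
    A minimal sketch of the detection idea, combining LAB and HSV channels, might look as follows. The channel thresholds and the morphological cleanup are illustrative assumptions; the paper derives the actual combination from the scene.

```python
# Minimal sketch: mark reflection candidates by thresholding the L and B
# channels of LAB and the H channel of HSV (thresholds are illustrative).
import cv2
import numpy as np

def reflection_mask(bgr):
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    L, _, B = cv2.split(lab)
    H = hsv[..., 0]
    # reflections on wet roads tend to be dark (low L) with shifted
    # chroma (B) and hue (H); combine the per-channel evidence
    mask = ((L < 90) & (B > 135) & (H > 90)).astype(np.uint8) * 255
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            np.ones((5, 5), np.uint8))
```

    For the removal step, a crude stand-in would be OpenCV inpainting over the mask, e.g. cv2.inpaint(bgr, mask, 3, cv2.INPAINT_TELEA), whereas the paper instead adjusts the intensity of reflected areas to match their neighbors.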

    Real-Time Change Detection with Convolutional Density Approximation

    Background Subtraction (BgS) is a widely researched technique for developing online Change Detection algorithms for static video cameras. Many BgS methods have employed the unsupervised, adaptive Gaussian Mixture Model (GMM) approach to produce decent backgrounds, but they lack proper consideration of scene semantics and thus produce weaker foregrounds. On the other hand, at considerable computational expense, BgS with Deep Neural Networks (DNN) can produce accurate background and foreground segments. In our research, we blend both approaches to obtain the best of each. First, we formulate a network called Convolutional Density Approximation (CDA) for direct density estimation of background models. Then, we propose a self-supervised training strategy for CDA that adaptively captures high-frequency color distributions for the corresponding backgrounds. Finally, we show that background models can indeed assist foreground extraction through an efficient Neural Motion Subtraction (NeMos) network. Our experiments verify competitive results in the balance between effectiveness and efficiency.
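
    To make "direct density estimation of background models" concrete, here is a toy PyTorch module that regresses per-pixel Gaussian-mixture parameters from a stack of recent frames. The architecture, layer sizes, and parameterization are our own illustrative assumptions, not the paper's CDA or NeMos networks.

```python
# Toy sketch: a convolutional head predicting per-pixel mixture weights,
# means, and log-variances (K components) from a frame stack.
import torch
import torch.nn as nn

K = 3  # mixture components per pixel

class TinyCDA(nn.Module):
    def __init__(self, t=8):           # t = frames in the input stack
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(t, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * K, 1))   # weight, mean, log-var per component
    def forward(self, x):
        w, mu, logvar = self.body(x).chunk(3, dim=1)
        return w.softmax(dim=1), mu.sigmoid(), logvar

stack = torch.rand(1, 8, 240, 320)     # stack of 8 grayscale frames
weights, means, logvars = TinyCDA()(stack)
```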

    TensorMoG: A Tensor-Driven Gaussian Mixture Model with Dynamic Scene Adaptation for Background Modelling

    No full text
    Decades of ongoing research have shown that background modelling is a very powerful technique, used in intelligent surveillance systems to extract features of interest, known as foregrounds. To cope with the dynamic nature of different scenes, many background modelling techniques adopt the unsupervised Gaussian Mixture Model approach with an iterative paradigm. Although this technique has had much success, a problem arises in cases of sudden scene changes with high variation (e.g., illumination changes, camera jitter): the model unknowingly and unnecessarily absorbs those effects and distorts the results. This paper therefore proposes an unsupervised, parallelized, tensor-based approach that works with entropy estimations. These entropy estimations assess the uncertainty level of a constructed background, predicting both present and future variations in the inputs, so that the model can either use incoming frames to update the background or simply discard them. Our experiments suggest that this method integrates easily into a surveillance system alongside other functions and is competitive with state-of-the-art methods in terms of processing speed.
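
    The entropy-gating idea can be sketched independently of the tensor framework: estimate the entropy of the frame-difference distribution and let only low-entropy ("silent") frames update the background. The histogram binning and threshold below are illustrative assumptions, not the paper's estimator.

```python
# Minimal sketch: Shannon entropy of the frame-difference histogram as an
# uncertainty gate; high-entropy frames are discarded from the update.
import cv2
import numpy as np

def frame_entropy(prev, curr, bins=64):
    diff = cv2.absdiff(prev, curr)                  # uint8 difference image
    hist = np.bincount((diff // (256 // bins)).ravel(), minlength=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def should_update(prev, curr, thresh=3.0):
    return frame_entropy(prev, curr) < thresh       # True on silent frames
```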

    Change Detection by Training a Triplet Network for Motion Feature Extraction

    No full text

    DAKRS: Domain Adaptive Knowledge-Based Retrieval System for Natural Language-Based Vehicle Retrieval

    No full text
    Given Natural Language (NL) text descriptions, NL-based vehicle retrieval aims to extract target vehicles from a multi-view, multi-camera traffic video pool. Solutions to this problem are challenged not only by inherent distinctions between the textual and visual domains, but also by the high dimensionality of visual data, the diverse range of textual descriptions, a major lack of high-volume datasets in this relatively new field, and prominently large domain gaps between training and test sets. To deal with these issues, existing approaches have advocated computationally expensive models that separately extract the subspaces of language and vision before blending them into a shared representation space. Through our proposed Domain Adaptive Knowledge-based Retrieval System (DAKRS), we show that by taking advantage of multi-modal information in a pretrained model, we can better focus on training robust representations in the shared space of limited labels, rather than on robust extraction of uni-modal representations with its increased computational burden. Our contributions are threefold: (i) an efficient extension of Contrastive Language-Image Pre-training (CLIP) transfer learning into a baseline text-to-image multi-modal vehicle retrieval framework; (ii) a data enhancement method that creates pseudo-vehicle tracks from the traffic video pool by leveraging the robustness of the baseline retrieval model combined with background subtraction; and (iii) a Semi-Supervised Domain Adaptation (SSDA) scheme that engineers pseudo-labels to adapt model parameters to the target domain. Experimental results on Cityflow-NL reach 63.20% MRR with 150.0M parameters, illustrating our competitive effectiveness and efficiency against the state of the art, without ensembling.
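
    Contribution (i), a CLIP-based text-to-image retrieval baseline, can be approximated with an off-the-shelf pretrained CLIP. The checkpoint name, file names, and query below are illustrative assumptions, and none of the paper's data enhancement or SSDA steps are reproduced.

```python
# Minimal sketch: rank vehicle crops against an NL query with pretrained
# CLIP via Hugging Face transformers (crop file names are placeholders).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

query = "a dark gray sedan turning left at the intersection"
crops = [Image.open(p) for p in ["track_001.jpg", "track_002.jpg"]]

inputs = processor(text=[query], images=crops,
                   return_tensors="pt", padding=True)
with torch.no_grad():
    out = model(**inputs)
# cosine-similarity logits between the query and every crop, best first
ranking = out.logits_per_text.squeeze(0).argsort(descending=True)
print(ranking.tolist())
```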

    High variation removal for background subtraction in traffic surveillance systems

    No full text
    Background subtraction is a fundamental task in video analytics and smart surveillance applications. In this field, the Gaussian mixture model is a canonical model on which many other methods build. However, the unconscious learning of this model often leads to erroneous motion detection in high-variation scenes. This article proposes a new method that incorporates entropy estimation and a removal framework into the Gaussian mixture model to improve background subtraction performance. First, entropy information is computed for each pixel of a frame to classify frames as silent or high-variation. Second, the removal framework determines which frames are used to update the background model. The proposed method produces precise results with fast execution times, two critical factors in surveillance systems that support more advanced tasks. The authors evaluated it on two publicly available test sequences from the 2014 Change Detection and Scene Background Modelling datasets, as well as on internally collected datasets of scenes with dense traffic.
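
    A simplified version of the silent/high-variation gate can be wrapped around OpenCV's stock MOG2 subtractor: segment every frame, but freeze the model update when the scene is classified as high variation. The mean-difference test below is a crude stand-in for the paper's per-pixel entropy estimate, and the file name and threshold are illustrative.

```python
# Minimal sketch: gate MOG2's model update on a frame-variation test by
# forcing learningRate=0 (freeze) for high-variation frames.
import cv2

def is_silent(prev_gray, gray, thresh=8.0):
    # crude high-variation test: mean absolute frame difference
    return cv2.absdiff(prev_gray, gray).mean() < thresh

cap = cv2.VideoCapture("traffic.mp4")   # illustrative input path
mog2 = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
ok, prev = cap.read()
while ok:
    ok, frame = cap.read()
    if not ok:
        break
    silent = is_silent(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    # learningRate=0 freezes the model; -1 restores the default rate
    fg = mog2.apply(frame, learningRate=-1.0 if silent else 0.0)
    prev = frame
cap.release()
```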