108 research outputs found

    Edge-Aware Mirror Network for Camouflaged Object Detection

    Full text link
    Existing edge-aware camouflaged object detection (COD) methods normally output the edge prediction at an early stage, yet edges are fundamental cues for the subsequent segmentation task. Due to the high visual similarity between camouflaged targets and their surroundings, an edge prior predicted at an early stage usually introduces erroneous foreground-background assignments and contaminates the features used for segmentation. To tackle this problem, we propose a novel Edge-aware Mirror Network (EAMNet), which models edge detection and camouflaged object segmentation as a cross-refinement process. More specifically, EAMNet has a two-branch architecture in which a segmentation-induced edge aggregation module and an edge-induced integrity aggregation module cross-guide the segmentation branch and the edge detection branch. Finally, a guided-residual channel attention module, which leverages residual connections and gated convolution, extracts structural details from low-level features. Quantitative and qualitative experimental results show that EAMNet outperforms existing cutting-edge baselines on three widely used COD datasets. Code is available at https://github.com/sdy1999/EAMNet. Comment: ICME 2023 paper
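
The cross-refinement idea can be pictured as two branches that repeatedly condition on each other's predictions. The following PyTorch module is a minimal sketch of that idea only; the module names, channel sizes and fusion scheme are assumptions for illustration, not the authors' architecture.

```python
# Minimal sketch of two-branch cross guidance, in the spirit of EAMNet.
# All names, channel counts and the concat-based fusion are assumptions.
import torch
import torch.nn as nn

class CrossGuidanceBlock(nn.Module):
    """One refinement step: each branch is conditioned on the other's map."""
    def __init__(self, channels: int = 64):
        super().__init__()
        # Segmentation branch refines its features using the edge map.
        self.seg_conv = nn.Sequential(
            nn.Conv2d(channels + 1, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True))
        # Edge branch refines its features using the segmentation map.
        self.edge_conv = nn.Sequential(
            nn.Conv2d(channels + 1, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True))
        self.seg_head = nn.Conv2d(channels, 1, 1)
        self.edge_head = nn.Conv2d(channels, 1, 1)

    def forward(self, seg_feat, edge_feat, seg_map, edge_map):
        # Each branch sees the other branch's current prediction.
        seg_feat = self.seg_conv(torch.cat([seg_feat, edge_map], dim=1))
        edge_feat = self.edge_conv(torch.cat([edge_feat, seg_map], dim=1))
        seg_map = torch.sigmoid(self.seg_head(seg_feat))
        edge_map = torch.sigmoid(self.edge_head(edge_feat))
        return seg_feat, edge_feat, seg_map, edge_map
```

Stacking a few such blocks and iterating them mimics the mirror-style cross refinement between the segmentation and edge branches.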

    Object detection, recognition and re-identification in video footage

    Get PDF
    There has been a significant number of security concerns in recent times; as a result, security cameras have been installed to monitor activities and to prevent crime in most public places. These analyses are carried out either through video analytics or through forensic analysis of human observations. To this end, within the research context of this thesis, a proactive machine-vision-based military recognition system has been developed to help monitor activities in a military environment. The proposed object detection, recognition and re-identification systems are presented in this thesis.

A novel technique for military personnel recognition is presented first. Initially, detected camouflaged personnel are segmented using the GrabCut segmentation algorithm. Since a camouflaged person's uniform generally appears similar at both the top and the bottom of the body, an image patch is extracted from the segmented foreground image and used as the region of interest. Subsequently, colour and texture features are extracted from each patch and used for classification. A second approach to personnel recognition is proposed through the recognition of the badge on a military person's cap. A feature-matching metric based on Speeded-Up Robust Features (SURF) extracted from the badge enables recognition of the person's arm of service.

A state-of-the-art technique for recognising vehicle types irrespective of their view angle is also presented. Vehicles are initially detected and segmented using a Gaussian Mixture Model (GMM) based foreground/background segmentation algorithm. A Canny Edge Detection (CED) stage, followed by morphological operations, is used as a pre-processing step to enhance foreground vehicle detection and segmentation. Subsequently, Region, Histogram of Oriented Gradients (HOG) and Local Binary Pattern (LBP) features are extracted from the refined foreground vehicle object and used for vehicle type recognition. Two datasets with varying front/rear and angled views are used, and combined, for testing the proposed technique.

For night-time video analytics and forensics, the thesis presents a novel approach to pedestrian detection and vehicle type recognition. A novel feature-acquisition technique, named CENTROG, is proposed for both tasks. Thermal images containing pedestrians and vehicles are used to analyse the performance of the proposed algorithms. The video is initially segmented using a GMM-based foreground segmentation algorithm, and a CED-based pre-processing step is used to improve segmentation accuracy prior to applying Census Transforms for initial feature extraction. HOG features are then extracted from the Census-transformed images and used for detecting and recognising, respectively, human and vehicular objects in thermal images.

Finally, a novel technique for people re-identification is proposed based on low-level colour features and mid-level attributes. The low-level colour histogram bin values are normalised to the range 0 to 1. A publicly available dataset (VIPeR) and a self-constructed dataset are used in experiments conducted with 7 clothing attributes and low-level colour histogram features. These 7 attributes are detected with an SVM classifier using features extracted from 5 regions of a detected human object; the 5 regions are obtained by human object segmentation and subsequent body-part sub-division, and the low-level colour features are extracted from the same regions. People are re-identified by computing the Euclidean distance between a probe and the gallery image sets.

The experiments conducted using the SVM classifier and Euclidean distance show that the proposed techniques attain all of the aforementioned goals. The colour and texture features proposed for camouflaged military personnel recognition surpass state-of-the-art methods. Similarly, experiments show that combined features perform best when recognising vehicles in different views after initial multi-view training. In the same vein, the proposed CENTROG technique performs better than the state-of-the-art CENTRIST technique for both pedestrian detection and vehicle type recognition at night-time using thermal images. Finally, we show that the proposed 7 mid-level attributes and the low-level features result in improved accuracy for people re-identification.
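
For the vehicle pipeline described above, a minimal OpenCV sketch of the front end (GMM background subtraction, Canny pre-processing, morphological clean-up, then HOG extraction on the surviving blobs) might look as follows; the thresholds, kernel size, blob-area cut-off and input file name are illustrative assumptions, not values from the thesis.

```python
# Sketch of GMM foreground segmentation with Canny/morphology
# pre-processing and HOG extraction on detected vehicle blobs.
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)
hog = cv2.HOGDescriptor()  # default 64x128 detection window

cap = cv2.VideoCapture("traffic.mp4")  # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = subtractor.apply(frame)                             # GMM foreground mask
    fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)[1]   # drop shadow pixels (127)
    edges = cv2.Canny(frame, 100, 200)                       # CED pre-processing
    mask = cv2.bitwise_or(fg, edges)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)   # close gaps in blobs
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h < 2000:                                     # ignore small blobs
            continue
        patch = cv2.resize(frame[y:y + h, x:x + w], (64, 128))
        gray = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY)
        features = hog.compute(gray)   # HOG vector for the type classifier
cap.release()
```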

    Feature-based tracking of multiple people for intelligent video surveillance.

    Get PDF
    Intelligent video surveillance is the process of performing surveillance tasks automatically with a computer vision system. It involves detecting and tracking people in a video sequence and understanding their behavior. This thesis addresses the problem of detecting and tracking multiple moving people against an unknown background. We propose a feature-based framework for tracking, which requires feature extraction and feature matching, and consider color, size, blob bounding box and motion information as features of people. In our feature-based tracking system, we propose to use the Pearson correlation coefficient for matching feature vectors with temporal templates. The occlusion problem is solved by histogram backprojection. Our tracking system is fast and free from assumptions about human structure. We implemented the tracking system using Visual C++ and OpenCV and tested it on real-world images and videos. Experimental results suggest that our tracking system achieves good accuracy and can process videos at 10-15 fps. Dept. of Computer Science. Paper copy at Leddy Library: Theses & Major Papers - Basement, West Bldg. / Call Number: Thesis2006 .A42. Source: Masters Abstracts International, Volume: 45-01, page: 0347. Thesis (M.Sc.)--University of Windsor (Canada), 2006
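
A minimal sketch of the two matching tools named in the abstract, Pearson correlation against temporal templates and hue-histogram backprojection for occlusion handling, could look like this; the feature layout, the 0.8 threshold and the function names are assumptions for illustration.

```python
# Sketch of Pearson-correlation template matching and histogram
# backprojection, assuming features are 1-D numpy vectors.
import cv2
import numpy as np

def pearson(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation coefficient between two feature vectors."""
    return float(np.corrcoef(a, b)[0, 1])

def match_track(feature: np.ndarray, templates: dict, threshold: float = 0.8):
    """Return the track id whose temporal template best matches `feature`."""
    best_id, best_r = None, threshold
    for track_id, template in templates.items():
        r = pearson(feature, template)
        if r > best_r:
            best_id, best_r = track_id, r
    return best_id  # None if no template correlates above the threshold

def backproject(frame_bgr: np.ndarray, hue_hist: np.ndarray) -> np.ndarray:
    """Hue-histogram backprojection: bright pixels likely belong to the person,
    which helps relocate a track lost during occlusion."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    return cv2.calcBackProject([hsv], [0], hue_hist, [0, 180], scale=1)
```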

    Deep learning algorithms for background subtraction and people detection

    Get PDF
    Video cameras are commonly used today in surveillance and security, autonomous driving and flying, manufacturing and healthcare. While different applications seek different types of information from the video streams, detecting changes and finding people are two key enablers for many of them. This dissertation focuses on both of these tasks: change detection, also known as background subtraction, and people detection from overhead fisheye cameras, an emerging research topic. Background subtraction has been thoroughly researched to date and the top-performing algorithms are data-driven and supervised. Crucially, during training these algorithms rely on the availability of some annotated frames from the video being tested. Instead, we propose a novel, supervised background-subtraction algorithm for unseen videos based on a fully-convolutional neural network. The input to our network consists of the current frame and two background frames captured at different time scales, along with their semantic segmentation maps. In order to reduce the chance of overfitting, we introduce novel temporal and spatio-temporal data-augmentation methods. We also propose a cross-validation training/evaluation strategy for the largest change-detection dataset, CDNet-2014, that allows a fair and video-agnostic performance comparison of supervised algorithms. Overall, our algorithm achieves significant performance gains over the state of the art in terms of F-measure, recall and precision. Furthermore, we develop a real-time variant of our algorithm with performance close to that of the state of the art.

Owing to their large field of view, fisheye cameras mounted overhead are becoming a surveillance modality of choice for large indoor spaces. However, due to their top-down viewpoint and unique optics, standing people appear radially oriented and radially distorted in fisheye images. Therefore, traditional people detection, tracking and recognition algorithms developed for standard cameras do not perform well on fisheye images. To address this, we introduce several novel people-detection algorithms for overhead fisheye cameras. Our first two algorithms address the issue of radial body orientation by applying a rotating-window approach. This approach leverages a state-of-the-art object-detection algorithm trained on standard images and applies additional pre- and post-processing to detect radially-oriented people. Our third algorithm addresses both the radial body orientation and the distortion by applying an end-to-end neural network with a novel angle-aware loss function, trained on fisheye images. This algorithm outperforms the first two approaches and is two orders of magnitude faster. Finally, we introduce three spatio-temporal extensions of the end-to-end approach to deal with intermittent misses and false detections. In order to evaluate the performance of our algorithms, we collected, annotated and made publicly available four datasets composed of overhead fisheye videos. We provide a detailed analysis of our algorithms on these datasets and show that they significantly outperform the current state of the art.
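
The rotating-window approach can be sketched as follows: rotate the fisheye frame in fixed angular steps so that radially oriented people appear upright to a standard detector, then map the detections back to the original frame. The step size, the detector callback and the omission of cross-angle non-maximum suppression are simplifying assumptions, not the dissertation's exact procedure.

```python
# Sketch of rotating-window people detection on overhead fisheye frames.
import cv2
import numpy as np

def rotating_window_detect(fisheye_frame, detect_upright, step_deg=15):
    """detect_upright(image) -> list of (x, y, w, h) boxes for upright people."""
    h, w = fisheye_frame.shape[:2]
    center = (w / 2, h / 2)
    detections = []
    for angle in range(0, 360, step_deg):
        # Rotate the frame so people at this bearing stand upright.
        M = cv2.getRotationMatrix2D(center, angle, 1.0)
        rotated = cv2.warpAffine(fisheye_frame, M, (w, h))
        M_inv = cv2.invertAffineTransform(M)
        for (x, y, bw, bh) in detect_upright(rotated):
            # Map the box centre back into the original frame.
            cx, cy = x + bw / 2, y + bh / 2
            ox, oy = M_inv @ np.array([cx, cy, 1.0])
            detections.append((ox, oy, bw, bh, angle))
    # A real system would apply non-maximum suppression across angles here.
    return detections
```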

    Parametric region-based foreground segmentation in planar and multi-view sequences

    Get PDF
    Foreground segmentation in video sequences is an important area of image processing that attracts great interest from the scientific community, since it makes possible the detection of the objects that appear in the sequences under analysis and enables the correct performance of high-level applications which use foreground segmentation as an initial step. The current Ph.D. thesis entitled Parametric Region-Based Foreground Segmentation in Planar and Multi-View Sequences details, in the following pages, the research work carried out within this field. In this investigation, we propose to use parametric probabilistic models at pixel and region level in order to model the different classes involved in classifying the regions of the image: foreground, background and, in some sequences, shadow. The development is presented in the following chapters as a generalization of the techniques proposed for object segmentation in 2D planar sequences to a 3D multi-view environment, where we establish a cooperative relationship between all the sensors recording the scene. Hence, different scenarios have been analyzed in this thesis in order to improve foreground segmentation techniques. In the first part of this research, we present segmentation methods appropriate for 2D planar scenarios. We start with foreground segmentation in static-camera sequences, where a system that combines a pixel-wise background model with region-based foreground and shadow models is proposed in a Bayesian classification framework. The research continues with the application of this method to moving-camera scenarios, where the Bayesian framework is developed between the foreground and background classes, both characterized with region-based models, in order to obtain a robust foreground segmentation for this kind of sequence. The second stage of the research is devoted to applying these 2D techniques to multi-view acquisition setups, where several cameras record the scene at the same time. At the beginning of this section, we propose a foreground segmentation system for sequences recorded with color and depth sensors, which combines the probabilistic models created for the background and foreground classes in each of the views, taking into account the reliability each sensor presents. The investigation continues by proposing foreground segmentation methods for multi-view smart-room scenarios. In these sections, we design two systems where foreground segmentation and 3D reconstruction are combined in order to improve the results of each process. The proposals end with the presentation of a multi-view segmentation system where a foreground probabilistic model is defined in 3D space to gather all the object information that appears in the views.
The results presented in each of the proposals show that both foreground segmentation and 3D reconstruction can be improved in these scenarios by using parametric probabilistic models to model the objects to be segmented, thus introducing the object information into a Bayesian classification framework.
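
As a minimal sketch of the pixel-wise side of such a Bayesian framework, the following keeps a single Gaussian per pixel for the background and a flat foreground likelihood; the thesis itself uses richer region-based foreground and shadow models, so every distribution and constant here is a simplifying assumption.

```python
# Sketch: per-pixel Gaussian background model with a Bayesian FG/BG decision.
import numpy as np

class GaussianBackground:
    def __init__(self, first_frame: np.ndarray, alpha: float = 0.01):
        self.mean = first_frame.astype(np.float64)
        self.var = np.full_like(self.mean, 15.0 ** 2)  # assumed initial variance
        self.alpha = alpha  # learning rate for background updates

    def classify(self, frame, p_fg=0.3, fg_likelihood=1.0 / 256):
        """MAP decision per pixel: foreground iff P(fg | x) > P(bg | x)."""
        x = frame.astype(np.float64)
        diff = x - self.mean
        bg_likelihood = (np.exp(-0.5 * diff ** 2 / self.var)
                         / np.sqrt(2 * np.pi * self.var))
        # Bayes rule with priors; the evidence cancels in the comparison.
        fg = (p_fg * fg_likelihood) > ((1 - p_fg) * bg_likelihood)
        fg_mask = fg.any(axis=-1) if frame.ndim == 3 else fg
        # Update the model only where the pixel was classified as background.
        bg = ~fg_mask if frame.ndim == 2 else ~fg_mask[..., None]
        self.mean = np.where(bg, (1 - self.alpha) * self.mean + self.alpha * x,
                             self.mean)
        self.var = np.where(bg, (1 - self.alpha) * self.var
                            + self.alpha * diff ** 2, self.var)
        return fg_mask
```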

    Background Subtraction with Real-time Semantic Segmentation

    Full text link
    Accurate and fast foreground object extraction is very important for object tracking and recognition in video surveillance. Although many background subtraction (BGS) methods have been proposed in the recent past, it is still regarded as a tough problem due to the variety of challenging situations that occur in real-world scenarios. In this paper, we explore this problem from a new perspective and propose a novel background subtraction framework with real-time semantic segmentation (RTSS). Our proposed framework consists of two components, a traditional BGS segmenter B and a real-time semantic segmenter S. The BGS segmenter B aims to construct background models and segment foreground objects. The real-time semantic segmenter S is used to refine the foreground segmentation outputs as feedback for improving the model-updating accuracy. B and S work in parallel on two threads. For each input frame I_t, the BGS segmenter B computes a preliminary foreground/background (FG/BG) mask B_t. At the same time, the real-time semantic segmenter S extracts the object-level semantics S_t. Then, some specific rules are applied to B_t and S_t to generate the final detection D_t. Finally, the refined FG/BG mask D_t is fed back to update the background model. Comprehensive experiments on the CDnet 2014 dataset demonstrate that our proposed method achieves state-of-the-art performance among all unsupervised background subtraction methods while operating in real time, and even performs better than some deep-learning-based supervised algorithms. In addition, our proposed framework is very flexible and has the potential for generalization.
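
The kind of rule-based fusion of B_t and S_t described above can be sketched in a few lines; the thresholds and the override rules below are illustrative assumptions, not the paper's exact rules.

```python
# Sketch: fuse a binary BGS mask B_t with per-pixel semantic foreground
# probabilities S_t into the final detection D_t.
import numpy as np

def fuse(bgs_mask: np.ndarray, semantic_fg_prob: np.ndarray,
         t_low: float = 0.1, t_high: float = 0.9) -> np.ndarray:
    """Semantics override the BGS mask only where they are confident."""
    d = bgs_mask.astype(bool).copy()
    d[semantic_fg_prob >= t_high] = True   # confidently foreground
    d[semantic_fg_prob <= t_low] = False   # confidently background
    return d  # D_t, which is fed back to update the background model
```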

    Listening in a second language: a pupillometric investigation of the effect of semantic and acoustic cues on listening effort

    Get PDF
    Non-native listeners spend a great part of their day immersed in a second-language environment. Challenges arise because many linguistic interactions happen in noisy environments and because their linguistic knowledge is imperfect. Pupillometry has been shown to provide a reliable measure of cognitive effort during listening. This research aims to investigate, by means of pupillometry, how listening effort is modulated by the intelligibility level of the listening task, by the availability of contextual and acoustic cues, and by the language background of listeners (native vs non-native). In Study 1, listening effort in native and non-native listeners was evaluated during a sentence-perception task in noise across different intelligibility levels. Results indicated that listening effort was increased for non-native compared to native listeners when intelligibility was equated across the two groups. In Study 2, using a similar method, materials included predictable and semantically anomalous sentences presented in a plain and a clear speaking style. Results confirmed increased listening effort for non-native compared to native listeners. Listening effort was reduced overall when participants attended to clear speech. Moreover, effort reduction after the sentence ended was delayed for less proficient non-native listeners. In Study 3, the contribution of semantic content spanning several sentences was evaluated using lists of semantically related and unrelated stimuli. The presence of semantic cues across sentences led to a reduction in listening effort for native listeners, as reflected by the peak pupil dilation, while non-native listeners did not show the same benefit. In summary, this research consistently showed increased listening effort for non-native compared to native listeners at equated levels of intelligibility. Additionally, the use of a clear speaking style proved to be an effective strategy to enhance comprehension and reduce cognitive effort in both native and non-native listeners.