807 research outputs found

    Radar and RGB-depth sensors for fall detection: a review

    This paper reviews recent works in the literature on the use of systems based on radar and RGB-Depth (RGB-D) sensors for fall detection, and discusses outstanding research challenges and trends in this field. Systems that reliably detect fall events and promptly alert carers and first responders have gained significant interest in the past few years, in order to address the societal issue of an increasing number of elderly people living alone, with the associated risk of falls and their consequences in terms of health treatments, reduced well-being, and costs. The interest in radar and RGB-D sensors stems from their capability to enable contactless and non-intrusive monitoring, an advantage for practical deployment and for users’ acceptance and compliance compared with other sensor technologies, such as video cameras or wearables. Furthermore, the possibility of combining and fusing information from these heterogeneous types of sensors is expected to improve the overall performance of practical fall detection systems. Researchers from different fields can benefit from the multidisciplinary knowledge and awareness of the latest developments in radar and RGB-D sensors that this paper provides.

    Advances in Monocular Exemplar-based Human Body Pose Analysis: Modeling, Detection and Tracking

    This thesis contributes to the analysis of human body pose from image sequences acquired with a single camera. This topic has a wide range of potential applications in video surveillance, video games, and biomedical applications. Exemplar-based techniques have been successful; however, their accuracy depends on the similarity of the camera viewpoint and the scene properties between the training and test images. Given a training dataset captured with a small number of fixed cameras parallel to the ground, three possible scenarios with increasing levels of difficulty have been identified and analysed: 1) a static camera parallel to the ground, 2) a fixed surveillance camera with a considerably different viewing angle, and 3) a video sequence captured with a moving camera, or simply a single static image.

    Methods for Analysing Endothelial Cell Shape and Behaviour in Relation to the Focal Nature of Atherosclerosis

    The aim of this thesis is to develop automated methods for the analysis of the spatial patterns and the functional behaviour of endothelial cells, viewed under microscopy, with applications to the understanding of atherosclerosis. Initially, a radial search approach to segmentation was attempted in order to trace the cell and nuclei boundaries using a maximum likelihood algorithm; it was found inadequate for detecting the weak cell boundaries present in the available data. A parametric cell shape model was then introduced to fit an equivalent ellipse to the cell boundary by matching phase-invariant orientation fields of the image and a candidate cell shape. This approach succeeded on good-quality images, but failed on images with weak cell boundaries. Finally, a support vector machine-based method, relying on a rich set of visual features and a small but high-quality training dataset, was found to work well on large numbers of cells even in the presence of strong intensity variations and imaging noise. Using the segmentation results, several standard shear-stress-dependent parameters of cell morphology were studied, and evidence of similar behaviour in some cell shape parameters was obtained for in-vivo cells and their nuclei. Nuclear and cell orientations around immature and mature aortas were broadly similar, suggesting that the pattern of flow direction near the wall stayed approximately constant with age. The relation was less strong for the cell and nuclear length-to-width ratios. Two novel shape analysis approaches were attempted to find other properties of cell shape that could be used to annotate or characterise patterns, since a wide variability in cell and nuclear shapes was observed which did not appear to fit the standard parameterisations. Although no firm conclusions can yet be drawn, this work lays the foundation for future studies of cell morphology. To draw inferences about patterns in the functional response of cells to flow, which may play a role in the progression of disease, single-cell analysis was performed using calcium-sensitive fluorescence probes. Calcium transient rates were found to change with flow but, more importantly, local patterns of synchronisation in multi-cellular groups were discernible and appeared to change with flow. The patterns suggest a new functional mechanism in the flow mediation of cell-cell calcium signalling.
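
    As a rough illustration of the "equivalent ellipse" shape parameters discussed above, the sketch below fits an ellipse to a binary cell mask using second-order image moments. This is a standard moment-based stand-in, not the thesis's phase-invariant orientation-field matching; the function name and the synthetic mask are illustrative assumptions.

```python
import numpy as np

def equivalent_ellipse(mask):
    """Fit an equivalent ellipse to a binary mask via second-order moments.

    Returns centroid, semi-major/semi-minor axis lengths and orientation.
    A moment-based stand-in for the parametric shape fit described above.
    """
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()
    cov = np.cov(np.stack([xs - cx, ys - cy]))     # coordinate covariance
    evals, evecs = np.linalg.eigh(cov)             # ascending eigenvalues
    semi_minor, semi_major = 2.0 * np.sqrt(evals)  # ellipse with equal moments
    angle = np.arctan2(evecs[1, 1], evecs[0, 1])   # major-axis orientation
    return (cx, cy), semi_major, semi_minor, angle

# Synthetic elongated "cell": semi-axes 30 and 10 pixels
yy, xx = np.mgrid[0:100, 0:100]
mask = ((xx - 50) / 30) ** 2 + ((yy - 50) / 10) ** 2 <= 1
centre, a, b, angle = equivalent_ellipse(mask)
print(centre, a / b)  # length-to-width ratio, as studied in the thesis
```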

    Review of Person Re-identification Techniques

    Person re-identification across different surveillance cameras with disjoint fields of view has become one of the most interesting and challenging subjects in the area of intelligent video surveillance. Although several methods have been developed and proposed, certain limitations and unresolved issues remain. In all of the existing re-identification approaches, feature vectors are extracted from segmented still images or video frames. Different similarity or dissimilarity measures have been applied to these vectors. Some methods have used simple constant metrics, whereas others have utilised models to obtain optimised metrics. Some have created models based on local colour or texture information, and others have built models based on the gait of people. In general, the main objective of all these approaches is to achieve a higher accuracy rate and lower computational costs. This study summarises several developments in the recent literature and discusses the various available methods used in person re-identification. Specifically, their advantages and disadvantages are mentioned and compared.
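
    The review's contrast between simple constant metrics and optimised (learned) metrics can be made concrete with a small sketch: a plain Euclidean distance versus a Mahalanobis-style distance parameterised by a matrix M that a metric-learning method would supply. The feature dimension and variable names here are assumptions, not from the paper.

```python
import numpy as np

def euclidean(a, b):
    # "Simple constant metric": plain L2 distance between feature vectors
    return float(np.linalg.norm(a - b))

def mahalanobis(a, b, M):
    # "Optimised metric": distance shaped by a learned PSD matrix M
    d = a - b
    return float(np.sqrt(d @ M @ d))

rng = np.random.default_rng(0)
gallery = rng.normal(size=128)   # hypothetical appearance descriptor
probe = rng.normal(size=128)
M = np.eye(128)                  # identity M reduces to Euclidean distance
assert np.isclose(euclidean(gallery, probe), mahalanobis(gallery, probe, M))
```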

    Robust gait recognition under variable covariate conditions

    Gait is a weak biometric when compared to face, fingerprint or iris because it can be easily affected by various conditions. These are known as covariate conditions and include clothing, carrying, speed, shoes and view, among others. In the presence of variable covariate conditions, gait recognition is a hard problem yet to be solved, with no working system reported. In this thesis, a novel gait representation, the Gait Flow Image (GFI), is proposed to extract more discriminative information from a gait sequence. GFI extracts the relative motion of body parts in different directions in separate motion descriptors. Compared to the existing model-free gait representations, GFI is more discriminative and robust to changes in covariate conditions. In this thesis, gait recognition approaches are evaluated without assuming cooperative subjects, i.e. both the gallery and the probe sets consist of gait sequences under different and unknown covariate conditions. The results indicate that the performance of the existing approaches drops drastically under this more realistic set-up. It is argued that selecting the gait features which are invariant to changes in covariate conditions is the key to developing a gait recognition system without subject cooperation. To this end, the Gait Entropy Image (GEnI) is proposed to perform automatic feature selection on each pair of gallery and probe gait sequences. Moreover, an Adaptive Component and Discriminant Analysis is formulated which seamlessly integrates the feature selection method with subspace analysis for fast and robust recognition. Among the various factors that affect the performance of gait recognition, change in viewpoint poses the biggest problem and is treated separately. A novel approach to address this problem is proposed in this thesis by using the Gait Flow Image in a cross-view gait recognition framework where the view angle of a probe gait sequence is unknown. A Gaussian Process classification technique is formulated to estimate the view angle of each probe gait sequence. To measure the similarity of gait sequences across view angles, the correlation of gait sequences from different views is modelled using Canonical Correlation Analysis, and the correlation strength is used as a similarity measure. This differs from existing approaches, which reconstruct gait features in different views through 2D view transformation or 3D calibration. Without explicit reconstruction, the proposed method can cope with feature mismatch across views and is more robust against feature noise.
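
    The Gait Entropy Image admits a compact sketch: per-pixel Shannon entropy of the foreground probability over a gait cycle of aligned binary silhouettes, which highlights dynamic regions (limbs) and suppresses static ones (torso). This follows the commonly published formulation of GEnI; the thesis's full pipeline (feature selection, subspace analysis) is not reproduced here.

```python
import numpy as np

def gait_entropy_image(silhouettes):
    """Compute a GEnI-style map from aligned binary silhouettes (T, H, W).

    Per pixel: p = foreground probability over the cycle, then the binary
    Shannon entropy -(p*log2(p) + (1-p)*log2(1-p)). Dynamic pixels score
    high, static ones low, giving robustness to appearance covariates.
    """
    p = silhouettes.astype(float).mean(axis=0)
    p = np.clip(p, 1e-6, 1 - 1e-6)   # avoid log(0)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
```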

    Gait recognition in the wild using shadow silhouettes

    Gait recognition systems allow identification of users relying on features acquired from their body movement while walking. This paper discusses the main factors affecting the gait features that can be acquired from a 2D video sequence, proposing a taxonomy to classify them across four dimensions. It also explores the possibility of obtaining users’ gait features from their shadow silhouettes by proposing a novel gait recognition system. The system includes novel methods for: (i) shadow segmentation, (ii) walking direction identification, and (iii) shadow silhouette rectification. The shadow segmentation is performed by fitting a line through the feet positions of the user obtained from the gait texture image (GTI). The direction of the fitted line is then used to identify the walking direction of the user. Finally, the shadow silhouettes thus obtained are rectified to compensate for the distortions and deformations resulting from the acquisition setup, using the proposed four-point correspondence method. The paper additionally presents a new database, consisting of 21 users moving along two walking directions, to test the proposed gait recognition system. Results show that the performance of the proposed system is equivalent to that of the state-of-the-art in a constrained setting, while also performing well in the wild, where most state-of-the-art methods fail. The results also highlight the advantages of using rectified shadow silhouettes over body silhouettes under certain conditions.
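
    The two geometric steps described above can be sketched with OpenCV: a least-squares line fit through the feet positions to recover the walking direction, and a four-point correspondence to rectify the shadow region. All coordinates below are hypothetical placeholders, not values from the paper's GTI.

```python
import numpy as np
import cv2

# Hypothetical feet positions (x, y) extracted from the gait texture image
feet = np.array([[40, 210], [80, 214], [120, 219], [160, 223]], np.float32)

# Fit a line through the feet; its direction gives the walking direction
vx, vy, x0, y0 = cv2.fitLine(feet, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
walking_angle = float(np.degrees(np.arctan2(vy, vx)))

# Four-point correspondence: map the distorted shadow quadrilateral onto an
# upright rectangle (corner coordinates are illustrative placeholders)
src = np.float32([[30, 200], [170, 225], [150, 60], [50, 40]])
dst = np.float32([[0, 240], [160, 240], [160, 0], [0, 0]])
H = cv2.getPerspectiveTransform(src, dst)
# rectified = cv2.warpPerspective(shadow_silhouette, H, (160, 240))
```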

    Automatic annotation for weakly supervised learning of detectors

    Object detection in images and action detection in videos are among the most widely studied computer vision problems, with applications in consumer photography, surveillance, and automatic media tagging. Typically, these standard detectors are fully supervised, that is, they require a large body of training data where the locations of the objects/actions in images/videos have been manually annotated. With the emergence of digital media and the rise of high-speed internet, raw images and video are available for little to no cost. However, the manual annotation of object and action locations remains tedious, slow, and expensive. As a result, there has been great interest in training detectors with weak supervision, where only the presence or absence of the object/action in an image/video is needed, not its location. This thesis presents approaches for weakly supervised learning of object/action detectors, with a focus on automatically annotating object and action locations in images/videos using only binary weak labels indicating the presence or absence of the object/action. First, a framework for weakly supervised learning of object detectors in images is presented. In the proposed approach, a variation of the multiple instance learning (MIL) technique for automatically annotating object locations in weakly labelled data is presented which, unlike existing approaches, uses inter-class and intra-class cue fusion to obtain the initial annotation. The initial annotation is then used to start an iterative process in which standard object detectors are used to refine the location annotation. To ensure that the iterative training of detectors does not drift from the object of interest, a scheme for detecting model drift is also presented. Furthermore, unlike most other methods, this weakly supervised approach is evaluated on data without manual pose (object orientation) annotation. Second, an analysis of the initial annotation of objects, using inter-class and intra-class cues, is carried out. From the analysis, a new method based on negative mining (NegMine) is presented for the initial annotation of both object and action data. The NegMine-based approach is a much simpler formulation, using only an inter-class measure and requiring no complex combinatorial optimisation, yet it can still meet or outperform existing approaches, including the previously presented inter-intra class cue fusion approach. Furthermore, NegMine can be fused with existing approaches to boost their performance. Finally, the thesis takes a step back and looks at the use of generic object detectors as prior knowledge in weakly supervised learning of object detectors. These generic object detectors are typically based on sampling saliency maps that indicate whether a pixel belongs to the background or foreground. A new approach to generating saliency maps is presented that, unlike existing approaches, looks beyond the current image of interest and into images similar to the current image. We show that our generic object proposal method can be used by itself to annotate the weakly labelled object data with surprisingly high accuracy.
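
    One schematic reading of the negative-mining idea: score each candidate window in a weakly labelled positive image by its distance to the nearest feature from negative (object-absent) images, and take the least "negative-like" window as the initial annotation. The thesis's actual measure and features will differ; everything below is an assumption for illustration.

```python
import numpy as np

def negmine_select(candidate_feats, negative_feats):
    """Pick the candidate region least explained by negative images.

    Score = distance to the nearest negative feature; select the argmax.
    Window proposal and feature extraction are assumed to happen elsewhere.
    """
    diff = candidate_feats[:, None, :] - negative_feats[None, :, :]
    dists = (diff ** 2).sum(axis=-1)   # (n_candidates, n_negatives)
    scores = dists.min(axis=1)         # nearest-negative distance
    return int(scores.argmax()), scores

rng = np.random.default_rng(1)
cands = rng.normal(size=(50, 64))   # hypothetical window descriptors
negs = rng.normal(size=(500, 64))   # descriptors from negative images
best, _ = negmine_select(cands, negs)
```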

    Human shape modelling for carried object detection and segmentation

    Detecting carried objects is one of the requirements for developing systems that reason about activities involving people and objects. This thesis presents novel methods to detect and segment carried objects in surveillance videos. The contributions are divided into three main chapters. In the first, we introduce our carried object detector, which allows detecting a generic class of objects. We formulate carried object detection as a contour classification problem: moving object contours are classified into two classes, carried object and person. A probability mask for the person’s contours is generated based on an ensemble of contour exemplars (ECE) of walking/standing humans in different viewing directions. Contours that do not fall within the generated hypothesis mask are considered candidates for carried object contours. Then, a region is assigned to each carried object candidate contour using Biased Normalized Cut (BNC), with a probability obtained by a weighted function of its overlap with the person’s contour hypothesis mask and the segmented foreground. Finally, carried objects are detected by applying a Non-Maximum Suppression (NMS) method which eliminates low-scoring carried object candidates. The second contribution presents an approach to detect carried objects with an innovative method for extracting features from foreground regions based on their local contours and superpixel information. Initially, a moving object in a video frame is segmented into multi-scale superpixels. Then, human-like regions in the foreground area are identified by matching a set of features extracted from superpixels against a codebook of local shapes. Here, the definition of human-like regions is equivalent to the person’s probability map in our first proposed method (ECE). Our second carried object detector benefits from the novel feature descriptor to produce a more accurate probability map. The complement of the superpixels’ matching probabilities to human-like regions in the foreground is taken as a carried object probability map. Each group of neighboring superpixels with a high carried object probability and strong edge support is then merged to form a carried object. Finally, in the third contribution, we present a method to detect and segment carried objects. The proposed method adopts the new superpixel-based descriptor to identify carried object-like candidate regions using human shape modeling. Using spatio-temporal information of the candidate regions, the consistency of recurring carried object candidates viewed over time is obtained and serves to detect carried objects. Lastly, the detected carried object regions are refined by integrating information about their appearance and location over time with a spatio-temporal extension of GrabCut. This final stage is used to accurately segment carried objects in frames.
Our methods are fully automatic and make minimal assumptions about the person, the carried objects, and the videos. We evaluate the aforementioned methods using two available datasets, PETS 2006 and i-Lids AVSS, and compare our detector and segmentation methods against a state-of-the-art detector. Experimental evaluation on the two datasets demonstrates that both our carried object detection and segmentation methods significantly outperform competing algorithms.
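
    Non-Maximum Suppression, used above to prune low-scoring carried object candidates, is a standard greedy procedure; a generic sketch follows (the thesis's own candidate scoring is not reproduced).

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS over (x1, y1, x2, y2) boxes: keep the highest-scoring
    candidate, drop lower-scoring candidates that overlap it too much."""
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        x1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        y1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        x2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        y2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_rest = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                    (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_rest - inter)
        order = order[1:][iou <= iou_thresh]
    return keep

boxes = np.array([[10, 10, 60, 60], [12, 12, 62, 62], [100, 100, 150, 150]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # -> [0, 2]: the near-duplicate box is suppressed
```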
