
    Representation of Samba dance gestures, using a multi-modal analysis approach

    In this paper we propose an approach for representing dance gestures in Samba dance. The representation is based on a video analysis of body movements, carried out from the viewpoint of the musical meter. Our method yields the periods of movement, a measure of energy, and a visual representation of periodic movement in dance. The method is applied to a limited corpus of Samba dances and music, which illustrates the usefulness of the approach.
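    The paper's method is not reproduced here, but the core idea of recovering a movement period and an energy measure from a video-derived signal can be sketched as follows; the signal definition, frame rate, and period search range are illustrative assumptions, not the paper's parameters.

```python
# Minimal sketch (not the authors' implementation): estimating the dominant
# period and energy of a 1-D body-movement signal via autocorrelation, one
# common way to relate dance motion to the musical meter.
import numpy as np

def movement_period_and_energy(signal, fps, min_period_s=0.2, max_period_s=2.0):
    """Return (period in seconds, mean energy) of a movement signal.

    `signal` could be, e.g., per-frame optical-flow magnitude summed over
    the dancer's body region; `fps` is the video frame rate. Both are
    assumptions made for illustration.
    """
    x = np.asarray(signal, float) - np.mean(signal)
    energy = float(np.mean(x ** 2))                    # simple energy measure
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # autocorrelation, lag >= 0
    lo = int(min_period_s * fps)
    hi = min(int(max_period_s * fps), len(ac) - 1)
    lag = lo + int(np.argmax(ac[lo:hi]))               # strongest repetition lag
    return lag / fps, energy

# Example: a noisy 0.5 s periodic movement sampled at 30 fps.
t = np.arange(0, 10, 1 / 30.0)
sig = np.sin(2 * np.pi * 2.0 * t) + 0.3 * np.random.randn(len(t))
period, e = movement_period_and_energy(sig, fps=30)
print(f"period = {period:.2f} s, energy = {e:.2f}")
```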

    Fast, collaborative acquisition of multi-view face images using a camera network and its impact on real-time human identification

    Biometric systems have typically been designed to operate in controlled environments, based on previously acquired photographs and videos. But recent terror attacks, security threats, and intrusion attempts have necessitated a transition to modern biometric systems that can identify humans in real time in unconstrained environments. Distributed camera networks are appropriate for unconstrained scenarios because they can provide multiple views of a scene, offering tolerance to the variable pose of a human subject and to possible occlusions. In dynamic environments, face images continually arrive at the base station with differing quality, pose, and resolution, and designing a fusion strategy therefore poses significant challenges. Such a scenario demands that only the relevant information be processed and that the verdict (match/no match) regarding a particular subject be released quickly, yet accurately, so that more subjects in the scene can be evaluated. To address these challenges, we designed a wireless data acquisition system capable of acquiring multi-view faces accurately and at a rapid rate. Epipolar geometry is exploited to achieve high multi-view face detection rates. Face images are labeled with their corresponding poses and transmitted to the base station. To evaluate the impact of face images acquired using our real-time acquisition system on overall recognition accuracy, we interface it with a face matching subsystem, thus creating a prototype real-time multi-view face recognition system. For frontal face matching, we use the commercial PittPatt software; for non-frontal matching, we use a Local Binary Pattern based classifier. Matching scores obtained from both frontal and non-frontal face images are fused for the final classification. Our results show a significant improvement in recognition accuracy, especially when the frontal face images are of low resolution.
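    As a rough illustration of the score-level fusion step, frontal and non-frontal scores can be brought to a common scale and combined with a weighted sum; the normalization, weight, and threshold below are assumptions, not the paper's parameters.

```python
# Hedged sketch of score-level fusion: frontal scores (e.g., from a
# commercial matcher) and non-frontal LBP-based scores are min-max
# normalized, then combined for the final match / no-match decision.
import numpy as np

def fuse_scores(frontal, nonfrontal, w_frontal=0.7, threshold=0.5):
    """Normalize each modality's scores to [0, 1], then take a weighted sum."""
    def norm(s):
        s = np.asarray(s, dtype=float)
        rng = s.max() - s.min()
        return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)

    fused = w_frontal * norm(frontal) + (1 - w_frontal) * norm(nonfrontal)
    return fused, fused >= threshold   # scores and match/no-match verdicts

scores_f = [0.92, 0.40, 0.75]   # frontal similarity scores per subject
scores_n = [0.81, 0.35, 0.20]   # non-frontal (LBP) scores per subject
fused, verdicts = fuse_scores(scores_f, scores_n)
print(fused, verdicts)
```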

    Audio-visual foreground extraction for event characterization

    This paper presents a new method that integrates audio and visual information for scene analysis in a typical surveillance scenario, using only one camera and one monaural microphone. Visual information is analyzed by a standard visual background/foreground (BG/FG) modelling module, enhanced with a novelty detection stage and coupled with an audio BG/FG modelling scheme. The audio-visual association is performed on-line by exploiting the concept of synchrony. Experimental tests on classification and clustering of events demonstrate the potential of the proposed approach, also in comparison with results obtained using each modality alone.
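    A minimal sketch of the synchrony idea, assuming two per-frame signals (audio foreground energy and visual foreground area) that are not specified in this exact form in the paper:

```python
# Illustrative sketch: associating audio and visual foregrounds by their
# synchrony, measured as a sliding-window Pearson correlation between
# per-frame audio FG energy and the number of visual FG pixels.
import numpy as np

def synchrony(audio_fg_energy, visual_fg_area, win=25):
    """Sliding-window Pearson correlation between two per-frame signals."""
    a = np.asarray(audio_fg_energy, float)
    v = np.asarray(visual_fg_area, float)
    out = np.zeros(len(a) - win + 1)
    for i in range(len(out)):
        wa, wv = a[i:i + win], v[i:i + win]
        sa, sv = wa.std(), wv.std()
        out[i] = (((wa - wa.mean()) * (wv - wv.mean())).mean() / (sa * sv)
                  if sa > 0 and sv > 0 else 0.0)
    return out   # values near 1 flag audio-visually synchronous events
```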

    Spatial and temporal background modelling of non-stationary visual scenes

    The prevalence of electronic imaging systems in everyday life has become increasingly apparent in recent years. Applications are to be found in medical scanning, automated manufacture, and perhaps most significantly, surveillance. Metropolitan areas, shopping malls, and road traffic management all employ and benefit from an unprecedented quantity of video cameras for monitoring purposes. But the high cost and limited effectiveness of employing humans as the final link in the monitoring chain has driven scientists to seek solutions based on machine vision techniques. Whilst the field of machine vision has enjoyed consistent rapid development in the last 20 years, some of the most fundamental issues still remain to be solved in a satisfactory manner. Central to a great many vision applications is the concept of segmentation, and in particular, most practical systems perform background subtraction as one of the first stages of video processing. This involves separation of 'interesting foreground' from the less informative but persistent background. But the definition of what is 'interesting' is somewhat subjective, and liable to be application specific. Furthermore, the background may be interpreted as including the visual appearance of normal activity of any agents present in the scene, human or otherwise. Thus a background model might be called upon to absorb lighting changes, moving trees and foliage, or normal traffic flow and pedestrian activity, in order to effect what might be termed in 'biologically-inspired' vision as pre-attentive selection. This challenge is one of the Holy Grails of the computer vision field, and consequently the subject has received considerable attention.

    This thesis sets out to address some of the limitations of contemporary methods of background segmentation by investigating methods of inducing local mutual support amongst pixels in three starkly contrasting paradigms: (1) locality in the spatial domain, (2) locality in the short-term time domain, and (3) locality in the domain of cyclic repetition frequency.

    Conventional per-pixel models, such as those based on Gaussian Mixture Models, offer no spatial support between adjacent pixels at all. At the other extreme, eigenspace models impose a structure in which every image pixel bears the same relation to every other pixel. But Markov Random Fields permit the definition of arbitrary local cliques by construction of a suitable graph, and are used here to facilitate a novel structure capable of exploiting probabilistic local co-occurrence of adjacent Local Binary Patterns. The result is a method exhibiting strong sensitivity to multiple learned local pattern hypotheses, whilst relying solely on monochrome image data.

    Many background models enforce temporal consistency constraints on a pixel in an attempt to confirm background membership before it is accepted as part of the model, and typically some control over this process is exercised by a learning-rate parameter. But in busy scenes, a true background pixel may be visible for a relatively small fraction of the time and in a temporally fragmented fashion, thus hindering such background acquisition. However, support in terms of temporal locality may still be achieved by using Combinatorial Optimization to derive short-term background estimates which induce a similar consistency, but are considerably more robust to disturbance. A novel technique is presented here in which the short-term estimates act as 'pre-filtered' data from which a far more compact eigen-background may be constructed.

    Many scenes entail elements exhibiting repetitive periodic behaviour. Some road junctions employing traffic signals are among these, yet little is to be found amongst the literature regarding the explicit modelling of such periodic processes in a scene. Previous work focussing on gait recognition has demonstrated approaches based on recurrence of self-similarity by which local periodicity may be identified. The present work harnesses and extends this method in order to characterize scenes displaying multiple distinct periodicities by building a spatio-temporal model. The model may then be used to highlight abnormality in scene activity. Furthermore, a Phase-Locked Loop technique with a novel phase detector is detailed, enabling such a model to maintain correct synchronization with scene activity in spite of noise and drift of periodicity.

    This thesis contends that these three approaches are all manifestations of the same broad underlying concept: local support in each of the space, time, and frequency domains, and furthermore, that this support can be harnessed practically, as is demonstrated experimentally.
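    For context, the conventional per-pixel Gaussian Mixture Model background subtractor, the baseline the thesis argues offers no spatial support, can be run in a few lines with OpenCV; the video path and parameter values below are placeholders, not the thesis's configuration.

```python
# Baseline per-pixel GMM background subtraction with OpenCV's MOG2.
# This is the conventional approach discussed above, not the thesis's
# proposed MRF/LBP, eigen-background, or periodicity models.
import cv2

cap = cv2.VideoCapture("input.avi")   # placeholder path
# history and varThreshold govern the learning-rate / sensitivity trade-off
# discussed in the abstract; detectShadows marks shadow pixels separately.
mog2 = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                          detectShadows=True)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = mog2.apply(frame)       # 255 = foreground, 127 = shadow
    # 'interesting foreground' is whatever fails the learned background test
    cv2.imshow("foreground", fg_mask)
    if cv2.waitKey(1) == 27:          # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```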

    Anomaly detection in moving-camera videos with sparse and low-rank matrix decompositions

    This work presents two methods based on sparse decompositions that can detect anomalies in video sequences obtained from moving cameras. The first method starts by computing the union of subspaces (UoS) that best represents all the frames of a reference (anomaly-free) video as a low-rank projection plus a sparse residue. It then performs a low-rank representation of the target (possibly anomalous) video by taking advantage of both the UoS and the sparse residue computed from the reference video. The anomalies are extracted after post-processing the video with these residual data. This algorithm provides good detection results while obviating the need for prior video synchronization. However, the technique loses detection efficiency when the target and reference videos present more severe misalignments, which may arise from small uncontrolled camera movement and shaking during the acquisition phase, a common occurrence in real-world situations. To extend its applicability, a second method is proposed to cope with these possible pose misalignments. This is done by modeling the target-reference pose discrepancy as geometric transformations acting on the domain of the frames of the target video. A complete matrix decomposition algorithm is presented that represents the target video as a sparse combination of the reference video plus a sparse residue, while taking the transformations acting on it into account. Our method is then verified and compared against state-of-the-art techniques on a challenging video dataset comprising recordings that present the described misalignments. Under the evaluation metrics used, the second proposed method improves on the first by at least 16%, and on the next best rated method by 22%.
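    The papers' specific UoS and reference-video algorithms are not reproduced here, but the underlying sparse plus low-rank idea can be illustrated with a textbook Robust PCA (principal component pursuit) sketch solved by ADMM, where each column of M is a vectorized video frame:

```python
# Generic Robust PCA sketch: split a data matrix M into a low-rank part L
# (background-like structure) and a sparse residue S in which anomalies
# stand out. Textbook formulation, not the paper's specific algorithm.
import numpy as np

def soft(x, t):
    """Elementwise soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def rpca(M, max_iter=200, tol=1e-6):
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))                 # standard PCP weight
    mu = 0.25 * m * n / (np.abs(M).sum() + 1e-12)  # common mu initialization
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    for _ in range(max_iter):
        # Low-rank update: singular value thresholding
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * soft(sig, 1.0 / mu)) @ Vt
        # Sparse update: elementwise shrinkage
        S = soft(M - L + Y / mu, lam / mu)
        R = M - L - S                              # constraint residual
        Y += mu * R
        if np.linalg.norm(R) <= tol * np.linalg.norm(M):
            break
    return L, S   # background estimate, sparse residue (candidate anomalies)
```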