3 research outputs found

    Spatial Pyramid Context-Aware Moving Object Detection and Tracking for Full Motion Video and Wide Aerial Motion Imagery

    A robust and fast automatic moving object detection and tracking system is essential to characterize a target object and extract spatial and temporal information for different applications, including video surveillance, urban traffic monitoring and navigation, and robotics. In this dissertation, I present a collaborative Spatial Pyramid Context-aware moving object detection and tracking (SPCT) system. The proposed visual tracker is composed of one master tracker that usually relies on visual object features and two auxiliary trackers, based on object temporal motion information, that are called dynamically to assist the master tracker. SPCT utilizes image spatial context at different levels to make the video tracking system resistant to occlusion and background noise and to improve target localization accuracy and robustness. We chose a pre-selected set of seven complementary feature channels, including RGB color, intensity, and a spatial pyramid of HoG, to encode object color, shape, and spatial layout information. We exploit the integral histogram as a building block to meet the demands of real-time performance. A novel fast algorithm is presented to accurately evaluate spatially weighted local histograms in constant time using an extension of the integral histogram method. Different techniques are explored to efficiently compute the integral histogram on GPU architectures and are applied to fast spatio-temporal median computation and 3D face reconstruction texturing. We also propose a multi-component framework based on semantic fusion of motion information with a projected building footprint map to significantly reduce the false alarm rate in urban scenes with many tall structures. Experiments on the extensive VOTC2016 benchmark dataset and on aerial video confirm that combining complementary tracking cues in an intelligent fusion framework enables persistent tracking for Full Motion Video and Wide Aerial Motion Imagery.
    Comment: PhD Dissertation (162 pages)
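    The constant-time local-histogram evaluation mentioned above rests on the integral-histogram idea: build one cumulative-sum table per bin, then read any rectangular region's histogram with four lookups per bin. Below is a minimal NumPy sketch of that building block; the bin count, image size, and function names are illustrative assumptions, not the dissertation's implementation (which additionally handles spatial weighting and GPU acceleration).

```python
# Minimal sketch of the integral-histogram building block.
# Bin count and image shape are illustrative assumptions.
import numpy as np

def build_integral_histogram(gray: np.ndarray, n_bins: int = 16) -> np.ndarray:
    """Return an (H+1, W+1, n_bins) table of cumulative per-bin pixel counts."""
    h, w = gray.shape
    bins = np.minimum((gray.astype(np.int32) * n_bins) // 256, n_bins - 1)
    one_hot = np.zeros((h, w, n_bins), dtype=np.int32)
    one_hot[np.arange(h)[:, None], np.arange(w)[None, :], bins] = 1
    # Pad with a zero row/column so region queries need no bounds checks.
    iht = np.zeros((h + 1, w + 1, n_bins), dtype=np.int64)
    iht[1:, 1:, :] = one_hot.cumsum(axis=0).cumsum(axis=1)
    return iht

def region_histogram(iht: np.ndarray, top: int, left: int,
                     bottom: int, right: int) -> np.ndarray:
    """Histogram of pixels in rows [top, bottom), cols [left, right) in O(n_bins)."""
    return (iht[bottom, right] - iht[top, right]
            - iht[bottom, left] + iht[top, left])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(120, 160), dtype=np.uint8)
    iht = build_integral_histogram(img)
    hist = region_histogram(iht, 10, 20, 60, 90)   # 50x70 window
    assert hist.sum() == 50 * 70                   # one vote per pixel
```

    Once the table is built, every candidate window costs the same small, fixed number of operations regardless of its size, which is what makes dense sliding-window histogram comparison feasible in real time.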

    Detection of interpersonal physical violence situations in videos using deep learning techniques

    This thesis designs an architecture based on the Xception convolutional neural network model and an LSTM for detecting interpersonal physical violence in surveillance-system videos. Given the rise in insecurity in the country, and as a preventive measure, the work aims to strengthen video surveillance by focusing on the need to integrate new technologies, such as computer vision, into the monitoring of public safety. The Hockey Fight Dataset and the Real Life Violence Situations Dataset were used for training, validation, and testing of the proposed model architecture. On the Hockey Fight Dataset, the accuracy of the proposal surpassed all other methods compared. On the Real Life Violence Situations Dataset, which contains 2,000 videos (in contrast to other datasets used for violence detection), good results were obtained, with accuracy above 90%.
    Perú. Universidad Nacional Mayor de San Marcos. Vicerrectorado de Investigación y Posgrado. Proyectos de Investigación con Financiamiento para Grupos de Investigación. PCONFIGI. Código: C21201361. Resolución: 005753-2021-R/UNMS
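    A minimal sketch of the Xception + LSTM arrangement described in the abstract, assuming TensorFlow/Keras; the clip length, frame resolution, LSTM width, and frozen-backbone choice are illustrative assumptions, not the thesis's reported hyperparameters.

```python
# Hedged sketch: per-frame Xception features fed to an LSTM for clip-level
# violence classification. Shapes and sizes are assumptions for illustration.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_FRAMES, HEIGHT, WIDTH = 16, 224, 224   # assumed clip shape

def build_violence_classifier() -> tf.keras.Model:
    # Spatial features per frame from an ImageNet-pretrained Xception backbone.
    backbone = tf.keras.applications.Xception(
        include_top=False, weights="imagenet", pooling="avg",
        input_shape=(HEIGHT, WIDTH, 3))
    backbone.trainable = False  # assumed choice: freeze backbone, train the head

    clip = layers.Input(shape=(NUM_FRAMES, HEIGHT, WIDTH, 3))
    frame_features = layers.TimeDistributed(backbone)(clip)   # (T, 2048)
    temporal = layers.LSTM(128)(frame_features)                # temporal modeling
    output = layers.Dense(1, activation="sigmoid")(temporal)   # violent / non-violent
    return models.Model(clip, output)

model = build_violence_classifier()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```

    The design choice illustrated here is the common two-stage split: a pretrained CNN summarizes each frame independently, and the recurrent layer models how those summaries evolve across the clip, which is what lets the network distinguish violent motion patterns from static appearance.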