
    Circulant temporal encoding for video retrieval and temporal alignment

    We address the problem of specific video event retrieval. Given a query video of a specific event, e.g., a concert of Madonna, the goal is to retrieve other videos of the same event that temporally overlap with the query. Our approach encodes the frame descriptors of a video to jointly represent their appearance and temporal order. It exploits the properties of circulant matrices to compare videos efficiently in the frequency domain. This offers a significant reduction in computational complexity and accurately localizes the matching parts of videos. The descriptors can be compressed in the frequency domain with a product quantizer adapted to complex numbers; in this case, video retrieval is performed without decompressing the descriptors. We also consider the temporal alignment of a set of videos. We exploit the matching confidence and an estimate of the temporal offset computed for all pairs of videos by our retrieval approach. Our robust algorithm aligns the videos on a global timeline by maximizing the set of temporally consistent matches. The global temporal alignment enables synchronous playback of the videos of a given scene.
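
    The core trick can be sketched briefly: a circulant matrix is diagonalized by the discrete Fourier transform, so the inner products between a query's frame-descriptor sequence and every circular temporal shift of a database video reduce to one elementwise product of FFTs. The following is a minimal sketch of that idea under our own assumptions (random descriptors, invented function names), not the authors' implementation.

```python
# Minimal sketch: score every circular temporal shift between two
# frame-descriptor sequences with a single FFT product (the property
# that makes circulant comparison fast). Not the authors' code.
import numpy as np

def temporal_match_scores(query, database_video):
    """query, database_video: (T, D) arrays of per-frame descriptors.
    Returns one matching score per circular temporal shift."""
    T = max(len(query), len(database_video))
    Q = np.fft.rfft(query, n=T, axis=0)          # FFT along time, per dimension
    D = np.fft.rfft(database_video, n=T, axis=0)
    # Correlation theorem: IFFT(conj(Q) * D) gives, for each shift, the sum
    # of frame-descriptor inner products; then sum over descriptor dims.
    return np.fft.irfft(np.conj(Q) * D, n=T, axis=0).sum(axis=1)

rng = np.random.default_rng(0)
video = rng.standard_normal((100, 64))           # 100 frames, 64-D descriptors
fragment = video[20:60]                          # fragment starting at frame 20
shift = int(np.argmax(temporal_match_scores(fragment, video)))
print(shift)                                     # 20: the recovered offset
```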

    Phylogeny reconstruction for images and videos

    Advisors: Anderson de Rezende Rocha, Zanoni Dias. Doctoral thesis - Universidade Estadual de Campinas, Instituto de Computação. Abstract: Digital documents (e.g., images and videos) have become powerful communication tools with the advent of social networks. In this new reality, it is very common for these documents to be published, shared, modified and often republished by multiple users on different web channels. Additionally, with the popularization of image editing software and online editing tools, in many cases not only exact duplicates of a document are available, but also manipulated versions of the original source (near duplicates). However, this ease of sharing also facilitates the spread of abusive content (e.g., child pornography), copyright infringement and, in some cases, defamatory content that adversely affects the public image of people or corporations (e.g., defamatory images of politicians and celebrities, people in embarrassing situations, etc.). Several researchers have successfully developed approaches for the detection and recognition of near-duplicate documents, aiming at identifying similar copies of a given multimedia document (e.g., image, video) published on the Internet. Only recently, however, has research gone beyond the near-duplicate detection task toward finding the ancestral relationships between the near duplicates and the original source of a document. This requires approaches that calculate the dissimilarity between near duplicates and automatically reconstruct structures representing the relationships among them. This problem is referred to in the literature as Multimedia Phylogeny. Solutions for multimedia phylogeny can help solve problems in forensics, content-based document retrieval and illegal-content tracking, for instance. In this thesis, we design and develop approaches to solve the phylogeny reconstruction problem for digital images and videos. For images, we propose approaches that address two main points: (i) forest reconstruction, important in scenarios with a set of semantically similar images generated by different sources or at different times; and (ii) new measures for calculating the dissimilarity between near duplicates, since this calculation directly impacts the quality of the phylogeny reconstruction. The results obtained with our image phylogeny approaches proved effective, correctly identifying the roots of the forests (the original images of an evolution sequence) with up to 95% accuracy. For video phylogeny, we propose approaches that temporally align the videos before calculating their dissimilarity, since, in real-world conditions, videos may be temporally misaligned, temporally clipped or compressed, for example. In this context, the proposed methods identify the roots of the trees with up to 87% accuracy. Doctorate in Computer Science. Grant 2013/05815-2, FAPESP; CAPES.
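
    The tree-building step of such a phylogeny approach can be sketched with a greedy heuristic. Assuming an asymmetric dissimilarity matrix d, where d[i][j] estimates the cost of deriving duplicate j from duplicate i (the matrix below is invented for illustration), an oriented-Kruskal-style algorithm repeatedly adds the cheapest edge that keeps every node with at most one parent and the structure acyclic. This is only a minimal sketch, not the thesis implementation.

```python
# Minimal sketch (our assumptions, not the thesis code): build a phylogeny
# tree from an asymmetric dissimilarity matrix d, where d[i][j] estimates
# the cost of obtaining duplicate j from duplicate i. Greedily add the
# cheapest edge that keeps each node with at most one parent and no cycles.
def reconstruct_phylogeny(d):
    n = len(d)
    parent = [None] * n
    comp = list(range(n))                  # union-find over components

    def find(x):
        while comp[x] != x:
            comp[x] = comp[comp[x]]        # path halving
            x = comp[x]
        return x

    for cost, i, j in sorted((d[i][j], i, j)
                             for i in range(n) for j in range(n) if i != j):
        if parent[j] is None and find(i) != find(j):
            parent[j] = i                  # i becomes j's ancestor
            comp[find(j)] = find(i)
    return parent                          # the root r has parent[r] is None

d = [[0, 2, 9],                            # toy matrix, invented for the example
     [5, 0, 3],
     [8, 7, 0]]
print(reconstruct_phylogeny(d))            # [None, 0, 1]: tree 0 -> 1 -> 2
```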

    Identification, synchronisation and composition of user-generated videos

    Joint doctorate (cotutela) between Universitat Politècnica de Catalunya and Queen Mary University of London. The increasing availability of smartphones makes it easy for people to capture videos of their experience when attending events such as concerts, sports competitions and public rallies. Smartphones are equipped with inertial sensors which can be beneficial for event understanding. The captured User-Generated Videos (UGVs) are made available on media sharing websites. Searching and mining UGVs of the same event is challenging due to inconsistent tags or incorrect timestamps. A UGV recorded from a fixed location contains monotonous content and unintentional camera motions, which may make it less interesting to play back. In this thesis, we propose identification, synchronisation and video composition frameworks for UGVs. We propose a framework for the automatic identification and synchronisation of unedited multi-camera UGVs within a database. The proposed framework analyses the sound to match and cluster UGVs that capture the same spatio-temporal event, and estimates their relative time shift to temporally align them. We design a novel descriptor derived from the pairwise matching of audio chroma features of UGVs. The descriptor facilitates the definition of a classification threshold for automatic query-by-example event identification. We contribute a database of 263 multi-camera UGVs of 48 real-world events. We evaluate the proposed framework on this database and compare it with state-of-the-art methods. Experimental results show the effectiveness of the proposed approach in the presence of audio degradations (channel noise, ambient noise, reverberation). Moreover, we present an automatic audio- and visual-based camera selection framework for composing an uninterrupted recording from synchronised multi-camera UGVs of the same event. We design an automatic audio-based cut-point selection method that provides a common reference for audio and video segmentation. To filter out low-quality video segments, spatial and spatio-temporal quality assessments are computed. The framework combines segments of UGVs using a rank-based camera selection strategy that considers visual quality scores and view diversity. The proposed framework is validated on a dataset of 13 events (93 UGVs) through subjective tests and compared with state-of-the-art methods. Suitable cut-point selection, specific visual quality assessments and rank-based camera selection contribute to the superiority of the proposed framework over existing methods. Finally, we contribute a method for camera motion detection using the gyroscope for UGVs captured from smartphones, and design a gyro-based quality score for video composition. The gyroscope measures the angular velocity of the smartphone, which can be used for camera motion analysis. We evaluate the proposed camera motion detection method on a dataset of 24 multi-modal UGVs captured by us, and compare it with existing visual and inertial sensor-based methods. By designing a gyro-based score to quantify the goodness of multi-camera UGVs, we develop a gyro-based video composition framework. The gyro-based score substitutes the spatial and spatio-temporal scores and reduces the computational complexity. We contribute a multi-modal dataset of 3 events (12 UGVs), which is used to validate the proposed gyro-based video composition framework.
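
    The audio-based synchronisation step can be sketched as follows: extract chroma features from both recordings and take the lag that maximizes their cross-correlation as the relative time shift. This is a minimal illustration, not the thesis code; the use of librosa, the file names and the function name are our assumptions.

```python
# Minimal sketch (assumes librosa; not the thesis code): estimate the
# relative time shift between two recordings of the same event by
# cross-correlating their audio chroma features.
import numpy as np
import librosa

def estimate_offset(path_a, path_b, sr=22050, hop=512):
    ya, _ = librosa.load(path_a, sr=sr)
    yb, _ = librosa.load(path_b, sr=sr)
    ca = librosa.feature.chroma_stft(y=ya, sr=sr, hop_length=hop)   # (12, Ta)
    cb = librosa.feature.chroma_stft(y=yb, sr=sr, hop_length=hop)   # (12, Tb)
    # Sum the cross-correlations of the 12 pitch-class tracks over all lags.
    corr = sum(np.correlate(ca[k], cb[k], mode="full") for k in range(12))
    lag = int(corr.argmax()) - (cb.shape[1] - 1)    # best lag, in chroma frames
    return lag * hop / sr                           # relative offset in seconds

# offset = estimate_offset("ugv_a.wav", "ugv_b.wav")   # hypothetical files
```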

    VADER: Video Alignment Differencing and Retrieval

    We propose VADER, a spatio-temporal matching, alignment, and change summarization method to help fight misinformation spread via manipulated videos. VADER matches and coarsely aligns partial video fragments to candidate videos using a robust visual descriptor and scalable search over adaptively chunked video content. A transformer-based alignment module then refines the temporal localization of the query fragment within the matched video. A space-time comparator module identifies regions of manipulation between aligned content, invariant to changes due to residual temporal misalignment or artifacts arising from non-editorial changes of the content. Robustly matching a video to a trusted source enables conclusions to be drawn on video provenance, enabling informed trust decisions on the content encountered.
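
    The coarse matching stage over chunked content might look roughly like the sketch below: pool per-frame descriptors into chunk-level descriptors and score a query fragment against every chunk by inner product. This is purely illustrative and not VADER's actual pipeline; the fixed-length chunks, random descriptors and all names are our assumptions (VADER chunks adaptively and uses a learned descriptor).

```python
# Purely illustrative sketch (not VADER's pipeline): a coarse matching
# stage that pools per-frame descriptors into chunk descriptors and scores
# a query fragment against every chunk by inner product.
import numpy as np

def chunk_descriptors(frames, chunk_len=32):
    """frames: (T, D) per-frame descriptors -> (num_chunks, D) pooled,
    L2-normalized chunk descriptors."""
    pooled = np.stack([frames[i:i + chunk_len].mean(axis=0)
                       for i in range(0, len(frames), chunk_len)])
    return pooled / np.linalg.norm(pooled, axis=1, keepdims=True)

rng = np.random.default_rng(1)
database = {f"video_{v}": rng.standard_normal((300, 128)) for v in range(5)}
query = chunk_descriptors(database["video_3"][96:160]).mean(axis=0)

scores = {name: chunk_descriptors(f) @ query for name, f in database.items()}
best = max(scores, key=lambda name: scores[name].max())
print(best, int(scores[best].argmax()))    # video_3, at chunk 3 or 4
```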

    From Multiview Image Curves to 3D Drawings

    Reconstructing 3D scenes from multiple views has made impressive strides in recent years, chiefly by correlating isolated feature points, intensity patterns, or curvilinear structures. In the general setting - without controlled acquisition, abundant texture, curves and surfaces following specific models, or limiting scene complexity - most methods produce unorganized point clouds, meshes, or voxel representations, with some exceptions producing unorganized clouds of 3D curve fragments. Ideally, many applications require structured representations of curves, surfaces and their spatial relationships. This paper presents a step in this direction by formulating an approach that combines 2D image curves into a collection of 3D curves, with the topological connectivity between them represented as a 3D graph. This results in a 3D drawing, which is complementary to surface representations in the same sense as a 3D scaffold complements a tent taut over it. We evaluate our results against ground truth on synthetic and real datasets. Comment: Expanded ECCV 2016 version with tweaked figures and including an overview of the supplementary material available at multiview-3d-drawing.sourceforge.net
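
    The lifting of matched 2D curve samples into a 3D curve can be illustrated with linear triangulation. The sketch below assumes OpenCV and two calibrated views with known projection matrices; it is a toy example, not the paper's reconstruction pipeline, and the cameras and curve are invented.

```python
# Toy sketch (assumes OpenCV and two calibrated views; not the paper's
# pipeline): lift matched 2D curve samples from two views into a 3D curve
# by linear triangulation.
import numpy as np
import cv2

def triangulate_curve(P1, P2, pts1, pts2):
    """P1, P2: 3x4 projection matrices. pts1, pts2: (N, 2) matched samples
    along the same image curve. Returns the (N, 3) 3D curve points."""
    X = cv2.triangulatePoints(P1, P2, pts1.T.astype(float), pts2.T.astype(float))
    return (X[:3] / X[3]).T                          # homogeneous -> Euclidean

P1 = np.hstack([np.eye(3), np.zeros((3, 1))])        # reference camera
P2 = np.hstack([np.eye(3), [[-1.0], [0.0], [0.0]]])  # camera shifted along x
t = np.linspace(0.0, 1.0, 50)
curve3d = np.stack([np.cos(t), np.sin(t), np.full(50, 4.0)], axis=1)
pts1 = curve3d[:, :2] / curve3d[:, 2:]                       # project, view 1
pts2 = (curve3d + [-1.0, 0.0, 0.0])[:, :2] / curve3d[:, 2:]  # project, view 2
print(np.allclose(triangulate_curve(P1, P2, pts1, pts2), curve3d))  # True
```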