    Open-ended visual question answering

    Wearable cameras generate a large number of photos which are, in many cases, useless or redundant. On the other hand, these devices provide an excellent opportunity to create automatic questions and answers for reminiscence therapy. This is a follow-up to the BSc thesis developed by Ricard Mestre during Fall 2014 and the MSc thesis developed by Aniol Lidon. This thesis studies methods to solve Visual Question-Answering (VQA) tasks with a Deep Learning framework. As a preliminary step, we explore Long Short-Term Memory (LSTM) networks used in Natural Language Processing (NLP) to tackle text-based Question-Answering. We then modify the previous model to accept an image as input in addition to the question. For this purpose, we explore the VGG-16 and K-CNN convolutional neural networks to extract visual features from the image. These features are merged with a word embedding or with a sentence embedding of the question to predict the answer. This work was submitted to the Visual Question Answering Challenge 2016, where it achieved 53.62% accuracy on the test dataset. The developed software follows good programming practices and the Python code style guidelines, providing a consistent baseline in Keras for different configurations. The source code and models are publicly available at https://github.com/imatge-upc/vqa-2016-cvprw.
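
    The architecture described above (LSTM question encoding merged with VGG-16 image features, answering cast as classification) can be sketched in Keras. This is a minimal, illustrative sketch of the sentence-embedding variant, not the exact model from the repository: it assumes pre-extracted 4096-dimensional VGG-16 features, a fixed question length, and placeholder vocabulary and layer sizes.

    # Minimal VQA baseline sketch; sizes and vocabularies are assumptions, not the thesis values.
    from tensorflow.keras.layers import Input, Embedding, LSTM, Dense, Concatenate
    from tensorflow.keras.models import Model

    QUESTION_LEN = 25        # assumed maximum question length (tokens)
    VOCAB_SIZE = 10000       # assumed question vocabulary size
    NUM_ANSWERS = 1000       # assumed answer vocabulary (most frequent answers)
    IMG_FEAT_DIM = 4096      # VGG-16 fully-connected feature dimension

    # Question branch: word embedding followed by an LSTM sentence encoding.
    question_in = Input(shape=(QUESTION_LEN,), name="question_tokens")
    embedded = Embedding(input_dim=VOCAB_SIZE, output_dim=256)(question_in)
    question_enc = LSTM(256)(embedded)

    # Image branch: pre-extracted VGG-16 features projected to the same size.
    image_in = Input(shape=(IMG_FEAT_DIM,), name="vgg16_features")
    image_enc = Dense(256, activation="relu")(image_in)

    # Merge both modalities and classify over the answer vocabulary.
    merged = Concatenate()([question_enc, image_enc])
    hidden = Dense(512, activation="relu")(merged)
    answer = Dense(NUM_ANSWERS, activation="softmax")(hidden)

    model = Model(inputs=[question_in, image_in], outputs=answer)
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    model.summary()

    Casting open-ended answering as classification over the most frequent training answers is the usual simplification for this benchmark; the merge point and layer sizes are the main configuration knobs such a baseline exposes.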

    ViTS: Video tagging system from massive web multimedia collections

    The popularization of multimedia content on the Web has given rise to the need to automatically understand, index and retrieve it. In this paper we present ViTS, an automatic Video Tagging System which learns from videos, their web context and comments shared on social networks. ViTS analyses massive multimedia collections by Internet crawling, and maintains a knowledge base that is updated in real time with no need for human supervision. As a result, each video is indexed with a rich set of labels and linked with other related content. ViTS is an industrial product under exploitation with a vocabulary of over 2.5M concepts, capable of indexing more than 150k videos per month. We compare the quality and completeness of our tags with respect to those in the YouTube-8M dataset, and we show how ViTS enhances the semantic annotation of the videos with a larger number of labels (10.04 tags/video) and an accuracy of 80.87%.
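
    The indexing and linking behaviour described above can be illustrated with a small, self-contained sketch. This is not the ViTS implementation: it assumes a toy in-memory index where each video carries a set of tags, and shows how related videos can be linked by shared tags and how an average tags/video figure like the one reported above would be computed.

    # Toy tag index illustrating indexing and linking by shared tags (not the actual ViTS system).
    from collections import defaultdict

    class TagIndex:
        def __init__(self):
            self.video_tags = {}                 # video id -> set of tags
            self.tag_videos = defaultdict(set)   # tag -> set of video ids

        def index_video(self, video_id, tags):
            """Index a video with its tags (e.g. mined from web context and comments)."""
            self.video_tags[video_id] = set(tags)
            for tag in tags:
                self.tag_videos[tag].add(video_id)

        def related_videos(self, video_id):
            """Return other videos sharing at least one tag, ranked by tag overlap."""
            overlap = defaultdict(int)
            for tag in self.video_tags.get(video_id, ()):
                for other in self.tag_videos[tag]:
                    if other != video_id:
                        overlap[other] += 1
            return sorted(overlap, key=overlap.get, reverse=True)

        def tags_per_video(self):
            """Average number of tags per indexed video."""
            if not self.video_tags:
                return 0.0
            return sum(len(t) for t in self.video_tags.values()) / len(self.video_tags)

    # Usage with made-up videos and tags.
    index = TagIndex()
    index.index_video("v1", {"football", "barcelona", "goal"})
    index.index_video("v2", {"football", "madrid"})
    index.index_video("v3", {"cooking", "paella"})
    print(index.related_videos("v1"))   # ['v2']
    print(index.tags_per_video())       # 2.33...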

    What is going on in the world? A display platform for media understanding

    News broadcasters and on-line publishers generate a large number of articles and videos every day describing events currently happening in the world. In this work, we present a system that automatically indexes videos from a library and links them to stories developing in the news. The user interface displays the links between videos and stories in an intuitive manner and allows navigation through related content by using associated tags. This interface is a powerful industrial tool for publishers to index, retrieve and visualize their video content. It helps them identify which topics require more attention or retrieve related content that has already been published about the stories.
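
    The video-to-story linking the platform relies on can be illustrated with a minimal sketch. This is an assumption-laden toy example rather than the system's actual matching logic: it assumes both videos and news stories are already annotated with tags, and links each video to the stories with which it shares the most tags.

    # Toy video-to-story linking by tag overlap (illustrative sketch only).

    def link_videos_to_stories(video_tags, story_tags, min_shared=1):
        """Return, for each video, the stories ranked by number of shared tags."""
        links = {}
        for video, vtags in video_tags.items():
            scored = []
            for story, stags in story_tags.items():
                shared = len(set(vtags) & set(stags))
                if shared >= min_shared:
                    scored.append((story, shared))
            scored.sort(key=lambda pair: pair[1], reverse=True)
            links[video] = [story for story, _ in scored]
        return links

    # Usage with made-up annotations.
    videos = {"clip_01": ["election", "debate"], "clip_02": ["flood", "rescue"]}
    stories = {"story_a": ["election", "polls", "debate"], "story_b": ["flood", "weather"]}
    print(link_videos_to_stories(videos, stories))
    # {'clip_01': ['story_a'], 'clip_02': ['story_b']}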
