    SENSAREA, a general public video editing application

    In this demonstration, we present an advanced prototype of a novel general-public software application that provides the user with a set of interactive tools to select and accurately track multiple objects in a video. The originality of the proposed software is that it does not impose a rigid modus operandi: automatic and manual tools can be used at any moment on any object. Moreover, it is the first time that powerful video object segmentation tools are integrated in a user-friendly, industrial-grade, non-commercial application dedicated to accurate object tracking. With our software, special effects can be applied to the tracked objects and saved to a video file, and the object masks can also be exported for applications that need ground-truth data or that want to improve the user experience with clickable videos.

    Adaptive Memory Management in Video Object Segmentation

    Matching-based networks have achieved state-of-the-art performance on video object segmentation (VOS) tasks by storing every k-th frame in an external memory bank for future inference. Storing the intermediate frames' predictions provides the network with richer cues for segmenting an object in the current frame. However, the size of the memory bank grows with the length of the video, which slows down inference and makes it impractical to handle arbitrary-length videos. This thesis proposes an adaptive memory bank strategy for matching-based networks for semi-supervised VOS that can handle videos of arbitrary length by discarding obsolete features. Features are indexed by their importance in segmenting the objects in previous frames; based on this index, unimportant features are discarded to accommodate new ones. Experiments on DAVIS 2016, DAVIS 2017, and YouTube-VOS demonstrate that our method outperforms state-of-the-art methods that employ the first-and-latest strategy with fixed-size memory banks, and achieves performance comparable to the every-k strategy with growing memory banks. Furthermore, experiments show that our method increases inference speed by up to 80% over the every-k strategy and 35% over the first-and-latest strategy. We further investigate the memory bank's attention during training by proposing two regularizations and studying their effect on performance.
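The eviction idea in the abstract above (index stored features by how much they contributed to past segmentations, then drop the least important one when the bank is full) can be sketched as follows. This is a hypothetical simplification for illustration only: the class name, the scalar importance score, and the accumulation of attention weights are assumptions, not the thesis's actual implementation.

```python
# Minimal sketch of an importance-based fixed-size memory bank.
# Hypothetical simplification: real VOS memory banks store key/value
# feature maps and derive importance from matching attention.

class AdaptiveMemoryBank:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = []  # list of [feature, importance] pairs

    def add(self, feature):
        """Insert a new feature, evicting the least important if full."""
        if len(self.entries) >= self.capacity:
            idx = min(range(len(self.entries)),
                      key=lambda i: self.entries[i][1])
            self.entries.pop(idx)  # discard obsolete feature
        self.entries.append([feature, 0.0])

    def update_importance(self, weights):
        """Accumulate the attention weight each stored feature received
        when segmenting the latest frame."""
        for entry, w in zip(self.entries, weights):
            entry[1] += w
```

Because the bank never exceeds `capacity`, per-frame matching cost stays constant regardless of video length, which is the source of the reported speedup over the growing every-k bank.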

    Visual Odometer on Videos of Endoscopic Capsules (VOVEC)

    Since its introduction in 2001, capsule endoscopy has become the leading screening method for the small bowel, a region not easily accessible with traditional endoscopy techniques, revolutionizing diagnostics in the field of small-bowel diseases. These vitamin-sized capsules use a small wireless camera to create 8- to 10-hour videos of the patient's digestive tract. Due to the long duration of the videos produced, human diagnosis is time-consuming, tedious, and error-prone. Moreover, once a lesion is found, the localization information is scarce and dependent on external hardware, making a software-only capsule localization system with improved precision highly desirable. This work stems from that need: we propose two deep-learning-based methods to improve upon the limitations of current capsule-position estimation techniques. To train and test our networks, we used a dataset of 111 PillCam SB3 and 338 PillCam SB2 videos, courtesy of Centro Hospitalar do Porto (CHP). The first method estimates the capsule's displacement along the small bowel using HomographyNet, a supervised deep-learning approach for computing homographies between images (DeTone et al., 2016). The second method provides a relative 3D position along the small bowel using SfMLearner, an unsupervised deep-learning approach that combines a DepthNet and a PoseNet to learn image depth and ego-motion from video simultaneously (Zhou et al., 2017).
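The first method's displacement estimation can be illustrated with a toy sketch: given point correspondences between consecutive frames, estimate the inter-frame motion and accumulate its magnitude into a travelled distance. Note the assumptions: the function names are hypothetical, and a least-squares pure translation stands in for the full homography that HomographyNet regresses.

```python
import numpy as np

# Toy stand-in for HomographyNet-based displacement estimation.
# Assumption: matched 2D points per frame pair are already available;
# we estimate only a translation, not the full 3x3 homography.

def estimate_translation(pts_prev, pts_next):
    """Least-squares translation between matched 2D point sets
    (the mean displacement vector)."""
    return np.mean(np.asarray(pts_next, dtype=float)
                   - np.asarray(pts_prev, dtype=float), axis=0)

def accumulate_displacement(frame_matches):
    """Sum per-frame displacement magnitudes into a total distance,
    analogous to a visual odometer reading along the small bowel."""
    total = 0.0
    for pts_prev, pts_next in frame_matches:
        t = estimate_translation(pts_prev, pts_next)
        total += float(np.linalg.norm(t))
    return total
```

In the actual pipeline, the per-frame motion would come from the learned homography rather than matched keypoints, but the accumulation step is the same.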