11 research outputs found

    RVOS: end-to-end recurrent network for video object segmentation

    Get PDF
Multiple-object video object segmentation is a challenging task, especially in the zero-shot case, where no object mask is given at the initial frame and the model has to find the objects to be segmented throughout the sequence. In our work, we propose a Recurrent network for multiple-object Video Object Segmentation (RVOS) that is fully end-to-end trainable. Our model incorporates recurrence in two different domains: (i) the spatial domain, which allows the model to discover the different object instances within a frame, and (ii) the temporal domain, which keeps the segmented objects coherent over time. We train RVOS for zero-shot video object segmentation and are the first to report quantitative results on the DAVIS-2017 and YouTube-VOS benchmarks. Further, we adapt RVOS to one-shot video object segmentation by using the masks obtained in previous time steps as inputs to the recurrent module. Our model reaches results comparable to state-of-the-art techniques on the YouTube-VOS benchmark and outperforms all previous video object segmentation methods that do not use online learning on the DAVIS-2017 benchmark. Moreover, our model achieves faster inference than previous methods, reaching 44 ms/frame on a P100 GPU. Peer reviewed. Postprint (published version).
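The double recurrence the abstract describes can be pictured as two nested loops: an outer loop over frames (temporal recurrence) and an inner loop over object instances within each frame (spatial recurrence). The following is a minimal toy sketch of that control flow; all weight names and the flat per-frame feature vectors are illustrative assumptions, since the actual model operates on convolutional feature maps with ConvLSTM units.

```python
import numpy as np

def rvos_sketch(frames, n_objects, hidden_dim=8, seed=0):
    """Toy sketch of RVOS-style double recurrence (illustrative only).

    frames: array of shape (T, F), one feature vector per frame.
    Returns per-frame, per-object scalar "mask logits" of shape (T, n_objects).
    """
    rng = np.random.default_rng(seed)
    T, F = frames.shape
    # Hypothetical weights; stand-ins for the model's learned ConvLSTM parameters.
    W_in = rng.standard_normal((F, hidden_dim)) * 0.1
    W_t = rng.standard_normal((hidden_dim, hidden_dim)) * 0.1   # temporal recurrence
    W_s = rng.standard_normal((hidden_dim, hidden_dim)) * 0.1   # spatial recurrence
    w_out = rng.standard_normal(hidden_dim) * 0.1

    h_prev_frame = np.zeros((n_objects, hidden_dim))  # temporal state, per object
    logits = np.zeros((T, n_objects))
    for t in range(T):                        # temporal domain: frame by frame
        h_prev_obj = np.zeros(hidden_dim)     # spatial state within the frame
        for i in range(n_objects):            # spatial domain: one instance per step
            h = np.tanh(frames[t] @ W_in
                        + h_prev_frame[i] @ W_t   # coherence along time
                        + h_prev_obj @ W_s)       # condition on previous instance
            logits[t, i] = h @ w_out
            h_prev_obj = h
            h_prev_frame[i] = h
    return logits
```

The inner loop is what lets a zero-shot model emit one instance per recurrent step without any initial mask; in the one-shot adaptation described above, the previous step's masks would additionally be fed into the recurrent state.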

    A closer look at referring expressions for video object segmentation

    Get PDF
The task of Language-guided Video Object Segmentation (LVOS) aims at generating binary masks for an object referred to by a linguistic expression. When this expression unambiguously describes an object in the scene, it is called a referring expression (RE). Our work argues that existing benchmarks used for LVOS are mainly composed of trivial cases, in which referents can be identified with simple phrases. Our analysis relies on a new categorization of the referring expressions in the DAVIS-2017 and Actor-Action datasets into trivial and non-trivial REs, where the non-trivial REs are further annotated with seven RE semantic categories. We leverage these data to analyze the performance of RefVOS, a novel neural network that obtains competitive results for the task of language-guided image segmentation and state-of-the-art results for LVOS. Our study indicates that the major challenges for the task are related to understanding motion and static actions. Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. This work was partially supported by the projects PID2019-107255GB-C22 and PID2020-117142GB-I00 funded by MCIN/AEI/10.13039/501100011033 (Spanish Ministry of Science), and by the grant 2017-SGR-1414 of the Government of Catalonia. This work was also partially supported by the project RTI2018-095232-B-C22 funded by the Spanish Ministry of Science, Innovation and Universities.
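At its core, a language-guided segmentation model must combine a sentence embedding with per-pixel visual features and score each location against the expression. The sketch below illustrates one common fusion pattern (element-wise product followed by a per-location score); the function name, shapes, and the zero threshold are assumptions for illustration, not the actual RefVOS architecture.

```python
import numpy as np

def referring_fusion_sketch(frame_feats, text_emb):
    """Toy language-visual fusion for referring segmentation (illustrative only).

    frame_feats: (H, W, D) visual feature map for one frame.
    text_emb:    (D,) embedding of the referring expression.
    Returns a (H, W) binary mask over the referred object.
    """
    # Fuse modalities by element-wise product, broadcast over spatial dims.
    fused = frame_feats * text_emb
    # Score each spatial location: a dot product between pixel and text features.
    scores = fused.sum(axis=-1)
    # Threshold the scores into a binary mask.
    return (scores > 0).astype(np.uint8)
```

A trivial RE ("the dog") makes this matching easy because appearance features alone suffice; the non-trivial categories discussed above (e.g. motion and static actions) require the fused features to encode temporal and relational cues as well.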

    RVOS: end-to-end recurrent network for video object segmentation

    Get PDF
Multiple-object video object segmentation is a challenging task, especially in the zero-shot case, where no object mask is given at the initial frame and the model has to find the objects to be segmented throughout the sequence. In our work, we propose a Recurrent network for multiple-object Video Object Segmentation (RVOS) that is fully end-to-end trainable. Our model incorporates recurrence in two different domains: (i) the spatial domain, which allows the model to discover the different object instances within a frame, and (ii) the temporal domain, which keeps the segmented objects coherent over time. We train RVOS for zero-shot video object segmentation and are the first to report quantitative results on the DAVIS-2017 and YouTube-VOS benchmarks. Further, we adapt RVOS to one-shot video object segmentation by using the masks obtained in previous time steps as inputs to the recurrent module. Our model reaches results comparable to state-of-the-art techniques on the YouTube-VOS benchmark and outperforms all previous video object segmentation methods that do not use online learning on the DAVIS-2017 benchmark. Moreover, our model achieves faster inference than previous methods, reaching 44 ms/frame on a P100 GPU. This research was supported by the Spanish Ministry of Economy and Competitiveness and the European Regional Development Fund (TIN2015-66951-C2-2-R, TIN2015-65316-P and TEC2016-75976-R), the BSC-CNS Severo Ochoa SEV-2015-0493 and La Caixa-Severo Ochoa International Doctoral Fellowship programs, the 2017 SGR 1414, and the Industrial Doctorates 2017-DI-064 and 2017-DI-028 from the Government of Catalonia.

    A prothrombin activator from Bothrops erythromelas (jararaca-da-seca) snake venom: characterization and molecular cloning.

    Get PDF
A novel prothrombin activator enzyme, which we have named 'berythractivase', was isolated from Bothrops erythromelas (jararaca-da-seca) snake venom. Berythractivase was purified by a single cation-exchange-chromatography step on a Resource S (Amersham Biosciences) column. The overall purification (31-fold) indicates that berythractivase comprises about 5% of the crude venom. It is a single-chain protein with a molecular mass of 78 kDa. SDS/PAGE of prothrombin after activation by berythractivase showed fragment patterns similar to those generated by group A prothrombin activators, which convert prothrombin into meizothrombin independently of the prothrombinase complex. Chelating agents, such as EDTA and o-phenanthroline, rapidly inhibited the enzymic activity of berythractivase, as expected for a typical metalloproteinase. The human fibrinogen A alpha-chain was slowly digested only after longer incubation with berythractivase, and no effect on the beta- or gamma-chains was observed. Berythractivase was also capable of triggering endothelial proinflammatory and procoagulant cell responses: von Willebrand factor was released, and the surface expression of both intercellular adhesion molecule-1 and E-selectin was up-regulated by berythractivase in cultured human umbilical-vein endothelial cells. The complete berythractivase cDNA was cloned from a B. erythromelas venom-gland cDNA library. The cDNA sequence comprises 2330 bp and encodes a preproprotein with significant sequence similarity to many other mature metalloproteinases reported from snake venoms. Berythractivase contains metalloproteinase, disintegrin-like and cysteine-rich domains. However, berythractivase did not elicit any haemorrhagic response. These results show that, although the primary structure of berythractivase is related to that of snake-venom haemorrhagic metalloproteinases and it is functionally similar to group A prothrombin activators, it is a prothrombin activator devoid of haemorrhagic activity. This is a feature not observed for most snake venom metalloproteinases, including the group A prothrombin activators.

    Glossari il·lustrat de morfologia botànica bàsica. Racons verds: aportació al coneixement morfològic dels espermatòfits

    No full text
The results are part of the teaching innovation project "Innovació en l'ambientalització curricular de la Botànica en el grau de Farmàcia: Jardins per a la Salut" (code 2018PID-UB/034) of the Teaching Innovation Group in Botany Applied to Pharmaceutical Sciences (GIBAF). Faculty of Pharmacy, Universitat de Barcelona. Degree: Pharmacy. Course: Pharmaceutical Botany. Academic year: 2017-2018. Coordinators: Cèsar Blanché, Carles Benedí, Maria Bosc and Joan Simon. Project: 2018PID-UB/034. We present an illustrated glossary produced by 35 students of the Pharmaceutical Botany course of the Pharmacy degree. The glossary was illustrated with the students' photographs of morphological details of the species in the Faculty of Pharmacy garden, which has been included among the "Racons Verds" (Green Corners) of the UB. The result is the description and illustration of 80 characters of vegetative and reproductive organography, which constitute a new open teaching resource created by the students themselves. The work was carried out on a voluntary, supervised basis and was restricted to three theory groups (M2, M3 and T3) of the core Pharmaceutical Botany course of the Pharmacy degree. The results counted for up to 0.5 points on the final grade once the course was passed. Grup d'Innovació Docent en Botànica Aplicada a les Ciències Farmacèutiques (GIBAF).