
    Filming Animals: Portable Cameras in Animal Media Practice


    Event-Guided Procedure Planning from Instructional Videos with Text Supervision

    In this work, we focus on the task of procedure planning from instructional videos with text supervision, where a model aims to predict an action sequence that transforms the initial visual state into the goal visual state. A critical challenge of this task is the large semantic gap between observed visual states and unobserved intermediate actions, which previous works ignore. Specifically, this semantic gap means that the content of the observed visual states differs semantically from the elements of some action text labels in a procedure. To bridge this gap, we propose a novel event-guided paradigm, which first infers events from the observed states and then plans actions based on both the states and the predicted events. Our inspiration comes from the observation that planning a procedure from an instructional video amounts to completing a specific event, and a specific event usually involves specific actions. Based on the proposed paradigm, we contribute an Event-guided Prompting-based Procedure Planning (E3P) model, which encodes event information into the sequential modeling process to support procedure planning. To further exploit the strong action associations within each event, E3P adopts a mask-and-predict approach for relation mining, incorporating a probabilistic masking scheme for regularization. Extensive experiments on three datasets demonstrate the effectiveness of the proposed model. Comment: Accepted to ICCV 2023.
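    The mask-and-predict relation mining described above can be pictured as BERT-style masked prediction over the action sequence within an event: random positions are replaced by a mask token with some probability, and the model must recover them from their within-event context. Below is a minimal, hypothetical sketch of that idea, not the authors' code; the module name, masking rate, and architecture sizes are all assumptions for illustration, and the real E3P additionally conditions on event prompts.

```python
# Illustrative sketch of mask-and-predict relation mining with a
# probabilistic masking scheme (hypothetical; not the E3P implementation).
import torch
import torch.nn as nn

class ActionRelationMiner(nn.Module):
    def __init__(self, num_actions: int, dim: int = 256, mask_prob: float = 0.15):
        super().__init__()
        self.embed = nn.Embedding(num_actions + 1, dim)  # last index = [MASK]
        self.mask_id = num_actions
        self.mask_prob = mask_prob
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, num_actions)

    def forward(self, actions: torch.Tensor) -> torch.Tensor:
        # actions: (batch, seq_len) integer action ids within one event
        mask = torch.rand_like(actions, dtype=torch.float) < self.mask_prob
        corrupted = actions.masked_fill(mask, self.mask_id)
        hidden = self.encoder(self.embed(corrupted))
        logits = self.head(hidden)  # (batch, seq_len, num_actions)
        # Loss only on masked positions: predicting an action from its
        # within-event context forces the model to learn action associations.
        if mask.any():
            return nn.functional.cross_entropy(logits[mask], actions[mask])
        return logits.sum() * 0.0  # no position masked in this batch
```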

    PDPP: Projected Diffusion for Procedure Planning in Instructional Videos

    In this paper, we study the problem of procedure planning in instructional videos, which aims to make goal-directed plans given the current visual observations in unstructured real-life videos. Previous works cast this problem as a sequence planning problem and leverage either heavy intermediate visual observations or natural language instructions as supervision, resulting in complex learning schemes and expensive annotation costs. In contrast, we treat the problem as a distribution-fitting problem: we model the distribution of the whole intermediate action sequence with a diffusion model (PDPP), thus transforming the planning problem into a sampling process from this distribution. In addition, we remove the expensive intermediate supervision and simply use task labels from instructional videos as supervision instead. Our model is a U-Net based diffusion model that directly samples action sequences from the learned distribution given the start and end observations. Furthermore, we apply an efficient projection method to provide accurate conditional guidance during learning and sampling. Experiments on three datasets of different scales show that our PDPP model achieves state-of-the-art performance on multiple metrics, even without task supervision. Code and trained models are available at https://github.com/MCG-NJU/PDPP. Comment: Accepted as a highlight paper at CVPR 2023.
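    The projection method can be read as hard conditioning inside an otherwise standard DDPM sampling loop: after each denoising update, the entries fixed by the conditions are overwritten exactly rather than steered by a soft guidance term. The sketch below is a hypothetical illustration under that reading; the denoiser callable, the tensor layout, and the noise schedule are assumptions (the real conditions also include the task label), and the released code at the URL above is authoritative.

```python
# Illustrative diffusion sampling with a projection step (hypothetical;
# not the released PDPP code).
import torch

@torch.no_grad()
def sample_plan(denoiser, start, goal, horizon: int, steps: int = 200):
    # start, goal: (batch, dim) observation features; the plan is a
    # (batch, horizon, dim) tensor whose first/last rows are pinned to them.
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)

    x = torch.randn(start.shape[0], horizon, start.shape[1])
    for t in reversed(range(steps)):
        eps = denoiser(x, t)  # predicted noise at step t
        mean = (x - betas[t] / torch.sqrt(1 - alpha_bar[t]) * eps) \
               / torch.sqrt(alphas[t])
        x = mean + torch.sqrt(betas[t]) * torch.randn_like(x) if t > 0 else mean
        # Projection step: re-impose the known conditions exactly instead
        # of nudging the sampler toward them with a guidance gradient.
        x[:, 0], x[:, -1] = start, goal
    return x
```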

    Information Technology for Artificial Perception of Forest Conditions by a Robotic System

    The diploma project is devoted to developing a system for perceiving potentially self-igniting vegetation in a forest. The work analyzes the subject area, including analogues of the system, and defines the project goal, the implementation tools, and the planning and design of the work. A step-by-step development of an image classification model is presented: the neural network configuration is shown, the collection and preparation of training data are described, and the training process is documented. After implementation, the technology's performance was evaluated. The result of the work is an information technology for artificial perception of forest conditions by a robotic system.
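    The pipeline described (collect and prepare labeled forest images, configure a neural network, train, evaluate) matches a standard transfer-learning setup for image classification. The following sketch is purely illustrative; the directory layout, class names, and ResNet-18 backbone are assumptions, not details taken from the project itself.

```python
# Illustrative fine-tuning of a pretrained CNN on forest imagery
# (hypothetical setup; paths, classes, and backbone are assumptions).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# Assumed folder layout: forest_data/train/<class>/<image>.jpg, with
# classes such as "dry_vegetation" (potentially self-igniting) vs.
# "healthy_vegetation".
train_set = datasets.ImageFolder("forest_data/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one epoch shown for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```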