Cut-Based Graph Learning Networks to Discover Compositional Structure of Sequential Video Data
Conventional sequential learning methods such as Recurrent Neural Networks (RNNs) focus on interactions between consecutive inputs, i.e., first-order Markovian dependency. However, most sequential data, such as videos, have complex dependency structures that imply variable-length semantic flows and their compositions, which are hard to capture with conventional methods. Here, we propose Cut-Based Graph Learning Networks (CB-GLNs) for learning video data by discovering these complex structures of the video. The CB-GLNs represent video data as a graph, with nodes and edges corresponding to frames of the video and their dependencies, respectively. The CB-GLNs find compositional dependencies of the data in multilevel graph forms via a parameterized kernel with graph cut and a message passing framework. We evaluate the proposed method on two different tasks for video understanding: video theme classification (YouTube-8M dataset; Abu-El-Haija et al. 2016) and video question answering (TVQA dataset; Lei et al. 2018). The experimental results show that our model efficiently learns the semantic compositional structure of video data. Furthermore, our model achieves the highest performance in comparison to other baseline methods.
Comment: 8 pages, 3 figures, Association for the Advancement of Artificial Intelligence (AAAI 2020). arXiv admin note: substantial text overlap with arXiv:1907.0170
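The abstract describes the architecture only at a high level. As a concrete illustration, here is a minimal PyTorch sketch of the idea, not the authors' implementation: frame features become graph nodes, a learned kernel scores pairwise dependencies, a crude single cut splits the sequence into segments, and message passing pools each segment into a higher-level node. The class name, shapes, and single-cut heuristic are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class CutBasedGraphLayer(nn.Module):
    """Illustrative layer (assumption, not CB-GLNs proper): learn pairwise
    frame dependencies, cut at the weakest temporal link, pool segments."""

    def __init__(self, dim: int):
        super().__init__()
        self.query = nn.Linear(dim, dim)  # parameterized kernel: similarity
        self.key = nn.Linear(dim, dim)    # in a learned projection space

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (num_frames, dim) feature vectors for one video
        adj = torch.softmax(self.query(frames) @ self.key(frames).T, dim=-1)

        # Toy stand-in for the graph cut: split at the weakest dependency
        # between consecutive frames, giving two variable-length segments.
        consec = adj.diagonal(offset=1)        # edge strength frame i -> i+1
        cut = int(torch.argmin(consec)) + 1    # segment boundary index

        pooled = []
        for seg in (frames[:cut], frames[cut:]):
            w = torch.softmax(seg @ seg.T, dim=-1)   # message passing within
            pooled.append((w @ seg).mean(dim=0))     # the segment, then pool
        return torch.stack(pooled)   # (2, dim): higher-level "segment" nodes

video = torch.randn(32, 128)                 # 32 frames, 128-d dummy features
print(CutBasedGraphLayer(128)(video).shape)  # torch.Size([2, 128])
```

Stacking such layers would yield the multilevel graph forms the abstract mentions, with each level's segment nodes feeding the next.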
DramaQA: Character-Centered Video Story Understanding with Hierarchical QA
Despite recent progress in computer vision and natural language processing, developing video understanding intelligence remains hard to achieve due to the intrinsic difficulty of stories in video. Moreover, there is no theoretical metric for evaluating the degree of video understanding. In this paper, we propose a novel video question answering (Video QA) task, DramaQA, for a comprehensive understanding of the video story. DramaQA focuses on two perspectives: 1) hierarchical QAs as an evaluation metric based on the cognitive developmental stages of human intelligence, and 2) character-centered video annotations to model local coherence of the story. Our dataset is built upon the TV drama "Another Miss Oh" and contains 16,191 QA pairs from 23,928 video clips of various lengths, with each QA pair belonging to one of four difficulty levels. We provide 217,308 annotated images with rich character-centered annotations, including visual bounding boxes, behaviors, and emotions of main characters, as well as coreference-resolved scripts. Additionally, we provide analyses of the dataset as well as a Dual Matching Multistream model which effectively learns character-centered representations of video to answer questions about the video. We plan to release our dataset and model publicly for research purposes and expect that our work will provide a new perspective on video story understanding research.
Comment: 21 pages, 10 figures, submitted to ECCV 202
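For illustration, a hypothetical record layout for one DramaQA example, inferred only from the abstract; the field names and types below are assumptions, not the dataset's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class CharacterAnnotation:
    name: str                           # main character in the clip
    bbox: tuple[int, int, int, int]     # visual bounding box (x, y, w, h)
    behavior: str                       # annotated behavior label
    emotion: str                        # annotated emotion label

@dataclass
class DramaQAExample:
    clip_id: str        # clip from the TV drama "Another Miss Oh"
    question: str
    answer: str
    difficulty: int     # one of the four hierarchical levels (1-4)
    script: str         # coreference-resolved script for the clip
    annotations: list[CharacterAnnotation] = field(default_factory=list)
```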
How Well Do Large Language Models Truly Ground?
Reliance on the inherent knowledge of Large Language Models (LLMs) can cause issues such as hallucinations, lack of control, and difficulties in integrating variable knowledge. To mitigate this, LLMs can be probed to generate responses by grounding on external context, often given as input (knowledge-augmented models). Yet, previous research is often confined to a narrow view of the term "grounding", focusing only on whether the response contains the correct answer, which does not ensure the reliability of the entire response. To address this limitation, we introduce a strict definition of grounding: a model is considered truly grounded when its responses (1) fully utilize the necessary knowledge from the provided context, and (2) do not exceed the knowledge within the contexts. We introduce a new dataset and a grounding metric to assess this new definition, and perform experiments across 13 LLMs of different sizes and training methods to provide insights into the factors that influence grounding performance. Our findings contribute to a better understanding of how to improve grounding capabilities and suggest an area of improvement toward more reliable and controllable LLM applications.
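As a toy illustration of the two conditions, the sketch below approximates both with naive substring matching; the paper's actual metric is more sophisticated, and the rules here are illustrative assumptions only.

```python
def is_truly_grounded(response: str, required_facts: list[str],
                      context: str) -> bool:
    # Condition (1), utilization: every necessary piece of knowledge from
    # the provided context must actually appear in the response.
    utilized = all(f.lower() in response.lower() for f in required_facts)
    # Condition (2), containment: the response must not exceed the context,
    # crudely approximated here as every response sentence sharing at least
    # one word with the context.
    sentences = [s.strip() for s in response.split(".") if s.strip()]
    contained = all(
        any(w.lower() in context.lower() for w in s.split())
        for s in sentences)
    return utilized and contained

context = "The Eiffel Tower is 330 metres tall and stands in Paris."
print(is_truly_grounded(
    "The Eiffel Tower is 330 metres tall and stands in Paris.",
    ["330 metres", "Paris"], context))   # True: uses all facts, adds nothing
```

Note that containing the correct answer alone would satisfy neither check, which is exactly the gap the stricter definition targets.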
Semiparametric Token-Sequence Co-Supervision
In this work, we introduce a semiparametric token-sequence co-supervision training method. It trains a language model by simultaneously leveraging supervision from the traditional next-token prediction loss, calculated over the parametric token embedding space, and a next-sequence prediction loss, calculated over a nonparametric sequence embedding space. The nonparametric sequence embedding space is constructed by a separate language model tasked to condense an input text into a single representative embedding. Our experiments demonstrate that a model trained with both supervisions consistently surpasses models trained with each supervision independently. Analysis suggests that this co-supervision encourages broader generalization capability across the model. In particular, the robustness of the parametric token space, which is established during the pretraining step, tends to effectively enhance the stability of the nonparametric sequence embedding space, a new space established by another language model.
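A minimal sketch of how the two losses might be combined, assuming an InfoNCE-style next-sequence objective over embeddings from the condenser model; the function name, shapes, and mixing weight alpha are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def co_supervision_loss(lm_logits, target_ids, pred_seq_emb,
                        candidate_seq_embs, target_idx, alpha=0.5):
    # Parametric side: standard next-token prediction, i.e. cross-entropy
    # over the token vocabulary.
    token_loss = F.cross_entropy(
        lm_logits.view(-1, lm_logits.size(-1)), target_ids.view(-1))

    # Nonparametric side: score the model's predicted next-sequence
    # embedding against candidate embeddings produced by the separate
    # condenser LM, and train it to pick out the true continuation.
    sims = (F.normalize(pred_seq_emb, dim=-1)
            @ F.normalize(candidate_seq_embs, dim=-1).T)  # (batch, num_cands)
    seq_loss = F.cross_entropy(sims, target_idx)

    return alpha * token_loss + (1 - alpha) * seq_loss

# Dummy shapes: batch 4, sequence length 16, vocab 32k, embedding dim 256,
# a shared pool of 8 candidate next-sequences.
loss = co_supervision_loss(
    lm_logits=torch.randn(4, 16, 32000),
    target_ids=torch.randint(0, 32000, (4, 16)),
    pred_seq_emb=torch.randn(4, 256),
    candidate_seq_embs=torch.randn(8, 256),
    target_idx=torch.randint(0, 8, (4,)),
)
```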
Hexa: Self-Improving for Knowledge-Grounded Dialogue System
A common practice in knowledge-grounded dialogue generation is to explicitly utilize intermediate steps (e.g., web search, memory retrieval) with modular approaches. However, data for such steps are often inaccessible compared to those of dialogue responses, as they are unobservable in an ordinary dialogue. To fill in the absence of these data, we develop a self-improving method that improves the generative performance of intermediate steps without ground-truth data. In particular, we propose a novel bootstrapping scheme with a guided prompt and a modified loss function to enhance the diversity of appropriate self-generated responses. Through experiments on various benchmark datasets, we empirically demonstrate that our method successfully leverages a self-improving mechanism in generating intermediate and final responses and improves performance on the task of knowledge-grounded dialogue generation.
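A rough sketch of such a bootstrap loop in the spirit of the abstract; `generate`, `score`, and `fine_tune` are caller-supplied stand-ins for components the abstract does not specify, and the guided prompt, thresholds, and loop structure are illustrative assumptions.

```python
def bootstrap(model, dialogues, generate, score, fine_tune,
              rounds=3, samples=8, threshold=0.7):
    """Self-improve intermediate steps without ground-truth step data
    (sketch; all callables are hypothetical stand-ins, not Hexa's API)."""
    for _ in range(rounds):
        new_data = []
        for dialogue in dialogues:
            # A guided prompt nudges the model toward appropriate but
            # diverse intermediate steps (e.g. a web-search query).
            prompt = f"{dialogue}\nFirst, write the search query you would use:"
            for _ in range(samples):
                step = generate(model, prompt, temperature=1.0)   # diversity
                reply = generate(model, f"{dialogue}\n{step}", temperature=0.7)
                if score(dialogue, step, reply) >= threshold:     # keep good ones
                    new_data.append((dialogue, step, reply))
        model = fine_tune(model, new_data)   # train on self-generated steps
    return model
```

Filtering self-generated steps by the quality of the final response is what lets the scheme learn intermediate steps for which no supervision exists.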