DeepGAR: Deep Graph Learning for Analogical Reasoning
Analogical reasoning is the process of discovering and mapping
correspondences from a target subject to a base subject. As the most well-known
computational method of analogical reasoning, Structure-Mapping Theory (SMT)
abstracts both target and base subjects into relational graphs and forms the
cognitive process of analogical reasoning by finding a corresponding subgraph
(i.e., correspondence) in the target graph that is aligned with the base graph.
However, incorporating deep learning into SMT is still under-explored due to
several obstacles: 1) the combinatorial complexity of searching for the
correspondence in the target graph; and 2) the restriction of correspondence
mining by various cognitive theory-driven constraints. To address both
challenges, we propose DeepGAR, a novel deep graph learning framework for
analogical reasoning that identifies the correspondence between source and
target domains while satisfying cognitive
theory-driven constraints. Specifically, we design a geometric constraint
embedding space to induce subgraph relation from node embeddings for efficient
subgraph search. Furthermore, we develop novel learning and optimization
strategies that identify, end to end, correspondences strictly consistent
with the constraints of the cognitive theory. Extensive
experiments are conducted on synthetic and real-world datasets to demonstrate
the effectiveness of the proposed DeepGAR over existing methods.
Comment: 22nd IEEE International Conference on Data Mining (ICDM 2022)
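The abstract describes the geometric constraint embedding space only at a high level. One common way to induce a subgraph relation directly from embeddings is an order-embedding geometry, where a query graph is predicted to be contained in a target graph exactly when its embedding is coordinate-wise dominated. The sketch below illustrates that idea only; the function names, the sum pooling, and the margin are assumptions, not the authors' implementation:

```python
import numpy as np

def embed_graph(node_embeddings: np.ndarray) -> np.ndarray:
    # Pool node embeddings into a single graph embedding (sum pooling assumed;
    # with non-negative embeddings this pooling is monotone under containment).
    return node_embeddings.sum(axis=0)

def is_subgraph(z_query: np.ndarray, z_target: np.ndarray, margin: float = 0.0) -> bool:
    # Order-embedding test: the query is predicted to be a subgraph of the
    # target when every coordinate of its embedding is dominated.
    return bool(np.all(z_query <= z_target + margin))

# Toy example: the query's nodes are a subset of the target's nodes, so its
# pooled embedding is dominated and the containment test succeeds one way only.
target_nodes = np.array([[1.0, 0.5], [0.2, 0.8], [0.4, 0.1]])
query_nodes = target_nodes[:2]
z_t = embed_graph(target_nodes)
z_q = embed_graph(query_nodes)
assert is_subgraph(z_q, z_t)        # query contained in target
assert not is_subgraph(z_t, z_q)    # but not the other way around
```

A geometry like this replaces per-candidate subgraph-isomorphism tests with a constant-time embedding comparison, which is one way to tame the combinatorial search the abstract mentions.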
BIT: Bi-Level Temporal Modeling for Efficient Supervised Action Segmentation
We address the task of supervised action segmentation which aims to partition
a video into non-overlapping segments, each representing a different action.
Recent works apply transformers to perform temporal modeling at the
frame level, which incurs a high computational cost and struggles to capture
action dependencies over long temporal horizons. To address these issues, we
propose an efficient BI-level Temporal modeling (BIT) framework that learns
explicit action tokens to represent action segments and performs temporal
modeling at the frame and action levels in parallel, while maintaining a low
computational cost. Our model contains (i) a frame branch that uses convolution
to learn frame-level relationships, (ii) an action branch that uses a
transformer to learn action-level dependencies with a small set of action
tokens, and (iii)
cross-attentions to allow communication between the two branches. We apply and
extend a set-prediction objective so that each action token can represent one
or multiple action segments, avoiding the need to learn a large number of
tokens over long videos with many segments. Thanks to the design of our action branch,
we can also seamlessly leverage textual transcripts of videos (when available)
to help action segmentation by using them to initialize the action tokens. We
evaluate our model on four video datasets (two egocentric and two third-person)
for action segmentation with and without transcripts, showing that BIT
significantly improves the state-of-the-art accuracy with much lower
computational cost (30 times faster) compared to existing transformer-based
methods.
Comment: 9 pages, 6 figures
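The communication between the two branches runs through cross-attention, with a small set of action tokens attending over many frame features. A minimal single-head NumPy sketch of that pattern (dimensions, names, and the single-head choice are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

def cross_attention(queries, keys, values):
    # Scaled dot-product attention: each query (an action token) gathers a
    # softmax-weighted summary of the frame-level features.
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)               # (A, T)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over frames
    return weights @ values                              # (A, d)

rng = np.random.default_rng(0)
T, A, d = 1000, 8, 16                      # T frames, A action tokens (A << T)
frame_feats = rng.normal(size=(T, d))      # frame-branch features (e.g. from convs)
action_tokens = rng.normal(size=(A, d))    # learned action tokens

updated_tokens = cross_attention(action_tokens, frame_feats, frame_feats)
assert updated_tokens.shape == (A, d)
```

Because the attention map here is A x T rather than T x T, the action branch's cost grows only linearly with video length, which is consistent with the efficiency claim in the abstract.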
Understanding the Dynamic Visual World: From Motion to Semantics
We live in a dynamic world, which is continuously in motion. Perceiving and interpreting the dynamic surroundings is an essential capability for an intelligent agent. Human beings have the remarkable capability to learn from limited data, with partial or little annotation, in sharp contrast to computational perception models that rely on large-scale, manually labeled data. Reliance on strongly supervised models with manually labeled data inherently prohibits us from modeling the dynamic visual world, as manual annotations are tedious, expensive, and not scalable, especially if we would like to solve multiple scene understanding tasks at the same time. Even worse, in some cases manual annotations are completely infeasible, such as the motion vector of each pixel (i.e., optical flow), since humans cannot reliably produce these types of labeling. In fact, as we move around in a dynamic world, motion information, arising from the moving camera, independently moving objects, and scene geometry, carries abundant information revealing the structure and complexity of our dynamic visual world. As the famous psychologist James J. Gibson suggested, "we must perceive in order to move, but we also must move in order to perceive". In this thesis, we investigate how to use the motion information contained in unlabeled or partially labeled videos to better understand and synthesize the dynamic visual world.
This thesis consists of three parts. In the first part, we focus on the "move to perceive" aspect. When moving through the world, it is natural for an intelligent agent to associate image patterns with the magnitude of their displacement over time: as the agent moves, faraway mountains don't move much; nearby trees move a lot. This natural relationship between the appearance of objects and their apparent motion is a rich source of information about the relationship between the distance of objects and their appearance in images. We present a pretext task of estimating the relative depth of elements of a scene (i.e., ordering the pixels in an image according to distance from the viewer) recovered from the motion field of unlabeled videos. The goal of this pretext task is to induce useful feature representations in deep Convolutional Neural Networks (CNNs). These induced representations, learned from 1.1 million video frames crawled from YouTube within one hour and without any manual labeling, provide valuable starting features for training neural networks on downstream tasks. Since almost all of our training data comes for free, this approach is promising to match, or even surpass, what ImageNet pre-training, which requires a huge amount of manual labeling, gives us today on tasks such as semantic image segmentation.
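The pretext task above rests on the cue that, under camera translation, apparent motion magnitude falls with depth: large flow suggests a nearby pixel, small flow a faraway one. A toy sketch of turning a flow field into pairwise relative-depth labels (the pair-sampling scheme and the tie threshold are assumptions, not the thesis's exact training pipeline):

```python
import numpy as np

def flow_magnitude(flow):
    # flow: (H, W, 2) motion field; per-pixel displacement magnitude.
    return np.linalg.norm(flow, axis=-1)

def relative_depth_label(flow, p, q, eps=1e-6):
    # Larger apparent motion -> nearer pixel (for a translating camera).
    # Returns +1 if pixel p is nearer than q, -1 if farther, 0 if ambiguous.
    mag = flow_magnitude(flow)
    diff = mag[p] - mag[q]
    if abs(diff) < eps:
        return 0
    return 1 if diff > 0 else -1

# Toy flow: the top region moves a lot ("nearby trees"), the bottom region
# barely moves ("faraway mountains").
flow = np.zeros((4, 4, 2))
flow[:2] = [3.0, 0.0]
flow[2:] = [0.2, 0.0]
assert relative_depth_label(flow, (0, 0), (3, 3)) == 1    # nearer
assert relative_depth_label(flow, (3, 3), (0, 0)) == -1   # farther
```

Ordinal pair labels of this kind are exactly the kind of free supervision the paragraph describes: no human ever annotates depth, yet a CNN trained to predict the ordering must learn depth-relevant appearance features.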
In the second part, we study the "perceive to move" aspect. As we humans look around, we do not solve a single vision task at a time. Instead, we perceive our surroundings in a holistic manner, doing visual understanding using all visual cues jointly. By simultaneously solving multiple tasks together, one task can influence another. Specifically, we propose a neural network architecture, called SENSE, which shares common feature representations among four closely related tasks: optical flow estimation, disparity estimation from stereo, occlusion detection, and semantic segmentation. The key insight is that sharing features makes the network more compact and induces better feature representations. For real-world data, however, not all annotations of the four tasks mentioned above are available at the same time. To this end, we design loss functions that exploit interactions among the different tasks and do not need manual annotations, to better handle partially labeled data in a semi-supervised manner, leading to superior understanding performance of the dynamic visual world.
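One plausible reading of how partial labels are handled is that each task contributes its supervised loss only when its labels exist, while unsupervised (self-consistency) terms apply to every sample. A schematic sketch of that combination (task names come from the abstract; the masking scheme and weight are assumptions, not the exact SENSE losses):

```python
def multitask_loss(supervised_losses, labels_available, unsup_losses, w_unsup=0.1):
    # supervised_losses / labels_available: dicts keyed by task name.
    # A task's supervised loss counts only when its labels exist for this
    # sample; unsupervised consistency terms always apply.
    total = 0.0
    for task, loss in supervised_losses.items():
        if labels_available.get(task, False):
            total += loss
    total += w_unsup * sum(unsup_losses.values())
    return total

# A sample with flow and segmentation labels but no disparity/occlusion labels.
losses = {"flow": 0.8, "disparity": 0.5, "occlusion": 0.3, "segmentation": 0.9}
have = {"flow": True, "disparity": False, "occlusion": False, "segmentation": True}
unsup = {"photometric": 1.2, "smoothness": 0.4}
total = multitask_loss(losses, have, unsup)
assert abs(total - (0.8 + 0.9 + 0.1 * 1.6)) < 1e-9
```

The point of the sketch is the structure, not the numbers: a shared encoder can be trained on whatever mixture of labels each dataset provides, because unlabeled tasks still receive gradient through the unsupervised terms.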
Understanding the motion contained in a video enables us to perceive the dynamic visual world in a novel manner. In the third part, we present an approach, called SuperSloMo, which synthesizes slow-motion videos from a standard frame-rate video. Converting a plain video into a slow-motion version enables us to see memorable moments in our life that are hard to see clearly otherwise with the naked eye: a difficult skateboard trick, a dog catching a ball, etc. Such a technique also has wide applications, such as generating smooth view transitions on head-mounted virtual reality (VR) devices, compressing videos, and synthesizing videos with motion blur.
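A core step in arbitrary-time frame interpolation of this kind is estimating the flows from the unseen intermediate frame at time t back to the two input frames; SuperSloMo approximates them as a quadratic blend of the bidirectional flows between the inputs. A minimal sketch of just that blend (the backward warping and the refinement network that follow it are omitted):

```python
import numpy as np

def intermediate_flows(f01, f10, t):
    # Approximate flows from the intermediate frame at time t in (0, 1)
    # to frame 0 and to frame 1, from the bidirectional flows f01, f10.
    ft0 = -(1.0 - t) * t * f01 + t * t * f10
    ft1 = (1.0 - t) ** 2 * f01 - t * (1.0 - t) * f10
    return ft0, ft1

# Toy check with a uniform translation: frame 1 is frame 0 shifted by (4, 0),
# so f01 = (4, 0) everywhere and f10 = (-4, 0).
f01 = np.tile([4.0, 0.0], (8, 8, 1))
f10 = -f01
ft0, ft1 = intermediate_flows(f01, f10, t=0.5)
# At t = 0.5 the midpoint frame sits 2 pixels from each input frame.
assert np.allclose(ft0[..., 0], -2.0)
assert np.allclose(ft1[..., 0], 2.0)
```

Warping both input frames with these flows and blending them (weighted by visibility) yields the intermediate frame; doing this for a dense set of t values produces the slow-motion output.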
Three Highly Parallel Computer Architectures and Their Suitability for Three Representative Artificial Intelligence Problems
Virtually all current Artificial Intelligence (AI) applications are designed to run on sequential (von Neumann) computer architectures. As a result, current systems do not scale up. As knowledge is added to these systems, a point is reached where their performance quickly degrades. The performance of a von Neumann machine is limited by the bandwidth between memory and processor (the von Neumann bottleneck). The bottleneck is avoided by distributing the processing power across the memory of the computer. In this scheme the memory becomes the processor (a "smart memory").
This paper highlights the relationship between three representative AI application domains, namely knowledge representation, rule-based expert systems, and vision, and their parallel hardware realizations. Three machines, covering a wide range of fundamental properties of parallel processors, namely module granularity, concurrency control, and communication geometry, are reviewed: the Connection Machine (a fine-grained SIMD hypercube), DADO (a medium-grained MIMD/SIMD/MSIMD tree machine), and the Butterfly (a coarse-grained MIMD Butterfly-switch machine).