Graph Neural Machine: A New Model for Learning with Tabular Data
In recent years, there has been growing interest in mapping data from
different domains to graph structures. Among others, neural network models
such as the multi-layer perceptron (MLP) can themselves be viewed as graphs;
in fact, an MLP can be represented as a directed acyclic graph. Graph neural
networks (GNNs) have
recently become the standard tool for performing machine learning tasks on
graphs. In this work, we show that an MLP is equivalent to an asynchronous
message passing GNN model which operates on the MLP's graph representation. We
then propose a new machine learning model for tabular data, the Graph
Neural Machine (GNM), which replaces the MLP's directed acyclic graph with a
nearly complete graph and employs a synchronous message passing scheme.
We show that a single GNM model can simulate multiple MLP models. We evaluate
the proposed model on several classification and regression datasets. In most
cases, the GNM model outperforms the MLP architecture.
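
As a minimal sketch of the mechanism the abstract describes (illustrative only, not the authors' code; the weight matrices, node states, and tanh activation are all assumptions), one synchronous message-passing round updates every node state at once, and an MLP layer is recovered as the special case where the weights form one block of a directed acyclic graph:

import numpy as np

def sync_message_pass(W, h, sigma=np.tanh):
    # One synchronous round: every node aggregates weighted messages
    # from all its neighbors simultaneously, then applies a nonlinearity.
    return sigma(W @ h)

rng = np.random.default_rng(0)
n = 6

# MLP-as-DAG special case: nonzero weights only from layer-l nodes (0..2)
# to layer-(l+1) nodes (3..5), so one round computes one MLP layer.
W_dag = np.zeros((n, n))
W_dag[3:, :3] = rng.normal(size=(3, 3))
h0 = np.concatenate([rng.normal(size=3), np.zeros(3)])
print(sync_message_pass(W_dag, h0))   # layer outputs appear on nodes 3..5

# GNM-style variant: a dense (nearly complete) graph, iterated for a few
# synchronous rounds so information propagates among all nodes.
W_dense = rng.normal(size=(n, n)) * 0.3
h = h0
for _ in range(3):
    h = sync_message_pass(W_dense, h)
print(h)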
Graph Laplacians and Stabilization of Vehicle Formations
Control of vehicle formations has emerged as a topic of significant interest to the controls community. In this paper, we merge tools from graph theory and control theory to derive stability criteria for formation stabilization. The interconnection between vehicles (i.e., which vehicles are sensed by other vehicles) is modeled as a graph, and the eigenvalues of the Laplacian matrix of the graph are used in stating a Nyquist-like stability criterion for vehicle formations. The location of the Laplacian eigenvalues can be correlated to the graph structure, and therefore used to identify desirable and undesirable formation interconnection topologies.
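
To make the ingredients concrete (a minimal sketch under an assumed topology, not the paper's code): the graph Laplacian is L = D - A, and its eigenvalues, in terms of which the Nyquist-like test is stated, can be computed directly. The directed ring below, where each vehicle senses its predecessor, is an illustrative assumption.

import numpy as np

def laplacian_eigenvalues(A):
    # L = D - A, with D the diagonal (out-)degree matrix of adjacency A.
    D = np.diag(A.sum(axis=1))
    return np.linalg.eigvals(D - A)

# Four vehicles in a directed ring: vehicle i senses vehicle i-1.
A = np.array([[0, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 0]], dtype=float)

eigs = np.sort_complex(laplacian_eigenvalues(A))
print(eigs)   # [0, 1-1j, 1+1j, 2]: one zero eigenvalue (consensus mode),
              # the rest in the right half-plane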
SANet: Structure-Aware Network for Visual Tracking
The convolutional neural network (CNN) has drawn increasing interest in
visual tracking owing to its power in feature extraction. Most existing
CNN-based trackers treat tracking as a classification problem. However, these
trackers are sensitive to similar distractors because their CNN models mainly
focus on inter-class classification. To address this problem, we use the
self-structure information of the object to distinguish it from distractors.
Specifically, we utilize a recurrent neural network (RNN) to model the
object's structure, and incorporate it into the CNN to improve its robustness
to similar distractors. Considering that convolutional layers at different
levels characterize the object from different perspectives, we use multiple
RNNs to model the object structure at the respective levels. Extensive
experiments on three benchmarks, OTB100, TC-128 and VOT2015, show that the
proposed algorithm outperforms other methods. Code is released at
http://www.dabi.temple.edu/~hbling/code/SANet/SANet.html
Comment: In CVPR Deep Vision Workshop, 2017
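
A minimal sketch of the multi-level structure modeling (assumptions throughout: the vanilla-RNN form, the raster scan over spatial positions, and all shapes and names are illustrative, not the released SANet code):

import numpy as np

rng = np.random.default_rng(0)

def rnn_over_positions(feat, Wx, Wh, b):
    # Scan a (positions, channels) feature map with a vanilla RNN and
    # return the final hidden state as a structure descriptor.
    h = np.zeros(Wh.shape[0])
    for x in feat:
        h = np.tanh(Wx @ x + Wh @ h + b)
    return h

# Two feature "levels" (e.g. shallow and deep conv maps), flattened to
# (positions, channels); one RNN per level, as the abstract describes.
levels = [rng.normal(size=(49, 64)), rng.normal(size=(16, 128))]
hdim = 32
descriptors = []
for feat in levels:
    c = feat.shape[1]
    Wx = rng.normal(size=(hdim, c)) * 0.1
    Wh = rng.normal(size=(hdim, hdim)) * 0.1
    descriptors.append(rnn_over_positions(feat, Wx, Wh, np.zeros(hdim)))

# Fused multi-level structure encoding, to be combined with CNN features.
structure_code = np.concatenate(descriptors)
print(structure_code.shape)   # (64,)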