12,520 research outputs found

    CAR-Net: Clairvoyant Attentive Recurrent Network

    We present an interpretable framework for path prediction that leverages dependencies between agents' behaviors and their spatial navigation environment. We exploit two sources of information: the past motion trajectory of the agent of interest and a wide top-view image of the navigation scene. We propose a Clairvoyant Attentive Recurrent Network (CAR-Net) that learns where to look in a large image of the scene when solving the path prediction task. Our method can attend to any area, or combination of areas, within the raw image (e.g., road intersections) when predicting the trajectory of the agent. This allows us to visualize fine-grained semantic elements of navigation scenes that influence the prediction of trajectories. To study the impact of space on agents' trajectories, we build a new dataset made of top-view images of hundreds of scenes (Formula One racing tracks) where agents' behaviors are heavily influenced by known areas in the images (e.g., upcoming turns). CAR-Net successfully attends to these salient regions. Additionally, CAR-Net reaches state-of-the-art accuracy on the standard trajectory forecasting benchmark, the Stanford Drone Dataset (SDD). Finally, we show CAR-Net's ability to generalize to unseen scenes. Comment: The 2nd and 3rd authors contributed equally.
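    As a rough illustration of the mechanism this abstract describes, the sketch below pairs an LSTM trajectory encoder with soft attention over CNN features of the top-view image. It is a minimal PyTorch sketch under assumed layer sizes, not the authors' implementation; the module name SceneAttentionPredictor and all dimensions are invented here.

    ```python
    # Minimal sketch of the CAR-Net idea: an LSTM trajectory encoder whose
    # hidden state drives soft attention over features of a top-view image.
    # All sizes and names are illustrative assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SceneAttentionPredictor(nn.Module):
        def __init__(self, feat_dim=64, hidden_dim=128):
            super().__init__()
            # Small CNN mapping the raw top-view image to a grid of features.
            self.cnn = nn.Sequential(
                nn.Conv2d(3, feat_dim, kernel_size=5, stride=4, padding=2),
                nn.ReLU(),
                nn.Conv2d(feat_dim, feat_dim, kernel_size=5, stride=4, padding=2),
                nn.ReLU(),
            )
            # LSTM over the past (x, y) positions of the agent of interest.
            self.encoder = nn.LSTM(input_size=2, hidden_size=hidden_dim,
                                   batch_first=True)
            # Scores one attention weight per cell from [cell feature; hidden].
            self.attn = nn.Linear(feat_dim + hidden_dim, 1)
            # Predicts the next (x, y) displacement from the attended context.
            self.head = nn.Linear(feat_dim + hidden_dim, 2)

        def forward(self, past_xy, image):
            # past_xy: (B, T, 2) past trajectory; image: (B, 3, H, W) scene.
            feats = self.cnn(image)                   # (B, C, h, w)
            B, C, h, w = feats.shape
            feats = feats.flatten(2).transpose(1, 2)  # (B, h*w, C)
            _, (hidden, _) = self.encoder(past_xy)
            hidden = hidden[-1]                       # (B, hidden_dim)
            # Soft attention: weights expose *where* the model looks.
            scores = self.attn(torch.cat(
                [feats, hidden.unsqueeze(1).expand(-1, h * w, -1)], dim=-1))
            alpha = F.softmax(scores, dim=1)          # (B, h*w, 1)
            context = (alpha * feats).sum(dim=1)      # (B, C)
            delta_xy = self.head(torch.cat([context, hidden], dim=-1))
            return delta_xy, alpha.view(B, h, w)      # inspectable attention map
    ```

    Because the attention weights form an explicit map over the scene grid, they can be rendered on top of the image to inspect which regions (e.g., upcoming turns) drove each prediction, which is the interpretability the abstract claims.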

    Delta Networks for Optimized Recurrent Network Computation

    Many neural networks exhibit stability in their activation patterns over time in response to inputs from sensors operating under real-world conditions. By capitalizing on this property of natural signals, we propose a Recurrent Neural Network (RNN) architecture called a delta network, in which each neuron transmits its value only when the change in its activation exceeds a threshold. Executing RNNs as delta networks is attractive because their states must be stored and fetched at every timestep, unlike in convolutional neural networks (CNNs). We show that a naive run-time delta network implementation offers modest improvements in the number of memory accesses and compute operations, but optimized training techniques confer higher accuracy at higher speedup. With these optimizations, we demonstrate a 9x reduction in cost with negligible loss of accuracy on the TIDIGITS audio digit recognition benchmark. Similarly, on the large Wall Street Journal speech recognition benchmark, even existing networks can be greatly accelerated as delta networks, and a 5.7x improvement with negligible loss of accuracy can be obtained through training. Finally, on an end-to-end CNN trained for steering angle prediction in a driving dataset, the RNN cost can be reduced by a substantial 100x.
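    The core rule, transmitting a value only when it has changed by more than a threshold since it was last sent, can be sketched in a few lines. The NumPy sketch below shows one way to realize it on a simple recurrent layer; the layer sizes and threshold are made-up assumptions, and a real implementation would apply the same rule inside gated units and fetch only the weight columns of firing neurons.

    ```python
    # Minimal sketch of the delta-network update rule: a value participates in
    # the matrix-vector product only when it has changed by more than a
    # threshold since its last transmission. Sizes/threshold are assumptions.
    import numpy as np

    class DeltaRecurrentLayer:
        def __init__(self, n_in, n_hidden, threshold=0.1, seed=0):
            rng = np.random.default_rng(seed)
            self.Wx = rng.normal(0, 0.1, (n_hidden, n_in))
            self.Wh = rng.normal(0, 0.1, (n_hidden, n_hidden))
            self.threshold = threshold
            # Last *transmitted* values and a running pre-activation accumulator.
            self.x_last = np.zeros(n_in)
            self.h_last = np.zeros(n_hidden)
            self.acc = np.zeros(n_hidden)
            self.h = np.zeros(n_hidden)

        def step(self, x):
            # Only values whose change exceeds the threshold fire; the rest
            # skip their weight-column fetches and multiply-accumulates.
            dx = x - self.x_last
            fire_x = np.abs(dx) > self.threshold
            dh = self.h - self.h_last
            fire_h = np.abs(dh) > self.threshold
            # Accumulate only the contributions of the changed values.
            self.acc += self.Wx[:, fire_x] @ dx[fire_x]
            self.acc += self.Wh[:, fire_h] @ dh[fire_h]
            # Remember the last transmitted values for firing units only.
            self.x_last[fire_x] = x[fire_x]
            self.h_last[fire_h] = self.h[fire_h]
            self.h = np.tanh(self.acc)
            return self.h, int(fire_x.sum() + fire_h.sum())  # activity ~ cost
    ```

    The returned activity count is a proxy for cost: the fewer values cross the threshold, the fewer weight columns must be fetched and multiplied, which is the source of the speedups the abstract reports.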

    MIMO Channel Information Feedback Using Deep Recurrent Network

    In a multiple-input multiple-output (MIMO) system, the availability of channel state information (CSI) at the transmitter is essential for performance improvement. Recent convolutional neural network (NN) based techniques show competitive ability in realizing CSI compression and feedback. By introducing a new NN architecture, we enhance the accuracy of quantized CSI feedback in MIMO communications. The proposed NN architecture invokes a long short-term memory (LSTM) module, which enables the NN to exploit the temporal and frequency correlations of wireless channels. To trade off performance against complexity, we further modify the NN architecture so that it has a significantly reduced number of trainable parameters. Finally, experiments show that the proposed NN architectures achieve better performance in terms of both CSI compression and recovery accuracy.
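    A minimal sketch of the kind of architecture the abstract outlines might look as follows: an encoder compresses the CSI into a quantized codeword for feedback, and a decoder whose LSTM runs over the subcarrier axis recovers the channel. All dimensions, the module name CsiFeedbackNet, and the straight-through uniform quantizer are assumptions for illustration, not the paper's design.

    ```python
    # Illustrative PyTorch sketch of LSTM-aided quantized CSI feedback: the
    # encoder (receiver side) compresses the stacked real/imag channel matrix
    # into a short codeword; the decoder (transmitter side) expands it and
    # uses an LSTM over subcarriers to exploit frequency correlation.
    import torch
    import torch.nn as nn

    class CsiFeedbackNet(nn.Module):
        def __init__(self, n_subcarriers=32, n_antennas=32,
                     code_dim=128, bits=4):
            super().__init__()
            in_dim = 2 * n_antennas                   # real + imaginary parts
            self.bits = bits
            self.encoder = nn.Linear(n_subcarriers * in_dim, code_dim)
            self.expand = nn.Linear(code_dim, n_subcarriers * in_dim)
            # LSTM over the subcarrier axis models frequency correlation.
            self.lstm = nn.LSTM(input_size=in_dim, hidden_size=in_dim,
                                num_layers=2, batch_first=True)
            self.n_subcarriers, self.in_dim = n_subcarriers, in_dim

        def quantize(self, z):
            # Map to (0, 1), round to 2^bits levels, and use a straight-through
            # estimator so gradients flow through the rounding during training.
            levels = 2 ** self.bits - 1
            soft = torch.sigmoid(z)
            hard = torch.round(soft * levels) / levels
            return soft + (hard - soft).detach()

        def forward(self, csi):
            # csi: (B, n_subcarriers, 2 * n_antennas), stacked real/imag CSI.
            code = self.quantize(self.encoder(csi.flatten(1)))  # fed-back codeword
            seq = self.expand(code).view(-1, self.n_subcarriers, self.in_dim)
            recovered, _ = self.lstm(seq)             # refine across subcarriers
            return recovered                          # estimate of csi
    ```

    Training would minimize a reconstruction loss such as torch.nn.functional.mse_loss(model(csi), csi), with the compression ratio set by the assumed code_dim and bits parameters.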