
    Self-Driving Cars: Exploring the Potential of Using Convolutional Neural Network to Overcome Road Variation

    The use of self-driving cars can benefit society in many ways, such as reducing traffic accidents and enabling disabled people to travel independently. The potential to reduce traffic accidents is arguably the most important: in 2017, human driver error caused over 90% of traffic accidents in the United States, leading to 40,100 deaths. If human drivers were replaced by autonomous systems, the number of traffic accidents would decrease. Although the concept of the self-driving car dates back to at least the 1920s, no widely accepted self-driving car has yet emerged. A significant challenge is building a system that can accurately perceive its surroundings and then produce the correct driving command. Recent progress in deep learning suggests that convolutional neural networks can be trained to extract features from camera images and use them to control a car. This project extends the network model published by NVIDIA in 2016. The aim is to evaluate how well a convolutional neural network performs on a simple, simulated roadway with road variation and missing road edges.
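
    As a rough illustration of the kind of end-to-end model described above, the following PyTorch sketch maps a front-camera frame to a single steering value. The layer sizes loosely follow NVIDIA's 2016 end-to-end architecture, but the exact dimensions, the 66x200 input resolution, and the name SteeringCNN are illustrative assumptions rather than the project's actual model.

```python
import torch
import torch.nn as nn

class SteeringCNN(nn.Module):
    """Illustrative CNN that maps a camera image to one steering command.

    Layer sizes loosely follow NVIDIA's 2016 end-to-end network; the exact
    dimensions here are assumptions for this sketch, not the project's model.
    """
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, kernel_size=3), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3), nn.ReLU(),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(100), nn.ReLU(),   # infers the flattened feature size
            nn.Linear(100, 50), nn.ReLU(),
            nn.Linear(50, 1),                # predicted steering value
        )

    def forward(self, images):
        return self.regressor(self.features(images))

# A batch of eight 66x200 RGB frames (the input size used in the NVIDIA paper).
model = SteeringCNN()
steering = model(torch.randn(8, 3, 66, 200))  # shape: (8, 1)
```

    Training such a model is typically a plain regression problem: minimize the mean squared error between the predicted steering value and the steering recorded while a human driver (or the simulator's autopilot) follows the road.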

    Improve Long-term Memory Learning Through Rescaling the Error Temporally

    This paper studies the choice of error metric for long-term memory learning in sequence modelling. We examine the bias towards short-term memory in commonly used errors, including the mean absolute/squared error. Our findings show that all temporally positive-weighted errors are biased towards short-term memory when learning linear functionals. To reduce this bias and improve long-term memory learning, we propose the use of a temporally rescaled error. In addition to reducing the bias towards short-term memory, this approach can also alleviate the vanishing gradient issue. We conduct numerical experiments on different long-memory tasks and sequence models to validate our claims. The numerical results confirm the importance of an appropriate temporally rescaled error for effective long-term memory learning. To the best of our knowledge, this is the first work that quantitatively analyzes the bias of different errors towards short-term memory in sequence modelling.
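
    A minimal sketch of what a temporally weighted sequence loss can look like is shown below; the specific weight schedule (growing linearly with time) and the function name temporally_weighted_mse are illustrative assumptions, not necessarily the rescaling proposed in the paper.

```python
import torch

def temporally_weighted_mse(pred, target, weights=None):
    """Mean squared error with per-timestep weights.

    pred, target: tensors of shape (batch, time, dim).
    weights: tensor of shape (time,). If None, an assumed linearly growing
    schedule is used, which upweights errors at later timesteps.
    """
    _, time, _ = pred.shape
    if weights is None:
        weights = torch.linspace(1.0, 2.0, time)        # assumed schedule
    weights = weights / weights.sum()                    # keep the loss scale comparable
    per_step = ((pred - target) ** 2).mean(dim=(0, 2))   # shape: (time,)
    return (weights * per_step).sum()

# Usage in a training step: loss = temporally_weighted_mse(model(x), y); loss.backward()
```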

    Inverse Approximation Theory for Nonlinear Recurrent Neural Networks

    We prove an inverse approximation theorem for the approximation of nonlinear sequence-to-sequence relationships using RNNs. This is a so-called Bernstein-type result in approximation theory, which deduces properties of a target function under the assumption that it can be effectively approximated by a hypothesis space. In particular, we show that nonlinear sequence relationships, viewed as functional sequences, that can be stably approximated by RNNs with hardtanh/tanh activations must have an exponentially decaying memory structure -- a notion that can be made precise. This extends the previously identified curse of memory in linear RNNs to the general nonlinear setting and quantifies the essential limitations of the RNN architecture for learning sequential relationships with long-term memory. Based on this analysis, we propose a principled reparameterization method to overcome these limitations. Our theoretical results are confirmed by numerical experiments.
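
    The notion of decaying memory can be probed numerically: perturb an input k steps in the past and measure how much the current output changes. The snippet below does this for a randomly initialized tanh RNN; it is a quick illustration of the memory notion discussed above under assumed settings, not the paper's construction or its reparameterization method.

```python
import torch
import torch.nn as nn

# Probe how strongly an input k steps in the past influences the current output
# of a generic tanh RNN, by perturbing that input and re-running the model.
torch.manual_seed(0)
rnn = nn.RNN(input_size=1, hidden_size=32, nonlinearity="tanh", batch_first=True)
readout = nn.Linear(32, 1)

T = 60
x = torch.randn(1, T, 1)

def final_output(inputs):
    hidden, _ = rnn(inputs)
    return readout(hidden[:, -1])      # scalar readout at the last timestep

base = final_output(x)
for k in [1, 5, 10, 20, 40]:
    x_pert = x.clone()
    x_pert[0, T - 1 - k, 0] += 1.0     # bump the input k steps before the end
    delta = (final_output(x_pert) - base).abs().item()
    print(f"influence of input {k} steps back: {delta:.3e}")
# With PyTorch's default (stable) initialization the printed influence usually
# shrinks rapidly as k grows, i.e. the memory of past inputs decays quickly.
```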