45 research outputs found

    Interpreting Recurrent and Attention-Based Neural Models: a Case Study on Natural Language Inference

    Deep learning models have achieved remarkable success in natural language inference (NLI) tasks. While these models are widely explored, they are hard to interpret, and it is often unclear how and why they actually work. In this paper, we take a step toward explaining such deep-learning-based models through a case study on a popular neural model for NLI. In particular, we propose to interpret the intermediate layers of NLI models by visualizing the saliency of attention and LSTM gating signals. We present several examples for which our methods are able to reveal interesting insights and identify the critical information contributing to the model decisions.
    Comment: 11 pages, 11 figures, accepted as a short paper at EMNLP 2018
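
    The saliency idea the abstract describes can be sketched with gradients of the model's output with respect to its attention weights. The following is a minimal illustration only: ToyAttentionClassifier, its dimensions, and the gradient-times-input heuristic are assumptions for the sketch, not the authors' model or exact method.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    class ToyAttentionClassifier(nn.Module):
        # Minimal LSTM + attention classifier, a stand-in for an NLI model.
        def __init__(self, vocab=100, dim=32, classes=3):
            super().__init__()
            self.emb = nn.Embedding(vocab, dim)
            self.lstm = nn.LSTM(dim, dim, batch_first=True)
            self.score = nn.Linear(dim, 1)
            self.out = nn.Linear(dim, classes)

        def forward(self, tokens):
            h, _ = self.lstm(self.emb(tokens))                       # (B, T, D)
            attn = torch.softmax(self.score(h).squeeze(-1), dim=-1)  # (B, T)
            attn.retain_grad()             # keep gradients on the attention weights
            ctx = (attn.unsqueeze(-1) * h).sum(dim=1)                # weighted sum
            return self.out(ctx), attn

    model = ToyAttentionClassifier()
    tokens = torch.randint(0, 100, (1, 6))    # one toy sentence of 6 token ids
    logits, attn = model(tokens)
    logits[0, logits.argmax()].backward()     # gradient of the predicted class score
    saliency = (attn * attn.grad).abs().squeeze(0)  # gradient-times-input per token
    print(saliency)

    Tokens with large saliency values are, under this heuristic, the ones whose attention weights most influence the model's decision; the paper applies the same visualization idea to LSTM gating signals as well.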

    Smooth Transition of Vehicles' Maximum Speed for Lane Detection based on Computer Vision

    This paper presents a prototype electric scooter that detects the driving lane via computer vision and automatically applies the corresponding vehicular configuration. The electric scooter can drive on pedestrian, bicycle, or car lanes, and the government enforces a different maximum speed for electric scooters on each lane. Our prototype scooter applies those regulations securely with the help of a computer vision component. However, the safety of such a system remains a concern, and research is ongoing into the security and safety aspects of such vehicular systems. In particular, changing the maximum speed while the driver is riding the vehicle at the fastest permitted speed could cause a safety hazard. To prevent that, we propose logarithmic speed reduction and acceleration. The results show that such an algorithm smooths the transition between the vehicle's maximum speeds.
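
    A logarithmic speed transition of the kind the abstract proposes could look like the sketch below. The curve shape, the 5-second duration, and the km/h limits are assumptions chosen for illustration; the paper's exact formula is not reproduced here.

    import math

    def log_transition(v_start, v_target, t, duration):
        # Move from v_start to v_target over `duration` seconds along a
        # logarithmic curve: fast change at first, tapering near the target.
        if t >= duration:
            return v_target
        p = math.log1p(9.0 * t / duration) / math.log(10.0)  # log10(1 + 9x): 0 -> 0, 1 -> 1
        return v_start + (v_target - v_start) * p

    # Example: entering a slower lane, e.g. 25 km/h (bicycle) down to 6 km/h (pedestrian)
    for t in range(6):
        print(t, round(log_transition(25.0, 6.0, t, 5.0), 2))

    The same function handles acceleration (v_target above v_start); because the warped progress is continuous and monotone, the speed limit never jumps abruptly while the rider is at the current maximum.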