    On making sense of neural networks in road analysis

    No full text
    Neural networks have been treated as “black boxes” by much of the machine learning community. The difficulty in making sense of neural networks lies in the complex topology of the hidden layers. Although there have been works in the literature aimed at demystifying how neural networks operate, making sense of the hidden layers remains a challenge. In this work, we propose a way to derive physical meaning from the hidden layer by mapping our neural network to the topology of a Bayesian network. Using this mapping, we enhance the probabilities of the Bayesian network, resulting in a hybrid model that outperforms both the Bayesian network and the neural network in the task of traffic accident prediction. Our analysis suggests that a neural network can estimate the node probabilities of a Bayesian network if mapped accordingly.
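
    The sketch below illustrates the general idea described in the abstract: reading the activations of hidden units as estimates of Bayesian-network node probabilities. It is a minimal, assumed example only; the feature names, node names, network size, and one-hidden-unit-per-node mapping are illustrative choices, not the paper's actual architecture or dataset.

        import numpy as np

        # Hypothetical sketch: a tiny feed-forward network whose hidden units are
        # mapped one-to-one onto the nodes of a small Bayesian network, so that each
        # sigmoid activation can be read as an estimate of that node's probability.
        # All names and values here are illustrative assumptions.

        rng = np.random.default_rng(0)

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        # Three assumed input features for a road segment, each encoded in [0, 1]:
        # [wet_surface, poor_lighting, high_traffic_density]
        road_features = np.array([1.0, 0.0, 0.8])

        # Assumption: hidden unit i corresponds to Bayesian-network node i.
        bn_nodes = ["SlipperyRoad", "LowVisibility", "Congestion"]

        # In practice these weights would be learned; random values stand in here.
        W_hidden = rng.normal(size=(len(bn_nodes), road_features.size))
        b_hidden = rng.normal(size=len(bn_nodes))

        # Hidden activations interpreted as estimated P(node = true | features).
        node_probs = sigmoid(W_hidden @ road_features + b_hidden)

        # The output unit combines the node estimates into an accident-risk score,
        # playing the role of the Bayesian network's query node.
        w_out = rng.normal(size=len(bn_nodes))
        b_out = rng.normal()
        p_accident = sigmoid(w_out @ node_probs + b_out)

        for name, p in zip(bn_nodes, node_probs):
            print(f"P({name}) ≈ {p:.2f}")
        print(f"P(accident) ≈ {p_accident:.2f}")

    In this reading, the hybrid model would keep the Bayesian network's structure while letting the trained network supply (or refine) the node probabilities, which is how the abstract frames the source of its improvement.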
