    Performance evaluation on mitral valve motion feature tracking using Kanade-Lucas-Tomasi (KLT) algorithm based eigenvalue measurement

    This paper explains the point-tracking concepts applied to locating the mitral valve in video frames. Object tracking has been used in many motion-based recognition and monitoring applications. The paper discusses the implementation of the Kanade-Lucas-Tomasi (KLT) algorithm for automatic detection of the mitral valve in video frames. An experiment is carried out on scans of a patient suffering from mitral valve disease. The performance of the method is validated by comparing the tracked point values across frames. It is found that the point-tracker system can track the mitral valve for up to 0.3 s.
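
    A KLT pipeline of the kind described above is commonly built from Shi-Tomasi corner selection (the minimum-eigenvalue criterion alluded to in the title) followed by pyramidal Lucas-Kanade optical flow. The snippet below is a minimal sketch using OpenCV; the clip name "echo_clip.avi" and the parameter values are illustrative assumptions, not the authors' setup.

        # Minimal KLT tracking sketch (OpenCV). Clip name and parameters are
        # illustrative assumptions, not the paper's exact configuration.
        import cv2

        cap = cv2.VideoCapture("echo_clip.avi")   # hypothetical echocardiogram clip
        ok, prev = cap.read()
        prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

        # Shi-Tomasi corners: points whose minimum eigenvalue passes a quality threshold
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50,
                                      qualityLevel=0.01, minDistance=7)

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # Pyramidal Lucas-Kanade flow propagates each point into the new frame
            nxt, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
            pts = nxt[status.flatten() == 1].reshape(-1, 1, 2)   # keep successfully tracked points
            prev_gray = gray

    Points whose status flag is zero are dropped each iteration; counting the surviving points per frame gives one per-frame track measure of the kind the abstract compares.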

    Grad-CAM++: Improved Visual Explanations for Deep Convolutional Networks

    Over the last decade, Convolutional Neural Network (CNN) models have been highly successful in solving complex vision problems. However, these deep models are perceived as "black box" methods given the lack of understanding of their internal functioning. There has been significant recent interest in developing explainable deep learning models, and this paper is an effort in this direction. Building on a recently proposed method called Grad-CAM, we propose a generalized method called Grad-CAM++ that can provide better visual explanations of CNN model predictions than the state of the art, in terms of better object localization as well as explaining occurrences of multiple object instances in a single image. We provide a mathematical derivation for the proposed method, which uses a weighted combination of the positive partial derivatives of the last convolutional layer feature maps with respect to a specific class score as weights to generate a visual explanation for the corresponding class label. Our extensive experiments and evaluations, both subjective and objective, on standard datasets show that Grad-CAM++ provides promising human-interpretable visual explanations for a given CNN architecture across multiple tasks, including classification, image caption generation and 3D action recognition, as well as in new settings such as knowledge distillation. Comment: 17 pages, 15 figures, 11 tables. Accepted in the proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV 2018). An extended version is under review at IEEE Transactions on Pattern Analysis and Machine Intelligence.
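
    As a rough illustration of the weighting scheme the abstract describes, the sketch below assembles a Grad-CAM++-style saliency map from the last convolutional layer's feature maps and the first-, second- and third-order gradients of the class score with respect to them, assumed precomputed by an autograd framework. The pixel-wise coefficients follow the closed-form expression from the Grad-CAM++ derivation as I recall it; the function name, array shapes and epsilon guard are my own assumptions, so treat this as a sketch rather than the authors' implementation.

        import numpy as np

        def grad_cam_pp(feature_maps, g1, g2, g3, eps=1e-8):
            """Sketch of Grad-CAM++-style weighting.

            feature_maps, g1, g2, g3: arrays of shape (K, H, W) holding the last
            conv layer's activations and the 1st/2nd/3rd-order gradients of the
            class score with respect to them (assumed precomputed).
            """
            # Pixel-wise coefficients alpha for each feature map location
            denom = 2.0 * g2 + feature_maps.sum(axis=(1, 2), keepdims=True) * g3
            alpha = g2 / (denom + eps)
            # Class-specific weights: weighted combination of the *positive*
            # partial derivatives, as described in the abstract
            weights = (alpha * np.maximum(g1, 0.0)).sum(axis=(1, 2))
            # Saliency map: rectified weighted sum of the feature maps
            return np.maximum((weights[:, None, None] * feature_maps).sum(axis=0), 0.0)

    Upsampling the resulting map to the input resolution and overlaying it on the image yields the kind of class-discriminative visual explanation the paper evaluates.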

    Model-based Cognitive Neuroscience: Multifield Mechanistic Integration in Practice

    Autonomist accounts of cognitive science suggest that cognitive model building and theory construction (can or should) proceed independently of findings in neuroscience. Common functionalist justifications of autonomy rely on there being relatively few constraints between neural structure and cognitive function (e.g., Weiskopf, 2011). In contrast, an integrative mechanistic perspective stresses the mutual constraining of structure and function (e.g., Piccinini & Craver, 2011; Povich, 2015). In this paper, I show how model-based cognitive neuroscience (MBCN) epitomizes the integrative mechanistic perspective and concentrates the most revolutionary elements of the cognitive neuroscience revolution (Boone & Piccinini, 2016). I also show how the prominent subset account of functional realization supports the integrative mechanistic perspective I take on MBCN and use it to clarify the intralevel and interlevel components of integration.