59,030 research outputs found
Teaching Categories to Human Learners with Visual Explanations
We study the problem of computer-assisted teaching with explanations. Conventional approaches for machine teaching typically provide feedback only at the instance level, e.g., the category or label of the instance. However, it is intuitive that clear explanations from a knowledgeable teacher can significantly improve a student's ability to learn a new concept. To address these limitations, we propose a teaching framework that provides interpretable explanations as feedback and models how the learner incorporates this additional information. In the case of images, we show that we can automatically generate explanations that highlight the parts of the image that are responsible for the class label. Experiments on human learners illustrate that, on average, participants achieve better test-set performance on challenging categorization tasks when taught with our interpretable approach compared to existing methods.
Interpretable Machine Teaching via Feature Feedback
A student's ability to learn a new concept can be greatly improved by providing them with clear and easy-to-understand explanations from a knowledgeable teacher. However, many existing approaches for machine teaching give only a limited amount of feedback to the student. For example, in the case of learning visual categories, this feedback could be the class label of the object present in the image. Instead, we propose a teaching framework that provides human learners with both instance-level labels and explanations in the form of feature-level feedback. For image categorization, our feature-level feedback consists of a highlighted part or region in an image that explains the class label. We perform experiments on real human participants and show that learners taught with feature-level feedback perform better at test time than those taught with existing methods.
What do faculties specializing in brain and neural sciences think about, and how do they approach, brain-friendly teaching-learning in Iran?
Objective: To investigate the perspectives and experiences of faculty members specializing in brain and neural sciences regarding brain-friendly teaching-learning in Iran. Methods: 17 faculty members from 5 universities were selected by purposive sampling (2018). In-depth semi-structured interviews with directed content analysis were used. Results: 31 sub-subcategories, 10 subcategories, and 4 categories were formed according to the "General teaching model". "Mentorship" was a newly added category. Conclusions: A neuro-educational approach that considers the learner's brain uniqueness, executive-function facilitation, and the valence system is important to learning. Such learning can be facilitated through cognitive-load considerations, repetition, deep questioning, visualization, feedback, and reflection. Contextualized, problem-oriented, social, multi-sensory, experiential, and spaced learning, as well as brain-friendly evaluation, must be considered. Mentorship is important for coaching and emotional facilitation.