74,548 research outputs found

    "Do screen captures in manuals make a difference?": a comparison between textual and visual manuals

    Examines the use of screen captures in manuals. Three types of manuals were compared: one textual and two visual. The two visual manuals differed in the type of screen capture used: one showed only the relevant part of the screen, whereas the other used captures of the full screen. All manuals contained exactly the same textual information. We examined effects on task time during immediate use (use as a job aid) and on learning (use as a teacher). For job-aid purposes, there was no difference between the manuals. For learning, the visual manual with full-screen captures and the textual manual were both better than the visual manual with partial screen captures. We found no effect on user motivation. The tentative conclusion of this study is that screen captures do not seem to be vital for learning or immediate use. If screen captures are included, however, full-screen captures are better than partial ones.

    Guidelines for Effective Online Instruction Using Multimedia Screencasts


    LIMEtree: Interactively Customisable Explanations Based on Local Surrogate Multi-output Regression Trees

    Systems based on artificial intelligence and machine learning models should be transparent, in the sense of being capable of explaining their decisions to gain humans' approval and trust. While there are a number of explainability techniques that can be used to this end, many of them are only capable of outputting a single one-size-fits-all explanation that simply cannot address all of the explainees' diverse needs. In this work we introduce a model-agnostic and post-hoc local explainability technique for black-box predictions called LIMEtree, which employs surrogate multi-output regression trees. We validate our algorithm on a deep neural network trained for object detection in images and compare it against Local Interpretable Model-agnostic Explanations (LIME). Our method comes with local fidelity guarantees and can produce a range of diverse explanation types, including the contrastive and counterfactual explanations praised in the literature. Some of these explanations can be interactively personalised to create bespoke, meaningful and actionable insights into the model's behaviour. While other methods may give an illusion of customisability by wrapping otherwise static explanations in an interactive interface, our explanations are truly interactive, in the sense of allowing the user to "interrogate" a black-box model. LIMEtree can therefore produce consistent explanations on which an interactive exploratory process can be built.
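    As a rough illustration of the surrogate idea (a minimal sketch in the style of LIME, not the authors' LIMEtree implementation), the snippet below fits a shallow multi-output regression tree to a black box's class-probability outputs in the neighbourhood of a single instance. The names black_box and explain_instance, the Gaussian perturbation sampling, and the kernel width are illustrative assumptions.

    # Minimal sketch of a local surrogate multi-output regression tree
    # (LIME-style; not the authors' LIMEtree code).
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    def explain_instance(black_box, x, n_samples=1000, scale=0.1, max_depth=3, seed=0):
        """Approximate black_box's output vector around x with one shallow tree."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x, dtype=float)

        # Sample the neighbourhood of x with Gaussian perturbations.
        X_local = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))

        # Query the black box; Y_local has one column per output (e.g. per class).
        Y_local = black_box(X_local)

        # Weight samples by proximity to x (RBF kernel), as in LIME.
        dists = np.linalg.norm(X_local - x, axis=1)
        weights = np.exp(-(dists ** 2) / (2 * scale ** 2))

        # A single tree approximates all outputs jointly -- the multi-output part.
        surrogate = DecisionTreeRegressor(max_depth=max_depth)
        surrogate.fit(X_local, Y_local, sample_weight=weights)
        return surrogate

    Because one tree models every output at once, its splits give a single consistent, human-readable local rule set covering all classes, which is what the multi-output formulation adds over fitting an independent surrogate per class.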

    What do faculties specializing in brain and neural sciences think about, and how do they approach, brain-friendly teaching-learning in Iran?

    Objective: To investigate the perspectives and experiences of faculty members specializing in brain and neural sciences regarding brain-friendly teaching-learning in Iran. Methods: Seventeen faculty members from five universities were selected by purposive sampling (2018). In-depth semi-structured interviews were conducted and analysed using directed content analysis. Results: 31 sub-subcategories, 10 subcategories, and 4 categories were formed according to the “General teaching model”; “Mentorship” emerged as a newly added category. Conclusions: A neuro-educational approach that considers the uniqueness of the learner’s brain, the facilitation of executive functions, and the valence system is important for learning. Such learning can be facilitated through cognitive load considerations, repetition, deep questioning, visualization, feedback, and reflection. Contextualized, problem-oriented, social, multi-sensory, experiential, and spaced learning, together with brain-friendly evaluation, must also be considered. Mentorship is important for coaching and emotional facilitation.

    Is perception cognitively penetrable? A philosophically satisfying and empirically testable reframing

    The question of whether perception can be penetrated by cognition is in the limelight again. The reason this question keeps coming up is that there is so much at stake: Is it possible to have theory-neutral observation? Is it possible to study perception without recourse to expectations, context, and beliefs? What are the boundaries between perception, memory, and inference (and do they even exist)? Are findings from neuroscience that paint a picture of perception as an inherently bidirectional and interactive process relevant for understanding the relationship between cognition and perception? We have assembled a group of philosophers and psychologists who have been considering the thesis of cognitive (im)penetrability in light of these questions (Abdel Rahman & Sommer, 2008; Goldstone, Landy, & Brunel, 2011; Lupyan, Thompson-Schill, & Swingley, 2010; Macpherson, 2012; Stokes, 2011). Rather than rehashing previous arguments, which appear, in retrospect, to have been somewhat ill-posed (Pylyshyn, 1999), this symposium will present a thesis of cognitive (im)penetrability that is at once philosophically satisfying, empirically testable, and relevant to the questions that cognitive scientists find most interesting.

    Why do These Match? Explaining the Behavior of Image Similarity Models

    Explaining a deep learning model can help users understand its behavior and allow researchers to discern its shortcomings. Recent work has primarily focused on explaining models for tasks like image classification or visual question answering. In this paper, we introduce Salient Attributes for Network Explanation (SANE) to explain image similarity models, where a model's output is a score measuring the similarity of two inputs rather than a classification score. In this task, an explanation depends on both of the input images, so standard methods do not apply. Our SANE explanations pair a saliency map identifying important image regions with an attribute that best explains the match. We find that our explanations provide additional information not typically captured by saliency maps alone, and can also improve performance on the classic task of attribute recognition. Our approach's ability to generalize is demonstrated on two datasets from diverse domains, Polyvore Outfits and Animals with Attributes 2. Code available at: https://github.com/VisionLearningGroup/SANE. Comment: Accepted at ECCV 2020.
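    For context, here is a minimal sketch of the saliency half of such an explanation, assuming a generic embedding network rather than the paper's SANE pipeline (which additionally selects an explanatory attribute): the gradient of the cosine-similarity score with respect to one input image is used as a per-pixel importance map. The names embed and similarity_saliency are illustrative.

    # Minimal sketch: gradient-based saliency for an image-similarity score
    # (illustrative only; not the paper's SANE method).
    import torch
    import torch.nn.functional as F

    def similarity_saliency(embed, img_a, img_b):
        """Return an (H, W) map of which pixels of img_a most affect
        its cosine similarity to img_b."""
        img_a = img_a.detach().clone().requires_grad_(True)  # (1, 3, H, W)
        with torch.no_grad():
            z_b = F.normalize(embed(img_b), dim=-1)           # fixed reference embedding
        z_a = F.normalize(embed(img_a), dim=-1)
        score = (z_a * z_b).sum()                             # cosine similarity
        score.backward()
        # Aggregate gradient magnitude over colour channels -> per-pixel saliency.
        return img_a.grad.abs().sum(dim=1).squeeze(0)

    The attribute half of a SANE-style explanation (ranking which attribute best accounts for the match) depends on the paper's trained attribute predictor and is not reproduced here.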
