
    How easy are audio descriptions? Take 2

    Get PDF
    Easy-to-understand language has traditionally been used for written content, but there has recently been interest in applying this concept to audiovisual content and access services. In this regard, the EASIT project addressed whether the hybridisation of easy-to-understand language with audio description could produce a new access service, following the path initiated by Pilar Orero and Rocío Bernabé-Caro. Professionals from both audio description and easy-to-understand language held diverging views on the topic, but one central aspect remained to be investigated: how easy are current audio descriptions? At the last Languages and the Media conference we approached the topic and focused on a corpus of Catalan audio descriptions to answer this question. In Languages and the Media 2022, we propose to take a step forward and present the analysis of a corpus of audio descriptions in English and Spanish, to assess to what extent current audio descriptions are already easy to understand, i.e. to what extent they share the principles of easy-to-understand language, as described in the ISO standard 23859-1. This descriptive study will shed some light on current practices from a cross-linguistic perspective and will allow us to identify commonalities and divergences between easy-to-understand language and audio description. Additionally, it will contribute to the development of the so-called concept of "easy audios", an innovative approach that caters for the needs of those who may have difficulties understanding audiovisual content.

    Coupled anharmonic oscillators: the Rayleigh-Ritz approach versus the collocation approach

    Full text link
    For a system of coupled anharmonic oscillators we compare the convergence rate of the variational collocation approach presented recently by Amore and Fernandez (2010 Phys. Scr. 81 045011) with the one obtained using the optimized Rayleigh-Ritz (RR) method. The monotonic convergence of the RR method allows us to obtain more accurate results at a lower computational cost. Comment: 7 pages, 1 figure.
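    For context on the comparison, the Rayleigh-Ritz bound in a finite basis can be stated as follows; this is the generic variational statement, not the specific trial basis or coupling used in the paper:

```latex
% Rayleigh-Ritz method: expand the trial state in a finite basis {phi_i} and
% minimize the Rayleigh quotient over the coefficient vector c. The lowest
% generalized eigenvalue bounds the ground-state energy from above and
% decreases monotonically as the basis is enlarged, which underlies the
% convergence behaviour compared in the abstract.
\[
  E_0 \;\le\; \min_{c \neq 0} \frac{c^{\dagger} H c}{c^{\dagger} S c},
  \qquad
  H_{ij} = \langle \phi_i \,|\, \hat{H} \,|\, \phi_j \rangle, \quad
  S_{ij} = \langle \phi_i \,|\, \phi_j \rangle .
\]
```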

    Critical Reflections in STEM Education

    Get PDF
    The purpose of this course is to foster abilities to teach, assess, and critically reflect on STEM learning that supports authentic engagement in interdisciplinary design and inquiry. Students will engage in making connections between the STEM research literature and learning and teaching practice. Field placement in a K-5 learning environment is required for this course and is typically fulfilled through a candidate’s full-time teaching position. Other arrangements are permitted but not provided; this placement is the responsibility of the candidate.

    Risk factors for anxiety and depression among pregnant women during the COVID-19 pandemic: Results of a web-based multinational cross-sectional study.

    Get PDF
    Objective: To assess risk factors for anxiety and depression among pregnant women during the COVID-19 pandemic using Mind-COVID, a prospective cross-sectional study that compares outcomes in middle-income and high-income economies. Methods: A total of 7102 pregnant women from 12 high-income economies and nine middle-income economies were included. The web-based survey used two standardized instruments, the General Anxiety Disorder-7 (GAD-7) and the Patient Health Questionnaire–9 (PHQ-9). Results: Pregnant women in high-income economies reported higher PHQ-9 (0.18 standard deviation [SD], P < 0.001) and GAD-7 (0.08 SD, P = 0.005) scores than those living in middle-income economies. Multivariate regression analysis showed that increasing PHQ-9 and GAD-7 scores were associated with mental health problems during pregnancy and the need for psychiatric treatment before pregnancy. PHQ-9 scores were also associated with a feeling of burden related to social distancing restrictions and access to leisure activities. GAD-7 scores were associated with pregnancy-related complications, fear of adverse outcomes in children related to COVID-19, and a feeling of burden related to finances. Conclusions: According to this study, the imposed public health measures and hospital restrictions have left pregnant women more vulnerable during these difficult times. Adequate partner and family support during pregnancy and childbirth can be one of the most important protective factors against anxiety and depression, regardless of national economic status.

    Focus! rating XAI methods and finding biases

    Get PDF
    Explainability has become a major topic of research in Artificial Intelligence (AI), aimed at increasing trust in models such as Deep Learning (DL) networks. However, trustworthy models cannot be achieved with explainable AI (XAI) methods unless the XAI methods themselves can be trusted. To evaluate XAI methods one may assess interpretability, a qualitative measure of how understandable an explanation is to humans [1]. While this is important to guarantee proper interaction between humans and the model, interpretability generally involves end-users in the process [2], inducing strong biases. In fact, a qualitative evaluation alone cannot guarantee coherency with reality (i.e., model behavior), as false explanations can be more interpretable than accurate ones. To enable trust in XAI methods, we also need quantitative and objective evaluation metrics, which validate the relation between the explanations produced by the XAI method and the behavior of the trained model under assessment. In this work we propose a novel evaluation score for feature attribution methods, described in §I-A. Our input alteration approach induces in-distribution noise into samples, that is, alterations of the input which correspond to visual patterns found within the original data distribution. To do so we modify the context of the sample instead of the content, leaving the original pixel values untouched. In practice, we create a new sample, composed of samples of different classes, which we call a mosaic image (see examples in Figure 2). Using mosaics as input has a major benefit: each input quadrant is an image from the original distribution, producing blobs of activations in each quadrant which are consequently coherent. Only the pixels forming the borders between images, and the few corresponding activations, may be considered out of distribution. By inducing in-distribution noise, mosaic images introduce a problem on which XAI methods may objectively err (focus on something they should not be focusing on). On those composed mosaics we ask an XAI method to provide an explanation for just one of the contained classes, and follow its response. Then, we measure how much of the explanation generated by the XAI method is located on the areas corresponding to the target class, quantifying it through the Focus score. This score allows us to compare methods in terms of explanation precision, evaluating the capability of XAI methods to provide explanations related to the requested class. Using mosaics has another benefit: since the noise introduced is in-distribution, the explanation errors identify and exemplify biases of the model. This facilitates the elimination of biases in models and datasets, potentially resulting in more reliable solutions. We illustrate how to do so in §I-C.
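    The following is a minimal sketch of a Focus-style pseudo-precision computation on a 2x2 mosaic, assuming the attribution map and the quadrant layout as inputs; the function and variable names are illustrative, not the authors' reference implementation.

```python
# Minimal sketch of a Focus-style pseudo-precision score on a 2x2 mosaic.
# Names (focus_score, the quadrant layout) are illustrative assumptions,
# not the authors' reference implementation.
import numpy as np

def focus_score(attribution: np.ndarray, target_quadrants: list[tuple[int, int]]) -> float:
    """Fraction of positive attribution mass that falls on the quadrants
    belonging to the target class of a 2x2 mosaic.

    attribution: (H, W) relevance map produced by an XAI method for the target class.
    target_quadrants: (row, col) coordinates in {0, 1} occupied by the target class.
    """
    h, w = attribution.shape
    pos = np.clip(attribution, 0, None)          # keep only positive evidence
    total = pos.sum()
    if total == 0:
        return 0.0                               # degenerate explanation: no positive relevance
    hits = 0.0
    for r, c in target_quadrants:
        hits += pos[r * h // 2:(r + 1) * h // 2, c * w // 2:(c + 1) * w // 2].sum()
    return float(hits / total)

# Example: a mosaic where the target class occupies the top row; a perfectly
# focused explanation would concentrate all positive relevance there.
rng = np.random.default_rng(0)
attr = rng.random((224, 224))
print(focus_score(attr, target_quadrants=[(0, 0), (0, 1)]))  # ~0.5 for uniform noise
```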

    Learning to Teach Elementary Students to Construct Evidence-Based Claims of Natural Phenomena.

    Full text link
    Engaging in science practices integrated with content facilitates deeper learning of science and is called for by new reforms. Supporting this science learning requires complex teaching that is not common in U.S. classrooms. Given this complexity, beginning elementary teachers need support in learning to engage students in science practices such as constructing evidence-based claims about natural phenomena. A practice-based approach to teacher education, focused on making teaching practice core to professional learning, has been suggested to support beginning teacher development. This approach has shown potential in supporting secondary science teachers’ learning, yet little is known about how it might support preservice elementary teachers’ learning over time. This dissertation addresses this gap by investigating the change in preservice teachers’ teaching practices and knowledge for supporting elementary students in constructing evidence-based claims during a practice-based elementary teacher education program. Using longitudinal qualitative methodology, this study drew on video records, lesson plans, class assignments, and surveys from one cohort of 54 interns enrolled in a two-year coherent practice-based teacher education program. A subset of five focal interns was followed closely throughout the program. The preservice teachers grew incrementally in their ability to support elementary students to construct evidence-based claims, adding components of the teaching practice over time. Specifically, the teachers typically developed the ability to support students to analyze data earlier than they developed the ability to support students to justify their claims. However, they faced challenges during student teaching in consistently supporting students to construct evidence-based claims. These challenges may be due to the removal of scaffolding in the face of the complexity of full-time teaching. The findings highlight the potential of a coherent practice-based approach to teacher education. For example, the preservice teachers seemed to draw on courses from across the program in developing their teaching practice. These findings also provide new insights into how teachers learn a teaching practice over time and the factors that influence this learning, such as tools for planning science lessons. The analyses underscore the need for development of and research on tools and scaffolds that might continue to support beginning teaching over time. PhD. Educational Studies. University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/113423/1/aarias_1.pd

    Audio description in 360° content: results from a reception study

    Get PDF
    The ImAc project was the first European initiative aiming to propose and test a model for implementing access services in 360° videos, paving the way for future studies in the under-researched field of immersive accessibility. This article reports on the methodology and results of a pilot study and a small-scale reception study conducted in the last months of the project. The results show a favourable reception of extended audio descriptions by AD users. They also indicate interest in the implementation of spatial sound in AD provided for 360° content, which could be tested in future reception studies.

    Focus! Rating XAI methods and finding biases

    Get PDF
    AI explainability improves the transparency and trustworthiness of models. However, in the domain of images, where deep learning has succeeded the most, explainability is still poorly assessed. In the field of image recognition many feature attribution methods have been proposed with the purpose of explaining a model’s behavior using visual cues. However, no metrics have been established so far to assess and select these methods objectively. In this paper we propose a consistent evaluation score for feature attribution methods—the Focus—designed to quantify their coherency to the task. While most previous work adds out-of-distribution noise to samples, we introduce a methodology to add noise from within the distribution. This is done through mosaics of instances from different classes, and the explanations these generate. On those, we compute a visual pseudo-precision metric, Focus. First, we show the robustness of the approach through a set of randomization experiments. Then we use Focus to compare six popular explainability techniques across several CNN architectures and classification datasets. Our results find some methods to be consistently reliable (LRP, GradCAM), while others produce class-agnostic explanations (SmoothGrad, IG). Finally we introduce another application of Focus, using it for the identification and characterization of biases found in models. This empowers bias-management tools, in another small step towards trustworthy AI. This work is supported by the European Union – H2020 Program under the “INFRAIA-01-2018-2019 – Integrating Activities for Advanced Communities”, Grant Agreement n.871042, “SoBigData++: European Integrated Infrastructure for Social Mining and Big Data Analytics”, and by the Dept. de Recerca i Universitats of the Generalitat de Catalunya under the Industrial Doctorate Grant DI 2018-100. Peer Reviewed. Postprint (author's final draft).
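    As a usage note, the bias-identification application mentioned above can be approximated by aggregating Focus per class: classes whose explanations consistently land outside their own mosaic quadrants obtain a low mean score. The sketch below builds on the illustrative focus_score helper from the earlier entry and assumes a generic attribution_fn callable; neither is the paper's actual API.

```python
# Hypothetical per-class Focus aggregation for bias screening. It reuses the
# illustrative focus_score helper defined in the sketch above; attribution_fn
# is an assumed callable mapping (image, class) -> (H, W) relevance map.
import numpy as np

def per_class_focus(mosaics, targets, target_quadrants, attribution_fn):
    """Return the mean Focus per target class over a set of mosaics."""
    scores = {}
    for image, cls, quads in zip(mosaics, targets, target_quadrants):
        relevance = attribution_fn(image, cls)
        scores.setdefault(cls, []).append(focus_score(relevance, quads))
    return {cls: float(np.mean(vals)) for cls, vals in scores.items()}

# Classes with an unusually low mean Focus point at explanations (and possibly
# training data) that rely on context rather than the class itself, flagging
# candidate biases for manual inspection.
```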

    A confusion matrix for evaluating feature attribution methods

    Get PDF
    © 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The increasing use of deep learning models in critical areas of computer vision and the consequent need for insights into model behaviour have led to the development of numerous feature attribution methods. However, these attributions must be both meaningful and plausible to end-users, which is not always the case. Recent research has emphasized the importance of faithfulness in attributions, as plausibility without faithfulness can result in misleading explanations and incorrect decisions. In this work, we propose a novel approach to evaluate the faithfulness of feature attribution methods by constructing an ‘Attribution Confusion Matrix’, which allows us to leverage a wide range of existing metrics from the traditional confusion matrix. This approach effectively introduces multiple evaluation measures for faithfulness in feature attribution methods in a unified and consistent framework. We demonstrate the effectiveness of our approach on various datasets, attribution methods, and models, emphasizing the importance of faithfulness in generating plausible and reliable explanations while also illustrating the distinct behaviour of different feature attribution methods. This work is conducted within the NL4XAI project, which has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 860621. This work is also supported by the Spanish Ministry of Science, Innovation and Universities (grants PID2021-123152OB-C21, TED2021-130295B-C33 and RED2022-134315-T) and the Galician Ministry of Culture, Education, Professional Training and University (grants ED431G2019/04 and ED431C2022/19). These grants were co-funded by the European Regional Development Fund (ERDF/FEDER program). This work is also supported by the European Union-Horizon 2020 Program under the scheme “INFRAIA-01-2018-2019 - Integrating Activities for Advanced Communities”, Grant Agreement n.871042, “SoBigData++: European Integrated Infrastructure for Social Mining and Big Data Analytics” (http://www.sobigdata.eu) and by the Departament de Recerca i Universitats of the Generalitat de Catalunya under the Industrial Doctorate Grant DI 2018-100. Peer Reviewed. Postprint (author's final draft).
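    For illustration only, one generic way to obtain confusion-matrix quantities from an attribution map (not necessarily the construction used in this paper) is to binarize the map against a reference relevance mask and count pixel-wise agreements; precision, recall, and related metrics then follow in the usual way. The threshold and the notion of "reference mask" below are assumptions for the sake of the example.

```python
# Generic illustration of deriving confusion-matrix counts from a binarized
# attribution map and a reference relevance mask. The thresholding scheme and
# the reference mask are assumptions for illustration; the paper's actual
# construction of the Attribution Confusion Matrix may differ.
import numpy as np

def attribution_confusion(attribution: np.ndarray, reference: np.ndarray, threshold: float = 0.5):
    """attribution: (H, W) map scaled to [0, 1]; reference: (H, W) boolean mask
    of pixels considered relevant. Returns (TP, FP, FN, TN) pixel counts."""
    pred = attribution >= threshold
    tp = int(np.sum(pred & reference))
    fp = int(np.sum(pred & ~reference))
    fn = int(np.sum(~pred & reference))
    tn = int(np.sum(~pred & ~reference))
    return tp, fp, fn, tn

def precision_recall(tp, fp, fn, tn):
    """Standard confusion-matrix metrics computed from the counts above."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```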
