    Learning Transferable Adversarial Robust Representations via Multi-view Consistency

    Despite their success on few-shot learning problems, most meta-learned models focus only on achieving good performance on clean examples and thus break down easily when given adversarially perturbed samples. While some recent works have shown that combining adversarial learning with meta-learning can enhance a meta-learner's robustness against adversarial attacks, they fail to achieve generalizable adversarial robustness to unseen domains and tasks, which is the ultimate goal of meta-learning. To address this challenge, we propose a novel meta-adversarial multi-view representation learning framework with dual encoders. Specifically, we introduce a discrepancy between two differently augmented samples of the same data instance by first updating the encoder parameters with them and then imposing a novel label-free adversarial attack that maximizes their discrepancy. We then maximize the consistency across the views to learn robust representations that transfer across domains and tasks. Through experimental validation on multiple benchmarks, we demonstrate the effectiveness of our framework on few-shot learning tasks from unseen domains, achieving over 10% robust accuracy improvement over previous adversarial meta-learning baselines.
    Comment: *Equal contribution (author ordering determined by coin flip). NeurIPS SafetyML workshop 2022; under review.
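    The abstract only outlines the training procedure, so the attack-then-consistency step can be illustrated with a rough PyTorch-style sketch. Everything below is an assumption made for illustration, not the authors' implementation: the function names (view_discrepancy, label_free_attack, consistency_step), the cosine-based discrepancy measure, the PGD-style attack hyperparameters, and the omission of the meta-learning inner-loop encoder update are all placeholders.

    # Minimal sketch of a label-free multi-view adversarial consistency step.
    # Assumptions only; not the paper's code.
    import torch
    import torch.nn.functional as F

    def view_discrepancy(z1, z2):
        # Label-free discrepancy between two view embeddings.
        # Assumed measure: 1 - cosine similarity (the abstract does not specify it).
        return (1.0 - F.cosine_similarity(z1, z2, dim=-1)).mean()

    def label_free_attack(enc1, enc2, x1, x2, eps=8 / 255, alpha=2 / 255, steps=5):
        # Craft perturbations for both views that MAXIMIZE the cross-view discrepancy,
        # via a PGD-style loop (eps, alpha, steps are placeholder values).
        d1 = torch.zeros_like(x1).uniform_(-eps, eps).requires_grad_(True)
        d2 = torch.zeros_like(x2).uniform_(-eps, eps).requires_grad_(True)
        for _ in range(steps):
            loss = view_discrepancy(enc1(x1 + d1), enc2(x2 + d2))
            g1, g2 = torch.autograd.grad(loss, (d1, d2))
            with torch.no_grad():            # gradient ASCENT on the discrepancy
                d1 += alpha * g1.sign()
                d2 += alpha * g2.sign()
                d1.clamp_(-eps, eps)
                d2.clamp_(-eps, eps)
        return (x1 + d1).clamp(0, 1).detach(), (x2 + d2).clamp(0, 1).detach()

    def consistency_step(enc1, enc2, x, augment, optimizer):
        # One outer training step: attack two augmented views of the same batch,
        # then MINIMIZE their discrepancy, i.e. maximize cross-view consistency.
        x1, x2 = augment(x), augment(x)      # two differently augmented views
        x1_adv, x2_adv = label_free_attack(enc1, enc2, x1, x2)
        loss = view_discrepancy(enc1(x1_adv), enc2(x2_adv))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    The same discrepancy is used with opposite signs: ascended to craft the label-free attack, descended to enforce consistency between the dual encoders. The inner-loop meta-update of the encoder parameters mentioned in the abstract is deliberately left out of this sketch.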

    Co-Constructing Writing Knowledge: Students’ Collaborative Talk Across Contexts

    Although compositionists recognize that student talk plays an important role in learning to write, there is limited understanding of how students use conversational moves to collaboratively build knowledge about writing across contexts. This article reports on a study of focus group conversations involving first-year students in a cohort program. Our analysis identified two patterns of group conversation among students: “co-telling” and “co-constructing,” with the latter leading to more complex writing knowledge. We also used Beaufort’s domains of writing knowledge to examine how co-constructing conversations supported students in abstracting knowledge beyond a single classroom context and in negotiating local constraints. Our findings suggest that co-constructing is a valuable process that invites students to do the necessary work of remaking their knowledge for local use. Ultimately, our analysis of the role of student conversation in the construction of writing knowledge contributes to our understanding of the myriad activities that surround transfer of learning.

    Recent Advances in Transfer Learning for Cross-Dataset Visual Recognition: A Problem-Oriented Perspective

    This paper takes a problem-oriented perspective and presents a comprehensive review of transfer learning methods, both shallow and deep, for cross-dataset visual recognition. Specifically, it categorises cross-dataset recognition into seventeen problems based on a set of carefully chosen data and label attributes. This problem-oriented taxonomy has allowed us to examine how different transfer learning approaches tackle each problem and how well each problem has been researched to date. The comprehensive problem-oriented review of advances in transfer learning reveals not only the challenges in transfer learning for visual recognition, but also the problems (eight of the seventeen) that have scarcely been studied. This survey thus offers researchers an up-to-date technical review, and gives machine learning practitioners a systematic approach and a reference for categorising a real problem and looking up a possible solution accordingly.