26 research outputs found

    Avoiding Catastrophic Forgetting in Visual Classification Using Human Concept Formation

    Full text link
    Deep neural networks have excelled in machine learning, particularly in vision tasks; however, they often suffer from catastrophic forgetting when learning new tasks sequentially. In this work, we propose Cobweb4V, a novel visual classification approach that builds on Cobweb, a human-like learning system inspired by the way humans incrementally learn new concepts over time. We conduct a comprehensive evaluation showing that Cobweb4V learns visual concepts proficiently, requires less data than traditional methods to achieve effective learning outcomes, maintains stable performance over time, and achieves commendable asymptotic behavior without catastrophic forgetting. These characteristics align with learning strategies in human cognition, positioning Cobweb4V as a promising alternative to neural network approaches.
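
    The Cobweb family of algorithms organizes instances into a concept hierarchy incrementally, placing each new instance by the category utility heuristic. Below is a minimal, hypothetical Python sketch of that descent step, assuming instances are dictionaries of nominal attribute values; it illustrates the general Cobweb idea, not the Cobweb4V implementation, and it omits Cobweb's new-class, merge, and split operators.

    # Hypothetical sketch of Cobweb-style incremental concept formation.
    # Not the Cobweb4V code; instances are assumed to be dicts of nominal values.
    from collections import defaultdict

    class ConceptNode:
        def __init__(self):
            self.count = 0                                            # instances seen at/under this node
            self.av_counts = defaultdict(lambda: defaultdict(int))    # attribute -> value -> count
            self.children = []

        def update(self, instance):
            """Fold an instance's attribute-value counts into this concept."""
            self.count += 1
            for attr, value in instance.items():
                self.av_counts[attr][value] += 1

        def downdate(self, instance):
            """Undo a trial update (used when probing alternative placements)."""
            self.count -= 1
            for attr, value in instance.items():
                self.av_counts[attr][value] -= 1

        def expected_correct_guesses(self):
            """Sum of squared conditional probabilities P(attribute = value | concept)."""
            total = 0.0
            for values in self.av_counts.values():
                for n in values.values():
                    total += (n / self.count) ** 2
            return total

        def category_utility(self):
            """Average gain in expected correct guesses from partitioning into children."""
            if not self.children:
                return 0.0
            parent_guesses = self.expected_correct_guesses()
            gain = 0.0
            for child in self.children:
                p_child = child.count / self.count
                gain += p_child * (child.expected_correct_guesses() - parent_guesses)
            return gain / len(self.children)

        def insert(self, instance):
            """Greedy incremental insert: descend into the child that maximizes category utility."""
            self.update(instance)
            if not self.children:
                return
            best_child, best_cu = None, float("-inf")
            for child in self.children:
                child.update(instance)        # probe: temporarily add the instance
                cu = self.category_utility()
                child.downdate(instance)      # undo the probe
                if cu > best_cu:
                    best_child, best_cu = child, cu
            best_child.insert(instance)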

    Improving fairness in machine learning systems: What do industry practitioners need?

    Full text link
    The potential for machine learning (ML) systems to amplify social inequities and unfairness is receiving increasing popular and academic attention. A surge of recent work has focused on the development of algorithmic tools to assess and mitigate such unfairness. If these tools are to have a positive impact on industry practice, however, it is crucial that their design be informed by an understanding of real-world needs. Through 35 semi-structured interviews and an anonymous survey of 267 ML practitioners, we conduct the first systematic investigation of commercial product teams' challenges and needs for support in developing fairer ML systems. We identify areas of alignment and disconnect between the challenges faced by industry practitioners and solutions proposed in the fair ML research literature. Based on these findings, we highlight directions for future ML and HCI research that will better address industry practitioners' needs. Comment: To appear in the 2019 ACM CHI Conference on Human Factors in Computing Systems (CHI 2019).

    A Divergent Synthetic Approach to Diverse Molecular Scaffolds: Assessment of Lead-Likeness using LLAMA, an Open-Access Computational Tool

    Get PDF
    Complementary cyclisation reactions of hex-2-ene-1,6-diamine derivatives were exploited in the synthesis of alternative molecular scaffolds. The value of the synthetic approach was analysed using LLAMA, an open-access computational tool for assessing the lead-likeness and novelty of molecular scaffolds.

    Computational Models of Human Learning: Applications for Tutor Development, Behavior Prediction, and Theory Testing

    No full text
    Intelligent tutoring systems are effective for improving students’ learning outcomes (Bowen et al., 2013; Koedinger & Anderson, 1997; Pane et al., 2013). However, constructing tutoring systems that are pedagogically effective has been widely recognized as a challenging problem (Murray, 1999, 2003). In this thesis, I explore the use of computational models of apprentice learning, or computer models that learn interactively from examples and feedback, to support tutor development. In particular, I investigate their use for authoring expert models via demonstrations and feedback (Matsuda et al., 2014), for predicting student behavior within tutors (VanLehn et al., 1994), and for testing alternative learning theories (MacLellan, Harpstead, Patel, & Koedinger, 2016). To support these investigations, I present the Apprentice Learner Architecture, which posits the types of knowledge, performance, and learning components needed for apprentice learning and enables the generation and testing of alternative models. I use this architecture to create two models: the DECISION TREE model, which non-incrementally learns when to apply its skills, and the TRESTLE model, which instead learns incrementally. Both models draw on the same small set of prior knowledge for all simulations (six operators and three types of relational knowledge). Despite their limited prior knowledge, I demonstrate their use for efficiently authoring a novel experimental design tutor and show that they are capable of achieving human-level performance in seven additional tutoring systems that teach a wide range of knowledge types (associations, categories, and skills) across multiple domains (language, math, engineering, and science). I show that the models are capable of predicting which versions of a fraction arithmetic tutor and a box-and-arrows tutor are more effective for human students’ learning. Further, I use a mixed-effects regression analysis to evaluate the fit of the models to the available human data and show that across all seven domains the TRESTLE model better fits the human data than the DECISION TREE model, supporting the theory that humans learn the conditions under which skills apply incrementally, rather than non-incrementally as prior work has suggested (Li, 2013; Matsuda et al., 2009). This work lays the foundation for the development of a Model Human Learner, similar to Card, Moran, and Newell’s (1986) Model Human Processor, that encapsulates psychological and learning science findings in a format that researchers and instructional designers can use to create effective tutoring systems.
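
    As an illustration of the kind of mixed-effects comparison mentioned above, the sketch below fits a random-intercept model with statsmodels. The file name, column names, and model formula are assumptions made for illustration; they do not reproduce the thesis's actual data or analysis.

    # Hypothetical sketch: comparing simulated learners to human learning curves
    # with a mixed-effects regression. Column names are illustrative assumptions.
    import pandas as pd
    import statsmodels.formula.api as smf

    data = pd.read_csv("learning_curves.csv")   # assumed columns: error, opportunity,
                                                # learner_type (human / TRESTLE / DECISION_TREE),
                                                # knowledge_component

    # Random intercepts per knowledge component; fixed effects for practice opportunity
    # and learner type indicate how closely each simulated learner tracks the human curve.
    model = smf.mixedlm("error ~ opportunity * learner_type",
                        data, groups=data["knowledge_component"])
    result = model.fit()
    print(result.summary())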

    A Computational Aid for Problem Formulation in Early Conceptual Design

    No full text
    Conceptual design is a high-level cognitive activity that draws upon distinctive human mental abilities. An early and fundamental part of the design process is problem formulation, in which designers determine the structure of the problem space they will later search. Although many tools have been developed to aid the later stages of design, few tools exist that aid designers in the early stages. In this paper, we describe Problem Formulator, an interactive environment that focuses on this stage of the design process. This tool has representations and operations that let designers create, visualize, explore, and reflect on their formulations. Although this process remains entirely under the user’s control, these capabilities make the system well positioned to aid the early stages of conceptual design. [DOI: 10.1115/1.4024714]

    Problem Map: An Ontological Framework for a Computational Study of Problem Formulation in Engineering Design

    No full text
    Studies of design cognition often face two challenges. One is a lack of formal cognitive models of design processes that have the appropriate granularity: fine enough to distinguish differences among individuals and coarse enough to detect patterns of similar actions. The other is the inadequacy of automating the resource-intensive analyses of data collected from large samples of designers. To overcome these barriers, we have developed the problem map (P-maps) ontological framework. It can be used to explain design thinking through changes in state models that are represented in terms of requirements, functions, artifacts, behaviors, and issues. The different ways these entities can be combined, in addition to disjunctive relations and hierarchies, support detailed modeling and analysis of design problem formulation. A node-link representation of P-maps enables one to visualize how a designer formulates a problem or to compare how different designers formulate the same problem. Descriptive statistics and time series of entities provide more detailed comparisons. Answer set programming (ASP), a predicate logic formalism, is used to formalize and trace strategies that designers adopt. Data mining techniques (association rule and sequence mining) are used to search for patterns among large numbers of designers. Potential uses of P-maps include computer-assisted collection of large data sets for design research, development of a test of problem formulation skill, and a tutoring system.
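
    As a rough illustration of a node-link state model over the five entity types named above, the sketch below defines a minimal, hypothetical P-map-style structure in Python; the class names, fields, and example entries are illustrative assumptions and do not reproduce the authors' ontology or their ASP encoding.

    # Hypothetical sketch of a P-map-style node-link state model.
    # Entity types follow the abstract; everything else is an illustrative assumption.
    from dataclasses import dataclass, field
    from enum import Enum

    class EntityType(Enum):
        REQUIREMENT = "requirement"
        FUNCTION = "function"
        ARTIFACT = "artifact"
        BEHAVIOR = "behavior"
        ISSUE = "issue"

    @dataclass
    class Node:
        name: str
        kind: EntityType
        timestamp: int                               # when the designer introduced the entity

    @dataclass
    class ProblemMap:
        nodes: list = field(default_factory=list)
        links: list = field(default_factory=list)    # (source_name, target_name) pairs

        def add(self, name, kind, timestamp):
            self.nodes.append(Node(name, kind, timestamp))

        def link(self, source, target):
            self.links.append((source, target))

        def counts_by_type(self):
            """Descriptive statistics of the kind used to compare designers' formulations."""
            counts = {t: 0 for t in EntityType}
            for node in self.nodes:
                counts[node.kind] += 1
            return counts

    # Example: one designer's partial formulation of a problem.
    pmap = ProblemMap()
    pmap.add("carry 5 kg payload", EntityType.REQUIREMENT, timestamp=1)
    pmap.add("lift payload", EntityType.FUNCTION, timestamp=2)
    pmap.link("carry 5 kg payload", "lift payload")
    print(pmap.counts_by_type())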