    Constructive Technology Assessment: STS for and with Technology Actors

    Over the years, STS has increasingly moved from a predominantly analytical gaze to engaging with the very fields and processes it studies. At the University of Twente, STePS researchers embarked on this road early on, with a key strand evolving under the heading of Constructive Technology Assessment (CTA). While the core ideas were developed 30 years ago (Schot and Rip, 1997; Rip et al., 1995; Rip et al., 1987), the practical approaches and specific aims have clearly developed over time and, we expect, will continue to do so in the future. In what follows, we briefly explain the key characteristics of the approach, report on some recent projects, discuss our current attempts to move CTA from the field level to the work floor of researchers and technology actors, and close with an outlook on further directions for developing the approach.

    Large-scale zero-shot learning in the wild: classifying zoological illustrations

    In this paper we analyse the classification of zoological illustrations. Historically, zoological illustrations were the modus operandi for the documentation of new species, and they now serve as crucial sources for long-term ecological and biodiversity research. By employing computational methods for classification, the data can be made amenable to research. Automated species identification is challenging due to the long-tailed nature of the data and the millions of possible classes in the species taxonomy. Success commonly depends on large training sets with many examples per class, but images from only a subset of classes are digitally available, and many images are unlabelled, since labelling requires domain expertise. We explore zero-shot learning to address the problem: features are learned from classes with medium to large samples and then transferred to recognise classes with few or no training samples. We specifically explore how distributed, multi-modal background knowledge from data providers, such as the Global Biodiversity Information Facility (GBIF), iNaturalist, and the Biodiversity Heritage Library (BHL), can be used to share knowledge between classes for zero-shot learning. We train a prototypical network for zero-shot classification, and introduce fused prototypes (FP) and hierarchical prototype loss (HPL) to optimise the model. Finally, we analyse the performance of the model for use in real-world applications. The experimental results are encouraging, indicating potential for use of such models in an expert support system, but they also highlight the difficulty of the task, showing the need for research into computer vision methods that can learn from small samples.
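    The prototypical-network idea underlying the abstract can be sketched without the neural embedding step: each class is represented by the mean of its support embeddings (its prototype), and a query is assigned to the class whose prototype is nearest in the embedding space. The sketch below uses toy 2-D embeddings and hypothetical species labels, not data or code from the paper, and omits the fused prototypes (FP) and hierarchical prototype loss (HPL) extensions.

    ```python
    from math import dist  # Euclidean distance between two points (Python 3.8+)

    def prototype(embeddings):
        """Class prototype: the component-wise mean of the class's support embeddings."""
        n = len(embeddings)
        d = len(embeddings[0])
        return tuple(sum(e[i] for e in embeddings) / n for i in range(d))

    def classify(query, prototypes):
        """Assign a query embedding to the class with the nearest prototype."""
        return min(prototypes, key=lambda label: dist(query, prototypes[label]))

    # Toy 2-D support embeddings for two illustrated species (hypothetical data).
    support = {
        "Falco peregrinus": [(0.9, 0.1), (1.1, 0.0), (1.0, 0.2)],
        "Corvus corax": [(0.0, 0.9), (0.1, 1.1), (-0.1, 1.0)],
    }
    prototypes = {label: prototype(embs) for label, embs in support.items()}
    print(classify((0.95, 0.05), prototypes))  # nearest to the Falco prototype
    ```

    In the zero-shot setting, prototypes for unseen classes are not computed from support images (there are none) but derived from background knowledge, which is where external sources such as GBIF, iNaturalist, and BHL enter the approach described above.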

    Compound Histories: Materials, Governance and Production, 1760-1840

    Compound Histories: Materials, Governance and Production, 1760-1840 explores the intertwined realms of production, governance and materials, placing chemists and chemistry at the center of processes most closely identified with the construction of the modern world.

    Knowledge extraction from archives of natural history collections

    Natural history collections provide invaluable sources for researchers from different disciplinary backgrounds who aspire to study the geographical distribution of flora and fauna across the globe, as well as other evolutionary processes. They are of paramount importance for mapping out long-term changes: from culture, to ecology, to how natural history is practiced. This thesis describes computational methods for knowledge extraction from archives of natural history collections, here referring to handwritten manuscripts and hand-drawn illustrations. As we are dealing with heterogeneous real-world data, the task is exceptionally challenging. Small samples and a long-tailed distribution, sometimes with very fine-grained distinctions between classes, hamper model learning. Prior knowledge is therefore needed to bootstrap the learning process. Moreover, archival content can be difficult to interpret and integrate, and should therefore be formally described to enable data integration within and across collections. By publishing extracted knowledge to the Semantic Web, collections are made amenable to research and to integration with other biodiversity resources on the Web. This work is supported by the Netherlands Organisation for Scientific Research (NWO) and Brill publishers, grant number 652.001.001 (the Making Sense of Illustrated Handwritten Archives project).