
    Quality-Aware Prototype Memory for Face Representation Learning

    Prototype Memory is a powerful model for face representation learning. It enables the training of face recognition models on datasets of any size, with on-the-fly generation of prototypes (classifier weights) and efficient ways of using them. Prototype Memory has demonstrated strong results on many face recognition benchmarks. However, its prototype-generation algorithm is prone to producing imperfectly calculated prototypes when the images selected for prototype creation contain low-quality or poorly recognizable faces. All images of the same person present in the mini-batch are used with equal weights, so the resulting averaged prototype can be contaminated by the imperfect embeddings of such face images. This can misdirect the training signal and impair the performance of the trained face recognition models. In this paper, we propose a simple and effective way to improve Prototype Memory with quality-aware prototype generation. Quality-Aware Prototype Memory assigns different weights to images of different quality during prototype generation. With this improvement, prototypes gain more information from high-quality images and are hurt less by low-quality ones. We propose and compare several methods of quality estimation and usage, perform extensive experiments on different face recognition benchmarks, and demonstrate the advantages of the proposed model over the basic version of Prototype Memory. Comment: Preprint
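
    To make the aggregation step concrete, here is a minimal sketch in PyTorch of quality-weighted prototype generation. The helper name, the convex weighting scheme, and the use of the embedding norm as a quality proxy are illustrative assumptions, not the paper's specific method; the paper itself compares several ways of estimating and using quality.

        import torch
        import torch.nn.functional as F

        def quality_weighted_prototype(embeddings: torch.Tensor,
                                       quality: torch.Tensor) -> torch.Tensor:
            # Turn non-negative quality scores into convex weights summing to 1,
            # so high-quality faces dominate the average (assumed scheme).
            weights = quality / quality.sum().clamp_min(1e-8)
            prototype = (weights.unsqueeze(1) * embeddings).sum(dim=0)
            # Prototypes act as classifier weights, so keep them unit-length.
            return F.normalize(prototype, dim=0)

        # Hypothetical usage: four embeddings of one identity; the raw embedding
        # norm serves as a crude quality estimate before L2 normalization.
        emb = torch.randn(4, 512)
        proto = quality_weighted_prototype(F.normalize(emb, dim=1), emb.norm(dim=1))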

    Decoding astronomical spectra using machine learning

    Spectroscopy is one of the cornerstones of modern astronomy. Using spectra, the light from far-away objects measured on Earth can be related back to the physical and chemical conditions of the astronomical matter from which it is emitted. This makes spectroscopy an essential tool for constraining the physical and chemical conditions of the matter in stars, gas, galaxies and all other types of astronomical objects. However, whilst spectra carry a wealth of astronomical information, their analysis is often complicated by difficulties such as degeneracies between input parameters and gaps in our theoretical knowledge. In this thesis, we look towards the rapidly growing field of machine learning as a means of better extracting the information content of astronomical spectra. Chapters 2 and 3 of the thesis are dedicated to the study of spectra originating from the interstellar medium. Chapter 2 presents a machine learning emulator for the UCLCHEM astrochemical code which, when combined with a Bayesian treatment of the radiative-transfer inverse problem, enables a rigorous handling of the degeneracies affecting molecular lines, all within computational timescales short enough to be tractable. Chapter 3 extends the work of Chapter 2 on modelling molecular lines and investigates the appropriateness of Non-negative Matrix Factorization, a blind source separation algorithm, for the task of unmixing the gas phases which may exist within molecular line-intensity maps. Chapters 4 and 5 are concerned with the analysis of stellar spectra. In these chapters, we introduce machine learning approaches for extracting the chemical content from stellar spectra which do not rely on manual spectral modelling. This removes the burden of building faithful forward-models of stellar spectroscopy in order to precisely extract the chemistry of stars. The two approaches are also complementary. Chapter 4 presents a deep-learning approach for distilling the information content within stellar spectra into a representation from which undesirable factors of variation are excluded. Such a representation can then be used to directly find chemically identical stars or for differential abundance analysis. However, the approach requires measurements of the to-be-excluded undesirable factors of variation. The second approach, presented in Chapter 5, addresses this shortcoming by learning which factors of variation should be excluded using spectra of open clusters. However, because of the low number of known open clusters, the approach of Chapter 5 was made linear, whereas the method constructed in Chapter 4 is non-linear and parametrized by a feedforward neural network.
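
    As a concrete illustration of the unmixing step, here is a minimal sketch applying scikit-learn's Non-negative Matrix Factorization to a stack of line-intensity maps. The matrix shapes, component count, and random data are assumptions for illustration, not the thesis's actual setup.

        import numpy as np
        from sklearn.decomposition import NMF

        # Hypothetical data: 10 molecular lines, each a map of 64x64 = 4096 pixels,
        # flattened into one non-negative (lines x pixels) matrix.
        rng = np.random.default_rng(0)
        X = rng.random((10, 4096))

        # Factor X ~ W @ H: H holds k latent "gas phase" maps and W the per-line
        # contribution of each phase; non-negativity keeps both interpretable.
        model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
        W = model.fit_transform(X)   # (10, 3) line-to-phase mixing weights
        H = model.components_        # (3, 4096) unmixed phase maps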

    The Profiling Potential of Computer Vision and the Challenge of Computational Empiricism

    Computer vision and other biometrics data science applications have commenced a new project of profiling people. Rather than using 'transaction generated information', these systems measure the 'real world' and produce an assessment of the 'world state' - in this case an assessment of some individual trait. Instead of using proxies or scores to evaluate people, they increasingly deploy a logic of revealing the truth about reality and the people within it. While these profiling knowledge claims are sometimes tentative, they increasingly suggest that only through computation can these excesses of reality be captured and understood. This article explores the bases of those claims in the systems of measurement, representation, and classification deployed in computer vision. It asks if there is something new in this type of knowledge claim, sketches an account of a new form of computational empiricism being operationalised, and questions what kind of human subject is being constructed by these technological systems and practices. Finally, the article explores legal mechanisms for contesting the emergence of computational empiricism as the dominant knowledge platform for understanding the world and the people within it.

    Beyond teacher transmission and Googling information to more creative language learning

    As information is readily available online, students can Google and Google Translate, perhaps challenging a teacher’s classroom role and language awareness. The safety of being a transmitter of knowledge and instructor may be threatened, and classrooms could become fossilized as the instant nature of information grows. This paper will argue that by facilitating modern language learning we could also stimulate questioning. Tasks may need to include appropriate online searching and a creative inquiry approach to using language. This suggests reworking teacher dominance into a facilitator’s role. Digital natives and the Facebook/Instagram/Snapchat generation may want a culture of learning which embraces information flows while providing tools for English language learning and use in the modern era. Suggestions for engaging learners with contemporary techniques will be shared, even for classrooms without readily available connectivity.

    Ego-Alter Ego

    German Poetic Realists drew on the Romantic motif of the Double in a manner consistent with the central dictum of Poetic Realism as articulated by its chief theorists, Julian Schmidt and Otto Ludwig. Schmidt and Ludwig argued that contemporary authors should, above all, strive for psychological and aesthetic totality in their narrative representations, turning away from the Romantic fantastic but also avoiding the fragmentary approach to the portrayal of everyday life that Ludwig found in early Naturalism. The 'poetic' presentation of reality adheres to quotidian life but strives to show it in all its many dimensions. While Romantic Doppelgänger are often preternatural figures, the Poetic Realists configure egos and their narrative Others ('alter egos,' who are also sometimes physical Doubles) to portray characters in their psychological comprehensiveness. After offering an overview of the Romantic Double motif and its connections to the theory of Poetic Realism, John Pizer analyzes the work of Annette von Droste-Hülshoff, Otto Ludwig, Conrad Ferdinand Meyer, Gottfried Keller, Theodor Storm, and Wilhelm Raabe.