
    Compositional coding capsule network with k-means routing for text classification

    Text classification is a challenging problem that aims to identify the category of a text. Capsule Networks (CapsNets) were recently proposed for image classification and have been shown to offer several advantages over Convolutional Neural Networks (CNNs), but their validity in the text domain remains less explored. An effective method named deep compositional code learning was proposed recently; it substantially reduces the number of word-embedding parameters without any significant sacrifice in performance. In this paper, we introduce the Compositional Coding (CC) mechanism between capsules and propose a new routing algorithm based on k-means clustering. Experiments on eight challenging text classification datasets show that the proposed method achieves accuracy competitive with the state-of-the-art approach while using significantly fewer parameters.
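    A clustering-style routing step between capsule layers can be sketched as follows. This is a hypothetical NumPy illustration of the general idea (soft assignment of lower-level capsule votes to higher-level capsules, with pose updates as weighted means), not the paper's exact algorithm; all names and shapes are assumptions.

    ```python
    import numpy as np

    def clustering_routing(votes, num_iters=3):
        """Illustrative routing sketch: `votes` has shape
        [num_in, num_out, dim], one vote from each lower-level capsule
        for each higher-level capsule. Higher-level poses are refined
        as similarity-weighted means of the votes, as in k-means-style
        soft clustering. Hypothetical example, not the paper's method."""
        num_in, num_out, dim = votes.shape
        # Initialise each output capsule's pose as the mean of its votes.
        poses = votes.mean(axis=0)                          # [num_out, dim]
        for _ in range(num_iters):
            # Dot-product similarity between each vote and each pose.
            logits = np.einsum('iod,od->io', votes, poses)  # [num_in, num_out]
            # Soft assignment: softmax over output capsules.
            weights = np.exp(logits - logits.max(axis=1, keepdims=True))
            weights /= weights.sum(axis=1, keepdims=True)
            # Update each pose as the assignment-weighted mean of its votes.
            poses = np.einsum('io,iod->od', weights, votes)
            poses /= weights.sum(axis=0)[:, None]
        return poses

    votes = np.random.default_rng(0).normal(size=(8, 4, 16))
    poses = clustering_routing(votes)
    print(poses.shape)  # (4, 16)
    ```

    Unlike dynamic routing with its per-pair logit bookkeeping, this formulation only keeps the current cluster centres, which is what makes the clustering view attractive for parameter and memory budgets.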

    How to Reuse and Compose Knowledge for a Lifetime of Tasks: A Survey on Continual Learning and Functional Composition

    A major goal of artificial intelligence (AI) is to create an agent capable of acquiring a general understanding of the world. Such an agent would require the ability to continually accumulate and build upon its knowledge as it encounters new experiences. Lifelong or continual learning addresses this setting, in which an agent faces a continual stream of problems and must strive to capture the knowledge necessary for solving each new task it encounters. If the agent can accumulate knowledge in some form of compositional representation, it could then selectively reuse and combine relevant pieces of knowledge to construct novel solutions. Despite the intuitive appeal of this simple idea, the literatures on lifelong learning and compositional learning have developed largely separately. In an effort to promote developments that bridge the two fields, this article surveys their respective research landscapes and discusses existing and future connections between them.

    Conformity, deformity and reformity

    In any given field of artistic practice, practitioners position themselves—or find themselves positioned—according to interests and allegiances with specific movements, genres, and traditions. In selecting particular frameworks through which to approach the development of new ideas, patterns, and expressions, practitioners invariably maintain a balance between the desire to contribute to and connect with a particular set of domain conventions and the desire to develop distinction and recognition as creative individuals. Creativity within the constraints of artistic domain, discipline, and style provides a basis for considering notions of originality in activity primarily associated with the reconfiguration, manipulation, and reorganisation of existing elements and ideas. Drawing from postmodern and post-structuralist perspectives in the analysis of modern hybrid art forms and the emergence of virtual creative environments, the paper considers the transition from traditional artistic practice, with its notions of craft and creation, to creative spaces in which elements are manipulated, mutated, combined, and distorted with often frivolous or subversive intent. This paper presents an educational and musically focused perspective on the relationship between the individual and domain-based creative practice. Drawing primarily from musical and audio-visual examples, with particular interest in the creative disruption of pre-existing elements, creative strategies of appropriation and recycling are explored in the context of music composition and production. Conclusions focus on the interpretation of creativity as essentially a process of recombination and manipulation, and highlight how the relationship between artist and field of practice creates unique creative spaces through which new ideas emerge.

    Studying Generalization on Memory-Based Methods in Continual Learning

    One of the objectives of Continual Learning is to learn new concepts continually over a stream of experiences while avoiding catastrophic forgetting. To mitigate complete knowledge overwriting, memory-based methods store a percentage of previous data distributions to be used during training. Although these methods produce good results, few studies have tested their out-of-distribution generalization properties, or whether these methods overfit the replay memory. In this work, we show that although these methods can help with traditional in-distribution generalization, they can strongly impair out-of-distribution generalization by learning spurious features and correlations. Using a controlled environment, the Synbol benchmark generator (Lacoste et al., 2020), we demonstrate that this lack of out-of-distribution generalization occurs mainly in the linear classifier.
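    The replay memory such methods rely on can be sketched as a fixed-capacity buffer filled by reservoir sampling, so that it holds a uniform subsample of the stream seen so far. This is a minimal illustrative sketch of the general memory-based setup, not the implementation used in any specific paper; the class and method names are assumptions.

    ```python
    import random

    class ReplayMemory:
        """Fixed-capacity buffer over a data stream, filled with
        reservoir sampling so every example seen so far has an equal
        chance of being stored. Hypothetical sketch of the memory
        component used by memory-based continual-learning methods."""

        def __init__(self, capacity, seed=0):
            self.capacity = capacity
            self.buffer = []
            self.seen = 0
            self.rng = random.Random(seed)

        def add(self, example):
            self.seen += 1
            if len(self.buffer) < self.capacity:
                self.buffer.append(example)
            else:
                # Replace a stored example with probability capacity / seen,
                # keeping the buffer a uniform sample of the stream.
                j = self.rng.randrange(self.seen)
                if j < self.capacity:
                    self.buffer[j] = example

        def sample(self, batch_size):
            # Replay batch mixed into training alongside new-task data.
            return self.rng.sample(self.buffer, min(batch_size, len(self.buffer)))

    mem = ReplayMemory(capacity=10)
    for x in range(100):
        mem.add(x)
    print(len(mem.buffer))  # 10
    ```

    The overfitting concern raised above arises precisely because this buffer is small and revisited many times, so a model (especially its final linear classifier) can latch onto spurious features of the few stored examples.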