    On the Possibility of Rewarding Structure Learning Agents: Mutual Information on Linguistic Random Sets

    We present a first attempt to elucidate a theoretical and empirical approach to designing the reward that a natural language environment provides to a structure learning agent. To this end, we revisit the Information Theory of unsupervised induction of phrase-structure grammars to characterize the behavior of simulated actions modeled as set-valued random variables (random sets of linguistic samples) constituting semantic structures. Our results provide empirical evidence that simulated semantic structures (Open Information Extraction triplets) can be distinguished from randomly constructed ones by observing the Mutual Information among their constituents. This suggests the possibility of rewarding structure learning agents without using pretrained structural analyzers (oracle actors/experts).

    Comment: Paper accepted to the Workshop on Sets & Partitions (NeurIPS 2019, Vancouver, Canada)
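
    As an illustrative sketch only (not the paper's actual implementation), the Python snippet below estimates the average pairwise Mutual Information among the constituent slots of Open Information Extraction triplets and compares it against a randomized baseline in which each slot is shuffled independently, breaking the dependence among constituents. The triplets, function names, and baseline construction here are assumptions made for illustration.

        import math
        import random
        from collections import Counter
        from itertools import combinations

        def mutual_information(xs, ys):
            """Empirical mutual information (in nats) between two paired sequences."""
            n = len(xs)
            joint = Counter(zip(xs, ys))
            px, py = Counter(xs), Counter(ys)
            return sum(
                (c / n) * math.log(c * n / (px[x] * py[y]))
                for (x, y), c in joint.items()
            )

        def avg_pairwise_mi(triplets):
            """Average MI over all pairs of constituent slots (arg1, relation, arg2)."""
            cols = list(zip(*triplets))
            pairs = list(combinations(range(len(cols)), 2))
            return sum(mutual_information(cols[i], cols[j]) for i, j in pairs) / len(pairs)

        # Hypothetical OpenIE-style triplets, repeated to form a non-trivial distribution.
        triplets = [
            ("cats", "chase", "mice"),
            ("dogs", "chase", "cats"),
            ("cats", "eat", "fish"),
            ("dogs", "eat", "bones"),
        ] * 50

        # Randomized baseline: shuffle each constituent slot independently,
        # destroying any statistical dependence among constituents.
        random.seed(0)
        shuffled = [list(col) for col in zip(*triplets)]
        for col in shuffled:
            random.shuffle(col)
        random_triplets = list(zip(*shuffled))

        print("MI of extracted triplets:  ", avg_pairwise_mi(triplets))
        print("MI of randomized triplets: ", avg_pairwise_mi(random_triplets))

    Under this toy setup, the extracted triplets yield a noticeably higher average pairwise MI than the shuffled baseline, which is the kind of gap the paper proposes to exploit as a reward signal without an oracle analyzer.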