Computer Science and Metaphysics: A Cross-Fertilization
Computational philosophy is the use of mechanized computational techniques to
unearth philosophical insights that are either difficult or impossible to find
using traditional philosophical methods. Computational metaphysics is
computational philosophy with a focus on metaphysics. In this paper, we (a)
develop results in modal metaphysics whose discovery was computer assisted, and
(b) conclude that these results work not only to the obvious benefit of
philosophy but also, less obviously, to the benefit of computer science, since
the new computational techniques that led to these results may be more broadly
applicable within computer science. The paper includes a description of our
background methodology and how it evolved, and a discussion of our new results.
Comment: 39 pages, 3 figures
The computational complexity of density functional theory
Density functional theory is a successful branch of numerical simulations of
quantum systems. While the foundations are rigorously defined, the universal
functional must be approximated resulting in a `semi'-ab initio approach. The
search for improved functionals has resulted in hundreds of functionals and
remains an active research area. This chapter is concerned with understanding
fundamental limitations of any algorithmic approach to approximating the
universal functional. The results from Hamiltonian complexity presented here
largely follow \cite{Schuch09}. In this chapter, we explain the
computational complexity of DFT and any other approach to solving electronic
structure Hamiltonians. The proof relies on perturbative gadgets widely used in
Hamiltonian complexity and we provide an introduction to these techniques using
the Schrieffer-Wolff method. Since the difficulty of this problem has been well
appreciated before this formalization, practitioners have turned to a host of
approximate Hamiltonians. By extending the results of \cite{Schuch09}, we show
that in DFT, although the introduction of an approximate potential leads to a
non-interacting Hamiltonian, it remains, in the worst case, an NP-complete
problem.
Comment: Contributed chapter to "Many-Electron Approaches in Physics,
Chemistry and Mathematics: A Multidisciplinary View"
Lifted rule injection for relation embeddings
Methods based on representation learning currently hold the state of the art in many natural language processing and knowledge base inference tasks. Yet, a major challenge is how to efficiently incorporate commonsense knowledge into such models. A recent approach regularizes relation and entity representations by propositionalization of first-order logic rules. However, propositionalization does not scale beyond domains with only a few entities and rules. In this paper we present a highly efficient method for incorporating implication rules into distributed representations for automated knowledge base construction. We map entity-tuple embeddings into an approximately Boolean space and encourage a partial ordering over relation embeddings based on implication rules mined from WordNet. Surprisingly, we find that the strong restriction of the entity-tuple embedding space does not hurt the expressiveness of the model and even acts as a regularizer that improves generalization. By incorporating a few commonsense rules, we achieve an increase of 2 percentage points mean average precision over a matrix factorization baseline, while observing a negligible increase in runtime.
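The core mechanism the abstract describes, mapping entity-tuple embeddings into an approximately Boolean space and enforcing a component-wise partial order on relation embeddings for each implication rule, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function names and the example relations are hypothetical, and the key property it demonstrates is that when the premise relation's embedding is component-wise below the conclusion's, the rule holds for every non-negative tuple embedding at once (hence "lifted").

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8  # illustrative embedding dimension

def to_boolean_space(v):
    # Map a real-valued tuple embedding into (0, 1)^dim via a sigmoid,
    # i.e. an "approximately Boolean" space with non-negative components.
    return 1.0 / (1.0 + np.exp(-v))

def implication_loss(r_premise, r_conclusion):
    # Lifted regularizer for a rule "premise => conclusion": penalize any
    # component where the premise embedding exceeds the conclusion's.
    # If the loss is zero, then for every non-negative tuple embedding t,
    # t @ r_premise <= t @ r_conclusion, so the conclusion scores at
    # least as high as the premise for all tuples simultaneously.
    return np.sum(np.maximum(0.0, r_premise - r_conclusion))

# Hypothetical relation embeddings (names for illustration only).
r_narrow = rng.normal(size=dim)   # e.g. a specific relation
r_broad = r_narrow + 0.5          # component-wise above => rule satisfied

t = to_boolean_space(rng.normal(size=dim))  # a tuple in the Boolean space

assert implication_loss(r_narrow, r_broad) == 0.0   # rule holds, no penalty
assert t @ r_narrow <= t @ r_broad                  # lifted consequence
```

In training, this loss would be added to the factorization objective for every mined rule, which is what lets the method scale: the cost is per rule, not per rule-times-entity-tuple as in propositionalization.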
Evaluating the Representational Hub of Language and Vision Models
The multimodal models used in the emerging field at the intersection of
computational linguistics and computer vision implement the bottom-up
processing of the `Hub and Spoke' architecture proposed in cognitive science to
represent how the brain processes and combines multi-sensory inputs. In
particular, the Hub is implemented as a neural network encoder. We investigate
the effect on this encoder of various vision-and-language tasks proposed in the
literature: visual question answering, visual reference resolution, and
visually grounded dialogue. To measure the quality of the representations
learned by the encoder, we use two kinds of analyses. First, we evaluate the
encoder pre-trained on the different vision-and-language tasks on an existing
diagnostic task designed to assess multimodal semantic understanding. Second,
we carry out a battery of analyses aimed at studying how the encoder merges and
exploits the two modalities.
Comment: Accepted to IWCS 201