
Assessing the Usefulness of Google Books' Word Frequencies for Psycholinguistic Research on Word Processing

By Marc Brysbaert, Emmanuel Keuleers and Boris New

Abstract

In this Perspective Article we assess the usefulness of Google's new word frequencies for word recognition research (lexical decision and word naming). We find that, despite the massive corpus on which the Google estimates are based (131 billion words from books published in the United States alone), the Google American English frequencies explain 11% less of the variance in the lexical decision times from the English Lexicon Project (Balota et al., 2007) than the SUBTLEX-US word frequencies, based on a corpus of 51 million words from film and television subtitles. Further analyses indicate that word frequencies derived from recent books (published after 2000) are better predictors of word processing times than frequencies based on the full corpus, and that word frequencies based on fiction books predict word processing times better than word frequencies based on the full corpus. Even the most predictive word frequencies from Google do not explain more of the variance in the word recognition times of undergraduate students and older adults than the subtitle-based word frequencies.
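The comparison in the abstract rests on a standard technique: regressing lexical decision times on (log) word frequency and comparing the variance explained (R²) across frequency measures. The following sketch illustrates that logic on synthetic data; the corpora, noise levels, and effect sizes are invented for illustration and are not the authors' data or analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration (not the authors' data): reaction times are driven
# by a latent "true" log frequency; two noisy estimates of it stand in for
# a subtitle-based measure and a books-based measure.
n = 5000
true_logfreq = rng.normal(3.0, 1.0, n)
rt = 800 - 60 * true_logfreq + rng.normal(0, 40, n)  # lexical decision times (ms)

subtlex_like = true_logfreq + rng.normal(0, 0.3, n)  # less noisy frequency estimate
google_like = true_logfreq + rng.normal(0, 0.8, n)   # noisier frequency estimate

def r_squared(x, y):
    """Proportion of variance in y explained by simple linear regression on x."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    return 1 - residuals.var() / y.var()

print(f"subtitle-like measure R^2: {r_squared(subtlex_like, rt):.3f}")
print(f"books-like measure    R^2: {r_squared(google_like, rt):.3f}")
```

The measure tracking the latent frequency more closely explains more variance in the simulated reaction times, mirroring the form of the comparison reported in the abstract (the real analysis used the English Lexicon Project behavioral data, not a simulation).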

Topics: Social Sciences, lexical decision, word frequency, Google Books ngrams, SUBTLEX
Publisher: Frontiers Media SA
Year: 2011
DOI identifier: 10.3389/fpsyg.2011.00027
OAI identifier: oai:archive.ugent.be:1852216
Full text available at:
  • https://biblio.ugent.be/public... (external link)
  • http://dx.doi.org/10.3389/fpsy... (external link)
  • http://hdl.handle.net/1854/LU-... (external link)

