Recent work has shown that it is possible to take brain images acquired while a subject viewed a scene and reconstruct an approximation of that scene from those images. Here we show that it is also possible to generate _text_ from brain images. We began with images collected as participants read the names of objects (e.g., "Apartment"). Without access to information about which object was being viewed for a given image, we were able to generate from that image a collection of semantically pertinent words (e.g., "door," "window"). Across images, the sets of words generated overlapped consistently with the words contained in articles about the corresponding concepts in the online encyclopedia Wikipedia. The technique described, if developed further, could offer an important new tool for building human-computer interfaces for use in clinical settings.