Abstract—In this paper, we present an image parsing to text description (I2T) framework that generates text descriptions of image and video content based on image understanding. The proposed I2T framework follows three steps: 1) Input images (or video frames) are decomposed into their constituent visual patterns by an image parsing engine, in a spirit similar to parsing sentences in natural language. 2) The image parsing results are converted into a semantic representation in the form of Web Ontology Language (OWL), which enables seamless integration with general knowledge bases. 3) A text generation engine converts the results from the previous steps into semantically meaningful, human-readable, and query-able text reports. The centerpiece of the I2T framework is an And-or Graph (AoG) visual knowledge representation, which serves as prior knowledge for representing diverse visual patterns and provides top-down hypotheses during image parsing. The AoG embodies vocabularies of visual elements, including primitives, parts, objects, and scenes, as well as a stochastic image grammar that specifies syntactic relations (i.e., compositional) and semantic relations (e.g., categorical, spatial, temporal, and functional) between these visual elements. Therefore, the AoG is a unified model of both categorical and symbolic representations of visual knowledge. The proposed I2T framework has two objectives. First, we use a semi-automatic method to parse images from the Internet in order to build an AoG for visual knowledge representation; our goal is to make the parsing process increasingly automatic using the learned AoG model. Second, we use automatic methods to parse images and videos in specific domains and generate text reports that are useful for real-world applications. In the case studies at the end of this paper, we demonstrate two automatic I2T systems: a maritime and urban scene video surveillance system and a real-time automatic driving scene understanding system.
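To make the AoG structure described above concrete, the following is a minimal sketch (in Python) of how AND nodes (compositional decomposition) and OR nodes (alternative configurations) might be represented, together with a naive tree-walking text generator. The class and function names here are illustrative assumptions for exposition only, not the authors' implementation, which additionally involves a stochastic grammar, learned probabilities on OR branches, and relations between nodes.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of an And-or Graph (AoG) node: AND nodes decompose
# a visual pattern into constituent parts; OR nodes enumerate alternative
# configurations. Names and fields are illustrative, not the paper's code.

@dataclass
class AoGNode:
    name: str                       # visual element: primitive, part, object, or scene
    node_type: str                  # "AND" (composition) or "OR" (alternatives)
    children: List["AoGNode"] = field(default_factory=list)

def describe(node: AoGNode) -> str:
    """Naive text generation: walk a parse of the graph and verbalize it."""
    if not node.children:
        return node.name
    if node.node_type == "AND":
        parts = ", ".join(describe(c) for c in node.children)
        return f"{node.name} (composed of {parts})"
    # For an OR node, the parser would have selected one child;
    # this toy version simply takes the first alternative.
    return describe(node.children[0])

# Toy example: a maritime scene decomposed into a water region and a boat,
# where "boat" is an OR node over alternative boat categories.
scene = AoGNode("maritime scene", "AND", [
    AoGNode("water region", "AND"),
    AoGNode("boat", "OR", [AoGNode("sailboat", "AND"),
                           AoGNode("cargo ship", "AND")]),
])
print(describe(scene))
# -> maritime scene (composed of water region, sailboat)
```

In the actual framework, an image parsing engine would select OR branches bottom-up from image evidence and top-down from AoG priors, and the resulting parse graph would be exported to OWL before text generation; this sketch only illustrates the compositional vocabulary the abstract describes.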