
A Computer Vision Integration Model for a Multi-modal Cognitive System

By Alen Vrečko, Danijel Skočaj, Nick Hawes and Aleš Leonardis


We present a general method for integrating visual components into a multi-modal cognitive system. The integration mechanism is generic and can combine an arbitrary set of modalities. We illustrate our approach with a specific instantiation of the architecture schema that focuses on integrating vision and language: a cognitive system able to collaborate with a human, learn, and display some understanding of its surroundings. As examples of cross-modal interaction, we describe mechanisms for clarification and visual learning.
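The abstract describes modalities contributing percepts that must be related across modal boundaries (e.g. matching a linguistic reference to a visual object). The following is a minimal illustrative sketch of that idea only, not the paper's actual architecture: the names `WorkingMemory`, `Percept`, and `bind` are hypothetical, and the compatibility rule (no conflicting values on shared attributes) is an assumption for illustration.

```python
from dataclasses import dataclass

# Hypothetical sketch of cross-modal binding; not the paper's implementation.

@dataclass
class Percept:
    modality: str   # e.g. "vision" or "language"
    features: dict  # amodal feature summary, e.g. {"color": "red"}

class WorkingMemory:
    """Shared store where each modality posts its percepts."""
    def __init__(self):
        self.entries = []

    def post(self, percept):
        self.entries.append(percept)

def bind(wm):
    """Pair vision and language percepts whose shared attributes
    carry no conflicting values (assumed compatibility rule)."""
    vision = [p for p in wm.entries if p.modality == "vision"]
    language = [p for p in wm.entries if p.modality == "language"]
    bindings = []
    for v in vision:
        for l in language:
            shared = set(v.features) & set(l.features)
            if all(v.features[k] == l.features[k] for k in shared):
                bindings.append((v, l))
    return bindings

wm = WorkingMemory()
wm.post(Percept("vision", {"color": "red", "shape": "ball"}))
wm.post(Percept("vision", {"color": "blue", "shape": "cube"}))
wm.post(Percept("language", {"color": "red"}))  # e.g. "the red thing"
pairs = bind(wm)  # only the red ball is compatible with "red"
```

An unmatched or ambiguously matched linguistic percept is the kind of situation where a clarification mechanism, as mentioned in the abstract, would come into play.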

Topics: QA75 Electronic computers. Computer science
Year: 2009

