Inverting and visualizing features for object detection. arXiv preprint arXiv:1212.2278

By Carl Vondrick, Aditya Khosla, Tomasz Malisiewicz and Antonio Torralba

Abstract

We introduce algorithms to visualize the feature spaces used by object detectors. The tools in this paper allow a human to put on "HOG goggles" and perceive the visual world as a HOG-based object detector sees it. We found that these visualizations allow us to analyze object detection systems in new ways and gain new insight into a detector's failures. For example, when we visualized the features for high-scoring false alarms, we discovered that, although they are clearly wrong in image space, they look deceptively similar to true positives in feature space. This result suggests that many of these false alarms are caused by our choice of feature space, and indicates that creating a better learning algorithm or building bigger datasets is unlikely to correct these errors. By visualizing feature spaces, we can gain a more intuitive understanding of our detection systems.

[Figure 1: An image from PASCAL and a high-scoring car detection from DPM [8]. Why did the detector fail?]
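To make the notion of "feature space" concrete, the following is a minimal, illustrative sketch of a HOG-style descriptor: per-cell histograms of gradient orientations with per-cell normalization. This is not the authors' implementation (real HOG uses trilinear interpolation and block normalization); the function name and parameters are hypothetical.

```python
import numpy as np

def hog_cells(img, cell=8, bins=9):
    """Illustrative HOG-style descriptor: one orientation histogram per cell.

    A simplified stand-in for the HOG features discussed in the paper,
    not the detector's actual feature pipeline.
    """
    gy, gx = np.gradient(img.astype(float))          # image gradients
    mag = np.hypot(gx, gy)                           # gradient magnitude
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # unsigned orientation in [0, pi)
    h, w = img.shape
    H, W = h // cell, w // cell
    feat = np.zeros((H, W, bins))
    for i in range(H):
        for j in range(W):
            m = mag[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            a = ang[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            # hard-assign each pixel's orientation to a bin, weighted by magnitude
            idx = np.minimum((a / np.pi * bins).astype(int), bins - 1)
            for b in range(bins):
                feat[i, j, b] = m[idx == b].sum()
    # per-cell L2 normalization, a simplification of HOG's block normalization
    norm = np.linalg.norm(feat, axis=2, keepdims=True)
    return feat / np.maximum(norm, 1e-6)
```

A detector scores windows of these histograms, not pixels, which is why two windows that look nothing alike in image space can be near-identical in feature space.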

Year: 2012
OAI identifier: oai:CiteSeerX.psu:10.1.1.362.9730
Provided by: CiteSeerX