Affordance detection is a challenging problem with a wide variety of robotic
applications. Traditional affordance detection methods are limited to a
predefined set of affordance labels, which restricts the adaptability of
intelligent robots in complex and dynamic environments. In this
paper, we present the Open-Vocabulary Affordance Detection (OpenAD) method,
which is capable of detecting an unbounded number of affordances in 3D point
clouds. By simultaneously learning affordance text embeddings and point
features, OpenAD successfully exploits the semantic relationships between
affordances. As a result, our method enables zero-shot detection: it can
detect previously unseen affordances without a single annotated example.
Extensive experimental results show that OpenAD works effectively on a wide
range of affordance detection setups and outperforms other baselines by a large
margin. Additionally, we demonstrate the practicality of OpenAD in
real-world robotic applications with a fast inference speed (~100ms). Our
project is available at https://openad2023.github.io.

Comment: Accepted to The 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023).
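The core open-vocabulary idea described above can be illustrated with a minimal sketch. This is not OpenAD's actual network: it assumes per-point features and affordance text embeddings have already been mapped into a shared embedding space (e.g., by a CLIP-like text encoder), and simply assigns each point the affordance label whose text embedding is most cosine-similar to the point's feature. The function name `detect_affordances` and the toy labels are hypothetical.

```python
import numpy as np

def detect_affordances(point_feats, text_embeds):
    """Assign each point its best-matching affordance by cosine similarity.

    point_feats: (N, D) array of per-point features.
    text_embeds: (K, D) array of affordance text embeddings.
    Returns an (N,) array of affordance indices.
    """
    # L2-normalize both sets so that dot products equal cosine similarities.
    p = point_feats / np.linalg.norm(point_feats, axis=1, keepdims=True)
    t = text_embeds / np.linalg.norm(text_embeds, axis=1, keepdims=True)
    sim = p @ t.T              # (N, K) cosine similarity matrix
    return sim.argmax(axis=1)  # per-point index of the most similar label

# Toy usage: 3 points, 2 hypothetical affordance labels, feature dim D = 4.
labels = ["grasp", "pour"]
text_embeds = np.array([[1.0, 0.0, 0.0, 0.0],
                        [0.0, 1.0, 0.0, 0.0]])
point_feats = np.array([[0.9, 0.1, 0.0, 0.0],
                        [0.2, 0.8, 0.0, 0.0],
                        [1.0, 0.0, 0.0, 0.0]])
pred = detect_affordances(point_feats, text_embeds)
print([labels[i] for i in pred])  # → ['grasp', 'pour', 'grasp']
```

Because the label set is only a list of text strings, new affordances can be added at query time by embedding their names, which is what makes the detection vocabulary unbounded rather than fixed at training time.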