People enjoy food photography because they appreciate food. Behind each meal
there is a story, described in a complex recipe, and, unfortunately, simply
looking at a food image gives us no access to its preparation process.
Therefore, in this paper we introduce an inverse cooking system that recreates
cooking recipes given food images. Our system predicts ingredients as sets by
means of a novel architecture, modeling their dependencies without imposing any
order, and then generates cooking instructions by attending to both the image and
its inferred ingredients simultaneously. We extensively evaluate the whole
system on the large-scale Recipe1M dataset and show that (1) we improve
performance w.r.t. previous baselines for ingredient prediction; (2) we are
able to obtain high-quality recipes by leveraging both the image and ingredients;
(3) our system is able to produce more compelling recipes than retrieval-based
approaches according to human judgment. We make code and models publicly
available.
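To make the two-stage design described above concrete, here is a minimal PyTorch sketch, not the authors' released code: an image encoder, an ingredient predictor whose output is an unordered set, and an instruction decoder whose cross-attention covers image features and ingredient embeddings simultaneously. All module names, dimensions, and the 0.5 decision threshold are illustrative assumptions; in particular, the paper's novel set architecture models dependencies between ingredients, which this sketch simplifies to independent per-ingredient logits.

```python
# A minimal sketch (assumptions throughout, not the paper's architecture):
# encode the image, predict an unordered ingredient set, then decode
# instructions while attending to image and ingredient tokens at once.
import torch
import torch.nn as nn

class InverseCookingSketch(nn.Module):
    def __init__(self, n_ingr=1000, vocab=20000, d=256):
        super().__init__()
        # Image encoder: a toy CNN standing in for a pretrained backbone.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, d, kernel_size=7, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(7),
        )
        # Order-free ingredient prediction: one logit per ingredient,
        # so no ordering is imposed on the predicted set.
        self.ingr_head = nn.Linear(d, n_ingr)
        self.ingr_emb = nn.Embedding(n_ingr, d)
        # Instruction decoder: cross-attention over image + ingredients.
        layer = nn.TransformerDecoderLayer(d_model=d, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.word_emb = nn.Embedding(vocab, d)
        self.out = nn.Linear(d, vocab)

    def forward(self, image, prev_words):
        feats = self.cnn(image)                        # (B, d, 7, 7)
        img_tokens = feats.flatten(2).transpose(1, 2)  # (B, 49, d)
        ingr_logits = self.ingr_head(img_tokens.mean(dim=1))  # (B, n_ingr)

        # Treat every ingredient embedding as a memory token, but mask
        # out the ones the set predictor did not select (p <= 0.5).
        B = image.size(0)
        ingr_tokens = self.ingr_emb.weight.unsqueeze(0).expand(B, -1, -1)
        memory = torch.cat([img_tokens, ingr_tokens], dim=1)
        pad = torch.cat([
            torch.zeros(B, img_tokens.size(1), dtype=torch.bool,
                        device=image.device),
            ingr_logits.sigmoid() <= 0.5,
        ], dim=1)

        # Causal mask so each word attends only to previous words.
        T = prev_words.size(1)
        causal = torch.triu(
            torch.full((T, T), float("-inf"), device=image.device), diagonal=1)
        h = self.decoder(self.word_emb(prev_words), memory,
                         tgt_mask=causal, memory_key_padding_mask=pad)
        return ingr_logits, self.out(h)  # ingredient and word logits
```

Concatenating image and ingredient tokens into one memory lets a single cross-attention pass condition each generated word on both modalities, which is the "attending to both simultaneously" idea from the abstract. A quick smoke test of the sketch:

```python
model = InverseCookingSketch()
ingr_logits, word_logits = model(
    torch.randn(2, 3, 224, 224),       # batch of food images
    torch.randint(0, 20000, (2, 12)),  # previously generated words
)
```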