Human-robot interaction is an area full of challenges for artificial intelligence: dynamic, partially unknown environments that were not originally designed for autonomous machines; a large variety of situations and objects to deal with, possibly with complex semantics; physical interactions with humans that require fine, low-latency control; representation and management of several mental models; pertinent situation assessment skills... the list goes on. This article sheds light on some key decisional issues that must be tackled for a cognitive robot to share space and tasks with a human, and presents our take on these challenges. We adopt a constructive approach based on the identification and effective implementation of individual and collaborative skills. These cognitive abilities cover geometric reasoning and situation assessment based mainly on perspective-taking and affordances; management and exploitation of each agent's (human and robot) knowledge in separate cognitive models; natural multi-modal communication; "human-aware" task planning; and interleaved plan achievement by human and robot. We present our design choices, the articulation between the robot's diverse deliberative components, and experimental results, and finally discuss the strengths and weaknesses of our approach. Explicit knowledge management, both symbolic and geometric, proves to be key, as it pushes for a different, more semantic way to address decision-making in human-robot interaction.