Abstract Current uses of robots in classrooms are
reviewed and used to characterise four scenarios: (s1)
Robot as Classroom Teacher; (s2) Robot as Companion
and Peer; (s3) Robot as Care-eliciting Companion; and (s4)
Telepresence Robot Teacher. The main ethical concerns
associated with robot teachers are identified as: privacy;
attachment, deception, and loss of human contact; and
control and accountability. These are discussed in terms of
the four identified scenarios. It is argued that classroom
robots are likely to impact children’s’ privacy, especially
when they masquerade as their friends and companions,
when sensors are used to measure children’s responses, and
when records are kept. Social robots designed to appear as
if they understand and care for humans necessarily involve
some deception (itself a complex notion), and could
increase the risk of reduced human contact. Children could
form attachments to robot companions (s2 and s3), or robot
teachers (s1) and this could have a deleterious effect on
their social development. There are also concerns about the
ability of robots, and their use, to control or make decisions
about children's behaviour in the classroom. It is concluded
that there are good reasons not to welcome fully fledged
robot teachers (s1), and that robot companions (s2 and s3)
should be given a cautious welcome at best. The limited
circumstances in which robots could be used in the classroom
to improve the human condition by offering otherwise
unavailable educational experiences are discussed.