
    Tangible user interfaces : past, present and future directions

    In the last two decades, Tangible User Interfaces (TUIs) have emerged as a new interface type that interlinks the digital and physical worlds. Drawing upon users' knowledge and skills of interaction with the real non-digital world, TUIs show a potential to enhance the way in which people interact with and leverage digital information. However, TUI research is still in its infancy, and extensive research is required in order to fully understand the implications of tangible user interfaces, to develop technologies that further bridge the digital and the physical, and to guide TUI design with empirical knowledge. This paper examines the existing body of work on Tangible User Interfaces. We start by sketching the history of tangible user interfaces, examining the intellectual origins of this field. We then present TUIs in a broader context, survey application domains, and review frameworks and taxonomies. We also discuss conceptual foundations of TUIs, including perspectives from the cognitive sciences, psychology, and philosophy. Methods and technologies for designing, building, and evaluating TUIs are also addressed. Finally, we discuss the strengths and limitations of TUIs and chart directions for future research.

    Investigation and development of a tangible technology framework for highly complex and abstract concepts

    The ubiquitous integration of computer-supported learning tools within the educational domain has led educators to continuously seek effective technological platforms for teaching and learning. Overcoming the inherent limitations of traditional educational approaches, interactive and tangible computing platforms have consequently garnered increased interest in the pursuit of embedding active learning pedagogies within curricula. However, whilst Tangible User Interface (TUI) systems have been successfully developed to edutain children in various research contexts, TUI architectures have seen limited deployment towards more advanced educational pursuits. Thus, in contrast to current domain research, this study investigates the effectiveness and suitability of adopting TUI systems for enhancing the learning experience of abstract and complex computational science and technology-based concepts within higher educational institutions (HEIs). Based on the proposal of a contextually apt TUI architecture, the research describes the design and development of eight distinct TUI frameworks embodying innovative interactive paradigms through tabletop peripherals, graphical design factors, and active tangible manipulatives. These computationally coupled design elements are evaluated through summative and formative experimental methodologies for their ability to aid in the effective teaching and learning of diverse threshold concepts experienced in computational science. In addition, through the design and adoption of a technology acceptance model for educational technology (TAM4Edu), the suitability of TUI frameworks in HEI education is empirically evaluated across a myriad of determinants for modelling students' behavioural intention. In light of the statistically significant results obtained in both academic knowledge gain (ÎŒ = 25.8%) and student satisfaction (ÎŒ = 12.7%), the study outlines the affordances provided through TUI design for various constituents of active learning theories and modalities. Thus, based on empirical and pedagogical analyses, a set of design guidelines is defined within this research to direct the effective development of TUI design elements for teaching and learning abstract threshold concepts in HEI adaptations.

    Visual and Textual Programming Languages: A Systematic Review of the Literature

    It is well documented, and has been the topic of much research, that Computer Science courses tend to have higher than average drop-out rates at third level. This problem needs to be addressed with urgency, but also with caution. Demand for Computer Science graduates grows every year, yet the number of graduates is not keeping pace; one way to alleviate this problem is to encourage students towards Computer Science courses from an early age. This paper presents a systematic literature review on the role of visual and textual programming languages when learning to program, particularly as a first programming language. The approach is systematic, in that a structured search of electronic resources has been conducted, and the results are presented and quantitatively analysed. This study gives insight into whether or not the current approaches to teaching young learners programming are viable, and examines what we can do to increase the interest and retention of these students as they progress through their education. Comment: 18 pages (including 2 bibliography pages), 3 figures

    RealitySketch: Embedding Responsive Graphics and Visualizations in AR through Dynamic Sketching

    We present RealitySketch, an augmented reality interface for sketching interactive graphics and visualizations. In recent years, an increasing number of AR sketching tools have enabled users to draw and embed sketches in the real world. However, with current tools, sketched contents are inherently static, floating in mid-air without responding to the real world. This paper introduces a new way to embed dynamic and responsive graphics in the real world. In RealitySketch, the user draws graphical elements on a mobile AR screen and binds them with physical objects in real-time and improvisational ways, so that the sketched elements dynamically move with the corresponding physical motion. The user can also quickly visualize and analyze real-world phenomena through responsive graph plots or interactive visualizations. This paper contributes a set of interaction techniques that enable capturing, parameterizing, and visualizing real-world motion without pre-defined programs and configurations. Finally, we demonstrate our tool with several application scenarios, including physics education, sports training, and in-situ tangible interfaces. Comment: UIST 2020
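
    The bind-and-parameterize idea in this abstract lends itself to a compact illustration. The following Python sketch is a hypothetical reconstruction of the concept, not RealitySketch's implementation: a sketched label follows a tracked physical object, and a scalar parameter (here an angle) is derived from tracked points so it could drive a responsive plot. The tracker samples and all names are assumptions.

        import math

        class BoundLabel:
            """A sketched element whose position is bound to a tracked object."""
            def __init__(self, anchor_offset=(0.0, 20.0)):
                self.anchor_offset = anchor_offset
                self.position = (0.0, 0.0)

            def update(self, object_position):
                # Re-anchor the graphic to the tracked object on every frame,
                # so the sketch moves with the physical motion.
                ox, oy = self.anchor_offset
                px, py = object_position
                self.position = (px + ox, py + oy)

        def angle_parameter(pivot, tip):
            """Derive a scalar parameter (an angle in degrees) from two tracked
            points, e.g. to plot a pendulum's swing over time."""
            return math.degrees(math.atan2(tip[1] - pivot[1], tip[0] - pivot[0]))

        # Fabricated per-frame tracking samples of one physical object:
        label = BoundLabel()
        series = []
        for pos in [(100.0, 200.0), (105.0, 198.0), (112.0, 195.0)]:
            label.update(pos)                                    # sketch follows the object
            series.append(angle_parameter((100.0, 100.0), pos))  # parameter stream for a plot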

    Interfaces for human-centered production and use of computer graphics assets

    The abstract is in the attachment.

    ISAR: An Authoring System for Interactive Tabletops

    Developing augmented reality systems involves several challenges that prevent end users and experts from non-technical domains, such as education, from experimenting with this technology. In this research we introduce ISAR, an authoring system for augmented reality tabletops targeting users from non-technical domains. ISAR allows non-technical users to create their own interactive tabletop applications and to experiment with the use of this technology in domains such as education, industrial training, and medical rehabilitation.

    Learning with tangible interfaces

    Work presented in the context of the Master's in Computer Engineering, as a partial requirement for obtaining the degree of Master in Computer Engineering. Technology is an active part of our lives and, without even noticing it, part of our daily activities has become dependent on it. For that reason, software developers began to pay special attention to people's needs and their interaction with both the hardware and software they must deal with. Children are an emergent user group, as they are confronted with technology from an early stage of their development. Knowing that children see the world in a different way than adults do, and have not yet developed the dexterity needed to interact with some physical devices, special concerns arise. This happens especially if the application has an educational purpose, because children are more likely than adults to need extra motivation to use it. Given that, a new subfield of Human-Computer Interaction appeared with special concerns related to children's applications and how children interact with them: Child-Computer Interaction. When creating children's technology, the concept of ubiquity seems to arise almost naturally. The idea of children interacting with technology without even noticing it seems perfect. This may be achieved if the interactions are based on everyday objects and actions children are used to. The purpose of this thesis is to create a tool that enables children to build their own educational games, based on physical objects with which they usually interact. This idea follows a Learning-by-Teaching approach in which children are given the instructor's role. Researchers have found that the best way to create children's software is to let them take an active part in the construction process. Bearing that in mind, three design sessions were conducted with children, based on the Bluebells Method, so they could give us the insight needed to create an intuitive application. Finally, usability tests were performed on the created prototype, not only to study its usability but also to understand whether children's motivation to create their own game engages them in learning more about the application's subject.

    Interactive spaces for children: gesture elicitation for controlling ground mini-robots

    [EN] Interactive spaces for education are emerging as a mechanism for fostering children's natural ways of learning by means of play and exploration in physical spaces. The advanced interactive modalities and devices for such environments need to be both motivating and intuitive for children. Among the wide variety of interactive mechanisms, robots have been a popular research topic in the context of educational tools due to their attractiveness for children. However, few studies have focused on how children would naturally interact with and explore interactive environments with robots. While there is abundant research on full-body interaction and intuitive manipulation of robots by adults, no similar research has been done with children. This paper therefore describes a gesture elicitation study that identified the preferred gestures and body language communication used by children to control ground robots. The results of the elicitation study were used to define a gestural language that covers the different preferences of the gestures by age group and gender, with a good acceptance rate in the 6-12 age range. The study also revealed interactive spaces with robots using body gestures to be motivating and promising scenarios for collaborative or remote learning activities. This work is funded by the European Development Regional Fund (EDRF-FEDER) and supported by the Spanish MINECO (TIN2014-60077-R). The work of Patricia Pons is supported by a national grant from the Spanish MECD (FPU13/03831). Special thanks are due to the children and teachers of the Col·legi PĂșblic Vicente Gaos for their valuable collaboration and dedication. Pons TomĂĄs, P.; JaĂ©n MartĂ­nez, F.J. (2020). Interactive spaces for children: gesture elicitation for controlling ground mini-robots. Journal of Ambient Intelligence and Humanized Computing 11(6):2467-2488. https://doi.org/10.1007/s12652-019-01290-6
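
    Elicitation studies of this kind commonly quantify consensus per referent with an agreement-rate measure. The following is a minimal Python sketch of Vatavu and Wobbrock's agreement rate, a standard measure in this literature rather than code from the paper itself; the participant data are fabricated for illustration.

        from collections import Counter

        def agreement_rate(proposals):
            """Agreement rate for one referent. `proposals` holds one gesture
            label per participant; AR = sum over identical-proposal groups of
            |Pi|(|Pi|-1), divided by |P|(|P|-1)."""
            n = len(proposals)
            if n < 2:
                return 1.0 if n == 1 else 0.0
            counts = Counter(proposals).values()
            return sum(c * (c - 1) for c in counts) / (n * (n - 1))

        # Hypothetical referent "move the robot forward", 12 child participants:
        print(agreement_rate(["push"] * 7 + ["point"] * 3 + ["wave"] * 2))  # ~0.38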

    Augmented Reality and Robotics: A Survey and Taxonomy for AR-enhanced Human-Robot Interaction and Robotic Interfaces

    This paper contributes a taxonomy of augmented reality and robotics based on a survey of 460 research papers. Augmented and mixed reality (AR/MR) have emerged as a new way to enhance human-robot interaction (HRI) and robotic interfaces (e.g., actuated and shape-changing interfaces). Recently, an increasing number of studies in HCI, HRI, and robotics have demonstrated how AR enables better interactions between people and robots. However, research often remains focused on individual explorations and key design strategies, and research questions are rarely analyzed systematically. In this paper, we synthesize and categorize this research field along the following dimensions: 1) approaches to augmenting reality; 2) characteristics of robots; 3) purposes and benefits; 4) classification of presented information; 5) design components and strategies for visual augmentation; 6) interaction techniques and modalities; 7) application domains; and 8) evaluation strategies. We formulate key challenges and opportunities to guide and inform future research in AR and robotics.
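
    To make the eight dimensions concrete, here is a hypothetical Python coding schema, an illustrative assumption rather than the authors' artifact, of the kind a reviewer might use to tag each surveyed paper along these dimensions.

        from dataclasses import dataclass, field

        @dataclass
        class ARRoboticsPaper:
            """One surveyed paper, coded along the taxonomy's eight dimensions."""
            title: str
            ar_approach: str                  # 1) approach to augmenting reality
            robot_characteristics: str        # 2) characteristics of the robot
            purpose: str                      # 3) purpose and benefit
            information_class: str            # 4) class of presented information
            visual_design: str                # 5) visual augmentation strategy
            modalities: list = field(default_factory=list)  # 6) interaction modalities
            application_domain: str = ""      # 7) application domain
            evaluation: str = ""              # 8) evaluation strategy

        example = ARRoboticsPaper(
            title="Hypothetical AR-HRI study",
            ar_approach="head-mounted display",
            robot_characteristics="mobile ground robot",
            purpose="communicating robot intent",
            information_class="motion preview",
            visual_design="3D path overlay",
            modalities=["gaze", "hand gesture"],
            application_domain="industrial",
            evaluation="controlled user study",
        )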
    • 

    corecore