288 research outputs found

    Collaborative information sensemaking for multi-robot search and rescue

    Get PDF
    In this paper, we consider novel information sensemaking methods for search and rescue operations that combine principles of information fusion and collective intelligence in scalable solutions. We will elaborate on several approaches that originated in different areas of information integration, sensor data management, and multi-robot urban search and rescue missions.
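
    To make the information fusion idea concrete, here is a minimal sketch (ours, not from the paper) of one standard way independent robot detections can be combined: Bayesian log-odds fusion of per-cell victim beliefs over a shared grid. The class names, the prior, and the sensor probabilities are illustrative assumptions.

        import math
        from collections import defaultdict

        def prob_to_log_odds(p: float) -> float:
            """Convert a probability to log-odds."""
            return math.log(p / (1.0 - p))

        def log_odds_to_prob(l: float) -> float:
            """Convert log-odds back to a probability."""
            return 1.0 - 1.0 / (1.0 + math.exp(l))

        class VictimBeliefMap:
            """Shared belief over grid cells, fused from independent robot reports."""

            def __init__(self, prior: float = 0.01):
                self.prior_lo = prob_to_log_odds(prior)
                self.log_odds = defaultdict(float)  # accumulated evidence per cell

            def integrate_report(self, cell: tuple, p_detection: float) -> None:
                """Fuse one robot's report for a cell (independent-sensor assumption)."""
                self.log_odds[cell] += prob_to_log_odds(p_detection) - self.prior_lo

            def belief(self, cell: tuple) -> float:
                return log_odds_to_prob(self.prior_lo + self.log_odds[cell])

        # Two robots independently report a possible victim in the same cell.
        belief_map = VictimBeliefMap(prior=0.01)
        belief_map.integrate_report((4, 7), 0.60)  # robot A: weak thermal signature
        belief_map.integrate_report((4, 7), 0.75)  # robot B: acoustic confirmation
        print(f"fused belief: {belief_map.belief((4, 7)):.3f}")  # ~0.998

    Because the update is additive in log-odds, reports can arrive in any order and from any number of robots, which is what makes this style of fusion scalable.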

    Emotional Attachment, Performance, and Viability in Teams Collaborating with Embodied Physical Action (EPA) Robots

    Get PDF
    Although different types of teams increasingly employ embodied physical action (EPA) robots as a collaborative technology to accomplish their work, we know very little about what makes such teams successful. This paper has two objectives: the first is to examine whether a team’s emotional attachment to its robots can lead to better team performance and viability; the second is to determine whether robot and team identification can promote a team’s emotional attachment to its robots. To achieve these objectives, we conducted a between-subjects experiment with 57 teams working with robots. Teams performed better and were more viable when they were emotionally attached to their robots. Both robot and team identification increased a team’s emotional attachment to its robots. Results of this study have implications for collaboration using EPA robots specifically and for collaboration technology in general.
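
    The hypothesized chain (identification promotes attachment, which in turn predicts performance and viability) is, in spirit, a mediation model. Below is a minimal sketch of the corresponding regression steps on simulated data; the coefficients, the data generation, and the plain least-squares estimator are illustrative assumptions, not the paper's actual analysis.

        import numpy as np

        def ols_slopes(y: np.ndarray, X: np.ndarray) -> np.ndarray:
            """Least-squares coefficients, with an intercept column prepended."""
            X1 = np.column_stack([np.ones(len(X)), X])
            beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
            return beta[1:]  # drop the intercept

        rng = np.random.default_rng(7)
        n = 57  # one observation per team, matching the study's sample size
        identification = rng.normal(0, 1, n)
        attachment = 0.6 * identification + rng.normal(0, 1, n)   # ident -> attach
        performance = 0.5 * attachment + rng.normal(0, 1, n)      # attach -> perf

        # Step 1: does identification predict emotional attachment?
        print("ident -> attach:", ols_slopes(attachment, identification[:, None]))
        # Step 2: does attachment predict performance, controlling for identification?
        print("attach -> perf :", ols_slopes(
            performance, np.column_stack([attachment, identification])))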

    Proceedings of the 2nd Workshop on Mobile Resilience: Designing Interactive Systems for Crisis Response

    Get PDF
    Information and communication technologies (ICT), including artificial intelligence, the internet of things, and mobile applications, can be utilized to tackle important societal challenges, such as the ongoing COVID-19 pandemic. While they may increase societal resilience, their design, functionality, and underlying infrastructures must be resilient against disruptions caused by anthropogenic, natural, and hybrid crises, emergencies, and threats. In order to research the challenges, designs, and potentials of interactive technologies, this workshop investigated the space of mobile technologies and resilient systems for crisis response, including the application domains of cyber threat and pandemic response.

    A survey of technologies supporting design of a multimodal interactive robot for military communication

    Get PDF
    Purpose – This paper presents a survey of research into interactive robotic systems for the purpose of identifying state-of-the-art capabilities as well as the extant gaps in this emerging field. Communication is multimodal: multimodality combines many modes, chosen for their rhetorical and communicative potential. The author seeks to define the available automation capabilities in multimodal communication that will support a proposed Interactive Robot System (IRS), an AI-mounted robotic platform intended to advance the speed and quality of military operational and tactical decision making.

    Design/methodology/approach – The review begins by presenting key developments in the robotic interaction field, with the objective of identifying the essential technological developments that set conditions for robotic platforms to function autonomously. After surveying key aspects of Human-Robot Interaction (HRI), Unmanned Autonomous Systems (UAS), visualization, Virtual Environments (VE), and prediction, the paper describes the gaps in the application areas that will require extension and integration to enable prototyping of the IRS. A brief examination of other work in HRI-related fields concludes with a recapitulation of the IRS challenge that will set conditions for future success.

    Findings – Using insights from a balanced cross-section of government, academic, and commercial sources that contribute to HRI, a multimodal IRS (MIRS) for military communication is introduced. A multimodal IRS in military communication has yet to be deployed.

    Research limitations/implications – A multimodal robotic interface for the MIRS is an interdisciplinary endeavour; it is not realistic for one person to command all of the expert and related knowledge and skills needed to design and develop such an interface. In this brief preliminary survey, the author discusses extant AI, robotics, NLP, CV, VDM, and VE applications that are directly related to multimodal interaction. Each mode of this multimodal communication is an active research area. Multimodal human/military robot communication is the ultimate goal of this research.

    Practical implications – A multimodal autonomous robot for military communication using speech, images, gestures, VST, and VE has yet to be deployed. Autonomous multimodal communication is expected to open wider possibilities for all armed forces. Given the density of the land domain, the army is in a position to exploit the opportunities for human–machine teaming (HMT) exposure. Naval and air forces will adopt platform-specific suites for specially selected operators to integrate with and leverage this emerging technology. A flexible communications capability that readily adapts to virtual training will greatly enhance planning and mission rehearsals.

    Social implications – A multimodal communication system grounded in interaction, perception, cognition, and visualization is still missing. Options to communicate, express, and convey information in an HMT setting, with multiple options, suggestions, and recommendations, will enhance military communication, strength, engagement, security, cognition, and perception, as well as the ability to act confidently for a successful mission.

    Originality/value – The objective is to develop a multimodal autonomous interactive robot for military communications. This survey reports the state of the art: what exists and what is missing, what can be done, and the possibilities for extension that would support the military in maintaining effective communication using multiple modalities. Separate efforts are ongoing in machine-enabled speech, image recognition, tracking, visualization for situational awareness, and virtual environments, but there is as yet no integrated approach to multimodal human-robot interaction that provides flexible and agile communication. The paper closes with a brief introduction of the research proposal for a multimodal interactive robot for military communication.
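
    The survey itself contains no implementation, but a minimal sketch may help show what multimodal fusion means at the interface level: grounding a deictic spoken command ("move there") with a pointing gesture. The class names, thresholds, and late-fusion rule below are hypothetical, not drawn from the surveyed systems.

        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class SpeechInput:
            transcript: str           # e.g. output of an ASR module
            confidence: float

        @dataclass
        class GestureInput:
            target_id: Optional[str]  # object resolved from a pointing ray, if any
            confidence: float

        def fuse_command(speech: SpeechInput, gesture: GestureInput,
                         threshold: float = 0.5) -> Optional[dict]:
            """Late fusion: deictic words in speech are grounded by the gesture."""
            if speech.confidence < threshold and gesture.confidence < threshold:
                return None  # neither mode is reliable; ask the operator to repeat
            verb = speech.transcript.split()[0].lower() if speech.transcript else "hold"
            if any(w in speech.transcript.lower() for w in ("there", "that", "it")):
                target = gesture.target_id if gesture.confidence >= threshold else None
            else:
                target = None  # target named explicitly in speech, or absent
            return {"verb": verb, "target": target,
                    "confidence": max(speech.confidence, gesture.confidence)}

        cmd = fuse_command(SpeechInput("move there", 0.82),
                           GestureInput("waypoint_3", 0.71))
        print(cmd)  # {'verb': 'move', 'target': 'waypoint_3', 'confidence': 0.82}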

    IIMA 2018 Proceedings

    Get PDF

    Context-Enabled Visualization Strategies for Automation Enabled Human-in-the-loop Inspection Systems to Enhance the Situation Awareness of Windstorm Risk Engineers

    Get PDF
    An insurance loss prevention survey, specifically a windstorm risk inspection survey, is the process of investigating potential damage to a building or structure in the event of an extreme weather condition such as a hurricane or tornado. Traditionally, the risk inspection process is highly subjective and depends on the skills of the engineer performing it. This dissertation investigates the sensemaking process of risk engineers while performing risk inspection, with special focus on the various factors influencing it. This research then investigates how context-based visualization strategies enhance the situation awareness and performance of windstorm risk engineers.

    An initial study investigated the sensemaking process and situation awareness requirements of windstorm risk engineers. The data frame theory of sensemaking was used as the framework for this study. Ten windstorm risk engineers were interviewed, and the data collected were analyzed following an inductive thematic approach. The themes that emerged from the data explained the sensemaking process of risk engineers, the process of making sense of contradicting information, the importance of experience level, the internal and external biases influencing the inspection process, the difficulty of developing mental models, and potential technology interventions. More recently, human-in-the-loop systems such as drones have been used to improve the efficiency of windstorm risk inspection. This study provides recommendations to guide the design of such systems to support the sensemaking process and situation awareness of windstorm visual risk inspection.

    The second study investigated the effect of context-based visualization strategies on the situation awareness of windstorm risk engineers. More specifically, the study investigated how different types of information contribute to the three levels of situation awareness. Following a between-subjects design, 65 civil/construction engineering students completed this study. Checklist-based and predictive-display-based decision aids were tested and found to be effective in supporting the situation awareness requirements as well as the performance of windstorm risk engineers. However, the predictive display only helped with certain tasks, such as understanding the interaction among different components on the rooftop; for the remaining tasks, the checklist alone was sufficient. Moreover, the decision aids did not place any additional cognitive demand on the participants. This study helped us understand the advantages and disadvantages of the decision aids tested.

    The final study evaluated the transfer-of-training effect of the checklist and predictive display decision aids. One week after the previous study, participants completed a follow-up study without any decision aids. The performance and situation awareness of participants in the checklist and predictive display groups did not change significantly from the first trial to the second. However, the performance and situation awareness of participants in the control condition improved significantly in the second trial. They attributed this to their exposure to the SAGAT questionnaire in the first study: they knew what issues to look for and what tasks needed to be completed in the simulation. The confounding effect of SAGAT questionnaires needs to be studied in future research efforts.
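
    The studies above score situation awareness with SAGAT-style freeze-probe queries mapped to the three SA levels (perception, comprehension, projection). A minimal sketch of how per-level scores might be computed follows; the query texts and answers are invented for illustration and are not the dissertation's materials.

        from dataclasses import dataclass

        @dataclass
        class SagatQuery:
            text: str
            level: int    # 1 = perception, 2 = comprehension, 3 = projection
            correct: str

        @dataclass
        class Response:
            query: SagatQuery
            answer: str

        def sagat_scores(responses: list[Response]) -> dict[int, float]:
            """Proportion of correct answers per SA level across all freezes."""
            totals = {1: 0, 2: 0, 3: 0}
            correct = {1: 0, 2: 0, 3: 0}
            for r in responses:
                totals[r.query.level] += 1
                if r.answer.strip().lower() == r.query.correct.strip().lower():
                    correct[r.query.level] += 1
            return {lvl: (correct[lvl] / totals[lvl] if totals[lvl] else 0.0)
                    for lvl in totals}

        q1 = SagatQuery("Which rooftop component is unsecured?", 1, "HVAC unit")
        q2 = SagatQuery("Does its placement increase uplift risk?", 2, "yes")
        q3 = SagatQuery("Would it fail under the design wind load?", 3, "yes")
        answers = [Response(q1, "hvac unit"), Response(q2, "yes"), Response(q3, "no")]
        print(sagat_scores(answers))  # {1: 1.0, 2: 1.0, 3: 0.0}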

    Developing a depth-based tracking system for interactive playful environments with animals

    Full text link
    © ACM 2015. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in the Proceedings of the 12th International Conference on Advances in Computer Entertainment Technology (p. 59), http://dx.doi.org/10.1145/2832932.2837007.

    Digital games for animals within Animal Computer Interaction are usually single-device oriented; however, richer interactions could be delivered by considering multimodal environments and expanding the number of technological elements involved. In these playful ecosystems, animals could be either alone or accompanied by human beings, but in both cases the system should react properly to the interactions of all the players, creating more engaging and natural games. Technologically mediated playful scenarios for animals will therefore require contextual information about the game participants, such as their location or body posture, in order to suitably adapt the system reactions. This paper presents a depth-based tracking system for cats capable of detecting their location, body posture, and field of view. The proposed system could also be extended to locate and detect human gestures and track small robots, becoming a promising component in the creation of intelligent interspecies playful environments.

    Work supported by the Spanish Ministry of Economy and Competitiveness and funded by the ERDF-FEDER (TIN2014-60077-R). The work of Patricia Pons has been supported by a national grant from the Spanish MECD (FPU13/03831). Alejandro Catalá also received support from a VALi+d fellowship from the GVA (APOSTD/2013/013). Special thanks to our cat participants, their owners, and our feline caretakers and therapists.
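
    The paper's own pipeline is not reproduced here, but a common baseline for this kind of depth tracking is background subtraction on the depth frame followed by blob extraction, with posture inferred from height above the floor. The sketch below assumes NumPy arrays standing in for Kinect-style depth frames; all thresholds are illustrative.

        import numpy as np

        def track_animal(depth_frame: np.ndarray, background: np.ndarray,
                         min_diff_mm: float = 40.0, min_pixels: int = 200):
            """Locate the largest foreground region in a depth frame.

            Returns (centroid_row, centroid_col, peak_height_mm) or None.
            Posture could then be classified from peak height above the floor,
            e.g. crouching vs. standing vs. rearing.
            """
            # Foreground = pixels significantly closer to the sensor than background.
            diff = background.astype(np.float32) - depth_frame.astype(np.float32)
            mask = diff > min_diff_mm
            if mask.sum() < min_pixels:
                return None  # nothing large enough in the scene
            rows, cols = np.nonzero(mask)
            peak_height = float(diff[mask].max())  # highest point above the floor
            return rows.mean(), cols.mean(), peak_height

        # Synthetic example: a flat floor 2 m from the sensor plus one "cat" blob.
        rng = np.random.default_rng(0)
        background = np.full((240, 320), 2000.0) + rng.normal(0, 2, (240, 320))
        frame = background.copy()
        frame[100:140, 150:200] -= 180.0  # blob about 18 cm above the floor
        print(track_animal(frame, background))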

    Distributed Dynamic Hierarchical Task Assignment for Human-Robot Teams

    Get PDF
    This work implements a joint task architecture for human-robot collaborative task execution using a hierarchical task planner. The architecture allows humans and robots to work together as teammates in the same environment while following several task constraints: 1) sequential-order, 2) non-sequential, and 3) alternative execution constraints. Both the robot and the human are aware of each other's current state and allocate their next task based on the task tree. On-table tasks, such as setting up a tea table or playing a color-sequence matching game, validate the task architecture. The robot maintains an updated representation of its human teammate's task; using this knowledge, it can continuously detect the teammate's intention towards each sub-task and coordinate with the teammate. While performing a joint task, tasks may or may not overlap, so we designed a dialogue-based conversation between humans and robots to resolve conflicts when tasks overlap.

    Evaluating the human-robot task architecture is the next concern after validating it, and trust and trustworthiness are among the most critical metrics to explore. A study was conducted between humans and robots to create a homophily situation. Homophily is the bias a person feels towards another because of perceived social similarity. We conducted this study to determine whether humans can form a homophilic relationship with robots and whether there is a connection between homophily and trust. We found a correlation between homophily and trust in human-robot interactions.

    Furthermore, we designed a pipeline by which the robot learns a task by observing the human teammate's hand movements while conversing. The robot then constructs the task tree by itself using a GA learning framework, removing the need for a programmer to manually specify, revise, or update the task tree each time, which makes the architecture more flexible, realistic, efficient, and dynamic. Additionally, our architecture allows the robot to comprehend the context of a situation by conversing with a human teammate and observing the surroundings. The robot can find a link between the context of the situation and the surrounding objects using an ontology approach and can perform the desired task accordingly. Therefore, we propose a human-robot distributed joint task management architecture that addresses design, improvement, and evaluation under multiple constraints.
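
    As an illustration of the three constraint types named above, here is a minimal sketch of a task tree with sequential, non-sequential, and alternative constraints and a next-task query; the node names and encoding are our assumptions, not the dissertation's implementation.

        from dataclasses import dataclass, field
        from enum import Enum, auto

        class Constraint(Enum):
            SEQUENTIAL = auto()      # children must run in listed order
            NON_SEQUENTIAL = auto()  # children may run in any order
            ALTERNATIVE = auto()     # exactly one child needs to run

        @dataclass
        class TaskNode:
            name: str
            constraint: Constraint = Constraint.NON_SEQUENTIAL
            children: list["TaskNode"] = field(default_factory=list)
            done: bool = False

            def is_done(self) -> bool:
                if not self.children:
                    return self.done
                if self.constraint is Constraint.ALTERNATIVE:
                    return any(c.is_done() for c in self.children)
                return all(c.is_done() for c in self.children)

            def available(self) -> list["TaskNode"]:
                """Leaf tasks that may be allocated next, honoring constraints."""
                if self.is_done():
                    return []
                if not self.children:
                    return [self]
                if self.constraint is Constraint.SEQUENTIAL:
                    for c in self.children:  # only the first unfinished child
                        if not c.is_done():
                            return c.available()
                    return []
                # NON_SEQUENTIAL / ALTERNATIVE: any unfinished child is eligible.
                return [t for c in self.children for t in c.available()]

        # Tea-table example: cups may be placed in any order, but pouring waits.
        tree = TaskNode("set tea table", Constraint.SEQUENTIAL, [
            TaskNode("place cups", Constraint.NON_SEQUENTIAL,
                     [TaskNode("cup A"), TaskNode("cup B")]),
            TaskNode("pour tea"),
        ])
        print([t.name for t in tree.available()])  # ['cup A', 'cup B']
        tree.children[0].children[0].done = True
        tree.children[0].children[1].done = True
        print([t.name for t in tree.available()])  # ['pour tea']

    Either teammate, human or robot, can be allocated any task returned by available(), which is how the tree supports distributed, dynamic assignment.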