2,737 research outputs found

    Mediated Communication and Customer Service Experiences: Psychological and Demographic Predictors of User Evaluations in the United States

    People around the world who seek to interact with large organisations increasingly find they must do so via mediated and automated communication. Organisations often deploy both mediated and automated platforms, such as instant messaging and interactive voice response systems (IVRs), for efficiency and cost savings. Customer and client responses to these systems range from delight to frustration. To better understand the factors affecting people's satisfaction with these systems, we conducted a representative U.S. national survey (N = 1321). We found that people overwhelmingly like and trust in-person customer service more than mediated and automated modalities. Among demographic predictors of attitudes, age was important (older respondents liked mediated systems less), but income and education were not strong predictors. Among personality variables, innovativeness was positively associated with mediated-system satisfaction, whereas communication apprehensiveness, which we expected to be related to satisfaction, was not. We conclude by discussing implications for the burgeoning field of human-machine communication, as well as for social policy, equity, and the pullulating digital services divide.

    Accessibility requirements for human-robot interaction for socially assistive robots

    International Mention in the doctoral degree. Doctoral Programme in Computer Science and Technology (Programa de Doctorado en Ciencia y Tecnología Informática), Universidad Carlos III de Madrid. Committee: president, María Ángeles Malfaz Vázquez; secretary, Diego Martín de Andrés; member, Mike Wal

    Indoor navigation for the visually impaired: enhancements through utilisation of the Internet of Things and deep learning

    Wayfinding and navigation are essential aspects of independent living that rely heavily on the sense of vision. Walking through a complex building requires knowing one's exact location, finding a suitable path to the desired destination, avoiding obstacles, and monitoring orientation and movement along the route. People who cannot access sight-dependent information, such as that provided by signage, maps and environmental cues, can find it challenging to achieve these tasks independently. They can rely on assistance from others, or maintain their independence by using assistive technologies and the resources provided by smart environments. Over the last few years, several solutions have adapted technological innovations to tackle indoor navigation. However, a complete solution that meets the navigation requirements of visually impaired (VI) people is still lacking, and no single technology can address all the navigation difficulties they face. A hybrid solution using Internet of Things (IoT) devices and deep learning techniques to discern the patterns of an indoor environment may help VI people gain the confidence to travel independently. This thesis aims to improve the independence and enhance the journey of VI people in indoor settings through the proposed framework, using a smartphone. The thesis proposes a novel framework, Indoor-Nav, to provide a VI-friendly path that avoids obstacles and to predict the user's position. Its components include Ortho-PATH, Blue Dot for VI People (BVIP), and a deep learning-based indoor positioning model. The work establishes a novel collision-free pathfinding algorithm, Ortho-PATH, which generates a VI-friendly path by sensing a grid-based indoor space. Further, to ensure correct movement, BVIP uses beacons and a smartphone to monitor the movements and relative position of the moving user.
In dark areas without external devices, the research tests the feasibility of using sensory information from a smartphone with a pre-trained regression-based deep learning model to predict the user's absolute position. The work carries out a diverse range of simulations and experiments to confirm the performance and effectiveness of the proposed framework and its components. The results show that Indoor-Nav is the first pathfinding approach to generate paths that reflect the needs of VI people. The approach designs a path alongside walls while avoiding obstacles, and this research benchmarks it against other popular pathfinding algorithms. Further, this research develops a smartphone-based application to test the trajectories of a moving user in an indoor environment.
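The collision-free, wall-adjacent routing idea can be sketched as a grid search in which free cells next to a wall are slightly discounted, so the cheapest route tends to hug walls while still avoiding obstacles. This is only a minimal illustration of the concept; the function name and cost model are assumptions, not the thesis's actual Ortho-PATH algorithm.

```python
import heapq

def wall_following_path(grid, start, goal):
    """Illustrative grid search: 0 = free cell, 1 = obstacle/wall.

    Free cells adjacent to a wall get a small cost discount, so the
    cheapest path tends to run alongside walls, loosely mirroring the
    wall-adjacent routes described in the thesis (this is NOT the real
    Ortho-PATH algorithm, just a sketch of the idea).
    """
    rows, cols = len(grid), len(grid[0])

    def near_wall(r, c):
        # True if any 4-neighbour is an obstacle/wall cell.
        return any(
            0 <= r + dr < rows and 0 <= c + dc < cols and grid[r + dr][c + dc] == 1
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
        )

    frontier = [(0.0, start, [start])]  # (cost so far, cell, path)
    seen = set()
    while frontier:
        cost, (r, c), path = heapq.heappop(frontier)
        if (r, c) == goal:
            return path
        if (r, c) in seen:
            continue
        seen.add((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                step = 0.8 if near_wall(nr, nc) else 1.0  # discount wall-adjacent cells
                heapq.heappush(frontier, (cost + step, (nr, nc), path + [(nr, nc)]))
    return None  # goal unreachable
```

With a uniform step cost this would be plain Dijkstra; the discount is what biases the route toward walls, which VI travellers can use for tactile guidance.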

    Formation Control of Multiple Autonomous Mobile Robots Using Turkish Natural Language Processing

    People use natural language to express their thoughts and wishes. As robots reside in various human environments, such as homes, offices, and hospitals, the need for human–robot communication is increasing. One of the best ways to achieve this communication is the use of natural languages. Natural language processing (NLP) is the most important approach enabling robots to understand natural languages and improve human–robot interaction. Also, due to this need, the amount of research on NLP has increased considerably in recent years. In this study, commands were given to a multiple-mobile-robot system using the Turkish natural language, and the robots were required to fulfill these orders. Turkish is classified as an agglutinative language. In agglutinative languages, words combine different morphemes, each carrying a specific meaning, to create complex words. Turkish exhibits this characteristic by adding various suffixes to a root or base form to convey grammatical relationships, tense, aspect, mood, and other semantic nuances. Since the Turkish language has an agglutinative structure, it is very difficult to decode its sentence structure in a way that robots can understand. Parsing of a given command, path planning, path tracking, and formation control were carried out. In the path-planning phase, the A* algorithm was used to find the optimal path, and a PID controller was used to follow the generated path with minimum error. A leader–follower approach was used to control multiple robots. A platoon formation was chosen as the multi-robot formation. The proposed method was validated on a known map containing obstacles, demonstrating the system’s ability to navigate the robots to the desired locations while maintaining the specified formation. This study used Turtlebot3 robots within the Gazebo simulation environment, providing a controlled and replicable setting for comprehensive experimentation. 
The results affirm the feasibility and effectiveness of employing NLP techniques for the formation control of multiple mobile robots, offering a robust and effective method for further research and development on human–robot interaction.
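The path-planning phase uses A*; a minimal 4-connected grid version with a Manhattan heuristic, plus a hypothetical helper that places followers in a platoon line behind the leader, might look as follows. The map representation, costs, and helper are illustrative assumptions, not the paper's implementation.

```python
import heapq
import math

def a_star(grid, start, goal):
    """Standard A* on a 4-connected occupancy grid (0 = free, 1 = obstacle),
    with an admissible Manhattan-distance heuristic. The paper's map and
    cost model are not specified, so this is a minimal illustrative version."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]  # (f, g, node, path)
    best_g = {start: 0}
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < best_g.get(nxt, float("inf"))):
                best_g[nxt] = g + 1
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None  # no route exists

def platoon_targets(leader_xy, heading, spacing, n_followers):
    """Hypothetical leader-follower helper: place followers in a straight
    line (platoon) behind the leader, spaced along its heading."""
    dx, dy = -spacing * math.cos(heading), -spacing * math.sin(heading)
    return [(leader_xy[0] + (i + 1) * dx, leader_xy[1] + (i + 1) * dy)
            for i in range(n_followers)]
```

In a leader-follower platoon, each follower would then track its target point (e.g. with a PID controller, as the paper does for path tracking).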

    Assistive Navigation Using Deep Reinforcement Learning Guiding Robot With UWB/Voice Beacons and Semantic Feedbacks for Blind and Visually Impaired People

    Facilitating navigation in pedestrian environments is critical for enabling people who are blind and visually impaired (BVI) to achieve independent mobility. A deep reinforcement learning (DRL)–based assistive guiding robot with ultra-wideband (UWB) beacons that can navigate through routes with designated waypoints was designed in this study. Typically, a simultaneous localization and mapping (SLAM) framework is used to estimate the robot pose and navigational goal; however, SLAM frameworks are vulnerable in certain dynamic environments. The proposed navigation method is a learning approach based on state-of-the-art DRL and can effectively avoid obstacles. When used with UWB beacons, the proposed strategy is suitable for environments with dynamic pedestrians. We also designed a handle device with an audio interface that enables BVI users to interact with the guiding robot through intuitive feedback. The UWB beacons were fitted with an audio interface to provide environmental information, and the on-handle and on-beacon verbal feedback gives BVI users points of interest and turn-by-turn information. BVI users were recruited to conduct navigation tasks in different scenarios, including a route designed in a simulated ward to represent daily activities. In real-world situations, SLAM-based state estimation can be affected by dynamic obstacles, and visual trail following may suffer from occlusion by pedestrians or other obstacles. The proposed system successfully navigated through environments with dynamic pedestrians in which systems based on existing SLAM algorithms have failed.
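The paper does not detail how UWB beacon ranges are turned into a position estimate, but a standard 2-D trilateration sketch conveys the idea: subtract the first beacon's range equation from the others and solve the resulting 2x2 linear system. All names and numbers here are illustrative assumptions, not the authors' method.

```python
def trilaterate(beacons, ranges):
    """Illustrative 2-D trilateration from three UWB beacon ranges.

    Each range r_i satisfies (x - x_i)^2 + (y - y_i)^2 = r_i^2.
    Subtracting the first equation from the other two cancels the
    quadratic terms, leaving two linear equations in (x, y).
    """
    (x0, y0), (x1, y1), (x2, y2) = beacons
    r0, r1, r2 = ranges
    a1, b1 = 2 * (x1 - x0), 2 * (y1 - y0)
    c1 = r0**2 - r1**2 + x1**2 - x0**2 + y1**2 - y0**2
    a2, b2 = 2 * (x2 - x0), 2 * (y2 - y0)
    c2 = r0**2 - r2**2 + x2**2 - x0**2 + y2**2 - y0**2
    det = a1 * b2 - a2 * b1  # zero when the beacons are collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

A real deployment would use more than three beacons and a least-squares or filtered estimate to absorb ranging noise; this closed form is the minimal noiseless case.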

    A 360 VR and Wi-Fi Tracking Based Autonomous Telepresence Robot for Virtual Tour

    This study proposes a novel mobile robot teleoperation interface that demonstrates the applicability of a robot-aided remote telepresence system with a virtual reality (VR) device to a virtual tour scenario. To improve realism and provide an intuitive replica of the remote environment for the user interface, the implemented system automatically moves a mobile robot (viewpoint) while displaying 360-degree live video streamed from the robot to a VR device (Oculus Rift). When the user chooses a destination from a given set of options, the robot generates a route based on a shortest-path graph and travels along that route using a wireless signal tracking method based on measuring the direction of arrival (DOA) of radio signals. This paper presents an overview of the system and architecture and discusses its implementation aspects. Experimental results show that the proposed system moves to the destination stably using the signal tracking method, and that, at the same time, the user can remotely control the robot through the VR interface.
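Route generation from a shortest-path graph can be illustrated with plain Dijkstra over a weighted adjacency dict; the waypoint names and distances below are hypothetical, not taken from the paper.

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra over a weighted adjacency dict; returns the node sequence
    of the cheapest route, or None if the goal is unreachable."""
    pq = [(0.0, start, [start])]  # (distance so far, node, path)
    dist = {start: 0.0}
    while pq:
        d, node, path = heapq.heappop(pq)
        if node == goal:
            return path
        for nbr, w in graph.get(node, {}).items():
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(pq, (nd, nbr, path + [nbr]))
    return None

# Hypothetical tour-waypoint graph (edge weights in metres, directed).
tour = {
    "lobby":  {"hall_a": 5.0, "hall_b": 9.0},
    "hall_a": {"hall_b": 3.0, "exhibit": 7.0},
    "hall_b": {"exhibit": 2.0},
}
```

The robot would then follow the returned waypoint sequence, using the DOA-based signal tracking described above to steer toward each beacon in turn.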
