408 research outputs found

    Shifting the Focus: The Role of Presence in Reconceptualising the Design Process

    Get PDF
In this paper the relationship between presence and imaging is examined with a view to establishing how our understanding of imaging, and subsequently the design process, may be reconceptualised to give greater focus to its experiential potential. First, the paper outlines the research project contributing to the discussion. It then provides brief overviews of research on both imaging and presence in the design process, highlighting the narrow conceptions of imaging (and the recognised need for further research) compared to the more holistic and experiential understandings of presence. The paper concludes with an argument, and a proposed study, for exploring the role of digital technology and presence in extending the potential of imaging and its role in the design process. As indicated in the DRS Conference Theme, this paper focuses “…on what people experience and the systems and actions that create those experiences.” Interface designers, information architects and interactive media artists understand the powerful influence of experience in design. ‘Experience design’ is a community of practice driven by individuals within digitally based disciplines who believe that understanding people is essential to successful design in any medium and that “…experience is the personal connection with the moment and… every aspect of living is an experience, whether we are the creators or simply chance participants” (Shedroff, 2001, p. 5). Keywords: Design, Design Process, Presence, Imaging, Grounded Theory

    XSkill: Cross Embodiment Skill Discovery

    Full text link
Human demonstration videos are a widely available data source for robot learning and an intuitive user interface for expressing desired behavior. However, directly extracting reusable robot manipulation skills from unstructured human videos is challenging due to the large embodiment gap and unobserved action parameters. To bridge this embodiment gap, this paper introduces XSkill, an imitation learning framework that 1) discovers a cross-embodiment representation called skill prototypes purely from unlabeled human and robot manipulation videos, 2) transfers the skill representation to robot actions using a conditional diffusion policy, and finally, 3) composes the learned skills to accomplish unseen tasks specified by a human prompt video. Our experiments in simulation and real-world environments show that the discovered skill prototypes facilitate both skill transfer and composition for unseen tasks, resulting in a more general and scalable imitation learning framework. The benchmark, code, and qualitative results are available at https://xskill.cs.columbia.edu
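The three-stage pipeline in the abstract can be gestured at with a toy sketch. The real XSkill method learns prototypes end-to-end and uses a conditional diffusion policy; here, purely for illustration, k-means clustering over pre-computed clip embeddings stands in for prototype discovery, and nearest-prototype labeling of a prompt video stands in for skill composition. All function names and shapes below are assumptions, not the paper's API.

```python
import numpy as np

def discover_prototypes(embeddings, k, iters=50, seed=0):
    """Toy k-means over video-clip embeddings (N x D), standing in for
    XSkill's learned skill-prototype discovery."""
    rng = np.random.default_rng(seed)
    protos = embeddings[rng.choice(len(embeddings), k, replace=False)]
    for _ in range(iters):
        # Assign each clip embedding to its nearest prototype.
        d = np.linalg.norm(embeddings[:, None] - protos[None], axis=-1)
        labels = d.argmin(axis=1)
        # Move each prototype to the mean of its assigned embeddings.
        for j in range(k):
            if (labels == j).any():
                protos[j] = embeddings[labels == j].mean(axis=0)
    return protos, labels

def skill_sequence(video_embeddings, protos):
    """Label a prompt video (T x D) as a sequence of prototype indices,
    collapsing consecutive repeats -- a stand-in for composing learned
    skills from a human prompt video."""
    d = np.linalg.norm(video_embeddings[:, None] - protos[None], axis=-1)
    ids = d.argmin(axis=1)
    return [int(i) for n, i in enumerate(ids) if n == 0 or ids[n - 1] != i]
```

The point of the sketch is the data flow: unlabeled clips in, a shared prototype set out, and any new prompt video reduced to a prototype sequence that a downstream policy could execute.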

    Telerobot control system

    Get PDF
This invention relates to an operator interface for controlling a telerobot to perform tasks in a poorly modeled environment and/or within unplanned scenarios. The telerobot control system includes a remote robot manipulator linked to an operator interface. The operator interface includes a setup terminal, simulation terminal, and execution terminal for the control of the graphics simulator and local robot actuator as well as the remote robot actuator. These terminals may be combined in a single terminal. Complex tasks are developed from sequential combinations of parameterized task primitives and recorded teleoperations, and are tested by execution on a graphics simulator and/or local robot actuator, together with adjustable time delays. The novel features of this invention include the shared and supervisory control of the remote robot manipulator via the operator interface by pretested complex task sequences based on sequences of parameterized task primitives combined with further teleoperation and run-time binding of parameters based on task context.
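The core idea above, sequencing parameterized task primitives with parameters bound at run time from the task context, can be sketched in a few lines. The class and function names (`Primitive`, `TaskScript`, `run`) are illustrative inventions, not terms from the patent, and the "simulator" here is just a label on the log entry.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Primitive:
    """A parameterized task primitive: a named action with default
    parameters that may be rebound from the task context at run time."""
    name: str
    action: Callable[..., str]
    params: dict = field(default_factory=dict)

@dataclass
class TaskScript:
    """A pretested complex task: a sequential combination of primitives."""
    steps: list

    def run(self, context=None, simulate=True):
        # Run-time binding: context values override a primitive's
        # defaults, but only for parameters the primitive declares.
        context = context or {}
        log = []
        for p in self.steps:
            bound = {**p.params,
                     **{k: v for k, v in context.items() if k in p.params}}
            target = "sim" if simulate else "remote"
            log.append(f"[{target}] {p.name}: {p.action(**bound)}")
        return log
```

Pretesting then amounts to running the same script with `simulate=True` against the graphics simulator or local actuator before dispatching it, with `simulate=False`, to the remote manipulator.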

    Virtual reality for safe testing and development in collaborative robotics: challenges and perspectives

    Get PDF
Collaborative robots (cobots) could help humans in tasks that are mundane, dangerous or where direct human contact carries risk. Yet, collaboration between humans and robots is severely limited by concerns for the safety and comfort of human operators. In this paper, we outline the use of extended reality (XR) as a way to test and develop collaboration with robots. We focus on virtual reality (VR) in simulating collaboration scenarios and the use of cobot digital twins. This is specifically useful in situations that are difficult or even impossible to safely test in real life, such as dangerous scenarios. We describe using XR simulations as a means to evaluate collaboration with robots without putting humans in harm's way. We show how an XR setting enables combining human behavioral data, subjective self-reports, and biosignals signifying human comfort, stress and cognitive load during collaboration. Several works demonstrate that XR can be used to train human operators and provide them with augmented reality (AR) interfaces to enhance their performance with robots. We also provide a first attempt at what could become the basis for a human–robot collaboration testing framework, specifically for designing and testing factors affecting human–robot collaboration. The use of XR has the potential to change the way we design and test cobots, and train cobot operators, in a range of applications: from industry, through healthcare, to space operations.

    Brain Functional Connectivity under Teleoperation Latency: a fNIRS Study

    Full text link
Objective: This study aims to understand the cognitive impact of latency in teleoperation and the related mitigation methods, using functional Near-Infrared Spectroscopy (fNIRS) to analyze functional connectivity. Background: Latency between command, execution, and feedback in teleoperation can impair performance and affect the operator's mental state. The neural underpinnings of these effects are not well understood. Method: A human subject experiment (n = 41) of a simulated remote robot manipulation task was performed. Three conditions were tested: no latency; both visual and haptic latency; and visual latency with no haptic latency. fNIRS and performance data were recorded and analyzed. Results: The presence of latency in teleoperation significantly increased functional connectivity within and between the prefrontal and motor cortices. Maintaining visual latency while providing real-time haptic feedback reduced the average functional connectivity in all cortical networks and showed a significantly different connectivity ratio within the prefrontal and motor cortical networks. The performance results showed the worst performance in the all-delayed condition and the best performance in the no-latency condition, which echoes the neural activity patterns. Conclusion: The study provides neurological evidence that latency in teleoperation increases cognitive load, anxiety, and challenges in motion planning and control. Real-time haptic feedback, however, positively influences neural pathways related to cognition, decision-making, and sensorimotor processes. Application: This research can inform the design of ergonomic teleoperation systems that mitigate the effects of latency. Comment: Submitted to Human Factors
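The abstract does not specify the connectivity estimator, but functional connectivity in fNIRS work is commonly computed as pairwise Pearson correlation between channel time series, with network-level values averaged over channel groups (e.g. prefrontal vs. motor). The sketch below shows that standard estimator under those assumptions; the channel groupings are illustrative.

```python
import numpy as np

def functional_connectivity(signals):
    """Pearson-correlation connectivity matrix from fNIRS channel
    time series, given as a (channels x samples) array."""
    return np.corrcoef(signals)

def mean_network_connectivity(conn, idx_a, idx_b):
    """Average connectivity between two channel groups, e.g. a
    prefrontal set vs. a motor set (indices are illustrative)."""
    return float(conn[np.ix_(idx_a, idx_b)].mean())
```

Comparing `mean_network_connectivity` across the three latency conditions is the kind of within- and between-network contrast the Results section describes.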

    Method and associated apparatus for capturing, servicing, and de-orbiting earth satellites using robotics

    Get PDF
This invention is a method and supporting apparatus for autonomously capturing, servicing and de-orbiting a free-flying spacecraft, such as a satellite, using robotics. The capture of the spacecraft includes the steps of optically seeking and ranging the satellite using LIDAR; and matching tumble rates, rendezvousing and berthing with the satellite. Servicing of the spacecraft may be done using supervised autonomy, that is, allowing a robot to execute a sequence of instructions without intervention from a remote human-occupied location. These instructions may be packaged at the remote station in a script and uplinked to the robot for execution upon a remote command giving authority to proceed. Alternately, the instructions may be generated by Artificial Intelligence (AI) logic onboard the robot. In either case, the remote operator maintains the ability to abort an instruction or script at any time, as well as the ability to intervene using manual override to teleoperate the robot. In one embodiment, a vehicle used for carrying out the method of this invention comprises an ejection module, which includes the robot, and a de-orbit module. Once servicing is completed by the robot, the ejection module separates from the de-orbit module, leaving the de-orbit module attached to the satellite for de-orbiting the same at a future time. Upon separation, the ejection module can either de-orbit itself or rendezvous with another satellite for servicing. The ability to de-orbit a spacecraft further allows the opportunity to direct the landing of the spent satellite in a safe location away from population centers, such as the ocean.

    Evaluation of Using Semi-Autonomy Features in Mobile Robotic Telepresence Systems

    Get PDF
Mobile robotic telepresence systems used for social interaction scenarios require that users steer robots in a remote environment. As a consequence, a heavy workload can be put on users if they are unfamiliar with using robotic telepresence units. One way to lessen this workload is to automate certain operations performed during a telepresence session in order to assist remote drivers in navigating the robot in new environments. Such operations include autonomous robot localization and navigation to certain points in the home and automatic docking of the robot to the charging station. In this paper we describe the implementation of these autonomous features along with a user evaluation study. The evaluation scenario focuses on novice users' first experience of using the system. Importantly, the scenario assumed that participants had as little prior information about the system as possible. Four different use-cases were identified from the user behaviour analysis.
Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech. Plan Nacional de Investigación, proyecto DPI2011-25483