
    Collaborating with Autonomous Agents

    With the anticipated increase of small unmanned aircraft systems (sUAS) entering the National Airspace System, it is highly likely that vehicle operators will be teaming with fleets of small autonomous vehicles. These small vehicles may consist of sUAS, which weigh 55 pounds or less and typically fly at altitudes of 400 feet and below, and small ground vehicles typically operating in buildings or on defined small campuses. Typically, vehicle operators are not concerned with manual control of the vehicle; instead, they are concerned with the overall mission. For this vision of high-level mission operators working with fleets of vehicles to come to fruition, many human-factors-related challenges must be investigated and solved. First, the interface between the human operator and the autonomous agent must operate at a level that the operator needs and the agents can understand. This paper details the natural language human factors efforts that NASA Langley's Autonomy Incubator is focusing on. In particular, these efforts focus on allowing the operator to interact with the system using speech and gestures rather than a mouse and keyboard. With the system able to understand both speech and gestures, operators unfamiliar with the vehicle dynamics will be able to easily plan, initiate, and change missions using a language familiar to them, rather than having to learn and converse in the vehicle's language. This will foster better teaming between the operator and the autonomous agent, which will help lower workload, increase situation awareness, and improve the performance of the system as a whole.
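    The abstract above describes mapping operator speech to mission-level commands. As a rough illustration only (this is not NASA's actual system; all names, patterns, and the intent vocabulary below are hypothetical), a minimal intent parser over transcribed utterances might look like this:

```python
# Minimal sketch: resolve an operator's transcribed utterance to a
# high-level mission command, so the operator speaks in mission terms
# rather than in the vehicle's own control language. Hypothetical only.
import re
from dataclasses import dataclass

@dataclass
class MissionCommand:
    action: str          # e.g. "survey", "goto", "return"
    target: str | None   # named landmark from the operator's vocabulary

# Operator-facing phrasings mapped to mission-level actions.
INTENT_PATTERNS = [
    (re.compile(r"\b(survey|map|inspect)\b.*\b(?:the\s+)?(\w+)$"), "survey"),
    (re.compile(r"\b(go|fly)\s+to\b.*\b(?:the\s+)?(\w+)$"), "goto"),
    (re.compile(r"\b(come back|return|land)\b"), "return"),
]

def parse_utterance(text: str) -> MissionCommand | None:
    """Resolve a transcribed utterance to a mission command, or None."""
    t = text.lower().strip()
    for pattern, action in INTENT_PATTERNS:
        m = pattern.search(t)
        if m:
            target = m.group(2) if m.lastindex and m.lastindex >= 2 else None
            return MissionCommand(action, target)
    return None  # unrecognized: ask the operator to rephrase

if __name__ == "__main__":
    print(parse_utterance("please survey the rooftop"))  # survey / rooftop
    print(parse_utterance("fly to the hangar"))          # goto / hangar
    print(parse_utterance("come back and land"))         # return
```

    In a real system the transcription would come from a speech recognizer and the gesture channel would disambiguate targets; the point of the sketch is only the mission-level vocabulary sitting between operator and vehicle.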

    Perception, learning and use of tool affordances on humanoid robots

    Ankara: The Department of Computer Engineering and the Graduate School of Engineering and Science of Bilkent University, 2013. Thesis (Master's) -- Bilkent University, 2013. Includes bibliographical references, leaves 89-91. Humans and some animals use different tools for different aims, such as extending reach, amplifying mechanical force, creating or augmenting the signal value of social displays, camouflage, bodily comfort, and effective control of fluids. In robotics, tools are mostly used for extending the reach of a robot. For this aim, the question "What kind of tool is better in which situation?" is very significant. The importance of the affordance concept arises with this question, because different tools afford a variety of capabilities depending on the target objects. Towards the aim of learning tool affordances, robots should experience effects by applying behaviors to different objects. In this study, our goal is to teach the humanoid robot iCub the affordances of tools by applying different behaviors to a variety of objects and observing the effects of these interactions. Using the robot's eye camera and a Kinect, tool and object features are obtained for each interaction to construct the training data. The success of a behavior depends on the tool features, the object's position and properties, and also the hand with which the robot uses the tool. As a result of the training of each behavior, the robot successfully predicts the effects of different behaviors and infers the affordances when a tool is given and an object is shown. When an affordance is requested, the robot can apply the appropriate behavior given a tool and an object, and it can select the best tool among different tools when a specific affordance is requested and an object is shown. This study also demonstrates how different positions and properties of objects affect the affordance and behavior results, and how the affordance and behavior results are affected when a part of a tool is removed, modified, or a new part is added. Çalışkan, Yiğit. M.S.
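    As a rough sketch of the learning-and-selection loop this abstract describes (predict the effect of each behavior from tool and object features, then pick a tool/behavior pair that achieves a requested affordance), one might write something like the following; the feature encodings, effect labels, and the use of a k-nearest-neighbour classifier are illustrative assumptions, not the thesis's actual method:

```python
# Hypothetical sketch of per-behavior effect prediction and tool selection.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# One classifier per behavior: [tool features | object features] -> effect label.
# Effect labels here are toy stand-ins, e.g. 0 = "no change", 1 = "moved closer".
behaviors = ["pull", "push"]
effect_models = {b: KNeighborsClassifier(n_neighbors=3) for b in behaviors}

rng = np.random.default_rng(0)
for b in behaviors:
    X = rng.normal(size=(60, 6))      # toy training rows: 3 tool + 3 object features
    y = rng.integers(0, 3, size=60)   # toy observed-effect labels
    effect_models[b].fit(X, y)

def predict_effect(behavior, tool_feats, obj_feats):
    """Predict the effect of applying `behavior` with this tool on this object."""
    x = np.concatenate([tool_feats, obj_feats]).reshape(1, -1)
    return int(effect_models[behavior].predict(x)[0])

def select_tool(tools, obj_feats, desired_effect):
    """Pick a (tool, behavior) pair whose predicted effect matches the request."""
    for name, tool_feats in tools.items():
        for b in behaviors:
            if predict_effect(b, tool_feats, obj_feats) == desired_effect:
                return name, b
    return None  # no available tool affords the requested effect

tools = {"rake": rng.normal(size=3), "stick": rng.normal(size=3)}
print(select_tool(tools, rng.normal(size=3), desired_effect=1))
```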

    How to build a supervised autonomous system for robot-enhanced therapy for children with autism spectrum disorder

    Robot-Assisted Therapy (RAT) has successfully been used to improve social skills in children with autism spectrum disorders (ASD) through remote control of the robot in so-called Wizard of Oz (WoZ) paradigms. However, there is a need to increase the autonomy of the robot, both to lighten the burden on human therapists (who have to remain in control and, importantly, supervise the robot) and to provide a consistent therapeutic experience. This paper seeks to provide insight into increasing the autonomy level of social robots in therapy to move beyond WoZ. With the final aim of improved human-human social interaction for the children, this multidisciplinary research seeks to facilitate the use of social robots as tools in clinical situations by addressing the challenge of increasing robot autonomy. We introduce the clinical framework in which the developments are tested, alongside initial data obtained from patients in a first phase of the project using a WoZ set-up mimicking the targeted supervised-autonomy behaviour. We further describe the implemented system architecture capable of providing the robot with supervised autonomy.
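    A minimal sketch of a supervised-autonomy loop of the kind this abstract describes: an autonomous layer proposes the next therapeutic action, and a human supervisor approves, overrides, or rejects it before execution. All names and the decision logic are hypothetical, not the paper's actual architecture:

```python
# Hypothetical supervised-autonomy step: propose -> supervise -> execute.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Proposal:
    action: str       # e.g. "greet", "prompt_imitation", "give_feedback"
    rationale: str    # why the autonomous layer suggests it

def propose_action(child_state: dict) -> Proposal:
    """Autonomous layer: map the observed child state to a suggested action."""
    if not child_state.get("engaged", False):
        return Proposal("greet", "child appears disengaged")
    return Proposal("prompt_imitation", "child is engaged; continue protocol")

def run_step(child_state: dict,
             supervise: Callable[[Proposal], Optional[str]]) -> str:
    """One supervised step: execute the approved (possibly overridden) action."""
    proposal = propose_action(child_state)
    decision = supervise(proposal)   # None = reject, str = action to run
    if decision is None:
        return "no-op (supervisor rejected)"
    return f"executing: {decision}"

# The supervisor keeps authority: approve as-is, or swap in another action.
approve_all = lambda p: p.action
print(run_step({"engaged": False}, approve_all))               # executing: greet
print(run_step({"engaged": True}, lambda p: "give_feedback"))  # override
```

    The design point is that the robot never acts unsupervised: autonomy only reduces how often the therapist must author actions from scratch, not who holds final control.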

    Human-Robot Trust Assessment From Physical Apprehension Signals


    Abstracts 2015: Highlights of Student Research and Creative Endeavors


    The Turning, Stretching and Boxing Technique: a Direction Worth Looking Towards

    3D avatar user interfaces (UI) are now used for many applications; a growing area for their use is serving location-sensitive information to users as they need it while visiting or touring a building. Users communicate directly with an avatar rendered to a display in order to ask a question, get directions, or partake in a guided tour, and as a result of this kind of interaction with avatar UIs, they have become a familiar part of modern human-computer interaction (HCI). However, if the viewer is not in the sweet spot (defined by Raskar et al. (1999) as a stationary viewing position at the optimal 90° angle to a 2D display) of the 2D display, the 3D illusion of the avatar deteriorates, which becomes evident as the user’s ability to interpret the avatar’s gaze direction towards points of interest (PoI) in the user’s real-world surroundings deteriorates also. This thesis combats the above problem by allowing the user to view the 3D avatar UI from outside the sweet spot without any deterioration in the 3D illusion: the user does not lose the ability to interpret the avatar’s gaze direction and thus experiences no loss in the perceived corporeal presence (Holz et al., 2011) of the avatar. This is facilitated by a three-pronged graphical process called the Turning, Stretching and Boxing (TSB) technique, which maintains the avatar’s 3D illusion regardless of the user’s viewing angle and is achieved by using head-tracking data from the user captured by a Microsoft Kinect. The TSB technique is a contribution of this thesis because of how it is used with an avatar UI, where the user is free to move outside of the sweet spot without losing the 3D illusion of the rendered avatar. The consecutive empirical studies that evaluate the claims of the TSB technique are also contributions of this thesis; those claims are as follows: (1) increased interpretability of the avatar’s gaze direction and (2) increased perception of corporeal presence for the avatar. The last of the empirical studies evaluates the use of 3D display technology in conjunction with the TSB technique. The results of Study 1 and Study 2 indicate that there is a significant increase in participants’ ability to interpret the avatar’s gaze direction when the TSB technique is switched on. The survey from Study 1 shows a significant increase in the perceived corporeal presence of the avatar when the TSB technique is switched on. The results from Study 3 indicate that there is no significant benefit for participants when interpreting the avatar’s gaze direction with 3D display technology turned on or off, when the TSB technique is switched on.
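    The abstract does not give the TSB technique’s formulas, but the underlying geometry of head-tracked viewing-angle compensation can be sketched. In the snippet below, “turning” is interpreted as yawing the avatar toward the tracked viewer and “stretching” as anamorphic widening by 1/cos of the viewing angle; both interpretations are assumptions, and “boxing” is omitted:

```python
# Illustrative geometry only -- not the thesis's actual TSB implementation.
import math

def viewing_angle(head_x: float, head_z: float) -> float:
    """Angle (radians) between the viewer and the display normal.
    head_x, head_z: Kinect head position in metres, display at the origin,
    +z pointing out of the screen toward the viewer."""
    return math.atan2(head_x, head_z)

def tsb_params(head_x: float, head_z: float, max_angle=math.radians(75)):
    """Per-frame correction: avatar yaw and horizontal stretch factor."""
    theta = max(-max_angle, min(max_angle, viewing_angle(head_x, head_z)))
    yaw = theta                      # "turning": rotate avatar toward viewer
    stretch = 1.0 / math.cos(theta)  # "stretching": compensate foreshortening
    return yaw, stretch

# Viewer 1 m to the right of centre, 2 m from the screen:
yaw, stretch = tsb_params(1.0, 2.0)
print(f"yaw={math.degrees(yaw):.1f} deg, stretch={stretch:.2f}")  # ~26.6 deg, 1.12
```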