
    Abstract Visual Programming of Social Robots for Novice Users

    Human-Robot Interaction (HRI) is a rapidly growing field that blends multiple disciplines with highly varied research methods. To better leverage experts in non-engineering (software/hardware) fields and increase engagement with the wider public, the development of HRI components such as robot behaviours needs to be made more accessible. This project aims to develop and evaluate a system with a suitably high level of abstraction so that novices can create high-level behaviours or tasks without having to deal with complexities such as low-level hardware control or the myriad of individual behaviours that combine to form seemingly straightforward actions. The system was developed for use on simulated and physical forms of the Scitos G5, currently operating at the Collection museum. Over the course of the project, two evaluations were performed: a study of the behaviours exhibited by the robot and a linear version of the visual programming interface, followed by a study of the workload and usability associated with the system interface after more high-level blocks and programming constructs were made available. The study at the Collection museum was used to inform the other study, which took place using simulated sessions with the robot; this second study evaluated a more complex version of the system. Across both studies, the system was found to be favourable according to SUS responses, with TLX responses showing that the mental demand of the system was low relative to the other dimensions. The latter study showed that some people found elements of the visual programming slightly confusing; however, the functionality of blocks and their relation to performing certain actions were easily learned. Results suggest that further study may be useful with future development of this or similar systems in wider contexts.
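
    To illustrate the kind of abstraction described above, the following is a minimal Python sketch, with hypothetical class and method names that are not taken from the paper's system, of a high-level behaviour block that hides low-level hardware control from a novice user:

        # Illustrative sketch only: a hypothetical high-level behaviour composed of
        # lower-level actions, so a novice only ever invokes the top-level call.

        class Robot:
            """Hypothetical facade over low-level hardware control."""

            def look_at(self, target):
                print(f"[low-level] orienting head towards {target}")

            def say(self, text):
                print(f"[low-level] speech synthesis: {text}")

            def drive_to(self, waypoint):
                print(f"[low-level] navigating to waypoint {waypoint}")


        def greet_visitor(robot, exhibit):
            """One high-level block as a novice might see it: many hidden steps."""
            robot.look_at("visitor")
            robot.say("Hello! Welcome to the museum.")
            robot.drive_to(exhibit)
            robot.say(f"This is the {exhibit} exhibit.")


        if __name__ == "__main__":
            greet_visitor(Robot(), "main gallery")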

    Comics, robots, fashion and programming: outlining the concept of actDresses

    This paper concerns the design of physical languages for controlling and programming robotic consumer products. For this purpose, we explore basic theories of semiotics as represented in the two separate fields of comics and fashion, and how these could be used as resources in the development of new physical languages. Based on these theories, the design concept of actDresses is defined and supplemented by three example scenarios of how the concept can be used for controlling, programming, and predicting the behaviour of robotic systems.

    Introductory programming: a systematic literature review

    As computing becomes a mainstream discipline embedded in the school curriculum and acts as an enabler for an increasing range of academic disciplines in higher education, the literature on introductory programming is growing. Although there have been several reviews that focus on specific aspects of introductory programming, there has been no broad overview of the literature exploring recent trends across the breadth of introductory programming. This paper is the report of an ITiCSE working group that conducted a systematic review in order to gain an overview of the introductory programming literature. Partitioning the literature into papers addressing the student, teaching, the curriculum, and assessment, we explore trends, highlight advances in knowledge over the past 15 years, and indicate possible directions for future research.

    Programming Robots for Activities of Everyday Life

    Text-based programming remains a challenge to novice programmers in all programming domains, including robotics. The use of robots is gaining considerable traction in several domains, since robots are capable of assisting humans in repetitive and hazardous tasks. In the near future, robots will be used in tasks of everyday life in homes, hotels, airports, museums, etc. However, robotic missions have been either predefined or programmed using low-level APIs, making mission specification task-specific and error-prone. To harness the full potential of robots, it must be possible to define missions for specific application domains as needed. The specification of missions for robotic applications should be performed via easy-to-use, accessible means and, at the same time, be accurate and unambiguous. Simplicity and flexibility in programming such robots are important, since end-users come from diverse domains and do not necessarily have sufficient programming knowledge. The main objective of this licentiate thesis is to empirically understand the state of the art in languages and tools used for specifying robot missions by end-users. The findings will form the basis for interventions in developing future languages for end-user robot programming. During the empirical study, DSLs for robot mission specification were analyzed through published literature, their websites, user manuals, and sample missions, and by using the languages to specify missions for supported robots. After extracting data from 30 environments, 133 features were identified. A feature matrix mapping the features to the environments was developed, along with a feature model for robotic mission specification DSLs. Our results show that most end-user-facing environments exist in the education domain for teaching novice programmers and STEM subjects. Most of the visual languages are developed using the Blockly and Scratch libraries. End-user domain abstraction needs more work, since most of the visual environments abstract robotic and programming-language concepts but not end-user concepts. In future work, it is important to focus on the development of reusable libraries for end-user concepts, and to further explore how end-user-facing environments can be adapted for novice programmers to learn general programming skills and robot programming in low-resource settings in developing countries, like Uganda.
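
    As a purely illustrative sketch of the abstraction gap studied here (the mission vocabulary and function names below are invented, not drawn from any of the 30 surveyed environments), an end-user mission might be written as a short declarative sequence that an interpreter expands into low-level robot API calls:

        # Hypothetical mission specification: end-users state what the robot should
        # do; the interpreter maps each step onto (imagined) low-level API calls.

        MISSION = [
            ("go_to", "reception"),
            ("announce", "Lunch is served in the cafeteria."),
            ("go_to", "cafeteria"),
            ("wait", 30),
        ]


        def run_mission(mission):
            """Interpret an end-user mission step by step."""
            for action, arg in mission:
                if action == "go_to":
                    print(f"[api] navigate_to(waypoint={arg!r})")
                elif action == "announce":
                    print(f"[api] text_to_speech({arg!r})")
                elif action == "wait":
                    print(f"[api] sleep({arg})")
                else:
                    raise ValueError(f"unknown action: {action}")


        if __name__ == "__main__":
            run_mission(MISSION)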

    Block-Based Development of Mobile Learning Experiences for the Internet of Things

    The Internet of Things enables experts in given domains to create smart user experiences for interacting with the environment. However, the development of such experiences requires strong programming skills, which are challenging for non-technical users to acquire. This paper presents several extensions to the block-based programming language used in App Inventor that make the creation of mobile apps for smart learning experiences less challenging. Such apps are used to process and graphically represent data streams from sensors by applying map-reduce operations. A workshop with students without previous experience in Internet of Things (IoT) or mobile app programming was conducted to evaluate the propositions. As a result, students were able to create small IoT apps that ingest, process, and visually represent data more simply than with App Inventor's standard features. In addition, an experimental study was carried out in a mobile app development course with academics from diverse disciplines. Results showed that it was faster and easier for novice programmers to develop the proposed app using the new stream-processing blocks.
    Funding: Spanish National Research Agency (AEI) - ERDF fund
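
    To illustrate what such stream-processing blocks express, here is a hedged sketch in plain Python rather than App Inventor's block language, using invented sensor readings: a map step transforms each reading and a reduce step aggregates the stream into a single value.

        from functools import reduce

        # Invented sample stream: (timestamp_s, temperature_C) readings from an IoT sensor.
        readings = [(0, 21.4), (10, 21.9), (20, 23.1), (30, 22.6), (40, 24.0)]

        # Map: keep only the temperatures, converted to Fahrenheit for display.
        fahrenheit = list(map(lambda r: r[1] * 9 / 5 + 32, readings))

        # Reduce: collapse the stream into one summary value (the maximum seen).
        hottest = reduce(lambda a, b: max(a, b), fahrenheit)

        print(f"Samples (F): {[round(t, 1) for t in fahrenheit]}")
        print(f"Maximum temperature seen: {hottest:.1f} F")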

    Human-robot Interaction For Multi-robot Systems

    Designing an effective human-robot interaction paradigm is particularly important for complex tasks such as multi-robot manipulation, which require the human and robot to work together in a tightly coupled fashion. Although increasing the number of robots can expand the area that the robots can cover within a bounded period of time, a poor human-robot interface will ultimately compromise the performance of the team of robots. However, introducing a human operator to the team of robots does not automatically improve performance, due to the difficulty of teleoperating mobile robots with manipulators. The human operator's concentration is divided not only among multiple robots but also between controlling each robot's base and arm. This complexity substantially increases the potential neglect time, since the operator's inability to effectively attend to each robot during a critical phase of the task leads to a significant degradation in task performance. There are several proven paradigms for increasing the efficacy of human-robot interaction: 1) multimodal interfaces in which the user controls the robots using voice and gesture; 2) configurable interfaces which allow the user to create new commands by demonstrating them; 3) adaptive interfaces which reduce the operator's workload as necessary by increasing robot autonomy. This dissertation presents an evaluation of the relative benefits of different types of user interfaces for multi-robot systems composed of robots with wheeled bases and three-degree-of-freedom arms. It describes a design for constructing low-cost multi-robot manipulation systems from off-the-shelf parts. User expertise was measured along three axes (navigation, manipulation, and coordination), and participants who performed above threshold on two out of three dimensions on a calibration task were rated as expert. Our experiments reveal that the relative expertise of the user was the key determinant of the best-performing interface paradigm for that user, indicating that good user modeling is essential for designing a human-robot interaction system that will be used for an extended period of time. The contributions of the dissertation include: 1) a model for detecting operator distraction from robot motion trajectories; 2) adjustable autonomy paradigms for reducing operator workload; 3) a method for creating coordinated multi-robot behaviors from demonstrations with a single robot; 4) a user modeling approach for identifying expert-novice differences from short teleoperation traces.
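
    To make the adjustable-autonomy idea concrete, the following is a minimal sketch with invented names and thresholds, not the dissertation's actual implementation: a controller raises a robot's autonomy whenever the time since the operator last attended to it exceeds a neglect limit.

        import time

        NEGLECT_LIMIT_S = 5.0  # invented threshold: how long a robot may go unattended


        class RobotAgent:
            """Hypothetical robot that tracks when the operator last attended to it."""

            def __init__(self, name):
                self.name = name
                self.autonomous = False
                self.last_attended = time.monotonic()

            def operator_command(self, command):
                """Teleoperation input: the robot is being attended to, so stay manual."""
                self.last_attended = time.monotonic()
                self.autonomous = False
                print(f"{self.name}: executing operator command {command!r}")

            def update_autonomy(self):
                """Switch to autonomous behavior once neglect time exceeds the limit."""
                neglect = time.monotonic() - self.last_attended
                if neglect > NEGLECT_LIMIT_S and not self.autonomous:
                    self.autonomous = True
                    print(f"{self.name}: neglected for {neglect:.1f}s, acting autonomously")


        if __name__ == "__main__":
            robots = [RobotAgent("robot-1"), RobotAgent("robot-2")]
            time.sleep(6)                               # operator is occupied elsewhere
            robots[0].operator_command("open gripper")  # robot-1 receives attention again
            for r in robots:
                r.update_autonomy()                     # only robot-2 takes over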