
    Integrating mobile robotics and vision with undergraduate computer science

    This paper describes the integration of robotics education into an undergraduate Computer Science curriculum. The proposed approach covers mobile robotics together with the closely related field of Computer Vision, and is directly linked to the research conducted at the authors’ institution. The paper describes the most relevant details of the module content and assessment strategy, paying particular attention to the practical sessions using Rovio mobile robots. The specific choices made with regard to the mobile platform, software libraries and lab environment are discussed. The paper also presents a detailed qualitative and quantitative analysis of student results, including the correlation between student engagement and performance, and discusses the outcomes of this experience.

    Perceptual Learning and Abstraction in Machine Learning : an application to autonomous robotics

    This paper deals with the possible benefits of Perceptual Learning in Artificial Intelligence. On the one hand, Perceptual Learning is increasingly studied in neurobiology and is now considered an essential part of any living system. In fact, Perceptual Learning and Cognitive Learning are both necessary for learning and often depend on each other. On the other hand, many works in Machine Learning are concerned with "Abstraction" in order to reduce the complexity of certain learning tasks. In the Abstraction framework, Perceptual Learning can be seen as a specific process that learns how to transform the data before the traditional learning task itself takes place. In this paper, we argue that biologically-inspired Perceptual Learning mechanisms could be used to build efficient low-level Abstraction operators that deal with real-world data. To illustrate this, we present an application where Perceptual Learning-inspired meta-operators are used to perform abstraction on an autonomous robot's visual perception. The goal of this work is to enable the robot to learn how to identify objects it encounters in its environment.
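    The idea of an Abstraction operator applied before the traditional learning step can be pictured with a short, purely hypothetical Python sketch (not the authors' system): a block-averaging operator compresses raw pixels into a coarse feature vector, and only then does a simple nearest-prototype recogniser run on the abstracted data. All names and values below are illustrative.

    import numpy as np

    def abstraction_operator(image, block=4):
        """Abstraction step: reduce a raw grayscale image to a coarse block-averaged feature vector."""
        h, w = image.shape
        h, w = h - h % block, w - w % block              # crop to a multiple of the block size
        cells = image[:h, :w].reshape(h // block, block, w // block, block)
        return cells.mean(axis=(1, 3)).ravel()           # one feature per block

    def nearest_prototype_label(features, prototypes, labels):
        """The 'traditional' learning/recognition step, run on the abstracted data."""
        distances = [np.linalg.norm(features - p) for p in prototypes]
        return labels[int(np.argmin(distances))]

    # Hypothetical usage: match a new camera frame against stored object prototypes.
    rng = np.random.default_rng(0)
    prototypes = [abstraction_operator(rng.random((32, 32))) for _ in range(3)]
    labels = ["ball", "box", "wall"]
    frame = rng.random((32, 32))
    print(nearest_prototype_label(abstraction_operator(frame), prototypes, labels))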

    Exploiting Deep Semantics and Compositionality of Natural Language for Human-Robot-Interaction

    We develop a natural language interface for human-robot interaction that implements reasoning about deep semantics in natural language. To realize the required deep analysis, we employ methods from cognitive linguistics, namely the modular and compositional framework of Embodied Construction Grammar (ECG) [Feldman, 2009]. Using ECG, robots are able to solve fine-grained reference resolution problems and other issues related to deep semantics and compositionality of natural language. This also includes verbal interaction with humans to clarify commands and queries that are too ambiguous to be executed safely. We implement our NLU framework as a ROS package and present proof-of-concept scenarios with different robots, as well as a survey of the state of the art.
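    As a rough illustration of how such an NLU component could be exposed as a ROS node, the hypothetical Python sketch below subscribes to a text topic, runs a placeholder parser, and asks a clarification question when the utterance is ambiguous. The topic names and the toy parser are assumptions; the paper's actual analysis uses ECG, which is not reproduced here.

    #!/usr/bin/env python
    # Illustrative sketch only: topic names and the toy parser are hypothetical.
    import rospy
    from std_msgs.msg import String

    def parse_command(text):
        """Placeholder for deep semantic parsing; returns (action, is_ambiguous)."""
        if "the object" in text:                      # toy ambiguity check
            return None, True
        return text.strip().lower(), False

    def on_command(msg, pubs):
        action_pub, clarify_pub = pubs
        action, ambiguous = parse_command(msg.data)
        if ambiguous:
            clarify_pub.publish(String(data="Which object do you mean?"))
        else:
            action_pub.publish(String(data=action))

    if __name__ == "__main__":
        rospy.init_node("nlu_interface")
        action_pub = rospy.Publisher("/nlu/parsed_action", String, queue_size=10)
        clarify_pub = rospy.Publisher("/nlu/clarification", String, queue_size=10)
        rospy.Subscriber("/nlu/command", String, on_command,
                         callback_args=(action_pub, clarify_pub))
        rospy.spin()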

    Middleware platform for distributed applications incorporating robots, sensors and the cloud

    Cyber-physical systems (CPSs) in the factory of the future will consist of cloud-hosted software governing an agile production process executed by autonomous mobile robots and controlled by analyzing the data from a vast number of sensors. CPSs thus operate on a distributed production-floor infrastructure, and the set-up continuously changes with each new manufacturing task. In this paper, we present our OSGi-based middleware that abstracts the deployment of service-based CPS software components on the underlying distributed platform comprising robots, actuators, sensors and the cloud. Moreover, our middleware provides specific support for developing components based on artificial neural networks, a technique that has recently become very popular for sensor data analytics and robot actuation. We demonstrate a system where a robot takes actions based on the input from sensors in its vicinity.
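    The sensor-to-actuation pattern described above can be sketched, purely as an assumption-laden illustration, as a component that pushes nearby sensor readings through a small neural network and maps the output to a robot action. The class name, weights, topics of the decision and the threshold below are invented for illustration and are not part of the authors' OSGi middleware.

    import numpy as np

    class SensorActuationService:
        """Toy component: a small neural network mapping sensor readings to an action."""

        def __init__(self, n_sensors, hidden=8, seed=0):
            rng = np.random.default_rng(seed)
            # In a real deployment these weights would come from a trained model.
            self.w1 = rng.normal(size=(n_sensors, hidden))
            self.w2 = rng.normal(size=hidden)

        def infer(self, readings):
            h = np.tanh(np.asarray(readings) @ self.w1)       # hidden layer
            return 1.0 / (1.0 + np.exp(-(h @ self.w2)))       # probability that an action is needed

        def decide(self, readings, threshold=0.5):
            return "move_to_sensor_area" if self.infer(readings) > threshold else "idle"

    service = SensorActuationService(n_sensors=4)
    print(service.decide([0.9, 0.1, 0.7, 0.3]))               # hypothetical sensor vector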

    Using and evaluating the real-time spatial perception system hydra in real-world scenarios

    Hydra is a real-time machine perception system released open source in 2022 as a package for the Robot Operating System (ROS). Machine perception systems like Hydra may play a role in engineering the next generation of spatial AIs for autonomous robots. Hydra is in the preliminary stages of its existence and does not come with intrinsic support for running on custom datasets. This thesis primarily aims to find out whether the promised capabilities of Hydra can be replicated, and to establish a workflow and guidelines for the modifications to Hydra needed to run it successfully on custom datasets.

    Reducing the Barrier to Entry of Complex Robotic Software: a MoveIt! Case Study

    Developing robot-agnostic software frameworks involves synthesizing the disparate fields of robotic theory and software engineering while simultaneously accounting for a large variability in hardware designs and control paradigms. As the capabilities of robotic software frameworks increase, the setup difficulty and learning curve for new users also increase. If the entry barriers for configuring and using the software on robots are too high, even the most powerful frameworks are useless. A growing need exists in robotic software engineering to aid users in getting started with, and customizing, the software framework as necessary for particular robotic applications. In this paper a case study is presented of the best practices found for lowering the barrier to entry in the MoveIt! framework, an open-source tool for mobile manipulation in ROS, that allows users to 1) quickly get basic motion planning functionality with minimal initial setup, 2) automate its configuration and optimization, and 3) easily customize its components. A graphical interface that assists the user in configuring MoveIt! is the cornerstone of our approach, coupled with the use of an existing standardized robot model for input, automatically generated robot-specific configuration files, and a plugin-based architecture for extensibility. These best practices are summarized into a set of barrier-to-entry design principles applicable to other robotic software. The approaches for lowering the entry barrier are evaluated by usage statistics and a user survey, and compared against our design objectives to assess their effectiveness for users.
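    The "basic motion planning functionality with minimal initial setup" can be pictured with a short example using MoveIt!'s Python interface (moveit_commander), assuming the Setup Assistant has already generated a robot-specific configuration. The planning-group name "arm" and the named target "home" are placeholders defined by that configuration, not fixed MoveIt! names.

    #!/usr/bin/env python
    # Placeholder group name ("arm") and named target ("home") come from the
    # configuration produced by the MoveIt! Setup Assistant for a specific robot.
    import sys
    import rospy
    import moveit_commander

    moveit_commander.roscpp_initialize(sys.argv)
    rospy.init_node("minimal_moveit_demo")

    group = moveit_commander.MoveGroupCommander("arm")   # planning group from the generated SRDF
    group.set_named_target("home")                       # pose stored during setup
    success = group.go(wait=True)                        # plan and execute in one call
    group.stop()

    rospy.loginfo("Motion %s", "succeeded" if success else "failed")
    moveit_commander.roscpp_shutdown()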

    Decentralized collaborative transport of fabrics using micro-UAVs

    Small unmanned aerial vehicles (UAVs) generally have little capacity to carry payloads. Through collaboration, UAVs can increase their joint payload capacity and carry heavier loads. For maximum flexibility with respect to dynamic and unstructured environments and task demands, we propose a fully decentralized control infrastructure based on a swarm-specific scripting language, Buzz. In this paper, we describe the control infrastructure and use it to compare two algorithms for collaborative transport: field potentials and spring-damper. We test the performance of our approach with a fleet of micro-UAVs, demonstrating the potential of decentralized control for collaborative transport. Comment: Submitted to the 2019 International Conference on Robotics and Automation (ICRA). 6 pages.
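    The spring-damper rule mentioned above can be sketched, under assumed gains and rest length, as the virtual force each UAV computes locally from a neighbour's relative position and velocity; the paper's actual controllers are written in Buzz rather than Python, so this is only an illustration of the principle.

    import numpy as np

    def spring_damper_force(rel_pos, rel_vel, rest_length=1.0, k=1.5, c=0.8):
        """Virtual force on a UAV from one neighbour (rel_* = neighbour minus self)."""
        distance = np.linalg.norm(rel_pos)
        direction = rel_pos / distance if distance > 1e-6 else np.zeros_like(rel_pos)
        spring = k * (distance - rest_length) * direction   # restore the rest distance
        damper = c * rel_vel                                # damp relative motion
        return spring + damper

    # Hypothetical neighbour 1.4 m away along x, drifting apart at 0.2 m/s.
    print(spring_damper_force(np.array([1.4, 0.0, 0.0]), np.array([0.2, 0.0, 0.0])))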