
    Mobile Robots

    The objective of this book is to cover advances in mobile robotics and related technologies applied to the design and development of multi-robot systems. Control system design is a complex issue, requiring the application of information technologies to link the robots into a single network. The human-robot interface becomes a demanding task, especially when sophisticated methods for brain signal processing are used: the generated electrophysiological signals can be used to command devices such as cars, wheelchairs or even video games. A number of developments in navigation and path planning, including parallel programming, are covered. Cooperative path planning, formation control of multi-robot agents, and communication and distance measurement between agents are presented. Training mobile robot operators is also a difficult task because of several factors related to the execution of different tasks. The improvement presented here relates to environment model generation based on autonomous mobile robot observations.

    Determining robot actions for tasks requiring sensor interaction

    The performance of non-trivial tasks by a mobile robot has been a long-term objective of robotic research. One of the major stumbling blocks to this goal is the conversion of high-level planning goals and commands into actuator and sensor processing controls. In order for a mobile robot to accomplish a non-trivial task, the task must be described in terms of primitive actions of the robot's actuators. Most non-trivial tasks require the robot to interact with its environment, thus necessitating coordination of sensor processing and actuator control to accomplish the task. The main contention is that the transformation from the high-level description of the task to the primitive actions should be performed primarily at execution time, when knowledge about the environment can be obtained through sensors. It is proposed to produce the detailed plan of primitive actions by using a collection of low-level planning components that contain domain-specific knowledge and knowledge about the available sensors, actuators, and sensor/actuator processing. This collection will perform signal and control processing as well as serve as a control interface between an actual mobile robot and a high-level planning system. Previous research has shown the usefulness of high-level planning systems for planning the coordination of activities so as to achieve a goal, but none have been fully applied to actual mobile robots due to the complexity of interacting with sensors and actuators. This control interface is currently being implemented on a LABMATE mobile robot connected to a SUN workstation and will be developed to enable the LABMATE to perform non-trivial, sensor-intensive tasks as specified by a planning system.
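
    The sketch below illustrates the execution-time expansion idea described in the abstract: a low-level planning component turns a named task into primitive actuator commands only once current sensor readings are available. All names here (PrimitiveAction, LowLevelPlanner, doorway_expansion) are hypothetical illustrations, not the paper's actual interfaces.

```python
# Minimal sketch of execution-time expansion of a high-level task into primitive
# actions. Names and the doorway example are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class PrimitiveAction:
    actuator: str          # e.g. "wheels", "gripper"
    command: str           # e.g. "forward", "rotate"
    argument: float        # magnitude (metres, radians, ...)


class LowLevelPlanner:
    """A planning component holding domain-specific sensor/actuator knowledge."""

    def __init__(self, name: str, expand: Callable[[dict], List[PrimitiveAction]]):
        self.name = name
        self._expand = expand

    def plan(self, sensor_snapshot: dict) -> List[PrimitiveAction]:
        # Expansion happens at execution time, using current sensor data.
        return self._expand(sensor_snapshot)


def doorway_expansion(sensors: dict) -> List[PrimitiveAction]:
    # Hypothetical expansion: centre on the doorway reported by a range sensor,
    # then drive through it.
    offset = sensors.get("door_offset_m", 0.0)
    return [
        PrimitiveAction("wheels", "rotate", offset * 0.5),
        PrimitiveAction("wheels", "forward", sensors.get("door_distance_m", 1.0)),
    ]


if __name__ == "__main__":
    registry = {"traverse_doorway": LowLevelPlanner("traverse_doorway", doorway_expansion)}
    # The high-level planner only names the task; details are resolved now,
    # when sensor readings are available.
    snapshot = {"door_offset_m": 0.2, "door_distance_m": 1.5}
    for action in registry["traverse_doorway"].plan(snapshot):
        print(action)
```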

    Computer hardware and software for robotic control

    The KSC has implemented an integrated system that coordinates state-of-the-art robotic subsystems. It is a sensor-based, real-time robotic control system performing operations beyond the capability of an off-the-shelf robot. The integrated system provides real-time closed-loop adaptive path control of position and orientation of all six axes of a large robot; enables the implementation of a highly configurable, expandable testbed for sensor system development; and makes several smart distributed control subsystems (robot arm controller, process controller, graphics display, and vision tracking) appear as intelligent peripherals to a supervisory computer coordinating the overall system.
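
    The following sketch shows, in miniature, the architectural idea of smart subsystems acting as intelligent peripherals under a supervisory computer. The class names and the single closed-loop correction step are assumptions for illustration, not KSC's actual interfaces.

```python
# Minimal sketch of a supervisory computer coordinating distributed subsystems
# that behave as intelligent peripherals. All names are illustrative assumptions.
from abc import ABC, abstractmethod


class Subsystem(ABC):
    """An intelligent peripheral: it runs its own control loop and exposes status."""

    @abstractmethod
    def status(self) -> dict: ...

    @abstractmethod
    def command(self, message: dict) -> None: ...


class VisionTracking(Subsystem):
    def status(self) -> dict:
        return {"target_offset": (0.01, -0.02)}   # placeholder measurement

    def command(self, message: dict) -> None:
        print("vision <-", message)


class ArmController(Subsystem):
    def status(self) -> dict:
        return {"pose": [0.0] * 6}                 # six-axis pose placeholder

    def command(self, message: dict) -> None:
        print("arm    <-", message)


class Supervisor:
    """Coordinates the peripherals in a simple closed-loop cycle."""

    def __init__(self, subsystems: dict):
        self.subsystems = subsystems

    def cycle(self) -> None:
        offset = self.subsystems["vision"].status()["target_offset"]
        # Feed the vision correction to the arm: adaptive path control in spirit.
        self.subsystems["arm"].command({"correct_xy": offset})


if __name__ == "__main__":
    Supervisor({"vision": VisionTracking(), "arm": ArmController()}).cycle()
```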

    User needs, benefits and integration of robotic systems in a space station laboratory

    The methodology, results and conclusions of the User Needs, Benefits, and Integration Study (UNBIS) of Robotic Systems in the Space Station Microgravity and Materials Processing Facility are summarized. Study goals include the determination of user requirements for robotics within the Space Station United States Laboratory. Three experiments were selected to determine user needs and to allow detailed investigation of microgravity requirements. A NASTRAN analysis of Space Station response to robotic disturbances and acceleration measurements of a standard industrial robot (Intelledex Model 660) resulted in the selection of two ranges of low-gravity manipulation: Level 1 (10^-3 to 10^-5 g at greater than 1 Hz) and Level 2 (<= 10^-6 g at 0.1 Hz). The study also included an evaluation of microstepping methods for controlling stepper motors and concluded that an industrial robot actuator can perform milli-g motion without modification. The relative merits of end-effectors and manipulators were studied in order to determine their ability to perform a range of tasks related to the three low-gravity experiments. An Effectivity Rating was established for evaluating these robotic system capabilities. Preliminary interface requirements were determined so that requirements for an orbital flight demonstration experiment may be defined.
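
    As a small worked illustration of the two low-gravity ranges quoted above, the function below classifies a measured disturbance against them. The thresholds come from the abstract; the exact decision rule (including how to treat frequencies near the boundaries) is an assumption made purely for illustration.

```python
# Toy classifier for the two low-gravity manipulation ranges quoted in the abstract:
# Level 1: 10^-3 to 10^-5 g at > 1 Hz; Level 2: <= 10^-6 g at 0.1 Hz.
# The decision rule itself is an illustrative assumption.

def classify_disturbance(accel_g: float, freq_hz: float) -> str:
    if accel_g <= 1e-6 and freq_hz <= 0.1:
        return "Level 2"
    if 1e-5 <= accel_g <= 1e-3 and freq_hz > 1.0:
        return "Level 1"
    return "outside the studied ranges"


if __name__ == "__main__":
    print(classify_disturbance(5e-4, 2.0))   # -> Level 1
    print(classify_disturbance(5e-7, 0.05))  # -> Level 2
```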

    BRAHMS: Novel middleware for integrated systems computation

    Biological computational modellers are becoming increasingly interested in building large, eclectic models, including components on many different computational substrates, both biological and non-biological. At the same time, the rise of the philosophy of embodied modelling is generating a need to deploy biological models as controllers for robots in real-world environments. Finally, robotics engineers are beginning to find value in seconding biomimetic control strategies for use on practical robots. Together with the ubiquitous desire to make good on past software development effort, these trends are throwing up new challenges of intellectual and technological integration (for example across scales, across disciplines, and even across time): challenges that are unmet by existing software frameworks. Here, we outline these challenges in detail, and go on to describe a newly developed software framework, BRAHMS, that meets them. BRAHMS is a tool for integrating computational process modules into a viable, computable system: its generality and flexibility facilitate integration across barriers, such as those described above, in a coherent and effective way. We describe several cases where BRAHMS has been successfully deployed in practical situations, and show excellent performance in comparison with a monolithic development approach. Additional benefits of developing in the framework include source code self-documentation, automatic coarse-grained parallelisation, cross-language integration, data logging, and performance monitoring; dynamic load-balancing and 'pause and continue' execution are planned. BRAHMS is built on the nascent, and similarly general-purpose, model markup language SystemML. In future, this will also facilitate repeatability and accountability (same answers ten years from now), transparent automatic software distribution, and interfacing with other SystemML tools.
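
    To make the idea of "integrating computational process modules into a viable, computable system" concrete, here is a conceptual sketch of independent process modules stepped in lockstep with data copied along declared links. This is not the BRAHMS API; every class and port name below is an illustrative assumption.

```python
# Conceptual sketch of wiring process modules into one computable system.
# Not the BRAHMS interface; names and the toy neuron/motor modules are assumptions.
from typing import Dict, List, Tuple


class Process:
    """A computational module with named inputs and outputs, stepped each timestep."""

    def __init__(self, name: str):
        self.name = name
        self.inputs: Dict[str, float] = {}
        self.outputs: Dict[str, float] = {}

    def step(self) -> None:
        raise NotImplementedError


class NeuronModel(Process):
    def step(self) -> None:
        # Toy leaky integrator driven by whatever arrives on "stimulus".
        rate = self.outputs.get("rate", 0.0)
        self.outputs["rate"] = 0.9 * rate + 0.1 * self.inputs.get("stimulus", 0.0)


class MotorDriver(Process):
    def step(self) -> None:
        self.outputs["wheel_speed"] = 2.0 * self.inputs.get("rate", 0.0)


class System:
    """Executes modules and copies data along declared links after each timestep."""

    def __init__(self, processes: List[Process], links: List[Tuple[str, str, str, str]]):
        self.processes = {p.name: p for p in processes}
        self.links = links  # (src_process, src_port, dst_process, dst_port)

    def run(self, steps: int, stimulus: float) -> None:
        for _ in range(steps):
            self.processes["neurons"].inputs["stimulus"] = stimulus
            for p in self.processes.values():
                p.step()
            for src, sport, dst, dport in self.links:
                self.processes[dst].inputs[dport] = self.processes[src].outputs[sport]


if __name__ == "__main__":
    system = System(
        [NeuronModel("neurons"), MotorDriver("motor")],
        [("neurons", "rate", "motor", "rate")],
    )
    system.run(steps=10, stimulus=1.0)
    print(system.processes["motor"].outputs)
```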

    Modular development of mobile robots with open source hardware and software components

    Prototyping and engineering robot hardware and low-level control often require time and effort that is subtracted from core research activities, such as the development of SLAM or planning algorithms, which need a working, reliable platform to be evaluated in a real-world scenario. In this paper, we present Rapid Robot Prototyping (R2P), an open source hardware and software architecture for the rapid prototyping of robotic applications, where off-the-shelf embedded modules (e.g., sensors, actuators, and controllers) are combined in a plug-and-play fashion, enabling the implementation of a complex system in a simple and modular way. R2P lets people involved in robotics, from researchers and designers to students and hobbyists, dramatically reduce the time and effort required to build a robot prototype.
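
    The sketch below captures the plug-and-play spirit described in the abstract: modules are combined simply by attaching them to a shared message bus and publishing or subscribing to named topics. It is a simplified analogy, not the actual R2P hardware or software interface; the bus, topics and module classes are assumptions.

```python
# Illustrative plug-and-play sketch: modules share a toy message bus.
# Not the R2P interface; all names are assumptions.
from collections import defaultdict
from typing import Callable, Dict, List


class Bus:
    """A toy message bus standing in for the shared communication backbone."""

    def __init__(self):
        self.subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def publish(self, topic: str, message: dict) -> None:
        for callback in self.subscribers[topic]:
            callback(message)

    def subscribe(self, topic: str, callback: Callable[[dict], None]) -> None:
        self.subscribers[topic].append(callback)


class EncoderModule:
    """An off-the-shelf sensor module that just plugs into the bus."""

    def __init__(self, bus: Bus):
        self.bus = bus

    def tick(self) -> None:
        self.bus.publish("odometry", {"ticks_left": 120, "ticks_right": 118})


class MotorModule:
    """An actuator module listening for velocity commands."""

    def __init__(self, bus: Bus):
        bus.subscribe("cmd_vel", lambda m: print("motors set to", m))


if __name__ == "__main__":
    bus = Bus()
    bus.subscribe("odometry", lambda m: print("odometry:", m))
    encoder = EncoderModule(bus)        # adding a module is just sharing the bus
    MotorModule(bus)
    encoder.tick()
    bus.publish("cmd_vel", {"linear": 0.2, "angular": 0.0})
```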

    A Simulation-Based Layered Framework for the Development of Collaborative Autonomous Systems

    The purpose of this thesis is to introduce a simulation-based software framework that facilitates the development of collaborative autonomous systems. Significant commonalities exist in the design approaches of both collaborative and autonomous systems, mirroring the sense, plan, act paradigm and mostly adopting layered architectures. Unfortunately, the development of such systems is intricate and requires low-level interfacing which significantly detracts from development time. Frameworks for the development of collaborative and autonomous systems exist, but they are not flexible and center on narrow ranges of applications and platforms. The proposed framework utilizes an expandable layered structure that allows developers to define their own layering and perform isolated development on individual layers. The framework provides communication capabilities and allows message definition in order to define collaborative behavior across various applications. The framework is designed to be compatible with many robotic platforms and utilizes the concept of robotic middleware to interface with robots; attaching the framework to a different platform only requires changing the middleware. An example Fire Brigade application developed in the framework is presented, highlighting the design process and the use of framework-related features. The application is simulation-based, relying on kinematic models to simulate physical actions and a virtual environment to provide access to sensor data. While the results demonstrated interesting collaborative behavior, the ease of implementation and the capacity to experiment by swapping layers are particularly noteworthy. The framework retains the advantages of layered architectures and provides greater flexibility, shielding developers from intricacies and providing enough tools to make collaboration easy to implement.
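
    A hedged sketch of the sense, plan, act layering and layer swapping mentioned above follows. The layer names, the agent loop and the Fire Brigade style message are assumptions made for illustration; they do not reproduce the thesis's actual classes.

```python
# Sketch of a swappable sense-plan-act layer stack plus a collaboration message.
# All names are illustrative assumptions.
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class CollabMessage:
    sender: str
    topic: str
    payload: dict


class Layer(ABC):
    @abstractmethod
    def process(self, data: dict) -> dict: ...


class SenseLayer(Layer):
    def process(self, data: dict) -> dict:
        data["fire_seen"] = data.get("heat_reading", 0.0) > 50.0
        return data


class PlanLayer(Layer):
    def process(self, data: dict) -> dict:
        data["goal"] = "extinguish" if data.get("fire_seen") else "patrol"
        return data


class ActLayer(Layer):
    def process(self, data: dict) -> dict:
        print("executing goal:", data["goal"])
        return data


class Agent:
    """Runs whatever layer stack it is given; swapping a layer changes behaviour."""

    def __init__(self, name: str, layers):
        self.name, self.layers = name, layers

    def step(self, observation: dict) -> dict:
        for layer in self.layers:
            observation = layer.process(observation)
        return observation


if __name__ == "__main__":
    robot = Agent("truck_1", [SenseLayer(), PlanLayer(), ActLayer()])
    result = robot.step({"heat_reading": 72.0})
    # A collaboration message another agent could consume:
    print(CollabMessage(sender="truck_1", topic="goal", payload={"goal": result["goal"]}))
```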

    Robotic ubiquitous cognitive ecology for smart homes

    Robotic ecologies are networks of heterogeneous robotic devices pervasively embedded in everyday environments, where they cooperate to perform complex tasks. While their potential makes them increasingly popular, one fundamental problem is how to make them both autonomous and adaptive, so as to reduce the amount of preparation, pre-programming and human supervision that they require in real-world applications. The RUBICON project develops learning solutions which yield cheaper, adaptive and efficient coordination of robotic ecologies. The approach we pursue builds upon a unique combination of methods from cognitive robotics, machine learning, planning and agent-based control, and wireless sensor networks. This paper illustrates the innovations advanced by RUBICON on each of these fronts before describing how the resulting techniques have been integrated and applied to a smart home scenario. The resulting system is able to provide useful services and proactively assist the users in their activities. RUBICON learns through an incremental and progressive approach driven by the feedback received from its own activities and from the user, while also self-organizing the manner in which it uses available sensors, actuators and other functional components in the process. This paper summarises some of the lessons learned by adopting such an approach and outlines promising directions for future work.
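
    As a toy illustration of the feedback-driven, self-organizing behaviour described above, the sketch below keeps per-sensor confidence weights for an activity and nudges them whenever the user confirms or corrects the system. It illustrates only the principle, not RUBICON's actual learning machinery; the sensor names, weighting scheme and learning rate are assumptions.

```python
# Toy feedback-driven adaptation: weighted sensor fusion whose weights are
# adjusted by user feedback. Not RUBICON's algorithms; names are assumptions.
from typing import Dict, List


class AdaptiveEcology:
    def __init__(self, sensors: List[str]):
        # Start with equal confidence in every available sensor.
        self.weights: Dict[str, float] = {s: 1.0 / len(sensors) for s in sensors}

    def predict(self, readings: Dict[str, float]) -> float:
        # Weighted vote, e.g. "how likely is the user cooking right now".
        return sum(self.weights[s] * readings.get(s, 0.0) for s in self.weights)

    def feedback(self, readings: Dict[str, float], correct: bool, lr: float = 0.1) -> None:
        # Reinforce sensors that agreed with the user-confirmed outcome,
        # discount the others, then renormalise the weights.
        for s in self.weights:
            agreement = readings.get(s, 0.0) if correct else 1.0 - readings.get(s, 0.0)
            self.weights[s] *= 1.0 + lr * (agreement - 0.5)
        total = sum(self.weights.values())
        self.weights = {s: w / total for s, w in self.weights.items()}


if __name__ == "__main__":
    eco = AdaptiveEcology(["kitchen_pir", "stove_power", "robot_camera"])
    readings = {"kitchen_pir": 0.9, "stove_power": 0.8, "robot_camera": 0.2}
    print("before:", eco.weights, "score:", round(eco.predict(readings), 2))
    eco.feedback(readings, correct=True)   # user confirms they were cooking
    print("after: ", eco.weights)
```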