Learning Motion Predictors for Smart Wheelchair using Autoregressive Sparse Gaussian Process
Constructing a smart wheelchair on a commercially available powered
wheelchair (PWC) platform avoids a host of seating, mechanical design and
reliability issues but requires methods of predicting and controlling the
motion of a device never intended for robotics. Analog joystick inputs are
subject to black-box transformations which may produce intuitive and adaptable
motion control for human operators, but complicate robotic control approaches;
furthermore, installing standard axle-mounted odometers on a commercial
PWC is difficult. In this work, we present an integrated hardware and software
system for predicting the motion of a commercial PWC platform that does not
require any physical or electronic modification of the chair beyond plugging
into an industry standard auxiliary input port. This system uses an RGB-D
camera and an Arduino interface board to capture motion data, including visual
odometry and joystick signals, via ROS communication. Future motion is
predicted using an autoregressive sparse Gaussian process model. We evaluate
the proposed system on real-world short-term path prediction experiments.
Experimental results demonstrate the system's efficacy when compared to a
baseline neural network model.
Comment: The paper has been accepted to the International Conference on Robotics and Automation (ICRA 2018).
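The autoregressive prediction idea can be illustrated with a minimal sketch: a window of past odometry samples forms the regression input, and the next sample is the target. For clarity this uses a plain dense GP in NumPy on a one-dimensional series; the paper's sparse (inducing-point) approximation, joystick inputs, and multi-dimensional chair state are omitted, and all names and hyperparameters here are illustrative, not taken from the paper.

```python
import numpy as np

def rbf_kernel(A, B, ls=1.0, var=1.0):
    """Squared-exponential kernel between row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / ls**2)

class AutoregressiveGP:
    """One-step-ahead GP predictor: [x_{t-lag}, ..., x_{t-1}] -> x_t."""
    def __init__(self, lag=2, noise=1e-4):
        self.lag, self.noise = lag, noise

    def fit(self, series):
        # Slide a window over the series to build autoregressive pairs.
        s = np.asarray(series, float)
        X = np.stack([s[i:i + self.lag] for i in range(len(s) - self.lag)])
        y = s[self.lag:]
        K = rbf_kernel(X, X) + self.noise * np.eye(len(X))
        self.X, self.alpha = X, np.linalg.solve(K, y)  # cache K^{-1} y
        return self

    def predict(self, window):
        # Posterior mean at a new lag-window of recent observations.
        k = rbf_kernel(np.asarray(window, float)[None, :], self.X)
        return float((k @ self.alpha)[0])
```

A sparse variant would replace `self.X` with a small set of inducing points so that fitting scales with the number of inducing points rather than the full trajectory length, which matters for the real-time setting described above.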
Runtime resource management for vision-based applications in mobile robots
Computer-vision (CV) applications are an important part of mobile robot automation, analyzing the raw data perceived by vision sensors and providing rich information about the surrounding environment. Designing a high-speed, energy-efficient CV application for a resource-constrained mobile robot, while maintaining a targeted level of computational accuracy, is a challenging task: such applications demand substantial resources, e.g. computing capacity and battery energy, to run seamlessly in real time. Moreover, there is always a trade-off between accuracy, performance and energy consumption, as these factors dynamically affect each other at runtime.

In this thesis, we investigate novel runtime resource management approaches to improve the performance and energy efficiency of vision-based applications in mobile robots. Due to the dynamic correlation between management objectives such as energy consumption and execution time, both environmental and computational observations must be updated dynamically, and the actuators manipulated at runtime based on these observations. Algorithmic and computational parameters of a CV application (output accuracy and CPU voltage/frequency) are adjusted by measuring the key factors associated with the intensity of computation and the strain on CPUs (environmental complexity and instantaneous power). Furthermore, we show how a mechanical characteristic of the robot, namely its speed of movement, can affect its computational behaviour. Based on this investigation, we add the robot's speed as an actuator to our resource management algorithm, alongside the computational knobs already considered (output accuracy and CPU voltage/frequency).

To evaluate the proposed approach, we perform several experiments on an unmanned ground vehicle equipped with an embedded computer board, using RGB and event cameras as the vision sensors for the CV applications.
The obtained results show that the presented management strategy improves the performance and accuracy of vision-based applications while significantly reducing energy consumption compared with state-of-the-art solutions. Moreover, we demonstrate that simultaneously considering both the computational and mechanical aspects of CV applications running on mobile robots significantly reduces energy consumption compared with similar methods that manage these two aspects separately, oblivious to each other's outcome.
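The feedback loop described above can be sketched in a few lines: each control period the manager observes instantaneous power and scene complexity, then adjusts the three actuators (output accuracy, CPU frequency, robot speed). This is a minimal illustration of the idea only; the frequency levels, thresholds, and update rules are hypothetical placeholders, not the controller developed in the thesis.

```python
# Illustrative DVFS operating points in GHz (hypothetical values).
FREQ_LEVELS = [0.6, 1.0, 1.4, 1.8]

def manage(state, power_w, complexity, power_budget=10.0):
    """One step of the runtime-management loop; mutates and returns `state`.

    state: {"freq_idx": int, "accuracy": float, "speed": float}
    power_w: measured instantaneous power draw (watts)
    complexity: a scalar estimate of scene/environmental complexity
    """
    f = state["freq_idx"]
    if power_w > power_budget and f > 0:
        # Over budget: scale frequency down and relax output accuracy.
        state["freq_idx"] = f - 1
        state["accuracy"] = max(0.5, state["accuracy"] - 0.1)
    elif power_w < 0.8 * power_budget and f < len(FREQ_LEVELS) - 1:
        # Headroom available: raise frequency and tighten accuracy.
        state["freq_idx"] = f + 1
        state["accuracy"] = min(1.0, state["accuracy"] + 0.1)
    # Complex scenes take longer to process, so slow the robot to let
    # perception keep up; simple scenes permit faster motion.
    state["speed"] = 1.0 / (1.0 + complexity)
    return state
```

The key design point mirrored from the abstract is that the mechanical knob (speed) and the computational knobs (accuracy, frequency) are set in the same loop from shared observations, rather than by two managers oblivious to each other.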