Managing a Fleet of Autonomous Mobile Robots (AMR) using Cloud Robotics Platform
In this paper, we provide details of implementing a system for managing a
fleet of autonomous mobile robots (AMR) operating in a factory or warehouse
premises. While the robots themselves are autonomous in their motion and
obstacle avoidance capabilities, the target destination for each robot is
provided by a global planner. The global planner and the ground vehicles
(robots) constitute a multi-agent system (MAS) whose members communicate with
each other over a wireless network. Three different approaches are explored
for the implementation. The first two approaches make use of the distributed
Networked Robotics architecture and communication framework of the Robot
Operating System (ROS) itself, while the third approach uses the Rapyuta Cloud
Robotics framework. The comparative performance of these approaches is
analyzed through simulation as well as real-world experiments with actual
robots. These analyses provide an in-depth understanding of the inner workings
of the Cloud Robotics Platform in contrast to the usual ROS framework. The
insight gained through this exercise will be valuable for students as well as
practicing engineers interested in implementing similar systems elsewhere. In
the process, we also identify a few critical limitations of the current
Rapyuta platform and provide suggestions to overcome them.
Comment: 14 pages, 15 figures, journal paper
Reducing Object Detection Uncertainty from RGB and Thermal Data for UAV Outdoor Surveillance
Recent advances in Unmanned Aerial Vehicles (UAVs) have resulted in their
rapid adoption for a wide range of civilian applications, including precision
agriculture, biosecurity, disaster monitoring and surveillance. UAVs offer
low-cost platforms with flexible hardware configurations, as well as an
increasing number of autonomous capabilities, including take-off, landing,
object tracking and obstacle avoidance. However, little attention has been paid
to how UAVs deal with object detection uncertainties caused by false readings
from vision-based detectors, data noise, vibrations, and occlusion. In most
situations, the relevance and understanding of these detections are delegated
to human operators, as many UAVs have limited cognition power to interact
autonomously with the environment. This paper presents a framework for
autonomous navigation under uncertainty in outdoor scenarios for small UAVs
using a probabilistic-based motion planner. The framework is evaluated with
real flight tests using a sub-2 kg quadrotor UAV and illustrated in a
victim-finding Search and Rescue (SAR) case study in forest/bushland. The navigation
problem is modelled using a Partially Observable Markov Decision Process
(POMDP), and solved in real time onboard the small UAV using Augmented Belief
Trees (ABT) and the TAPIR toolkit. Results from experiments using colour and
thermal imagery show that the proposed motion planner provides accurate victim
localisation coordinates, as the UAV has the flexibility to interact with the
environment and obtain clearer visualisations of any potential victims compared
to the baseline motion planner. Incorporating this system allows optimised UAV
surveillance operations by diminishing false-positive readings from
vision-based object detectors.
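As a rough illustration of how detection uncertainty from the RGB and thermal detectors enters such a POMDP, the sketch below performs a discrete Bayesian belief update over candidate victim cells. The grid size and detector true/false-positive rates are assumed values; the actual system solves the POMDP online with ABT via the TAPIR toolkit rather than with this simplified filter.

```python
# Minimal sketch of the belief update underlying a POMDP-based search planner,
# over a discrete grid of candidate victim cells. All numbers are illustrative.
import numpy as np

N_CELLS = 25          # hypothetical 5x5 search grid
P_DETECT = 0.85       # assumed detector true-positive rate
P_FALSE_ALARM = 0.10  # assumed false-positive rate

def update_belief(belief, observed_cell, detection):
    """Bayes update of P(victim in cell) after observing one cell."""
    others = np.arange(N_CELLS) != observed_cell
    likelihood = np.empty_like(belief)
    if detection:
        likelihood[observed_cell] = P_DETECT
        likelihood[others] = P_FALSE_ALARM
    else:
        likelihood[observed_cell] = 1.0 - P_DETECT
        likelihood[others] = 1.0 - P_FALSE_ALARM
    posterior = likelihood * belief
    return posterior / posterior.sum()

belief = np.full(N_CELLS, 1.0 / N_CELLS)  # uniform prior over the grid
belief = update_belief(belief, observed_cell=12, detection=True)
print("Most likely victim cell:", int(belief.argmax()))
```

In the full planner, actions that move the UAV closer or change viewpoint trade off flight cost against the expected reduction in this belief uncertainty.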
Human-AI Collaboration in Organisations: A Literature Review on Enabling Value Creation
The augmentation of human intellect and capability with artificial intelligence is integral to the advancement of next-generation human-machine collaboration technologies designed to drive performance improvement and innovation. Yet we have limited understanding of how organisations can translate this potential into sustainable business value. We conduct an in-depth literature review of interdisciplinary research on the challenges and opportunities in organisational adoption of human-AI collaboration for value creation. We identify five positions central to how organisations can integrate and align the socio-technical challenges of augmented collaboration, namely strategic positioning, human engagement, organisational evolution, technology development and intelligence building. We synthesise the findings by means of an integrated model that focuses organisations on building the requisite internal microfoundations for the systematic management of augmented systems.
Porting Computer Vision Models to the Edge for Smart City Applications: Enabling Autonomous Vision-Based Power Line Inspection at the Smart Grid Edge for Unmanned Aerial Vehicles (UAVs)
Smart grid infrastructure must be monitored and inspected, especially when subject to harsh operating conditions in extreme, remote environments such as the highlands of Iceland. Current methods for monitoring such critical infrastructure include manual inspection, static video analysis (where connectivity is available) and unmanned aerial vehicle (UAV) inspection. UAVs offer certain inspection efficiencies; however, challenges persist given the time and UAV operator skill required. Collaborating with Landsnet, the Icelandic smart grid operator, we apply convolutional neural networks for image processing to detect smart grid transmission infrastructure and modify the resulting computer vision (CV) model to function on the edge of a UAV. In doing so, we overcome significant edge processing barriers. Our real-time CV model delivers decision insight on the UAV edge and enables autonomous flight path planning for use in smart grid inspection. Our approach is transferable to other smart city applications that could benefit from edge-based monitoring and inspection.
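A minimal sketch of what this kind of onboard edge inference can look like is given below, using TensorFlow Lite. The model file name, preprocessing, and interpreter choice are illustrative assumptions rather than the authors' actual pipeline.

```python
# Minimal sketch of running a quantised detection model on a UAV edge device
# with TensorFlow Lite. "powerline_detector.tflite" is a hypothetical model file.
import cv2
import numpy as np
import tflite_runtime.interpreter as tflite  # alternatively: tf.lite.Interpreter

interpreter = tflite.Interpreter(model_path="powerline_detector.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def detect(frame_bgr):
    """Run one inference pass on a camera frame and return the raw output tensor."""
    h, w = int(inp["shape"][1]), int(inp["shape"][2])
    resized = cv2.resize(frame_bgr, (w, h))              # assumed preprocessing
    tensor = np.expand_dims(resized, axis=0).astype(inp["dtype"])
    interpreter.set_tensor(inp["index"], tensor)
    interpreter.invoke()
    return interpreter.get_tensor(out["index"])

# scores = detect(onboard_camera_frame)  # feed frames from the UAV camera loop
```

Keeping inference on-device in this way avoids the connectivity constraints mentioned in the abstract and lets detection results feed directly into the autonomous flight path planner.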
Real-time robotic tasks for cyber-physical avatars
Although modern robots can perform complex tasks using sophisticated algorithms that are specialized to a particular task and environment, creating robots capable of completing tasks in unstructured environments without human guidance (e.g., through teleoperation) remains a challenge. In this research, we present a framework to meet this challenge for a "cyber-physical avatar," which is defined to be a semi-autonomous robotic system that adjusts to an unstructured environment and performs physical tasks subject to critical timing constraints while under human supervision. This thesis first realizes a cyber-physical avatar that integrates three key technologies: (1) whole-body compliant control, (2) skill acquisition from machine learning (neuroevolution methods and deep learning), and (3) vision-based control through visual servoing. Whole-body compliant control is essential for operator safety because avatars perform cooperative tasks in close proximity to humans; machine learning enables "programming" avatars such that they can be used by non-experts for a large array of tasks, some unforeseen, in an unstructured environment; and the visual servoing technique is indispensable for facilitating feedback control in human-avatar interaction. This thesis proposes and demonstrates a systematically incremental approach to automating robotic tasks by decomposing a non-trivial task into stages, each of which may be automated by integrating the aforementioned techniques. We design and implement the controllers for two semi-autonomous robots that integrate these three key techniques for grasping and pick-and-place tasks. While a general theory is beyond reach, we present a study on the tradeoffs between three design metrics for robotic task systems: (1) the amount of training effort for the robots to perform the task, (2) the time available to complete the task when the command is given, and (3) the quality of the result of the performed task. The tradeoff study in this design space uses the imprecise computation model as a framework to evaluate specific types of tasks: (1) grasping an unknown object and (2) placing the object in a target position. We demonstrate the generality of our integration methodology by applying it to two different robots, Dreamer and Hoppy. Our approach is evaluated by the performance of the robots in trading off between task completion time, training time, and task completion success rate, in an environment similar to those in the recent Amazon Picking Challenge.
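For readers unfamiliar with the visual servoing component mentioned above, the sketch below shows the classical image-based control law v = -lambda * L^+ * (s - s*). The gain, feature values, and interaction matrix are illustrative assumptions and are not taken from the thesis.

```python
# Minimal sketch of image-based visual servoing: drive the camera so that
# observed image features s converge to their desired values s*.
import numpy as np

LAMBDA = 0.5  # assumed proportional gain

def servo_velocity(features, desired, interaction_matrix):
    """Camera velocity command (6-vector twist) from the image feature error."""
    error = features - desired
    # Moore-Penrose pseudo-inverse handles non-square interaction matrices.
    return -LAMBDA * np.linalg.pinv(interaction_matrix) @ error

# Example with two point features (4 image coordinates) and a 4x6 interaction matrix.
s = np.array([0.12, -0.03, 0.30, 0.08])       # current feature values (assumed)
s_star = np.array([0.10, 0.00, 0.28, 0.05])   # desired feature values (assumed)
L = np.random.default_rng(0).normal(size=(4, 6))  # stand-in interaction matrix
v = servo_velocity(s, s_star, L)
print("commanded camera twist:", np.round(v, 4))
```

In a grasping or pick-and-place stage, a loop of this form provides the vision-based feedback that closes the gap left by imperfect learned skills.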
What Makes AI ‘Intelligent’ and ‘Caring’?: Exploring Affect and Relationality Across Three Sites of Intelligence and Care
This research was funded in whole by the Wellcome Trust [Seed Award ‘AI and Health’ 213643/Z/18/Z]. For the purpose of Open Access, the authors have applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission. The authors would like to thank Dr Jane Hopton for inspiring discussions about AI and dimensions of intelligence, and three anonymous reviewers as well as the editor-in-chief Dr Timmermans at Social Science and Medicine for their very helpful and constructive feedback.