A distributed architecture for unmanned aerial systems based on publish/subscribe messaging and simultaneous localisation and mapping (SLAM) testbed
A dissertation submitted in fulfilment of the degree of Master of Science.
School of Computational and Applied Mathematics, University of the Witwatersrand, Johannesburg, South Africa, November 2017

The increased capabilities and lower cost of Micro Aerial Vehicles (MAVs) open up significant opportunities for a rapidly growing number of civilian and commercial applications. Some missions require direct control using a receiver in a point-to-point connection, involving one or very few MAVs. An alternative class of mission is remotely controlled, with control of the drone automated to a certain extent using mission-planning software and autopilot systems.
For most emerging missions, there is a need for more autonomous, cooperative control of MAVs, as well as more complex processing of data from sensors such as cameras and laser scanners. In the last decade, this has given rise to extensive research from both academia and industry, applying robotics and computer vision concepts to Unmanned Aerial Systems (UASs). However, UASs are often designed for specific hardware and software, which limits integration, interoperability and re-usability across different missions. In addition, there are numerous open issues related to UAS command, control and communication (C3), and to multi-MAV operation.
We argue and elaborate throughout this dissertation that some of the recent standards-based publish/subscribe communication protocols can solve many of these challenges and meet the non-functional requirements of MAV robotics applications. This dissertation assesses the MQTT, DDS and TCPROS protocols in a distributed architecture of a UAS control system and Ground Control Station software. While TCPROS has been the leading robotics communication transport for ROS applications, MQTT and DDS are lightweight enough to be used for data exchange between distributed systems of aerial robots. Furthermore, MQTT and DDS are based on industry standards that foster communication interoperability of “things”, and both have been widely promoted as addressing many of today’s needs in networks based on the Internet of Things (IoT). For example, MQTT has been used to exchange data with space probes, whereas DDS has been employed in aerospace defence and smart-city applications.
We designed and implemented a distributed UAS architecture based on each of the publish/subscribe protocols TCPROS, MQTT and DDS. The proposed communication systems were tested with a vision-based Simultaneous Localisation and Mapping (SLAM) system involving three Parrot AR.Drone 2.0 MAVs. Within the context of this study, the MQTT and DDS messaging frameworks serve to abstract UAS complexity and heterogeneity. Additionally, these protocols are expected to provide low-latency communication and to scale up to the requirements of real-time remote sensing applications. The most important contribution of this work is the implementation of a complete distributed communication architecture for multiple MAVs. Furthermore, we assess the viability of this architecture and benchmark the performance of the protocols on an autonomous quadcopter navigation testbed composed of a SLAM algorithm, an extended Kalman filter and a PID controller.
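The publish/subscribe pattern underlying all three protocols can be sketched with a minimal in-process broker. This is purely illustrative of the pattern, not any real MQTT, DDS or TCPROS implementation, and the topic names are hypothetical:

```python
from collections import defaultdict

class Broker:
    """Minimal in-process publish/subscribe broker (illustrative only;
    a real deployment would use an actual MQTT or DDS implementation)."""
    def __init__(self):
        self._subs = defaultdict(list)  # topic -> list of subscriber callbacks

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, message):
        # Deliver the message to every subscriber of this topic
        for cb in self._subs[topic]:
            cb(message)

# Hypothetical topics for a multi-MAV SLAM testbed
broker = Broker()
poses = []
broker.subscribe("mav/1/pose", poses.append)
broker.publish("mav/1/pose", {"x": 1.0, "y": 2.0, "yaw": 0.1})
# poses now holds the received pose message
```

The decoupling shown here, where publisher and subscriber know only a topic name rather than each other, is what lets such an architecture abstract over heterogeneous MAV hardware.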
Integrated Sensing and Communication for 6G: Ten Key Machine Learning Roles
Integrating sensing and communication is a defining theme for future wireless
systems. This is motivated by the promising performance gains, especially as
they assist each other, and by the better utilization of the wireless and
hardware resources. Realizing these gains in practice, however, is subject to
several challenges where leveraging machine learning can provide a potential
solution. This article focuses on ten key machine learning roles for joint
sensing and communication, sensing-aided communication, and communication-aided
sensing systems, explains why and how machine learning can be utilized, and
highlights important directions for future research. The article also presents
real-world results for some of these machine learning roles based on the
large-scale real-world dataset DeepSense 6G, which can be adopted to investigate a wide range of integrated sensing and communication problems.
Comment: Submitted to IEE
Towards High-Frequency Tracking and Fast Edge-Aware Optimization
This dissertation advances the state of the art for AR/VR tracking systems by
increasing the tracking frequency by orders of magnitude and proposes an
efficient algorithm for the problem of edge-aware optimization.
AR/VR is a natural way of interacting with computers, where the physical and
digital worlds coexist. We are on the cusp of a radical change in how humans
perform and interact with computing. Humans are sensitive to small
misalignments between the real and the virtual world, and tracking at
kilo-Hertz frequencies becomes essential. Current vision-based systems fall
short, as their tracking frequency is implicitly limited by the frame-rate of
the camera. This thesis presents a prototype system which tracks at rates orders
of magnitude higher than state-of-the-art methods, using multiple commodity
cameras. The proposed system exploits characteristics of the camera
traditionally considered as flaws, namely rolling shutter and radial
distortion. The experimental evaluation shows the effectiveness of the method
for various degrees of motion.
Furthermore, edge-aware optimization is an indispensable tool in the computer
vision arsenal for accurate filtering of depth-data and image-based rendering,
which is increasingly being used for content creation and geometry processing
for AR/VR. As applications increasingly demand higher resolution and speed,
there exists a need to develop methods that scale accordingly. This
dissertation proposes such an edge-aware optimization framework that is
efficient, accurate, and scales well algorithmically: highly desirable traits
not found jointly in the state of the art. The experiments
show the effectiveness of the framework in a multitude of computer vision tasks
such as computational photography and stereo.
Comment: PhD thesi
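As a toy illustration of what "edge-aware" means in such an optimization, consider a 1-D weighted least-squares smoother whose smoothness weights collapse across strong edges of a guide signal. This is a generic sketch under assumed parameters, not the dissertation's framework:

```python
import numpy as np

def edge_aware_smooth(f, guide, lam=5.0, sigma=0.1):
    """1-D edge-aware smoothing as weighted least squares (illustrative).
    Smoothness weights shrink wherever the guide signal has a strong edge,
    so noise is averaged out within regions while edges are preserved."""
    n = len(f)
    g = np.exp(-(np.diff(guide) / sigma) ** 2)  # near-zero weight across edges
    A = np.eye(n)
    for i in range(n - 1):
        w = lam * g[i]
        A[i, i] += w; A[i + 1, i + 1] += w      # data term + smoothness coupling
        A[i, i + 1] -= w; A[i + 1, i] -= w
    return np.linalg.solve(A, f)

noisy = np.array([0., 0.1, -0.1, 0., 1., 0.9, 1.1, 1.])
u = edge_aware_smooth(noisy, noisy)
# Noise within each flat region shrinks while the step at index 4 survives
```

Edge-aware frameworks of the kind the dissertation describes solve much larger 2-D versions of this same linear system, which is why algorithmic scaling matters.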
Online Graph-Based Change Point Detection in Multiband Image Sequences
The automatic detection of changes or anomalies between multispectral and
hyperspectral images collected at different time instants is an active and
challenging research topic. To effectively perform change-point detection in
multitemporal images, it is important to devise techniques that are
computationally efficient for processing large datasets, and that do not
require knowledge about the nature of the changes. In this paper, we introduce
a novel online framework for detecting changes in multitemporal remote sensing
images. Treating neighboring spectra as adjacent vertices in a graph, this
algorithm focuses on anomalies concurrently activating groups of vertices
corresponding to compact, well-connected and spectrally homogeneous image
regions. It fully benefits from recent advances in graph signal processing to
exploit the characteristics of the data that lie on irregular supports.
Moreover, the graph is estimated directly from the images using superpixel
decomposition algorithms. The learning algorithm is scalable in the sense that
it is efficient and spatially distributed. Experiments illustrate the detection
and localization performance of the method.
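As a loose illustration of the idea (not the paper's actual algorithm), one can score per-vertex change magnitudes on a superpixel adjacency graph and average them over graph neighborhoods, so that changes activating connected groups of vertices score higher than isolated spikes. The graph, sizes and weighting below are assumptions for the sketch:

```python
import numpy as np

def graph_change_scores(x_t0, x_t1, adjacency):
    """Illustrative sketch: score changes between two acquisitions on a
    graph whose vertices are superpixels. Neighborhood averaging rewards
    anomalies that activate connected, spectrally coherent vertex groups."""
    diff = np.linalg.norm(x_t1 - x_t0, axis=1)          # per-vertex change magnitude
    deg = adjacency.sum(axis=1)
    smoothed = (adjacency @ diff) / np.maximum(deg, 1)  # average over neighbors
    return 0.5 * (diff + smoothed)                      # isolated spikes are damped

# Toy example: 4 superpixels on a path graph; the change hits vertices 1 and 2
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
t0 = np.zeros((4, 3))                                   # 3 spectral bands
t1 = np.array([[0, 0, 0], [1, 1, 1], [1, 1, 1], [0, 0, 0]], dtype=float)
scores = graph_change_scores(t0, t1, A)
```

The two changed, mutually adjacent vertices receive the highest scores, mimicking the paper's preference for compact, well-connected changed regions.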
Modeling and Control for Vision Based Rear Wheel Drive Robot and Solving Indoor SLAM Problem Using LIDAR
To achieve the ambitious long-term goal of a fleet of cooperating Flexible Autonomous
Machines operating in an uncertain Environment (FAME), this thesis addresses several
critical modeling, design and control objectives for rear-wheel-drive ground vehicles.
One central objective was to show how to build a low-cost, multi-capability robot
platform that can be used for conducting FAME research.
A TFC-KIT car chassis was augmented to provide a suite of substantive capabilities.
The augmented vehicle (the FreeSLAM Robot) costs less than $2000.
All demonstrations presented involve the rear-wheel-drive FreeSLAM robot. The
following summarizes the key hardware demonstrations presented and analyzed:
(1) Cruise (v, ) control along a line,
(2) Cruise (v, ) control along a curve,
(3) Planar (x, y) Cartesian stabilization for a rear-wheel-drive vehicle,
(4) Completing the track in minimum time with the camera pan-tilt structure,
(5) Completing the track in minimum time without the camera pan-tilt structure,
(6) Vision-based tracking performance at different cruise speeds vx,
(7) Vision-based tracking performance with different fixed camera look-ahead distances L,
(8) Vision-based tracking performance with different vision-subsystem delays Td,
(9) Manually remote-controlled robot performing indoor SLAM,
(10) Autonomously line-guided robot performing indoor SLAM.
For most cases, hardware data is compared with, and corroborated by, model-based
simulation data. In short, the thesis uses a low-cost, self-designed rear-wheel-drive
robot to demonstrate many capabilities that are critical to reaching the longer-term
FAME goal.
Masters Thesis, Electrical Engineering, 201
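For a cruise-control demonstration like (1), a discrete PID speed loop can be sketched as follows. The first-order plant model, time constant and gains here are illustrative assumptions, not the thesis's identified vehicle model or tuned values:

```python
def simulate_cruise(v_ref=1.0, kp=2.0, ki=1.0, kd=0.05, dt=0.01, steps=2000):
    """Discrete PID loop regulating cruise speed v toward the setpoint v_ref,
    driving a hypothetical first-order motor/vehicle model (illustrative)."""
    v, integral, prev_err = 0.0, 0.0, v_ref
    tau = 0.5  # hypothetical plant time constant [s]
    for _ in range(steps):
        err = v_ref - v
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv  # throttle command
        v += dt * (u - v) / tau                    # first-order plant response
        prev_err = err
    return v

final_v = simulate_cruise()
# after 20 simulated seconds the speed has settled near the setpoint
```

The integral term removes the steady-state speed error that a proportional-only controller would leave on a plant like this, which is the usual motivation for PI/PID cruise control.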
Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame
cameras: Instead of capturing images at a fixed rate, they asynchronously
measure per-pixel brightness changes, and output a stream of events that encode
the time, location and sign of the brightness changes. Event cameras offer
attractive properties compared to traditional cameras: high temporal resolution
(in the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low
power consumption, and high pixel bandwidth (on the order of kHz) resulting in
reduced motion blur. Hence, event cameras have a large potential for robotics
and computer vision in challenging scenarios for traditional cameras, such as
low-latency, high speed, and high dynamic range. However, novel methods are
required to process the unconventional output of these sensors in order to
unlock their potential. This paper provides a comprehensive overview of the
emerging field of event-based vision, with a focus on the applications and the
algorithms developed to unlock the outstanding properties of event cameras. We
present event cameras from their working principle, the actual sensors that are
available and the tasks that they have been used for, from low-level vision
(feature detection and tracking, optic flow, etc.) to high-level vision
(reconstruction, segmentation, recognition). We also discuss the techniques
developed to process events, including learning-based techniques, as well as
specialized processors for these novel sensors, such as spiking neural
networks. Additionally, we highlight the challenges that remain to be tackled
and the opportunities that lie ahead in the search for a more efficient,
bio-inspired way for machines to perceive and interact with the world.
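The event representation described above can be illustrated with a short sketch that accumulates a handful of events into a signed 2-D histogram (an "event frame"), a common first processing step for such sensors. The sensor dimensions and event values below are made up for illustration:

```python
import numpy as np

# Each event carries a timestamp, a pixel location, and a polarity
# (the sign of the per-pixel brightness change), as described above.
H, W = 4, 6  # hypothetical sensor resolution
events = [
    # (t [s],  x, y, polarity)
    (0.0001, 2, 1, +1),
    (0.0002, 2, 1, +1),
    (0.0003, 5, 3, -1),
]

# Accumulate events over a time window into a signed event frame;
# polarity is preserved so brightening and darkening edges stay distinct.
frame = np.zeros((H, W), dtype=np.int32)
for t, x, y, p in events:
    frame[y, x] += p
```

Microsecond timestamps are what enable the low-latency processing the survey highlights; frame-like accumulations such as this one trade some of that temporal resolution for compatibility with conventional image-based algorithms.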