Probabilistic snapshot GNSS for low-cost wildlife tracking
Snapshot GNSS is more energy-efficient than conventional localisation based on global navigation satellite systems (GNSS) such as GPS. This is beneficial for long battery-powered deployments, such as in wildlife tracking. However, only a few snapshot GNSS systems suitable for wildlife tracking have been presented, and all have disadvantages. Most significantly, they are closed-source and either unavailable or expensive. One reason is that they typically require GNSS signals to be captured at good resolution, which demands complex receiver hardware capable of capturing multi-bit data at sampling rates of 16 MHz or more. By contrast, this thesis presents fast algorithms that reliably estimate locations from twelve-millisecond signals sampled at just 4 MHz and quantised with only a single bit. This makes it possible to build a snapshot receiver at an unmatched cost of less than $30 and with particularly low power consumption, outperforming existing systems and enabling low-budget, long-term field work. The system can acquire two positions per hour for a year on a tiny 40 mAh battery. On a challenging public dataset with thousands of snapshots from real-world scenarios, the median accuracy is 11 m, comparable to more complex and expensive solutions with higher energy consumption. Additionally, the system has been deployed in several wildlife tracking studies, including on sea turtles, where brief signal acquisition times are crucial to obtaining location fixes during surfacing events lasting only 1–2 s. For the first time, (i) snapshot GNSS receiver hardware and (ii) an accompanying cloud-based processing platform are open-source. This has allowed several third parties to independently replicate the system. In total, several hundred receivers have been built and millions of locations estimated with them.
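The key idea of estimating a code phase from a short, 1-bit-quantised snapshot can be illustrated with a toy correlation experiment. This is a simplified sketch, not the thesis's algorithm: it uses a hypothetical random spreading code and omits the Doppler frequency search and navigation-data handling that real snapshot GNSS acquisition requires.

```python
import numpy as np

# Illustrative sketch (not the thesis's method): recover the code phase of a
# PRN-like spreading code from a heavily quantised snapshot via circular
# correlation. Real acquisition must also search over Doppler frequency bins.

rng = np.random.default_rng(0)

code = rng.choice([-1, 1], size=4092)              # hypothetical spreading code
true_shift = 1234                                  # unknown code phase to recover
signal = np.roll(code, true_shift) + rng.normal(0, 2.0, size=code.size)

# One-bit quantisation: keep only the sign of each sample.
snapshot = np.sign(signal)

# Circular cross-correlation, computed efficiently in the frequency domain.
corr = np.fft.ifft(np.fft.fft(snapshot) * np.conj(np.fft.fft(code))).real
estimated_shift = int(np.argmax(corr))

print(estimated_shift)  # correlation gain recovers the phase despite 1-bit data
```

Despite the noise and the single-bit quantisation, the correlation peak at the true code phase stands far above the noise floor, which is why such coarse sampling can still yield usable location fixes.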
As three additional contributions, this thesis presents (i) the first evaluation of snapshot GNSS for wildlife tracking across a variety of species and habitats, (ii) the first snapshot GNSS system with cloud-offloading via a low-power narrow-band cellular connection, and (iii) a demonstration of the potential of smoothing for snapshot GNSS. A final contribution is a set of factor graph optimisation algorithms to (i) smooth snapshot GNSS data and (ii) tightly fuse raw GNSS data with inertial measurements and, optionally, lidar observations for precise and smooth localisation. In several environments with little sky visibility, such as forests, the accuracy of the fused location estimates in the global Earth frame is still 1–2 m, while the estimated trajectories are smooth and free of discontinuities. This requires a professional-grade (non-snapshot) GNSS receiver but, unlike traditional differential GNSS, no connection to a base station.
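The smoothing idea behind factor graph optimisation can be sketched on a 1-D toy problem. This is illustrative only, not the thesis's algorithms: unary "GNSS" factors pull each state towards a noisy fix, binary motion factors couple consecutive states, and for this linear-Gaussian graph the maximum-a-posteriori trajectory is the solution of a single least-squares problem. All noise levels and the constant-velocity motion model are assumptions for the example.

```python
import numpy as np

# Toy 1-D factor-graph smoother: unary position factors + binary motion
# factors, solved jointly as one whitened linear least-squares problem.

rng = np.random.default_rng(1)
T = 50
truth = np.cumsum(np.full(T, 0.5))             # constant-velocity ground truth
fixes = truth + rng.normal(0, 3.0, T)          # noisy GNSS-like position fixes

sigma_gnss, sigma_motion = 3.0, 0.1
rows, rhs = [], []
for t in range(T):                             # unary position factors
    row = np.zeros(T); row[t] = 1 / sigma_gnss
    rows.append(row); rhs.append(fixes[t] / sigma_gnss)
for t in range(T - 1):                         # binary motion factors
    row = np.zeros(T)
    row[t + 1], row[t] = 1 / sigma_motion, -1 / sigma_motion
    rows.append(row); rhs.append(0.5 / sigma_motion)  # expected step of 0.5

A, b = np.array(rows), np.array(rhs)
smoothed, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.abs(fixes - truth).mean(), np.abs(smoothed - truth).mean())
```

Because the motion factors share information across the whole trajectory, the smoothed estimate has a much lower mean error than the raw fixes, and the result is free of the jumps present in the per-epoch solutions.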
System Optimisation for Multi-access Edge Computing Based on Deep Reinforcement Learning
Multi-access edge computing (MEC) is an emerging and important distributed computing paradigm that extends cloud services to the network edge to reduce network traffic and service latency. Proper system optimisation and maintenance are crucial to providing high quality of service (QoS) to end-users. However, with the increasing complexity of MEC architectures and mobile applications, effectively optimising MEC systems is non-trivial. Traditional optimisation methods are generally based on simplified mathematical models and fixed heuristics, which rely heavily on expert knowledge. As a consequence, when facing dynamic MEC scenarios, considerable human effort and expertise are required to redesign the models and tune the heuristics, which is time-consuming.
This thesis aims to develop deep reinforcement learning (DRL) methods to handle system optimisation problems in MEC. Instead of developing fixed heuristic algorithms for these problems, this thesis designs DRL-based methods that enable systems to learn optimal solutions on their own. This research demonstrates the effectiveness of DRL-based methods on two crucial system optimisation problems: task offloading and service migration. Specifically, this thesis first investigates the dependent task offloading problem, which considers the inner dependencies of tasks. This research builds a DRL-based method that combines a sequence-to-sequence (seq2seq) neural network to address the problem. Experimental results demonstrate that our method outperforms existing heuristic algorithms and achieves near-optimal performance. To further enhance the learning efficiency of the DRL-based task offloading method on unseen learning tasks, this thesis then integrates meta reinforcement learning into the task offloading problem. Our method can adapt quickly to new environments with a small number of gradient updates and samples. Finally, this thesis develops a DRL-based solution for the service migration problem in MEC that considers user mobility. This research models service migration as a Partially Observable Markov Decision Process (POMDP) and proposes a tailored actor-critic algorithm combining Long Short-Term Memory (LSTM) to solve the POMDP. Results from extensive experiments based on real-world mobility traces demonstrate that our method consistently outperforms both heuristic and state-of-the-art learning-driven algorithms in various MEC scenarios.
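The reinforcement-learning loop underlying such offloading policies can be shown on a deliberately tiny problem. This sketch uses tabular Q-learning on a two-state bandit, not the thesis's seq2seq deep RL method; the task sizes, latency figures, and cost model are all hypothetical.

```python
import numpy as np

# Toy offloading policy learned by tabular Q-learning (illustration of the RL
# loop only; the thesis uses deep RL with a seq2seq network).
# States: task is small (0) or large (1). Actions: run locally (0) or offload (1).

rng = np.random.default_rng(2)
local_cost = [1.0, 8.0]       # hypothetical latency of local execution
offload_cost = [3.0, 3.5]     # hypothetical transfer + edge execution latency

Q = np.zeros((2, 2))
alpha, epsilon = 0.1, 0.2
for _ in range(5000):
    s = rng.integers(2)                       # a random task arrives
    a = rng.integers(2) if rng.random() < epsilon else int(np.argmax(Q[s]))
    reward = -(local_cost[s] if a == 0 else offload_cost[s])  # minimise latency
    Q[s, a] += alpha * (reward - Q[s, a])     # one-step value update

policy = Q.argmax(axis=1)
print(policy)  # learned policy: keep small tasks local, offload large ones
```

The learned policy keeps small tasks on the device and offloads large ones, which is the qualitative behaviour a latency-minimising offloading agent should converge to; deep RL replaces the table with a neural network so the same loop scales to dependent task graphs.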
Design and Optimization of Mobile Cloud Computing Systems with Networked Virtual Platforms
A Mobile Cloud Computing (MCC) system is a cloud-based system that is accessed by the users through their own mobile devices. MCC systems are emerging as the product of two technology trends: 1) the migration of personal computing from desktop to mobile devices and 2) the growing integration of large-scale computing environments into cloud systems. Designers are developing a variety of new mobile cloud computing systems. Each of these systems is developed with different goals and under the influence of different design constraints, such as high network latency or limited energy supply.
Current MCC systems rely heavily on Computation Offloading, which, however, introduces new problems such as limited scalability of the cloud, privacy concerns due to storing personal information in the cloud, and high energy consumption in cloud data centers. In this dissertation, I address these problems by exploring different options in the distribution of computation across different computing nodes in MCC systems. My thesis is that "the use of design and simulation tools optimized for design space exploration of MCC systems is the key to optimizing the distribution of computation in MCC."
For a quantitative analysis of mobile cloud computing systems through design space exploration, I have developed netShip, the first generation of an innovative design and simulation tool that offers large scalability and heterogeneity support. With this tool, system designers and software programmers can efficiently develop, optimize, and validate large-scale, heterogeneous MCC systems. I have enhanced netShip to support the development of ever-evolving MCC applications with a variety of emerging needs, including the fast simulation of new devices, e.g., Internet-of-Things devices, and accelerators, e.g., mobile GPUs. Leveraging netShip, I developed three new MCC systems in which I applied three variations of a new computation distribution technique called Reverse Offloading. By more actively leveraging the computational power of mobile devices, these MCC systems reduce total execution times, the burden of concentrated computation on the cloud, and the privacy concerns of storing personal information in the cloud. This approach also creates opportunities for new services by utilizing the information available on the mobile device instead of accessing the cloud.
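The placement decision at the heart of offloading (and of reversing it back to the device) can be sketched as a simple latency comparison. This is an illustrative cost model with hypothetical numbers, not netShip's simulation logic: a task stays on the mobile device whenever local execution beats cloud execution once network transfer time is accounted for.

```python
# Illustrative sketch of a reverse-offloading placement decision
# (hypothetical cost model; not netShip itself).

def place_task(cycles, data_bytes, mobile_hz, cloud_hz, uplink_bps, rtt_s):
    """Return 'mobile' or 'cloud' by comparing end-to-end latency estimates."""
    t_mobile = cycles / mobile_hz
    t_cloud = rtt_s + data_bytes * 8 / uplink_bps + cycles / cloud_hz
    return "mobile" if t_mobile <= t_cloud else "cloud"

# Small computation on large input data: the transfer dominates, keep it local.
print(place_task(2e8, 5e6, 1e9, 2e10, 1e6, 0.05))   # → mobile
# Heavy computation on little data: the cloud's faster CPU wins.
print(place_task(5e10, 1e4, 1e9, 2e10, 1e6, 0.05))  # → cloud
```

A design-space-exploration tool evaluates many such trade-offs at once, across heterogeneous device and network parameters, rather than hard-coding a single threshold.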
Throughout my research, I have enabled the design optimization of mobile applications and cloud-computing platforms. In particular, my design tool for MCC systems becomes a vehicle for optimizing not only performance but also energy dissipation, an aspect of critical importance for any computing system.
A distributed framework for the control and cooperation of heterogeneous mobile robots in smart factories.
Doctoral Degree. University of KwaZulu-Natal, Durban. The present consumer market is driven by the mass customisation of products. Manufacturers are now challenged by the problem of no longer being able to capture market share and gain higher profits by producing large volumes of the same product for a mass market. Some businesses have implemented mass customisation manufacturing (MCM) techniques as a solution to this problem, producing customised products rapidly while keeping costs at mass-production levels. In addition, the arrival of the fourth industrial revolution (Industry 4.0) makes it possible to establish decentralised intelligence in embedded devices that detect and respond to real-time variations in the MCM factory.
One of the key pillars of the Industry 4.0 smart factory concept is advanced robotics. This includes cooperation and control within networks of multiple heterogeneous robots, which increases flexibility in the smart factory and enables systems to be rapidly reconfigured to adapt to variations in consumer product demand. Another benefit of these systems is the reduction of production bottlenecks, as robot services can be coordinated efficiently so that high levels of productivity are maintained.
This study focuses on the research, design and development of a distributed framework to aid researchers in implementing algorithms for controlling the task goals of heterogeneous mobile robots, achieving robot cooperation and reducing bottlenecks in a production environment. The framework can be used as a toolkit by the end-user to develop advanced algorithms that can be simulated before being deployed in an actual system, thereby enabling fast prototyping of the system integration process.
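The kind of cooperative allocation such a framework could host can be sketched as follows. The robot names, capability sets, and greedy nearest-idle-robot policy are all hypothetical, chosen only to illustrate assigning heterogeneous robots to tasks so that no single robot becomes a bottleneck; they are not the framework's own logic.

```python
# Hypothetical greedy task allocation for heterogeneous mobile robots:
# each task goes to the nearest idle robot advertising the required capability.

def allocate(tasks, robots):
    """Greedy capability-and-distance allocation; marks chosen robots busy."""
    assignment = {}
    for task in tasks:
        candidates = [r for r in robots
                      if task["needs"] in r["capabilities"] and r["idle"]]
        if not candidates:
            continue  # task waits for the next allocation round
        best = min(candidates, key=lambda r: abs(r["pos"] - task["pos"]))
        best["idle"] = False
        assignment[task["id"]] = best["name"]
    return assignment

robots = [
    {"name": "agv1", "capabilities": {"transport"}, "pos": 0, "idle": True},
    {"name": "agv2", "capabilities": {"transport"}, "pos": 10, "idle": True},
    {"name": "arm1", "capabilities": {"assembly"}, "pos": 5, "idle": True},
]
tasks = [
    {"id": "t1", "needs": "transport", "pos": 8},
    {"id": "t2", "needs": "assembly", "pos": 4},
    {"id": "t3", "needs": "transport", "pos": 1},
]
result = allocate(tasks, robots)
print(result)  # {'t1': 'agv2', 't2': 'arm1', 't3': 'agv1'}
```

In a middleware setting (e.g., ROS-based), the same decision logic would run over published robot states and task announcements rather than in-memory dictionaries, and could be swapped out for market-based or optimisation-based allocators developed with the framework.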
Keywords: Cooperation, heterogeneity, multiple mobile robots, Industry 4.0, smart factory, manufacturing, middleware, ROS, OPC, framework
Systems Engineering: Availability and Reliability
Current trends in Industry 4.0 are largely related to issues of reliability and availability. As a result of these trends and the complexity of engineering systems, research and development in this area needs to focus on new solutions for integrating intelligent machines and systems, with an emphasis on changes in production processes aimed at increasing production efficiency and equipment reliability. The emergence of innovative technologies and of new business models based on innovation, cooperation networks, and the enhancement of endogenous resources is expected to contribute strongly to the development of competitive economies around the world. Innovation and engineering focused on sustainability, reliability, and availability of resources play a key role in this context. The scope of this Special Issue is closely associated with that of the ICIE'2020 conference. The conference and this Special Issue present current innovations and engineering achievements of leading scientists and industrial practitioners in thematic areas related to reliability and risk assessment, innovations in maintenance strategies, production process scheduling, management and maintenance, and systems analysis, simulation, design and modelling.