60 research outputs found

    Distributed Optimal State Consensus for Multiple Circuit Systems with Disturbance Rejection

    This paper investigates the distributed optimal state consensus problem for an electronic system composed of a group of circuit units, where the dynamics of each unit are modeled by a Chua's circuit subject to a disturbance generated by an external system. By means of the internal model approach and feedback control, a compensator-based continuous-time algorithm is proposed to cooperatively minimize the sum of the cost functions associated with the individual units. Using convex analysis, graph theory, and Lyapunov theory, the proposed algorithm is proved to be exponentially convergent. Compared with centralized algorithms, the proposed protocol offers markedly better scalability and reliability for multiple circuit systems. Moreover, the distributed uncertain optimal state consensus problem is also studied, and a linear regret bound is obtained for this case. Finally, a state synchronization example is provided to validate the effectiveness of the proposed algorithms.
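The paper's protocol itself is not reproduced here, but the general compensator-based consensus-optimization idea can be sketched with scalar quadratic costs standing in for Chua's circuit dynamics: each agent combines a consensus term, its local gradient, and an integral (compensator) state that reconciles the two at equilibrium. The dynamics, graph, and parameters below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def distributed_consensus_opt(a, b, L, alpha=0.05, steps=8000):
    """Euler discretization of the continuous-time protocol
       x_dot = -L x - v - grad f(x),   v_dot = L x,
    which drives all agents to the minimizer of sum_i 0.5*a_i*(x - b_i)^2."""
    x = np.zeros_like(b, dtype=float)
    v = np.zeros_like(b, dtype=float)   # compensator (integral) states
    for _ in range(steps):
        grad = a * (x - b)              # local gradient of each agent's cost
        x, v = x + alpha * (-L @ x - v - grad), v + alpha * (L @ x)
    return x

# complete graph on 3 agents (Laplacian)
L = np.array([[ 2., -1., -1.],
              [-1.,  2., -1.],
              [-1., -1.,  2.]])
a = np.array([1.0, 2.0, 1.0])   # local cost curvatures
b = np.array([0.0, 3.0, 6.0])   # local cost minimizers
x = distributed_consensus_opt(a, b, L)
# global optimum of the summed cost: x* = sum(a*b)/sum(a) = 12/4 = 3.0
```

Because the Laplacian has zero column sums, the compensator states keep a zero sum, which forces the local gradients to sum to zero at equilibrium; this is what lets every agent reach the global minimizer rather than its own.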

    High latency unmanned ground vehicle teleoperation enhancement by presentation of estimated future through video transformation

    Long-distance, high-latency teleoperation tasks are difficult, highly stressful for teleoperators, and prone to over-corrections, which can lead to loss of control. At higher latencies, or when teleoperating at higher vehicle speeds, the situation becomes progressively worse. To explore potential solutions, this work investigates two 2D visual-feedback-based assistive interfaces (sliding-only and sliding-and-zooming windows) that apply simple but effective video transformations to enhance teleoperation. A teleoperation simulator that can replicate scenarios affected by high and adjustable latency has been developed to explore the effectiveness of the proposed assistive interfaces. Three image comparison metrics have been used to fine-tune and optimise the proposed interfaces. An operator survey was conducted to evaluate and compare performance with and without the assistance. The survey showed that a 900 ms latency increases task completion time by up to 205% for an on-road and 147% for an off-road driving track, and that overcorrection-induced oscillations increase by up to 718% at this level of latency. The sliding-only video transformation reduces task completion time by up to 25.53%, and the sliding-and-zooming transformation by up to 21.82%. The sliding-only interface reduces the oscillation count by up to 66.28%, and the sliding-and-zooming interface by up to 75.58%. Qualitative feedback from the participants also shows that both types of assistive interfaces offer better visual situational awareness, comfort, and controllability, and significantly reduce the impact of latency and intermittency on the teleoperation task.
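The abstract does not give the transformation details, but a sliding-only interface of this kind can be sketched as a latency-compensating horizontal shift of the delayed frame toward the estimated current viewpoint. The parameter names (`latency_s`, `pixels_per_rad`) and the choice to blank the wrapped-in region are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def sliding_window(frame, yaw_rate, latency_s=0.9, pixels_per_rad=400):
    """Shift the delayed frame opposite to the commanded yaw so the visible
    window approximates where the vehicle is estimated to be looking now."""
    shift_px = int(round(yaw_rate * latency_s * pixels_per_rad))
    shifted = np.roll(frame, -shift_px, axis=1)
    if shift_px > 0:                 # blank the wrapped-in region instead of
        shifted[:, -shift_px:] = 0   # showing stale pixels
    elif shift_px < 0:
        shifted[:, :-shift_px] = 0
    return shifted

frame = np.arange(12, dtype=np.uint8).reshape(3, 4)   # toy 3x4 "image"
out = sliding_window(frame, yaw_rate=0.005)           # shift of 2 pixels
```

A sliding-and-zooming variant would additionally scale the window to compensate for forward motion accumulated during the latency window.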

    Long future frame prediction using optical flow informed deep neural networks for enhancement of robotic teleoperation in high latency environments

    High latency in teleoperation has a significant negative impact on operator performance. While deep learning has revolutionized many domains recently, it has not previously been applied to teleoperation enhancement. We propose a novel approach to predict video frames deep into the future using neural networks informed by synthetically generated optical flow information. This can be employed in teleoperated robotic systems that rely on video feeds for operator situational awareness. We have used the image-to-image translation technique as a basis for the prediction of future frames. The Pix2Pix conditional generative adversarial network (cGAN) has been selected as a base network. Optical flow components reflecting real-time control inputs are added to the standard RGB channels of the input image. We have experimented with three data sets of 20,000 input images each that were generated using our custom-designed teleoperation simulator, with a 500 ms delay added between the input and target frames. Structural Similarity Index Measures (SSIMs) of 0.60 and Multi-SSIMs of 0.68 were achieved when training the cGAN with three-channel RGB image data. With the five-channel input data (incorporating optical flow) these values improved to 0.67 and 0.74, respectively. Applying Fleiss' κ gave a score of 0.40 for three-channel RGB data, and 0.55 for five-channel optical-flow-added data. We are confident the predicted synthetic frames are of sufficient quality and reliability to be presented to teleoperators as a video feed that will enhance teleoperation. To the best of our knowledge, we are the first to attempt to reduce the impacts of latency through future frame prediction using deep neural networks.
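The five-channel input described above amounts to concatenating the two optical-flow components onto the RGB channels before the frame is fed to the generator. A minimal sketch of that assembly step, with shapes and the normalisation choice assumed for illustration:

```python
import numpy as np

def make_five_channel(rgb, flow_u, flow_v):
    """Concatenate two optical-flow components onto an RGB frame.

    rgb: (H, W, 3) array in [0, 1]; flow_u, flow_v: (H, W) pixel displacements.
    """
    assert rgb.shape[:2] == flow_u.shape == flow_v.shape
    scale = float(max(rgb.shape[:2]))      # assumed normalisation choice
    stacked = np.dstack([rgb, flow_u / scale, flow_v / scale])
    return stacked.astype(np.float32)

rgb = np.random.rand(8, 8, 3)              # toy frame
u = np.ones((8, 8))                        # uniform rightward flow
v = np.zeros((8, 8))                       # no vertical motion
x = make_five_channel(rgb, u, v)           # (8, 8, 5) network input
```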

    Event-triggered Consensus Frameworks for Multi-agent Systems

    Recently, distributed multi-agent systems (MAS) have been widely studied for a variety of engineering applications, including cooperative vehicular systems, sensor networks, and electrical power grids. To solve the allocated tasks in a MAS, each agent autonomously determines appropriate actions using information available locally and received from its neighbours. Many cooperative behaviours in MAS are based on a consensus algorithm. Consensus, by definition, is to agree distributively on a parameter of interest between the agents. Depending on the application, consensus takes different configurations, such as leader-following, formation, synchronization in robotic arms, and state estimation in sensor networks. Consensus in MASs requires local measurements and information exchanges between neighbouring agents. Due to energy restrictions, hardware limitations, and bandwidth constraints, strategies that reduce the number of measurements and information exchanges between the agents are of paramount interest. Event-triggered transmission schemes are among the most recent strategies that efficiently reduce the number of transmissions. This dissertation proposes a number of event-triggered consensus (ETC) implementations applicable to MASs. Different performance objectives and physical constraints, such as a desired convergence rate, robustness to uncertainty in control realization, information quantization, sampled-data processing, and resilience to denial-of-service (DoS) attacks, are included in the realization of the proposed algorithms. A novel convex optimization is proposed which simultaneously designs the control and event-triggering parameters in a unified framework. The optimization governs the trade-off between the consensus convergence rate and the intensity of transmissions. This co-design optimization is extended to an advanced class of event-triggered schemes, known as dynamic event-triggering (DET), which is able to substantially reduce the number of transmissions. In the presence of DoS attacks, the co-design optimization simultaneously computes the control and DET parameters so that the number of transmissions is reduced and a desired level of resilience to DoS is guaranteed. In addition to consensus, a formation-containment implementation is proposed, where the number of transmissions is reduced using the DET schemes. The performance of the proposed implementations is evaluated through simulation over several MASs. The experimental results demonstrate the effectiveness of the proposed implementations and verify their design flexibility.
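The dissertation's co-designed parameters are not reproduced here, but the basic event-triggered consensus mechanism it builds on can be sketched with a static, exponentially decaying threshold: each agent rebroadcasts its state only when it has drifted sufficiently from its last broadcast, and neighbours run the consensus update on broadcast values. The graph, gains, and threshold schedule below are illustrative assumptions.

```python
import numpy as np

def event_triggered_consensus(x0, L, alpha=0.2, c0=0.5, lam=0.02, steps=300):
    """Consensus on last-broadcast values; an agent rebroadcasts only when its
    state drifts from its last broadcast by more than a decaying threshold."""
    x = x0.astype(float)
    xb = x.copy()                       # last-broadcast states
    broadcasts = 0
    for k in range(steps):
        thresh = c0 * np.exp(-lam * k)  # decaying trigger threshold
        trigger = np.abs(x - xb) > thresh
        xb[trigger] = x[trigger]        # triggered agents broadcast
        broadcasts += int(trigger.sum())
        x = x - alpha * (L @ xb)        # consensus step on broadcast values
    return x, broadcasts

# path graph on 3 agents
L = np.array([[ 1., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])
x, n = event_triggered_consensus(np.array([0.0, 3.0, 6.0]), L)
# states settle near the initial average (3.0) using far fewer than the
# 900 broadcasts a scheme transmitting every step would need
```

A dynamic event-triggering scheme would replace the fixed threshold schedule with an internal auxiliary variable per agent, which is what enables the further reduction in transmissions described above.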

    Control law and state estimators design for multi-agent system with reduction of communications by event-triggered approach

    A large amount of research has recently been dedicated to the study of multi-agent systems (MAS) and cooperative control. Applications to mobile robots, such as unmanned aerial vehicles (UAVs), satellites, or aircraft, have been tackled to carry out complex missions such as exploration or surveillance. However, cooperative tasking requires communication between agents, and for a large number of agents the volume of communication exchanges may lead to network saturation, increased delays, or loss of transferred packets; hence the interest in reducing them. In an event-triggered strategy, a communication is broadcast when a condition, based on chosen parameters and some threshold, is fulfilled. The main difficulty consists in determining the communication triggering condition (CTC) that will ensure the completion of the task assigned to the MAS. In a distributed strategy, each agent maintains estimates of the other agents' states to replace information that is missing due to limited communication. This thesis focuses on the development of distributed control laws and estimators for multi-agent systems that limit the number of communications by using an event-triggered strategy in the presence of perturbations, with two main topics: consensus and formation control. The first part addresses the problem of distributed event-triggered communication for consensus of a multi-agent system with general linear dynamics and state perturbations. To decrease the amount of required communication, an accurate estimator of the agent states is introduced, coupled with an estimator of the estimation error and an adapted communication protocol. By taking into account the control input of the agents, the proposed estimator achieves consensus with fewer communications than a reference method. The second part proposes a strategy to reduce the number of communications for displacement-based formation control while following a desired reference trajectory. Agent dynamics are described by Euler-Lagrange models with perturbations and uncertainties in the model parameters. Several estimator structures are proposed to rebuild the missing information. The proposed distributed communication triggering condition accounts for inter-agent displacements and the relative discrepancy between actual and estimated agent states. A single a priori trajectory has to be evaluated to follow the desired path. The effects of state perturbations on the formation and on the communications are analyzed. Finally, the proposed methods have been adapted to account for packet dropouts and communication delays.
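The estimator-based triggering idea above can be illustrated on a deliberately simple stand-in (a scalar agent with an unmodelled velocity drift, not the thesis's Euler-Lagrange models): neighbours propagate a copy of the agent's model from its last broadcast, and a new broadcast occurs only when the model prediction drifts too far from the true, perturbed state. All dynamics and parameters are assumptions for illustration.

```python
def estimator_triggered_broadcasts(steps=200, dt=0.05, drift=0.02, thresh=0.05):
    """Count broadcasts when communication is triggered by estimation error."""
    pos, vel = 0.0, 1.0
    est_pos, est_vel = pos, vel          # estimate held by the neighbours
    broadcasts = 0
    for _ in range(steps):
        vel += drift                     # perturbation unknown to the model
        pos += dt * vel                  # true motion
        est_pos += dt * est_vel          # neighbours' model prediction
        if abs(pos - est_pos) > thresh:  # communication triggering condition
            est_pos, est_vel = pos, vel  # broadcast resets the estimate
            broadcasts += 1
    return broadcasts

n = estimator_triggered_broadcasts()     # 20 broadcasts over 200 steps
```

Because the prediction error only accumulates gradually between resets, communication happens roughly once every ten steps here instead of every step, which is the saving the CTC is designed to deliver.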

    Problems in Control, Estimation, and Learning in Complex Robotic Systems

    In this dissertation, we consider a range of problems in systems, control, and learning theory and practice. In Part I, we look at problems in the control of complex networks. In Chapter 1, we consider the performance analysis of a class of linear noisy dynamical systems. In Chapter 2, we look at optimal design problems for these networks. In Chapter 3, we consider dynamical networks where interactions between the subsystems occur randomly in time. In the last chapter of this part, Chapter 4, we look at dynamical networks wherein the coupling between the subsystems (or agents) changes nonlinearly with the difference between the subsystems' states. In Part II, we consider estimation problems involving a large number of variables (i.e., at large scale). This part starts with Chapter 5, in which we consider the problem of sampling from a dynamical network in space and time for initial-state recovery. In Chapter 6, we consider a similar problem, with the difference that the point samples are replaced by continuous observations over Lebesgue-measurable sets. In Chapter 7, we consider an estimation problem in which the location of a robot during navigation is estimated using information from a large number of surrounding features, and we would like to select the most informative features using an efficient algorithm. In Part III, we look at active perception problems, which are approached using reinforcement learning techniques. This part starts with Chapter 8, in which we tackle a multi-agent reinforcement learning problem where the agents communicate and classify as a team. In Chapter 9, we consider a single-agent version of the same problem, wherein a layered architecture replaces the architectures of the previous chapter. We then use reinforcement learning to design the meta-layer (to select goals), the action-layer (to select local actions), and the perception-layer (to conduct classification).