
    Quasi-Periodic Solutions of Two-Dimensional Completely Resonant Reversible Schrödinger Systems

    We introduce an abstract KAM (Kolmogorov-Arnold-Moser) theorem for infinite-dimensional reversible Schrödinger systems. Using this KAM theorem together with a partial Birkhoff normal form method, we prove the existence of quasi-periodic solutions for a class of completely resonant reversible coupled nonlinear Schrödinger systems on the two-dimensional torus.
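
    As a reminder of the central notion (the standard definition, stated in our notation, not quoted from the paper): a solution $u(t,x)$ is quasi-periodic in time with frequency vector $\omega = (\omega_1, \ldots, \omega_n)$ if it takes the form

        $$ u(t, x) = U(\omega_1 t, \ldots, \omega_n t, x), $$

    where $U$ is $2\pi$-periodic in each of its first $n$ arguments and the frequencies $\omega_1, \ldots, \omega_n$ are rationally independent; for $n = 1$ this reduces to an ordinary periodic solution.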

    An Ecofeminism Perspective on Economic Globalization: A Pollution Case in China

    Economic globalization is a double-edged sword. On the one hand, it compels more countries to get involved in global production chains; on the other hand, it also brings many adverse effects to developing countries. China, one of the biggest developing countries and an emerging economy, has undergone economic reforms over the past 35 years, and as a result its economy has developed quickly. One of the areas where China has paid the price for such fast-paced economic development and rapid industrialization is environmental degradation and pollution. Women largely bear the brunt of the effects of environmental pollution, but their voices usually remain at the margins. This research seeks to identify the role that education can play in balancing economic development and environmental conservation. To design an appropriate context-bound curriculum, however, it is first necessary to examine how women workers perceive and experience economic globalization and environmental pollution. Drawing on a conceptual framework based on ecofeminism, women workers were invited to participate in a Photovoice project, presenting their concerns about environmental pollution and economic globalization through their photos. Based on this case research, suggestions are given on how to address pollution issues from an educational perspective and how to balance economic globalization and environmental sustainability.

    Synchronous MDADT-Based Fuzzy Adaptive Tracking Control for Switched Multiagent Systems via Modified Self-Triggered Mechanism

    In this paper, a self-triggered fuzzy adaptive switched control strategy is proposed to address the synchronous tracking problem in switched stochastic multiagent systems (MASs) based on the mode-dependent average dwell-time (MDADT) method. Firstly, a synchronous slow switching mechanism is considered in switched stochastic MASs and realized through a class of designed switching signals satisfying the MDADT property. By utilizing the information of both specific agents under switching dynamics and observers with switching features, the synchronous switching signals are designed, which reduces the design complexity. Then, a switched state observer based on a switching-related output mask is proposed; the information of agents and their preserved neighbors is used to construct the observer, which improves the state observation performance. Moreover, a modified self-triggered mechanism is designed to improve control performance by introducing an auxiliary function. Finally, by analyzing the relationship between the synchronous switching problem and the different switching features of the followers, the synchronous slow switching mechanism based on MDADT is obtained. Meanwhile, the designed self-triggered controller guarantees that all signals of the closed-loop system are ultimately bounded under the switching signals. The effectiveness of the designed control method is verified by simulation results.
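
    For context, here is the standard MDADT condition from the switched-systems literature (stated in our notation; the paper's exact formulation may differ): a switching signal $\sigma$ has mode-dependent average dwell time $\tau_{ap} > 0$ for mode $p$ if there exists a chatter bound $N_{0p} \ge 0$ such that

        $$ N_{\sigma p}(T, t) \le N_{0p} + \frac{T_p(T, t)}{\tau_{ap}}, \qquad \forall\, T \ge t \ge 0, $$

    where $N_{\sigma p}(T,t)$ counts the activations of mode $p$ on $[t, T]$ and $T_p(T,t)$ is the total running time of mode $p$ on $[t, T]$. Slow switching corresponds to lower-bounding each $\tau_{ap}$, which is what the designed synchronous switching signals enforce.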

    Exploring the Training Robustness of Distributional Reinforcement Learning against Noisy State Observations

    In real scenarios, the state observations an agent receives may contain measurement errors or adversarial noise, misleading the agent into taking suboptimal actions or even causing training to collapse. In this paper, we study the training robustness of distributional reinforcement learning (RL), a class of state-of-the-art methods that estimate the whole distribution, as opposed to only the expectation, of the total return. Firstly, we validate the contraction of distributional Bellman operators in the State-Noisy Markov Decision Process (SN-MDP), a typical tabular case that incorporates both random and adversarial state observation noise. In the noisy setting with function approximation, we then analyze the vulnerability of the least-squares loss in expectation-based RL with either linear or nonlinear function approximation. By contrast, we theoretically characterize the bounded gradient norm of the distributional RL loss based on the categorical parameterization equipped with the Kullback-Leibler (KL) divergence. The resulting stable gradients during optimization account for distributional RL's better training robustness against state observation noise. Finally, extensive experiments on a suite of environments verify that distributional RL is less vulnerable to both random and adversarial noisy state observations than its expectation-based counterpart.
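
    A minimal numerical sketch of the gradient contrast (our illustration, not the paper's code): for a squared loss the gradient grows linearly with the TD error, while for a categorical (C51-style) parameterization trained with a KL objective the gradient with respect to the logits is a difference of two probability vectors and is therefore bounded.

        import numpy as np

        def squared_loss_grad(pred, target):
            # d/d(pred) of 0.5 * (pred - target)**2 grows linearly with
            # the error, so a noise-corrupted target can produce an
            # arbitrarily large gradient.
            return pred - target

        def categorical_kl_grad(logits, target_probs):
            # Categorical value distribution: softmax over fixed atoms.
            # The gradient of KL(target || softmax(logits)) w.r.t. the
            # logits is softmax(logits) - target_probs, so every entry
            # lies in [-1, 1] no matter how extreme the target is.
            p = np.exp(logits - logits.max())
            p /= p.sum()
            return p - target_probs

        print(squared_loss_grad(0.0, 100.0))   # -100.0: unbounded in the error
        print(categorical_kl_grad(np.zeros(5),
                                  np.array([0., 0., 0., 0., 1.])))  # entries in [-1, 1]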

    Interpreting Distributional Reinforcement Learning: A Regularization Perspective

    Distributional reinforcement learning (RL) is a class of state-of-the-art algorithms that estimate the whole distribution of the total return rather than only its expectation. Despite the remarkable performance of distributional RL, a theoretical understanding of its advantages over expectation-based RL remains elusive. In this paper, we attribute the superiority of distributional RL to its regularization effect, stemming from the value distribution information beyond the expectation. Firstly, by leveraging a variant of the gross error model from robust statistics, we decompose the value distribution into its expectation and the remaining distribution part. The extra benefit of distributional RL over expectation-based RL is then interpreted as the impact of a \textit{risk-sensitive entropy regularization} within the Neural Fitted Z-Iteration framework. Meanwhile, we establish a bridge between the risk-sensitive entropy regularization of distributional RL and the vanilla entropy in maximum entropy RL, focusing specifically on actor-critic algorithms. This reveals that distributional RL induces a corrected reward function and thus promotes risk-sensitive exploration against the intrinsic uncertainty of the environment. Finally, extensive experiments corroborate the regularization effect of distributional RL and uncover the mutual impacts of the different entropy regularizations. Our research paves the way toward better interpreting the efficacy of distributional RL algorithms, especially through the lens of regularization.
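
    As a sketch of the kind of decomposition involved (this is the classical gross error model of robust statistics, written in our notation; the paper employs a variant of it), the return distribution can be expressed as a mixture of a point mass at its expectation and a residual part:

        $$ F_Z(x) = (1 - \epsilon)\, F_{\delta_{\mathbb{E}[Z]}}(x) + \epsilon\, F_\mu(x), \qquad \epsilon \in (0, 1], $$

    where $F_{\delta_{\mathbb{E}[Z]}}$ is the CDF of a Dirac mass at $\mathbb{E}[Z]$ (the component expectation-based RL already captures) and $F_\mu$ carries the remaining distributional information, whose contribution is what gets interpreted as a risk-sensitive entropy regularizer.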