96 research outputs found

    How retroactivity impacts the robustness of genetic networks

    Get PDF
    This paper studies how retroactivity impacts the robustness of gene transcription networks against parameter perturbations. Employing linearization and the real stability radius, we compare the robustness of gene transcription networks with retroactivity to that of networks without retroactivity. Both numerical and analytical results show that retroactivity tends to decrease this robustness, which in turn implies that modular genetic networks tend to be more robust against parameter perturbations. National Science Foundation (U.S.) (NSF-CCF-I058127)
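
    For illustration only, here is a minimal numerical sketch of the kind of comparison the abstract describes, using the complex stability radius (a conservative lower bound on the real stability radius employed in the paper). The matrices A_isolated and A_retro are hypothetical placeholder linearizations, not values from the paper.

```python
# Sketch: frequency-sweep estimate of the complex stability radius of a
# linearized system x_dot = A x, namely r_C(A) = 1 / sup_w ||(jwI - A)^{-1}||_2.
# A larger radius means a larger unstructured perturbation to A is needed to
# destabilize the system, i.e., greater robustness to parameter perturbations.
import numpy as np

def complex_stability_radius(A, w_grid):
    """Estimate 1 / sup_w ||(jwI - A)^{-1}||_2 over a frequency grid."""
    n = A.shape[0]
    worst = 0.0
    for w in w_grid:
        G = np.linalg.inv(1j * w * np.eye(n) - A)   # resolvent at jw
        worst = max(worst, np.linalg.norm(G, 2))    # largest singular value
    return 1.0 / worst

# Hypothetical linearizations of a two-gene cascade without and with
# retroactivity (placeholder numbers for illustration only).
A_isolated = np.array([[-1.0, 0.0], [0.8, -1.2]])
A_retro    = np.array([[-1.0, -0.3], [0.8, -1.2]])

w = np.linspace(0.0, 50.0, 5001)
print("radius without retroactivity:", complex_stability_radius(A_isolated, w))
print("radius with retroactivity:   ", complex_stability_radius(A_retro, w))
```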

    Variable Sampling MPC via Differentiable Time-Warping Function

    Full text link
    Designing control inputs for a system with dynamical responses on multiple timescales is nontrivial. This paper proposes a parameterized time-warping function that enables non-uniform sampling along the prediction horizon: the horizon should capture the responses of the faster dynamics in the near future while previewing the impact of the slower dynamics in the distant future. A variable sampling MPC (VS-MPC) strategy is then proposed to jointly determine the optimal control and the sampling parameters at each time instant, so VS-MPC adapts how it samples along the horizon and determines the optimal control accordingly without any manual tuning or trial and error. A numerical example of a wind farm battery energy storage system demonstrates that VS-MPC outperforms uniform-sampling MPC.
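
    As a rough illustration of the idea, the sketch below shows one possible differentiable, monotone time-warping parameterization; the exponential form and the parameter theta are assumptions made for illustration, not the paper's actual function.

```python
# Sketch (assumed form): a differentiable, monotone warping tau(s; theta) that
# maps a uniform grid s in [0, 1] to non-uniform sample times over a horizon T.
# theta ~ 0 recovers uniform sampling; larger theta packs samples near t = 0
# (fast dynamics) while still previewing the end of the horizon (slow dynamics).
import numpy as np

def time_warp(s, theta, T):
    """tau(s) = T * (exp(theta*s) - 1) / (exp(theta) - 1), smooth in s and theta."""
    if abs(theta) < 1e-8:               # limit theta -> 0 is uniform sampling
        return T * s
    return T * (np.expm1(theta * s) / np.expm1(theta))

s = np.linspace(0.0, 1.0, 11)           # 11 uniformly spaced stages
print(time_warp(s, theta=0.0, T=10.0))  # uniform: 0, 1, ..., 10
print(time_warp(s, theta=3.0, T=10.0))  # dense near t = 0, sparse near t = 10
```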

    Distributed Optimization via Kernelized Multi-armed Bandits

    Full text link
    Multi-armed bandit algorithms provide solutions for sequential decision-making in which learning takes place by interacting with the environment. In this work, we model a distributed optimization problem as a multi-agent kernelized multi-armed bandit problem with a heterogeneous reward setting. In this setup, the agents collaboratively aim to maximize a global objective function, which is the average of local objective functions. Each agent can access only bandit feedback (noisy rewards) from its associated unknown local function, assumed to have a small norm in a reproducing kernel Hilbert space (RKHS). We present a fully decentralized algorithm, Multi-agent IGP-UCB (MA-IGP-UCB), which achieves a sub-linear regret bound for popular classes of kernels while preserving privacy: it does not require the agents to share their actions, rewards, or estimates of their local functions. In the proposed approach, the agents sample their individual local functions in a way that benefits the whole network, using a running consensus to estimate an upper confidence bound on the global function. Furthermore, we propose an extension, the Multi-agent Delayed IGP-UCB (MAD-IGP-UCB) algorithm, which reduces the dependence of the regret bound on the number of agents in the network; it provides improved performance by introducing a delay in the estimation update step, at the cost of more communication.
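
    To make the upper-confidence-bound mechanism concrete, here is a minimal single-agent GP-UCB step with an RBF kernel over a finite candidate set; the kernel choice, beta value, and helper names are illustrative assumptions, not the authors' implementation. In MA-IGP-UCB each agent performs a similar step but replaces its purely local posterior with consensus-averaged estimates of the global function.

```python
# Sketch: one GP-UCB acquisition step with an RBF kernel on a candidate grid.
import numpy as np

def rbf(X, Y, ls=0.5):
    """Squared-exponential kernel k(x, y) = exp(-||x - y||^2 / (2 ls^2))."""
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-d2 / (2 * ls**2))

def gp_ucb_pick(X_obs, y_obs, X_cand, beta=2.0, noise=0.1):
    """Pick the candidate maximizing posterior mean + sqrt(beta) * posterior std."""
    K = rbf(X_obs, X_obs) + noise**2 * np.eye(len(X_obs))
    Ks = rbf(X_cand, X_obs)
    Kinv = np.linalg.inv(K)
    mu = Ks @ Kinv @ y_obs
    var = 1.0 - np.sum(Ks @ Kinv * Ks, axis=1)          # k(x, x) = 1 for RBF
    ucb = mu + np.sqrt(beta) * np.sqrt(np.clip(var, 0.0, None))
    return X_cand[np.argmax(ucb)]

# Toy usage with a hypothetical 1-D local reward function.
rng = np.random.default_rng(0)
X_obs = rng.uniform(0, 1, (5, 1))
y_obs = np.sin(6 * X_obs[:, 0]) + 0.1 * rng.standard_normal(5)
X_cand = np.linspace(0, 1, 101)[:, None]
print("next query point:", gp_ucb_pick(X_obs, y_obs, X_cand))
```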