372 research outputs found

    Safety Verification of Neural Feedback Systems Based on Constrained Zonotopes

    Artificial neural networks (ANNs) are used in many feedback control systems, introducing new challenges regarding system safety. This paper considers the problem of verifying whether the trajectories of a system with a feedforward neural network (FNN) controller can avoid unsafe regions, using a constrained zonotope-based reachability analysis approach. FNNs with the rectified linear unit (ReLU) activation function are considered in this work. A novel set-based method is proposed to compute both exact and over-approximated reachable sets for linear discrete-time systems with FNN controllers, and linear program-based sufficient conditions are presented to certify the safety of the neural feedback systems. Reachability analysis and safety verification for neural feedback systems with nonlinear models are also considered. The computational efficiency and accuracy of the proposed method are demonstrated on two numerical examples, where a comparison with state-of-the-art methods is also provided. (Comment: 8 pages, 4 figures)
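    As a rough illustration of the set-propagation idea (not the paper's constrained-zonotope algorithm, which additionally carries linear equality constraints on the generator coefficients), the sketch below pushes a plain zonotope through one linear layer exactly and over-approximates the ReLU with a standard DeepZ-style transformer; all matrices are invented for the example.

```python
# Minimal zonotope reachability sketch for one ReLU layer. Plain zonotopes,
# not the paper's constrained zonotopes; weights and input set are illustrative.
import numpy as np

def linear_map(c, G, W, b):
    """Exact image of the zonotope {c + G xi : |xi|_inf <= 1} under x -> Wx + b."""
    return W @ c + b, W @ G

def relu_overapprox(c, G):
    """Sound over-approximation of ReLU on a zonotope (DeepZ-style transformer)."""
    r = np.abs(G).sum(axis=1)          # per-dimension generator radius
    l, u = c - r, c + r                # interval bounds of each neuron
    n = c.shape[0]
    c_out, G_out = c.copy(), G.copy()
    new_cols = []
    for i in range(n):
        if u[i] <= 0:                  # always inactive: output is exactly 0
            c_out[i] = 0.0
            G_out[i, :] = 0.0
        elif l[i] >= 0:                # always active: identity, nothing to do
            continue
        else:                          # crossing neuron: ReLU(x) in [lam*x, lam*x - lam*l]
            lam = u[i] / (u[i] - l[i])
            beta = -lam * l[i] / 2.0   # half-width of the relaxation gap
            c_out[i] = lam * c[i] + beta
            G_out[i, :] *= lam
            col = np.zeros(n)
            col[i] = beta              # fresh noise symbol absorbs the gap
            new_cols.append(col)
    if new_cols:
        G_out = np.hstack([G_out, np.stack(new_cols, axis=1)])
    return c_out, G_out

# One FNN layer: exact linear map followed by the ReLU over-approximation.
W = np.array([[1.0, -1.0], [0.5, 2.0]]); b = np.zeros(2)
c, G = np.zeros(2), 0.5 * np.eye(2)    # input set: the box [-0.5, 0.5]^2
c, G = relu_overapprox(*linear_map(c, G, W, b))
print("output center:", c)
```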

    Reachability Analysis and Safety Verification of Neural Feedback Systems via Hybrid Zonotopes

    Hybrid zonotopes generalize constrained zonotopes by introducing additional binary variables, and they possess unique properties that make them convenient for representing nonconvex sets. This paper presents novel hybrid zonotope-based methods for the reachability analysis and safety verification of neural feedback systems. Algorithms are proposed to compute the input-output relationship of each layer of a feedforward neural network, as well as the exact reachable sets of neural feedback systems. In addition, a necessary and sufficient condition, formulated as a mixed-integer linear program, certifies whether the trajectories of a neural feedback system can avoid unsafe regions. The proposed approach is shown to yield a formulation that provides the tightest convex relaxation of the reachable sets of the neural feedback system. Complexity reduction techniques for the reachable sets are developed to balance computational efficiency and approximation accuracy. Two numerical examples demonstrate the superior performance of the proposed approach compared to existing methods. (Comment: 8 pages, 4 figures)
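    The exactness result rests on the fact that ReLU networks admit an exact mixed-integer linear encoding. Below is a minimal sketch of that idea using a standard big-M formulation (not the paper's hybrid-zonotope construction): it asks whether any input in a box can drive a one-layer ReLU network into an unsafe half-space. The network weights, unsafe set, and big-M value are all illustrative.

```python
# Hedged sketch: big-M MILP check of whether a one-layer ReLU network can
# reach the unsafe half-space h^T y >= d. Decision vector z = [x, y, delta],
# where delta_i = 1 iff neuron i is active.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

W = np.array([[1.0, -1.0], [0.5, 2.0]]); b = np.zeros(2)
h = np.array([1.0, 1.0]); d = 2.0      # unsafe set: h^T y >= d (illustrative)
n_in, n_out, M = 2, 2, 10.0            # M must dominate the pre-activation range

Z, I = np.zeros((n_out, n_in)), np.eye(n_out)
cons = [
    LinearConstraint(np.hstack([-W, I, 0 * I]), b, np.inf),       # y >= Wx + b
    LinearConstraint(np.hstack([-W, I, M * I]), -np.inf, b + M),  # y <= Wx + b + M(1 - delta)
    LinearConstraint(np.hstack([Z,  I, -M * I]), -np.inf, 0.0),   # y <= M * delta
]
bounds = Bounds(np.concatenate([-np.ones(n_in), np.zeros(n_out), np.zeros(n_out)]),
                np.concatenate([np.ones(n_in), M * np.ones(n_out), np.ones(n_out)]))
integrality = np.concatenate([np.zeros(n_in + n_out), np.ones(n_out)])

# Maximize h^T y: if the optimum reaches d, the unsafe region is reachable.
res = milp(c=np.concatenate([np.zeros(n_in), -h, np.zeros(n_out)]),
           constraints=cons, bounds=bounds, integrality=integrality)
print("max h^T y =", -res.fun, "-> unsafe reachable" if -res.fun >= d else "-> safe")
```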

    Robust Stability of Neural Feedback Systems with Interval Matrix Uncertainties

    Neural networks have gained popularity in controller design due to their versatility and efficiency, but their integration into feedback systems can compromise stability, especially in the presence of uncertainties. This paper addresses the challenge of certifying robust stability of neural feedback systems with interval matrix uncertainties. By leveraging classic robust stability techniques and recent quadratic constraint-based methods for abstracting the input-output relationship imposed by neural networks, we present novel robust stability certificates formulated as linear matrix inequalities. Three relaxed sufficient conditions are introduced to mitigate computational complexity. The equivalence of these conditions in terms of feasibility, as well as their connections with existing robust stability results, is also established. The proposed method is demonstrated on two numerical examples.
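    For intuition about LMI-based certificates under interval uncertainty (though not the paper's quadratic-constraint abstraction of the network), the sketch below searches for a common Lyapunov matrix over the vertices of an interval matrix family: since $A^\top P + PA$ is affine in $A$, negative definiteness at all vertices implies it over the whole interval. The interval bounds are invented for the example.

```python
# Hedged sketch: vertex-enumeration LMI for robust stability of x' = A x
# with A ranging over the interval matrix [A_lo, A_hi].
import itertools
import cvxpy as cp
import numpy as np

A_lo = np.array([[-2.0, 0.4], [-0.6, -1.5]])
A_hi = np.array([[-1.8, 0.6], [-0.4, -1.2]])

n = A_lo.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-3

# One common Lyapunov matrix P must satisfy A^T P + P A < 0 at every vertex
# of the interval family (sufficient for the whole interval by convexity).
constraints = [P >> eps * np.eye(n)]
for mask in itertools.product([0, 1], repeat=n * n):
    A = np.where(np.array(mask).reshape(n, n) == 1, A_hi, A_lo)
    constraints.append(A.T @ P + P @ A << -eps * np.eye(n))

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print("robust stability certificate found:", prob.status == cp.OPTIMAL)
```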

    D$^2$: Decentralized Training over Decentralized Data

    When training a machine learning model with multiple workers, each collecting data from its own sources, it is most useful when the data collected by different workers are unique and different. Ironically, recent analyses of decentralized parallel stochastic gradient descent (D-PSGD) rely on the assumption that the data hosted on different workers are not too different. In this paper, we ask: can we design a decentralized parallel stochastic gradient descent algorithm that is less sensitive to the data variance across workers? We present D$^2$, a novel decentralized parallel stochastic gradient descent algorithm designed for large data variance among workers (imprecisely, "decentralized" data). The core of D$^2$ is a variance reduction extension of the standard D-PSGD algorithm, which improves the convergence rate from $O\left(\frac{\sigma}{\sqrt{nT}} + \frac{(n\zeta^2)^{1/3}}{T^{2/3}}\right)$ to $O\left(\frac{\sigma}{\sqrt{nT}}\right)$, where $\zeta^2$ denotes the variance among the data on different workers. As a result, D$^2$ is robust to data variance among workers. We empirically evaluate D$^2$ on image classification tasks where each worker has access to only the data of a limited set of labels, and find that D$^2$ significantly outperforms D-PSGD.
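    A minimal simulation of the D$^2$ update as we understand it from the paper (Tang et al., ICML 2018): the gossip matrix is applied to an extrapolated, gradient-corrected iterate. The quadratic objectives, gossip matrix, and step size below are invented to mimic workers with heterogeneous data.

```python
# Hedged sketch of the D^2 iteration. Workers minimize simple quadratics with
# *different* minimizers to mimic large data variance across workers.
import numpy as np

np.random.seed(0)
n, d, gamma, T = 4, 3, 0.1, 200
W = np.full((n, n), 1.0 / n)            # doubly stochastic gossip matrix (complete graph)
targets = np.random.randn(n, d)          # worker i holds f_i(x) = 0.5 * ||x - target_i||^2

def grads(X):
    return X - targets                   # full local gradients (stochastic noise omitted)

X_prev = np.zeros((n, d))
X = W @ (X_prev - gamma * grads(X_prev))  # first step is a plain D-PSGD step
for _ in range(T):
    # D^2: gossip the extrapolated iterate corrected by the gradient difference.
    X_next = W @ (2 * X - X_prev - gamma * (grads(X) - grads(X_prev)))
    X_prev, X = X, X_next

# All workers converge to the average minimizer despite heterogeneous data.
print(np.allclose(X, targets.mean(axis=0, keepdims=True), atol=1e-3))
```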

    Generalizable Chain-of-Thought Prompting in Mixed-task Scenarios with Large Language Models

    Large language models (LLMs) have unveiled remarkable reasoning capabilities through chain-of-thought (CoT) prompting, which generates intermediate reasoning chains that serve as the rationale for deriving the answer. However, current CoT methods either simply employ general prompts such as "Let's think step by step," or rely heavily on pre-defined task-specific demonstrations to attain strong performance, creating an inescapable gap between performance and generalization. To bridge this gap, we propose GeM-CoT, a Generalizable CoT prompting mechanism for Mixed-task scenarios where the type of the input question is unknown. GeM-CoT first categorizes the question type and then automatically samples or constructs demonstrations from the corresponding data pool. With this technical design, GeM-CoT simultaneously enjoys superior generalization and remarkable performance on 10 public reasoning tasks and 23 BBH tasks. (Comment: 17 pages, 12 figures)
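    A toy sketch of the routing idea (the classifier, demonstration pools, and labels are hypothetical, not GeM-CoT's actual modules): classify the incoming question's type, then assemble a CoT prompt from the matching demonstration pool, falling back to zero-shot CoT when no pool matches.

```python
# Hypothetical sketch of mixed-task CoT routing; pools and labels are made up.
from typing import Callable

DEMO_POOLS = {
    "arithmetic": ["Q: 3 apples plus 4 apples? A: 3 + 4 = 7. The answer is 7."],
    "commonsense": ["Q: Can a whale walk? A: Whales are aquatic mammals without legs. The answer is no."],
}

def route(question: str, classify: Callable[[str], str]) -> str:
    """Assemble a CoT prompt: type-matched demos if available, else zero-shot CoT."""
    task_type = classify(question)
    demos = DEMO_POOLS.get(task_type)
    if demos is None:                  # unknown type: fall back to the general prompt
        return f"{question}\nLet's think step by step."
    return "\n".join(demos) + f"\n{question}\nA:"

# Toy classifier standing in for the paper's type-identification module.
toy_classify = lambda q: "arithmetic" if any(ch.isdigit() for ch in q) else "commonsense"
print(route("Q: What is 12 * 3?", toy_classify))
```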