
    Markov Parameter Identification via Chebyshev Approximation

    This paper proposes an identification algorithm for Single Input Single Output (SISO) Linear Time-Invariant (LTI) systems. In the noise-free setting, where the first T Markov parameters can be precisely estimated, all Markov parameters can be inferred as linear combinations of the known T Markov parameters, whose coefficients are obtained by solving a uniform polynomial approximation problem; an upper bound on the asymptotic identification bias is also provided. For the finite-time identification scenario, we cast the system identification problem with noisy Markov parameters as a regularized uniform approximation problem. Numerical results demonstrate that the proposed algorithm outperforms the conventional Ho-Kalman algorithm in the finite-time identification scenario while the asymptotic bias remains negligible.
    Comment: Accepted by IFAC World Congress (IFAC WC 2023) Conference
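The structural fact the method builds on, namely that every Markov parameter of a finite-dimensional LTI system is a fixed linear combination of finitely many earlier ones, can be checked numerically. Below is a minimal sketch on a hypothetical random SISO system; note the paper derives its combination coefficients from a uniform (Chebyshev-type) approximation, whereas this sketch uses the exact Cayley-Hamilton recursion purely for intuition:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
# hypothetical random stable SISO system (not the paper's setup)
A = rng.standard_normal((n, n))
A *= 0.9 / max(abs(np.linalg.eigvals(A)))  # scale spectral radius below 1
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

# Markov parameters h_k = C A^k B
h = [(C @ np.linalg.matrix_power(A, k) @ B).item() for k in range(10)]

# Cayley-Hamilton: h_{k+n} is a fixed linear combination of h_k, ..., h_{k+n-1},
# with coefficients from the characteristic polynomial of A
a = np.poly(A)  # [1, a_1, ..., a_n]
h_pred = -sum(a[i] * h[n + 3 - i] for i in range(1, n + 1))
print(abs(h_pred - h[n + 3]))  # near machine precision
```

The paper's contribution is to choose such coefficients robustly (and regularize them under noise) rather than rely on this exact but noise-sensitive recursion.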

    Finite Time Performance Analysis of MIMO Systems Identification

    This paper is concerned with the finite-time identification performance of an n-dimensional discrete-time Multiple-Input Multiple-Output (MIMO) Linear Time-Invariant system with p inputs and m outputs. We prove that the widely used Ho-Kalman algorithm and the Multivariable Output Error State Space (MOESP) algorithm are ill-conditioned for MIMO systems when n/m or n/p is large. Moreover, by analyzing the Cramér-Rao bound, we derive a fundamental limit for identifying the real and stable (or marginally stable) poles of a MIMO system and prove that the sample complexity for any unbiased pole estimation algorithm to reach a certain level of accuracy explodes superpolynomially with respect to n/(pm). Numerical results are provided to illustrate the ill-conditionedness of the Ho-Kalman and MOESP algorithms as well as the fundamental limit on identification.
    Comment: 9 pages, 4 figures
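For reference, the Ho-Kalman algorithm whose conditioning is analyzed here recovers a state-space realization from a block Hankel matrix of Markov parameters via an SVD. A minimal sketch of the classical algorithm (not the paper's conditioning analysis), with a quick check on a hypothetical 2-state SISO system:

```python
import numpy as np

def ho_kalman(markov, n, p, m):
    """Recover (A, B, C) up to similarity from Markov parameters
    markov[k] = C A^k B. Sketch of the classical Ho-Kalman algorithm."""
    T = len(markov)
    k = (T - 1) // 2
    # block Hankel matrix and its one-step shift
    H0 = np.block([[markov[i + j] for j in range(k)] for i in range(k)])
    H1 = np.block([[markov[i + j + 1] for j in range(k)] for i in range(k)])
    U, s, Vt = np.linalg.svd(H0)
    U, s, Vt = U[:, :n], s[:n], Vt[:n]
    O = U * np.sqrt(s)           # extended observability matrix
    R = (Vt.T * np.sqrt(s)).T    # extended controllability matrix
    A = np.linalg.pinv(O) @ H1 @ np.linalg.pinv(R)
    return A, R[:, :p], O[:m, :]

# quick check on a known minimal 2-state SISO system (hypothetical example)
A0 = np.array([[0.5, 0.1], [0.0, 0.3]])
B0 = np.array([[1.0], [0.5]])
C0 = np.array([[1.0, -1.0]])
true_markov = [C0 @ np.linalg.matrix_power(A0, i) @ B0 for i in range(9)]
Ah, Bh, Ch = ho_kalman(true_markov, n=2, p=1, m=1)
```

The recovered (Ah, Bh, Ch) reproduces the Markov parameters of the true system; the paper's point is that the conditioning of the SVD step degrades as n/m or n/p grows.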

    Linear Model Predictive Control under Continuous Path Constraints via Parallelized Primal-Dual Hybrid Gradient Algorithm

    In this paper, we consider a Model Predictive Control (MPC) problem for a continuous-time linear time-invariant system under continuous-time path constraints on the states and the inputs. By leveraging the concept of differential flatness, we replace the differential equations governing the system with a linear mapping between the states, the inputs, and the flat outputs (and their derivatives). The flat output is then parameterized by piecewise polynomials, and the model predictive control problem can be equivalently transformed into a Semi-Definite Programming (SDP) problem via Sum-of-Squares, with guaranteed constraint satisfaction at every continuous time instant. We further observe that the SDP problem contains a large number of small semi-definite matrices as optimization variables, and thus a Primal-Dual Hybrid Gradient (PDHG) algorithm, which can be efficiently parallelized, is developed to accelerate the optimization procedure. Simulation on a quadruple-tank process illustrates that our formulation guarantees strict constraint satisfaction, while a standard MPC controller based on the discretized system may violate the constraints between sampling instants. We also show that our parallelized PDHG algorithm can outperform commercial solvers for problems with long planning horizons.
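The PDHG iteration that the paper parallelizes alternates a dual ascent step, a primal proximal step, and an extrapolation. A toy sketch on a linearly constrained least-distance problem, standing in for the paper's SDP (all problem data below are hypothetical):

```python
import numpy as np

# PDHG sketch for: min_x 0.5*||x - c||^2  subject to  K x = b
rng = np.random.default_rng(1)
K = rng.standard_normal((3, 6))
b = rng.standard_normal(3)
c = rng.standard_normal(6)

tau = sigma = 0.9 / np.linalg.norm(K, 2)   # ensures tau*sigma*||K||^2 < 1
x = np.zeros(6); y = np.zeros(3); x_bar = x.copy()
for _ in range(10000):
    y = y + sigma * (K @ x_bar - b)                     # dual ascent step
    x_new = (x - tau * K.T @ y + tau * c) / (1 + tau)   # prox of 0.5*||x - c||^2
    x_bar = 2 * x_new - x                               # extrapolation
    x = x_new

# closed-form projection of c onto {x : Kx = b}, for comparison
x_star = c - K.T @ np.linalg.solve(K @ K.T, K @ c - b)
print(np.linalg.norm(x - x_star))  # should be small
```

Each step is a cheap matrix-vector operation; in the paper's setting the primal prox decouples across the many small semi-definite blocks, which is what makes the method easy to parallelize.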

    Transforming Graphs for Enhanced Attribute Clustering: An Innovative Graph Transformer-Based Method

    Graph Representation Learning (GRL) is an influential methodology, enabling a more profound understanding of graph-structured data and aiding graph clustering, a critical task across various domains. The recent incursion of attention mechanisms, originally an artifact of Natural Language Processing (NLP), into the realm of graph learning has spearheaded a notable shift in research trends. Consequently, Graph Attention Networks (GATs) and Graph Attention Auto-Encoders have emerged as preferred tools for graph clustering tasks. Yet, these methods primarily employ a local attention mechanism, thereby curbing their capacity to apprehend the intricate global dependencies between nodes within graphs. Addressing these impediments, this study introduces an innovative method known as the Graph Transformer Auto-Encoder for Graph Clustering (GTAGC). By melding the Graph Auto-Encoder with the Graph Transformer, GTAGC is adept at capturing global dependencies between nodes. This integration amplifies the graph representation and surmounts the constraints posed by the local attention mechanism. The architecture of GTAGC encompasses graph embedding, integration of the Graph Transformer within the autoencoder structure, and a clustering component. It strategically alternates between graph embedding and clustering, thereby tailoring the Graph Transformer for clustering tasks, whilst preserving the graph's global structural information. Through extensive experimentation on diverse benchmark datasets, GTAGC has exhibited superior performance over existing state-of-the-art graph clustering methodologies.
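The distinction drawn here between local (GAT-style) and global (Transformer-style) attention can be made concrete in a few lines. A toy single-head sketch, not the GTAGC architecture itself:

```python
import numpy as np

def attention(X, adj_mask=None):
    """Single-head self-attention over node features X (n x d).
    With adj_mask, scores are restricted to graph edges (local, GAT-style);
    without it, every node attends to every node (global, Transformer-style).
    Toy sketch for illustration only."""
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)
    if adj_mask is not None:
        scores = np.where(adj_mask, scores, -np.inf)  # mask non-edges
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)  # row-wise softmax
    return w @ X
```

With a sparse adjacency mask, a node's new representation mixes only its neighbors' features; dropping the mask lets distant nodes influence each other directly, which is the global-dependency property GTAGC exploits.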

    Consecutive Inertia Drift of Autonomous RC Car via Primitive-based Planning and Data-driven Control

    Inertia drift is an aggressive transitional driving maneuver, which is challenging due to the high nonlinearity of the system and the stringent requirements on control and planning performance. This paper presents a solution for the consecutive inertia drift of an autonomous RC car based on primitive-based planning and data-driven control. The planner generates complex paths via the concatenation of path segments called primitives, and the controller eases the burden on feedback by interpolating between multiple real trajectories with different initial conditions to form one near-feasible reference trajectory. The proposed strategy is capable of drifting through various paths containing consecutive turns, which is validated in both simulation and reality.
    Comment: 9 pages, 10 figures, to appear in IROS 202
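The interpolation idea, blending recorded trajectories with different initial conditions into one reference, can be sketched as follows (the trajectory data and speeds below are made up for illustration and are not the paper's dataset):

```python
import numpy as np

# two recorded trajectories (time-indexed x, y positions) at different
# initial speeds; hypothetical quadratic paths stand in for real drift data
t = np.linspace(0.0, 1.0, 50)
traj_slow = np.stack([t, 0.5 * t**2], axis=1)   # recorded at v0 = 2 m/s
traj_fast = np.stack([t, 1.5 * t**2], axis=1)   # recorded at v0 = 4 m/s
v_slow, v_fast = 2.0, 4.0

v0 = 3.1  # current initial speed, between the two recordings
w = (v0 - v_slow) / (v_fast - v_slow)           # linear interpolation weight
reference = (1 - w) * traj_slow + w * traj_fast  # blended reference trajectory
```

Because the reference is built from trajectories the car actually drove, it stays close to the feasible set, so the feedback controller only has to correct small deviations.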

    Blacklight: Defending Black-Box Adversarial Attacks on Deep Neural Networks

    The vulnerability of deep neural networks (DNNs) to adversarial examples is well documented. Under the strong white-box threat model, where attackers have full access to DNN internals, recent work has produced continual advancements in defenses, often followed by more powerful attacks that break them. Meanwhile, research on the more realistic black-box threat model has focused almost entirely on reducing the query cost of attacks, making them increasingly practical for ML models already deployed today. This paper proposes and evaluates Blacklight, a new defense against black-box adversarial attacks. Blacklight targets a key property of black-box attacks: to compute adversarial examples, they produce sequences of highly similar images while trying to minimize the distance from some initial benign input. To detect an attack, Blacklight computes for each query image a compact set of one-way hash values that form a probabilistic fingerprint. Variants of an image produce nearly identical fingerprints, and fingerprint generation is robust against manipulation. We evaluate Blacklight on 5 state-of-the-art black-box attacks, across a variety of models and classification tasks. While the most efficient attacks take thousands or tens of thousands of queries to complete, Blacklight identifies them all, often after only a handful of queries. Blacklight is also robust against several powerful countermeasures, including an optimal black-box attack that approximates white-box attacks in efficiency. Finally, Blacklight significantly outperforms the only known alternative in both detection coverage of attack queries and resistance against persistent attackers.
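The fingerprinting idea can be sketched in a few lines: quantize the query image, hash pixel windows with a one-way hash, and keep a compact subset of digests, so that near-duplicate queries share most of their fingerprint. The window size, quantization step, and digest count below are illustrative choices, not the paper's settings:

```python
import hashlib
import numpy as np

def fingerprint(img, window=8, keep=32):
    """Compact probabilistic fingerprint in the spirit of Blacklight:
    coarsely quantize the image, hash fixed-size pixel windows with a
    one-way hash, and keep the lexicographically smallest digests."""
    q = (np.asarray(img) // 32).astype(np.uint8).ravel()  # coarse quantization
    digests = {
        hashlib.sha256(q[i:i + window].tobytes()).hexdigest()
        for i in range(0, len(q) - window + 1, window)
    }
    return set(sorted(digests)[:keep])

def similarity(fp1, fp2):
    """Jaccard overlap between two fingerprints."""
    return len(fp1 & fp2) / max(len(fp1 | fp2), 1)
```

A defense along these lines would flag a query whose fingerprint overlaps heavily with a recent one, since independent benign images share essentially no digests while iterative attack queries are small perturbations of each other.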