    Dynamic mode decomposition in vector-valued reproducing kernel Hilbert spaces for extracting dynamical structure among observables

    Understanding nonlinear dynamical systems (NLDSs) is challenging in a variety of engineering and scientific fields. Dynamic mode decomposition (DMD), a numerical algorithm for the spectral analysis of Koopman operators, has been attracting attention as a way of obtaining global modal descriptions of NLDSs without requiring explicit prior knowledge. However, since existing DMD algorithms are in principle formulated on concatenations of scalar observables, they are not directly applicable to data with dependent structures among observables, which take, for example, the form of a sequence of graphs. In this paper, we formulate Koopman spectral analysis for NLDSs with structures among observables and propose an estimation algorithm for this problem. The method can extract and visualize the underlying low-dimensional global dynamics of such NLDSs from data, which is useful for understanding their behavior. To this end, we first formulate the problem of estimating spectra of the Koopman operator defined in vector-valued reproducing kernel Hilbert spaces, and then develop an estimation procedure for this problem by reformulating tensor-based DMD. As a special case of our method, we propose Graph DMD, a numerical algorithm for Koopman spectral analysis of graph dynamical systems that uses a sequence of adjacency matrices. We investigate the empirical performance of our method using synthetic and real-world data.
    Comment: 34 pages with 4 figures, Published in Neural Networks, 201
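
    As background for the abstract above, the following is a minimal sketch of standard (exact) DMD on stacked scalar observables, assuming snapshot matrices X and Y with Y[:, k] being the successor of X[:, k]; it is not the vector-valued RKHS or Graph DMD formulation proposed in the paper, and the function name and rank parameter are illustrative.

    import numpy as np

    def exact_dmd(X, Y, r):
        # Truncated SVD of the first snapshot matrix
        U, s, Vh = np.linalg.svd(X, full_matrices=False)
        U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
        # Reduced linear operator approximating Y ~= A X
        A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
        # Approximate Koopman eigenvalues and eigenvectors
        eigvals, W = np.linalg.eig(A_tilde)
        # DMD modes lifted back to the space of observables
        modes = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W
        return eigvals, modes

    # Toy usage: snapshots of a stable linear system x_{k+1} = A x_k
    rng = np.random.default_rng(0)
    A = np.array([[0.9, -0.2], [0.2, 0.9]])
    x, snaps = rng.normal(size=2), []
    for _ in range(50):
        x = A @ x
        snaps.append(x)
    Z = np.array(snaps).T
    eigvals, modes = exact_dmd(Z[:, :-1], Z[:, 1:], r=2)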

    LMI based Stability and Stabilization of Second-order Linear Repetitive Processes

    This paper develops new results on the stability and control of a class of linear repetitive processes described by a second-order matrix discrete or differential equation. These are obtained by transforming the second-order dynamics into an equivalent first-order descriptor state-space model, thus avoiding the need to invert a possibly ill-conditioned leading coefficient matrix in the original model.
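
    To illustrate the kind of transformation referred to above (a sketch of the standard construction, not necessarily the paper's exact formulation), a second-order discrete-time dynamic with leading coefficient matrix A_2 can be lifted to first-order descriptor form without inverting A_2:

    \[
    A_2 x_{k+2} + A_1 x_{k+1} + A_0 x_k = B u_k,
    \qquad
    z_k = \begin{bmatrix} x_k \\ x_{k+1} \end{bmatrix},
    \]
    \[
    \underbrace{\begin{bmatrix} I & 0 \\ 0 & A_2 \end{bmatrix}}_{E} z_{k+1}
    = \underbrace{\begin{bmatrix} 0 & I \\ -A_0 & -A_1 \end{bmatrix}}_{A} z_k
    + \underbrace{\begin{bmatrix} 0 \\ B \end{bmatrix}}_{\bar B} u_k .
    \]

    The possibly ill-conditioned matrix A_2 stays inside E on the left-hand side, so stability and stabilization conditions can be posed for the descriptor pair (E, A) without ever inverting it.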

    Towards Developing a Travel Time Forecasting Model for Location-Based Services: a Review

    Travel time forecasting models have been studied intensively as a subject of Intelligent Transportation Systems (ITS), particularly within advanced traffic management systems (ATMS), advanced traveler information systems (ATIS), and commercial vehicle operations (CVO). While the concept of travel time forecasting is relatively simple, implementing even a simple model is a notably complicated task. As a result, existing forecasting models are diverse in their original formulations, including mathematical optimization, computer simulation, statistics, and artificial intelligence. A comprehensive literature review would therefore assist in formulating a more reliable travel time forecasting model. Geographic information systems (GIS) technologies, on the other hand, primarily provide capabilities for spatial and network database management as well as technology management. GIS could thus support travel time forecasting in various ways by providing useful functions both to the managers in transportation management and information centers (TMICs) and to external users. In developing a travel time forecasting model, GIS could play important roles in managing real-time and historical traffic data, integrating multiple subsystems, and assisting information management. The purpose of this paper is to review the models and technologies that have been used for developing a travel time forecasting model with GIS technologies. The forecasting models reviewed in this paper include historical profile approaches, time series models, nonparametric regression models, traffic simulations, dynamic traffic assignment models, and neural networks. The potential roles and functions of GIS in travel time forecasting are also discussed.
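
    As a concrete illustration of the simplest family reviewed above, the following is a toy historical-profile forecast blended with recent real-time observations; it is not a model proposed in the paper, and the function name, window length, and weighting are purely illustrative.

    import numpy as np

    def forecast_travel_time(history, recent, alpha=0.6):
        # history: past travel times (minutes) for this link and time-of-day slot
        # recent:  most recent real-time travel times observed on the link
        # alpha:   weight given to the real-time component (illustrative value)
        profile = np.mean(history)          # historical-profile estimate
        realtime = np.mean(recent[-3:])     # short-horizon real-time estimate
        return alpha * realtime + (1 - alpha) * profile

    # Example: one link, 8:00-8:15 weekday slot
    hist = np.array([12.0, 13.5, 11.8, 12.6, 14.1])
    live = np.array([15.2, 15.8, 16.1])
    print(forecast_travel_time(hist, live))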

    Land-Cover and Land-Use Study Using Genetic Algorithms, Petri Nets, and Cellular Automata

    Recent research techniques such as genetic algorithms (GA), Petri nets (PN), and cellular automata (CA) have been applied in a number of studies. However, their capability and performance in land-cover land-use (LCLU) classification, change detection, and predictive modeling are not well understood. This study seeks to address the following questions: 1) How do genetic parameters impact the accuracy of GA-based LCLU classification; 2) How do image parameters impact the accuracy of GA-based LCLU classification; 3) Is GA-based LCLU classification more accurate than the maximum likelihood classifier (MLC), the iterative self-organizing data analysis technique (ISODATA), and the hybrid approach; 4) How do genetic parameters impact Petri net-based LCLU change detection; and 5) How do cellular automata components impact the accuracy of LCLU predictive modeling. The study area, the Tickfaw River watershed (711 mi²), is located in southeast Louisiana and southwest Mississippi. The major datasets include time-series Landsat TM / ETM images and Digital Orthophoto Quarter Quadrangles (DOQQs). LCLU classification was conducted using the GA, MLC, ISODATA, and hybrid approaches. LCLU change was modeled using a genetic PN-based process-mining technique, and the resulting process models were interpreted and input to a CA for predicting future LCLU (see the sketch below). The major findings include: 1) GA-based LCLU classification is more accurate than the traditional approaches; 2) when genetic parameters, image parameters, or CA components are configured improperly, the accuracy of LCLU classification, the coverage of the LCLU change process model, and/or the accuracy of LCLU predictive modeling will be low; 3) for GA-based LCLU classification, the recommended configuration of genetic / image parameters is generation 2000-5000, population 1000, crossover rate 69%-99%, mutation rate 0.1%-0.5%, generation gap 25%-50%, data layers 16-20, training / testing data size 10000-20000 / 5000-10000, and spatial resolution 30m-60m; 4) for genetic Petri net-based LCLU change detection, the recommended configuration of genetic parameters is generation 500, population 300, crossover rate 59%, mutation rate 5%, and elitism rate 4%; and 5) for CA-based LCLU predictive modeling, the recommended configuration of CA components is space 6025 * 12993, state 2, von Neumann neighborhood 3 * 3, time step 2-3 years, and optimized transition rules.
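
    As a minimal illustration of the CA component described above, the sketch below applies a simple neighbour-count conversion rule over a von Neumann neighbourhood; it does not reproduce the study's optimized, process-model-derived transition rules, and the grid size, states, and threshold are illustrative.

    import numpy as np

    def ca_step(grid, threshold=3):
        # grid: 2-D array of land-cover states (0 = non-urban, 1 = urban)
        # A non-urban cell converts to urban when at least `threshold` of its
        # four von Neumann neighbours are already urban (toy rule only).
        padded = np.pad(grid, 1, mode="edge")
        neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                 padded[1:-1, :-2] + padded[1:-1, 2:])
        return np.where((grid == 0) & (neigh >= threshold), 1, grid)

    rng = np.random.default_rng(1)
    lc = (rng.random((50, 50)) < 0.2).astype(int)   # initial land-cover map
    for _ in range(3):                              # e.g. three 2-3 year time steps
        lc = ca_step(lc)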

    Data-driven action-value functions for evaluating players in professional team sports

    As more and larger event stream datasets for professional sports become available, there is growing interest in modeling the complex play dynamics to evaluate player performance. Among these models, a common player evaluation method is assigning values to player actions. Traditional action-value metrics, however, consider very limited game context and player information. Furthermore, they typically value only actions directly related to goals (e.g., shots), not all actions. Recent work has shown that reinforcement learning provides powerful methods for quantifying the value of player actions in sports. This dissertation develops deep reinforcement learning (DRL) methods for estimating action values in sports. We make several contributions to DRL for sports. First, we develop neural network architectures that learn an action-value Q-function from sports event logs to estimate each team's expected success given the current match context. Specifically, our architecture models the game history with a recurrent network and predicts the probability that a team scores the next goal. From the learned Q-values, we derive a Goal Impact Metric (GIM) for evaluating a player's performance over a game season. We show that the resulting player rankings are consistent with standard player metrics and temporally consistent within and across seasons. Second, we address the interpretability of the learned Q-values. While neural networks provide accurate estimates, their black-box structure prevents understanding the influence of different game features on the action values. To interpret the Q-function and understand this influence, we design an interpretable mimic learning framework for the DRL model. The framework is based on a Linear Model U-Tree (LMUT) as a transparent mimic model, which facilitates extracting rules from the function and computing feature importance for the action values. Third, we incorporate information about specific players into the action values by introducing a deep player representation framework. In this framework, each player is assigned a latent feature vector called an embedding, with the property that statistically similar players are mapped to nearby embeddings. To compute embeddings that summarize the statistical information about players, we implement a Variational Recurrent Ladder Agent Encoder (VaRLAE) to learn a contextualized representation of when and how players are likely to act. We learn and evaluate deep Q-functions from event data for both ice hockey and soccer. These are challenging continuous-flow games in which game context and medium-term consequences are crucial for properly assessing the impact of a player's actions.
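
    As an illustration of the first contribution above, the sketch below shows a recurrent network that maps an event-history window to next-goal probabilities; the class name, feature sizes, and layer choices are invented for the example rather than taken from the dissertation.

    import torch
    import torch.nn as nn

    class RecurrentQNet(nn.Module):
        # Sketch of a recurrent action-value model for continuous-flow sports.
        # The LSTM encodes the event history; the head outputs values read as
        # probabilities that the home team, away team, or neither scores next.
        def __init__(self, n_features, hidden=128, n_outcomes=3):
            super().__init__()
            self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_outcomes)

        def forward(self, events):
            # events: (batch, seq_len, n_features) play-by-play context features
            out, _ = self.encoder(events)
            return torch.softmax(self.head(out[:, -1]), dim=-1)

    # Toy usage: a batch of 8 sequences of 20 events with 40 context features each
    model = RecurrentQNet(n_features=40)
    q = model(torch.randn(8, 20, 40))   # q[:, 0] ~ P(home team scores next), etc.

    Under this reading, the impact of a single action could be scored as the change in the relevant output between consecutive events, which is the kind of quantity the Goal Impact Metric aggregates over a player's actions across a season.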