
    Graph Signal Processing: Overview, Challenges and Applications

    Research in Graph Signal Processing (GSP) aims to develop tools for processing data defined on irregular graph domains. In this paper we first provide an overview of core ideas in GSP and their connection to conventional digital signal processing. We then summarize recent progress on basic GSP tools, including methods for sampling, filtering, and graph learning. Next, we review advances in several application areas of GSP, including the processing and analysis of sensor network data and biological data, and applications to image processing and machine learning. We finish with a brief historical perspective highlighting how concepts recently developed in GSP build on prior research in other areas.
    Comment: To appear, Proceedings of the IEEE
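    As a minimal illustration of the graph Fourier transform that underlies many GSP tools (the graph, signal, and filter response below are made-up toy examples, not taken from the paper), a low-pass graph filter can be applied in the Laplacian eigenbasis:

```python
import numpy as np

# Toy undirected graph on 4 nodes: weighted adjacency W and combinatorial Laplacian L = D - W
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(W.sum(axis=1)) - W

# Graph Fourier basis: eigenvectors of L, ordered by eigenvalue (graph frequency)
lam, U = np.linalg.eigh(L)

x = np.array([1.0, 0.8, 0.9, -1.0])   # graph signal, one value per node
x_hat = U.T @ x                        # graph Fourier transform
h = 1.0 / (1.0 + 2.0 * lam)            # low-pass spectral response h(lambda)
y = U @ (h * x_hat)                    # filtered signal back in the vertex domain
print(y)
```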

    Identification of Nonlinear Systems Using the Hammerstein-Wiener Model with Improved Orthogonal Functions

    Hammerstein-Wiener systems have a structure consisting of three blocks in a series cascade. Two are static nonlinearities, which can be described by nonlinear functions. The third block is a linear dynamic component placed between the first two. Common linear model structures include the rational-type transfer function, orthogonal rational functions (ORF), finite impulse response (FIR), autoregressive with extra input (ARX), autoregressive moving average with exogenous inputs (ARMAX), and output-error (O-E) model structures. This paper proposes a new structure and a new improvement, which combines the basic structure of Hammerstein-Wiener models with an improved orthogonal function of Müntz-Legendre type. We present an extension of generalised Malmquist polynomials that represent Müntz polynomials. A detailed mathematical background for constructing improved almost orthogonal polynomials, in combination with Hammerstein-Wiener models, is also given. The proposed approach is used to identify a strongly nonlinear hydraulic system via its transfer function. For comparison, well-known orthogonal functions of the Legendre, Chebyshev, and Laguerre types are also used
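    As a rough sketch of the cascade structure described above (the nonlinearities, FIR coefficients, and test input are illustrative assumptions, not the identified hydraulic model), a Hammerstein-Wiener model can be simulated as static nonlinearity, then linear dynamics, then static nonlinearity:

```python
import numpy as np

def hammerstein_wiener(u, f_in, fir_coeffs, g_out):
    """Simulate a Hammerstein-Wiener cascade:
    input nonlinearity f_in -> FIR linear dynamic block -> output nonlinearity g_out."""
    v = f_in(u)                               # static input nonlinearity
    w = np.convolve(v, fir_coeffs)[:len(u)]   # linear dynamic block (FIR)
    return g_out(w)                           # static output nonlinearity

# Illustrative choices (assumptions, not the paper's identified blocks)
f_in  = lambda u: np.tanh(2.0 * u)            # saturation-like input nonlinearity
g_out = lambda w: w + 0.1 * w**3              # mild polynomial output nonlinearity
fir   = np.array([0.5, 0.3, 0.15, 0.05])      # impulse response of the linear block

u = np.sin(np.linspace(0.0, 10.0, 200))       # test input signal
y = hammerstein_wiener(u, f_in, fir, g_out)
print(y[:5])
```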

    Accelerating moderately stiff chemical kinetics in reactive-flow simulations using GPUs

    The chemical kinetics ODEs arising from operator-split reactive-flow simulations were solved on GPUs using explicit integration algorithms. Nonstiff chemical kinetics of a hydrogen oxidation mechanism (9 species and 38 irreversible reactions) were computed using the explicit fifth-order Runge-Kutta-Cash-Karp method, and the GPU-accelerated version ran faster than single- and six-core CPU versions by factors of 126 and 25, respectively, for 524,288 ODEs. Moderately stiff kinetics, represented by mechanisms for hydrogen/carbon-monoxide (13 species and 54 irreversible reactions) and methane (53 species and 634 irreversible reactions) oxidation, were computed using the stabilized explicit second-order Runge-Kutta-Chebyshev (RKC) algorithm. With the hydrogen/carbon-monoxide mechanism, the GPU-based RKC implementation ran nearly 59 and 10 times faster than the single- and six-core CPU-based RKC implementations, respectively, for problem sizes of 262,144 ODEs and larger. With the methane mechanism, RKC-GPU performed more than 65 and 11 times faster than the single- and six-core RKC-CPU versions for problem sizes of 131,072 ODEs and larger, and up to 57 times faster than the six-core CPU-based implicit VODE algorithm on 65,536 ODEs. In the presence of more severe stiffness, such as ethylene oxidation (111 species and 1566 irreversible reactions), RKC-GPU ran more than 17 times faster than RKC-CPU on six cores for 32,768 ODEs and larger, and at best 4.5 times faster than VODE on six CPU cores for 65,536 ODEs. With a larger time step size, RKC-GPU was at best 2.5 times slower than six-core VODE for 8192 ODEs and larger. The need for new strategies for integrating stiff chemistry on GPUs is therefore discussed.
    Comment: 27 pages, LaTeX; corrected typos in Appendix equations A.10 and A.1
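    The parallelism exploited here comes from operator splitting: each spatial cell yields an independent kinetics ODE system, so all cells can be stepped simultaneously. Below is a minimal vectorized sketch of that idea in NumPy, using a classical fourth-order Runge-Kutta step and a made-up two-species rate law in place of the paper's RKCK/RKC integrators and real reaction mechanisms:

```python
import numpy as np

def rhs(y):
    """Toy two-species rate law dy/dt = f(y), evaluated for all cells at once.
    y has shape (n_cells, 2); this is an illustrative stand-in for a real mechanism."""
    a, b = y[:, 0], y[:, 1]
    return np.stack([-2.0 * a * b, 2.0 * a * b - 0.5 * b], axis=1)

def rk4_step(y, dt):
    """One classical RK4 step applied to every cell's ODE system in parallel
    (the paper uses fifth-order Runge-Kutta-Cash-Karp and second-order RKC instead)."""
    k1 = rhs(y)
    k2 = rhs(y + 0.5 * dt * k1)
    k3 = rhs(y + 0.5 * dt * k2)
    k4 = rhs(y + dt * k3)
    return y + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

n_cells = 524_288                        # one independent ODE system per cell
y = np.tile([1.0, 0.1], (n_cells, 1))    # initial state in every cell
for _ in range(100):                     # integrate over the split chemistry substep
    y = rk4_step(y, dt=1e-3)
print(y[0])
```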

    Large Scale Constrained Trajectory Optimization Using Indirect Methods

    State-of-the-art direct and indirect methods face significant challenges when solving large-scale constrained trajectory optimization problems. Two main challenges when using indirect methods to solve such problems are the difficulty of handling path inequality constraints and the exponential increase in computation time as the number of states and constraints in the problem grows; the latter affects both direct and indirect methods.
    A methodology called the Integrated Control Regularization Method (ICRM) is developed for incorporating path constraints into optimal control problems when using indirect methods. ICRM removes the need for multiple constrained and unconstrained arcs and converts constrained optimal control problems into two-point boundary value problems (TPBVPs). Furthermore, it addresses the issue of transcendental control law equations by re-formulating the problem so that it can be solved by existing numerical TPBVP solvers. The capabilities of ICRM are demonstrated by using it to solve representative constrained trajectory optimization problems as well as a five-vehicle problem with path constraints. Regularizing path constraints using ICRM is a first step towards obtaining high-quality solutions for highly constrained trajectory optimization problems that would generally be considered practically impossible to solve with indirect or direct methods.
    The Quasilinear Chebyshev Picard Iteration (QCPI) method builds on prior work and combines Chebyshev polynomial series and Picard iteration with the Modified Quasi-linearization Algorithm. The method is developed specifically to exploit parallel computational resources for solving large TPBVPs. Its capabilities are validated by solving representative nonlinear optimal control problems, and its performance is benchmarked against single-shooting and parallel-shooting methods on a multi-vehicle optimal control problem. The results demonstrate that QCPI can leverage parallel computing architectures and would benefit greatly from implementation on highly parallel hardware such as GPUs.
    The capabilities of ICRM and QCPI are explored further using a five-vehicle constrained optimal control problem. The scenario models a co-operative, simultaneous engagement of two targets by five vehicles and involves 3DOF dynamic models, control constraints for each vehicle, and a no-fly-zone path constraint. Trade studies are conducted by varying problem parameters to demonstrate smooth transitions between constrained and unconstrained arcs, transitions that would be highly impractical to study using existing indirect methods. The study demonstrates the capabilities of ICRM and QCPI for solving large-scale trajectory optimization problems.
    An open-source indirect trajectory optimization framework is developed with the goal of being a viable contender to state-of-the-art direct solvers such as GPOPS and DIDO. The framework, named beluga, leverages ICRM and QCPI along with traditional indirect optimal control theory. In its current form, as illustrated by the various examples in this dissertation, it has made significant advances in automating the use of indirect methods for trajectory optimization. Following the path of popular and widely used scientific software projects such as SciPy [1] and NumPy [2], beluga is released under the permissive MIT license [3]. Being an open-source project allows the community to contribute freely to the framework, further expanding its capabilities and allowing faster integration of new advances into the state of the art
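    To give a concrete flavor of the indirect approach that TPBVP solvers like QCPI target (this is a generic textbook example solved with SciPy, not beluga's API or the dissertation's vehicle problems), the minimum-energy double-integrator problem reduces, via the costate equations and the control law u = -lam2, to a two-point boundary value problem:

```python
import numpy as np
from scipy.integrate import solve_bvp

# Indirect formulation of: minimize 0.5 * integral of u^2 dt, with x1' = x2, x2' = u,
# x(0) = (0, 0), x(1) = (1, 0). Pontryagin gives u = -lam2, lam1' = 0, lam2' = -lam1.
def odes(t, y):
    x1, x2, lam1, lam2 = y
    u = -lam2
    return np.vstack([x2, u, np.zeros_like(lam1), -lam1])

def bc(ya, yb):
    # Boundary conditions on the states only; the costates are free at both ends.
    return np.array([ya[0] - 0.0, ya[1] - 0.0, yb[0] - 1.0, yb[1] - 0.0])

t = np.linspace(0.0, 1.0, 50)
y0 = np.zeros((4, t.size))               # initial guess for states and costates
sol = solve_bvp(odes, bc, t, y0)
print(sol.status, sol.y[0, -1])          # status 0 means converged; x1(1) should be ~1
```

    Methods such as ICRM aim to keep highly constrained problems in this same single-arc TPBVP form, while QCPI replaces the shooting-style solver with a parallelizable Chebyshev-Picard iteration.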

    Spectrum analysis of LTI continuous-time systems with constant delays: A literature overview of some recent results

    In recent decades, increasingly intensive research attention has been given to dynamical systems containing delays and those affected by the after-effect phenomenon. Such research covers a wide range of human activities, and the solutions of related engineering problems often require interdisciplinary cooperation. Knowledge of the spectrum of these so-called time-delay systems (TDSs) is crucial for the analysis of their dynamical properties, especially stability, periodicity, and damping. A great many mathematical methods and techniques for analyzing the spectrum of TDSs have been developed and applied in recent years. Although a broad family of nonlinear, stochastic, sampled-data, time-variant or time-varying-delay systems has been considered, the study of the most fundamental continuous linear time-invariant (LTI) TDSs with fixed delays remains the dominant research direction, with ever-increasing new results and novel applications. This paper is primarily aimed at a systematic literature overview of recent (mostly published between 2013 and 2017) advances regarding spectrum analysis of LTI-TDSs. Specifically, a total of 137 collected articles, those most closely related to the research area, are reviewed. The review has two main objectives: first, to provide the reader with a detailed literature survey of selected recent results on the topic, and second, to suggest possible future research directions to be tackled by scientists and engineers in the field. © 2013 IEEE. This work was supported by the European Regional Development Fund through the CEBIA-Tech Instrumentation project [CZ.1.05/2.1.00/19.0376] and by the National Sustainability Programme project [LO1303 (MSMT-7778/2014)]
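    As a minimal illustration of what computing the spectrum of an LTI-TDS means (a generic scalar example with assumed parameter values, not drawn from the surveyed papers), the characteristic roots of x'(t) = a*x(t) + b*x(t - tau) satisfy s = a + b*exp(-s*tau), which can be solved branch by branch with the Lambert W function:

```python
import numpy as np
from scipy.special import lambertw

# Scalar LTI time-delay system x'(t) = a*x(t) + b*x(t - tau).
# Characteristic equation: s - a - b*exp(-s*tau) = 0, solved per branch k by
# s_k = a + W_k(b*tau*exp(-a*tau)) / tau, where W_k is the Lambert W function.
a, b, tau = -1.0, -0.5, 1.0   # illustrative parameter values (assumptions)

arg = b * tau * np.exp(-a * tau)
roots = [a + lambertw(arg, k) / tau for k in range(-3, 4)]
rightmost = max(roots, key=lambda s: s.real)

print("rightmost characteristic root:", rightmost)
print("exponentially stable:", rightmost.real < 0)

# Sanity check: the root should satisfy the characteristic equation.
s = rightmost
print("residual:", abs(s - a - b * np.exp(-s * tau)))
```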