
    Learning Adaptive Discriminative Correlation Filters via Temporal Consistency Preserving Spatial Feature Selection for Robust Visual Tracking

    With efficient appearance learning models, the Discriminative Correlation Filter (DCF) has proven very successful in recent video object tracking benchmarks and competitions. However, the existing DCF paradigm suffers from two major issues, i.e., the spatial boundary effect and temporal filter degradation. To mitigate these challenges, we propose a new DCF-based tracking method. The key innovations of the proposed method are adaptive spatial feature selection and temporal consistency constraints, with which the new tracker enables joint spatial-temporal filter learning in a lower-dimensional discriminative manifold. More specifically, we apply structured spatial sparsity constraints to multi-channel filters. Consequently, the process of learning spatial filters can be approximated by lasso regularisation. To encourage temporal consistency, the filter model is restricted to lie around its historical value and updated locally to preserve the global structure in the manifold. Finally, a unified optimisation framework is proposed to jointly select temporal-consistency-preserving spatial features and learn discriminative filters with the augmented Lagrangian method. Qualitative and quantitative evaluations have been conducted on a number of well-known benchmark datasets, including OTB2013, OTB50, OTB100, Temple-Colour, UAV123 and VOT2018. The experimental results demonstrate the superiority of the proposed method over state-of-the-art approaches.
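    In its simplest form, the lasso-regularised filter learning mentioned above reduces to a soft-thresholding (proximal) update on the filter weights, which is what drives irrelevant spatial features to exactly zero. A minimal illustrative sketch of one ISTA-style step, not the paper's augmented-Lagrangian solver; all names are hypothetical:

```python
import numpy as np

def soft_threshold(w, lam):
    # Proximal operator of the l1 norm: shrinks entries toward zero,
    # setting small ones exactly to zero (the source of sparsity).
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

def lasso_filter_step(w, X, y, lam, lr):
    # One proximal-gradient (ISTA) step for
    #   min_w ||X w - y||^2 + lam * ||w||_1
    # where w stands in for a vectorised filter and X for the data matrix.
    grad = 2.0 * X.T @ (X @ w - y)
    return soft_threshold(w - lr * grad, lr * lam)
```

    Iterating such a step zeroes out coefficients whose contribution does not justify the penalty, which is the spatial feature-selection effect the abstract describes.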

    Effects of Multirate Systems on the Statistical Properties of Random Signals

    In multirate digital signal processing, we often encounter time-varying linear systems such as decimators, interpolators, and modulators. In many applications, these building blocks are interconnected with linear filters to form more complicated systems. It is often necessary to understand the way in which the statistical behavior of a signal changes as it passes through such systems. While some issues in this context have an obvious answer, the analysis becomes more involved with complicated interconnections. For example, consider this question: if we pass a cyclostationary signal with period K through a fractional sampling rate-changing device (implemented with an interpolator, a nonideal low-pass filter, and a decimator), what can we say about the statistical properties of the output? How does the behavior change if the filter is replaced by an ideal low-pass filter? In this paper, we answer questions of this nature. As an application, we consider a new adaptive filtering structure, which is well suited for the identification of band-limited channels. This structure exploits the band-limited nature of the channel, and embeds the adaptive filter into a multirate system. The advantages are that the adaptive filter has a smaller length, and the adaptation as well as the filtering are performed at a lower rate. Using the theory developed in this paper, we show that a matrix adaptive filter (with dimension determined by the decimator and interpolator) gives better performance, in terms of lower error energy at convergence, than a traditional adaptive filter. Even though matrix adaptive filters are, in general, computationally more expensive, they offer a performance bound that can be used as a yardstick to judge more practical "scalar multirate adaptation" schemes.
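    The fractional rate-changing device described above (interpolator, non-ideal low-pass filter, decimator) can be sketched generically as follows; this is a textbook L/M rate changer, not the paper's specific construction, and the FIR filter h is left to the caller:

```python
import numpy as np

def fractional_rate_change(x, L, M, h):
    # Change the sampling rate of x by the rational factor L/M:
    # zero-stuff by L, apply the low-pass FIR filter h, keep every M-th sample.
    # A proper anti-imaging/anti-aliasing h has gain L and cutoff min(pi/L, pi/M).
    up = np.zeros(len(x) * L)
    up[::L] = x                                # interpolator (expander)
    filtered = np.convolve(up, h)[:len(up)]    # non-ideal low-pass filter
    return filtered[::M]                       # decimator
```

    Whether the output of such a chain is stationary or cyclostationary, and with what period, is exactly the kind of question the paper analyses.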

    Maximum-likelihood estimation of delta-domain model parameters from noisy output signals

    Fast sampling is desirable for describing signal transmission through wide-bandwidth systems. The delta operator provides an ideal discrete-time modeling description for such fast-sampled systems. However, estimates of delta-domain model parameters are usually biased when the delta-transformations are applied directly to a sampled signal corrupted by additive measurement noise. This problem is solved here by expectation-maximization, where the delta-transformations of the true signal are estimated and then used to obtain the model parameters. A numerical example demonstrates that the method improves on the accuracy of a shift-operator approach when the sample rate is fast.
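    As a sketch of the delta operator itself (the standard definition, not the paper's EM estimator): δx[k] = (x[k+1] − x[k])/Δ, whose parameters stay near their continuous-time counterparts at fast sampling, unlike shift-operator poles, which crowd toward 1:

```python
import numpy as np

def delta_op(x, dt):
    # Delta operator (q - 1)/dt applied to a sampled signal:
    # a divided difference that approaches the derivative as dt -> 0.
    return np.diff(x) / dt

# Why the delta domain is better conditioned at fast sampling:
# a continuous-time pole a maps to exp(a*dt) in the shift domain
# (crowding toward 1) but to (exp(a*dt) - 1)/dt in the delta domain
# (staying near a itself).
a, dt = -2.0, 1e-4
z_pole = np.exp(a * dt)        # close to 1: numerically delicate
d_pole = (z_pole - 1.0) / dt   # close to a = -2: well conditioned
```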

    A kepstrum approach to filtering, smoothing and prediction

    The kepstrum (or complex cepstrum) method is revisited and applied to the problem of spectral factorization, where the spectrum is directly estimated from observations. The solution to this problem in turn leads to a new approach to optimal filtering, smoothing and prediction using the Wiener theory. Unlike previous approaches to adaptive and self-tuning filtering, the technique, when implemented, does not require a priori information on the type or order of the signal-generating model. And unlike other approaches - with the exception of spectral subtraction - no state-space or polynomial model is necessary. In this first paper, results are restricted to stationary signals and additive white noise.
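    The cepstral route to spectral factorization works roughly as follows: take the log of the estimated spectrum, move to the cepstral domain, keep the causal half, transform back, and exponentiate to obtain the minimum-phase factor. A minimal illustrative sketch (not the paper's implementation), assuming a strictly positive power spectrum sampled at an even number of uniform frequency points:

```python
import numpy as np

def min_phase_factor(S):
    # Minimum-phase spectral factor H with |H|^2 = S, via the cepstrum:
    # the cepstrum of log S is split so that its causal half (with the
    # n = 0 and Nyquist terms halved) gives the complex cepstrum of the
    # minimum-phase factor, which is recovered by FFT and exponentiation.
    N = len(S)
    c = np.fft.ifft(np.log(S)).real   # cepstrum of the spectrum (real, even)
    fold = np.zeros(N)
    fold[0] = 0.5
    fold[1:N // 2] = 1.0
    fold[N // 2] = 0.5
    return np.exp(np.fft.fft(c * fold))
```

    Because no model order enters anywhere, only the estimated spectrum itself, this matches the abstract's point that no a priori state-space or polynomial model is needed.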

    Bibliographic Review on Distributed Kalman Filtering

    In recent years, a compelling need has arisen to understand the effects of distributed information structures on estimation and filtering. In this paper, a bibliographical review on distributed Kalman filtering (DKF) is provided. The paper contains a classification of the different approaches and methods involved in DKF. The applications of DKF are also discussed and explained separately. A comparison of different approaches is briefly carried out, and contemporary research directions are addressed with emphasis on practical applications of the techniques. An exhaustive list of publications, linked directly or indirectly to DKF in the open literature, is compiled to provide an overall picture of the different developing aspects of this area.

    Sparse Iterative Learning Control with Application to a Wafer Stage: Achieving Performance, Resource Efficiency, and Task Flexibility

    Trial-varying disturbances are a key concern in Iterative Learning Control (ILC) and may lead to inefficient and expensive implementations and severe performance deterioration. The aim of this paper is to develop a general framework for optimization-based ILC that allows for enforcing additional structure, including sparsity. The proposed method enforces sparsity in a generalized setting through convex relaxations using ℓ1 norms. The proposed ILC framework is applied to the optimization of sampling sequences for resource-efficient implementation, trial-varying disturbance attenuation, and basis function selection. The framework has a large potential in control applications such as mechatronics, as is confirmed through an application on a wafer stage.
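    In its simplest form, an ℓ1-based convex relaxation of this kind leads to soft-thresholding iterations on the feedforward input, so that many samples come out exactly zero and need not be actuated or stored. A minimal ISTA-style sketch for a lifted plant matrix J (illustrative only; J, r and all names are assumptions, not the paper's framework):

```python
import numpy as np

def soft(v, t):
    # Soft-thresholding: proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_ilc_input(J, r, lam, iters=200):
    # ISTA for  min_u 0.5 * ||r - J u||^2 + lam * ||u||_1 :
    # a sparse feedforward input u reproducing reference r through the
    # lifted plant matrix J (one row/column per sample in the trial).
    L = np.linalg.norm(J, 2) ** 2    # Lipschitz constant of the gradient
    u = np.zeros(J.shape[1])
    for _ in range(iters):
        grad = J.T @ (J @ u - r)
        u = soft(u - grad / L, lam / L)
    return u
```

    Raising lam trades tracking performance for sparser (cheaper) inputs, which is the performance/resource-efficiency trade-off the abstract highlights.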
