
    Regularized linear system identification using atomic, nuclear and kernel-based norms: the role of the stability constraint

    Inspired by ideas from the machine learning literature, new regularization techniques have recently been introduced in linear system identification. In particular, all the adopted estimators solve a regularized least squares problem, differing in the nature of the penalty term assigned to the impulse response. Popular choices include atomic and nuclear norms (applied to Hankel matrices) as well as norms induced by the so-called stable spline kernels. In this paper, a comparative study of estimators based on these different types of regularizers is reported. Our findings reveal that stable spline kernels outperform approaches based on atomic and nuclear norms, since they suitably embed information on impulse response stability and smoothness. This point is illustrated using the Bayesian interpretation of regularization. We also design a new class of regularizers defined by "integral" versions of stable spline/TC kernels. Under quite realistic experimental conditions, the new estimators outperform classical prediction error methods even when the latter are equipped with an oracle for model order selection.
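    The regularized least squares estimator described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the TC (tuned/correlated) kernel K[i, j] = alpha^max(i, j) is one of the stable spline kernels mentioned, and the hyperparameters gamma and alpha below are arbitrary placeholder values, not tuned as in the paper.

    ```python
    import numpy as np

    def tc_kernel(n, alpha=0.8):
        # TC kernel: K[i, j] = alpha**max(i, j) encodes decay (stability)
        # and correlation (smoothness) of the impulse response.
        idx = np.arange(1, n + 1)
        return alpha ** np.maximum.outer(idx, idx)

    def estimate_impulse_response(u, y, n, gamma=0.1, alpha=0.8):
        # Toeplitz regression matrix Phi so that y ≈ Phi @ g
        N = len(y)
        Phi = np.zeros((N, n))
        for k in range(n):
            Phi[k:, k] = u[:N - k]
        K = tc_kernel(n, alpha)
        # Regularized LS: g = argmin ||y - Phi g||^2 + gamma * g' K^{-1} g
        return np.linalg.solve(Phi.T @ Phi + gamma * np.linalg.inv(K),
                               Phi.T @ y)

    rng = np.random.default_rng(0)
    g_true = 0.7 ** np.arange(1, 21)        # stable, smooth impulse response
    u = rng.standard_normal(200)
    y = np.convolve(u, g_true)[:200] + 0.05 * rng.standard_normal(200)
    g_hat = estimate_impulse_response(u, y, n=20)
    print(np.linalg.norm(g_hat - g_true) / np.linalg.norm(g_true))
    ```

    Because the penalty g' K^{-1} g shrinks estimates toward decaying, smooth responses, the estimator remains well behaved even when the FIR order n is generously overparameterized.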

    Sub-Nyquist Sampling: Bridging Theory and Practice

    Sampling theory encompasses all aspects of the conversion of continuous-time signals to discrete streams of numbers. The famous Shannon-Nyquist theorem has become a landmark in the development of digital signal processing. In modern applications, an increasing number of functions is being pushed to sophisticated software algorithms, leaving only delicate, finely-tuned tasks for the circuit level. In this paper, we review sampling strategies that target reduction of the ADC rate below Nyquist. Our survey covers classic works from the early 1950s through recent publications from the past several years. The prime focus is bridging theory and practice, that is, pinpointing the potential of sub-Nyquist strategies to make their way from the math to the hardware. In that spirit, we integrate contemporary theoretical viewpoints, which study signal modeling in a union of subspaces, with a taste of practical aspects, namely how the avant-garde modalities boil down to concrete signal processing systems. Our hope is that this presentation style will attract the interest of both researchers and engineers, promoting the sub-Nyquist premise into practical applications and encouraging further research into this exciting new frontier.
    Comment: 48 pages, 18 figures, to appear in IEEE Signal Processing Magazine
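    The core idea, that prior knowledge of the signal's subspace lets a receiver operate below the Nyquist rate, can be illustrated with the simplest sub-Nyquist scheme: undersampling a signal known to occupy a single high-frequency band. This toy sketch (my own example, not from the survey) shows a 70 Hz complex tone sampled at only 25 Hz; the tone folds to a predictable alias, and knowledge of which alias zone it came from recovers the true frequency.

    ```python
    import numpy as np

    # A 70 Hz complex tone sampled at 25 Hz, far below its 140 Hz Nyquist
    # rate. Undersampling folds it to 70 mod 25 = 20 Hz; since the signal is
    # known a priori to lie in [50, 75) Hz (alias zone k = 2), the true
    # frequency is recovered by adding back k * fs.
    fs, f0 = 25.0, 70.0
    n = np.arange(320)
    x = np.exp(2j * np.pi * f0 * n / fs)     # samples of exp(2j*pi*f0*t) at t = n/fs
    spectrum = np.abs(np.fft.fft(x))
    f_alias = np.fft.fftfreq(len(n), d=1 / fs)[np.argmax(spectrum)]
    f_alias %= fs                            # map to [0, fs)
    k = 2                                    # known alias zone index
    f_recovered = f_alias + k * fs
    print(f_recovered)                       # -> 70.0
    ```

    Union-of-subspaces models generalize this: instead of one known band, the signal lies in one of several candidate subspaces, and the sampling hardware plus recovery algorithm must identify which.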

    Learning for Advanced Motion Control

    Iterative Learning Control (ILC) can achieve perfect tracking performance for mechatronic systems. The aim of this paper is to present an ILC design tutorial for industrial mechatronic systems. First, a preliminary analysis reveals the potential performance improvement of ILC prior to its actual implementation. Second, a frequency domain approach is presented, where fast learning is achieved through noncausal model inversion, and safe and robust learning is achieved by employing a contraction mapping theorem in conjunction with nonparametric frequency response functions. The approach is demonstrated on a desktop printer. Finally, a detailed analysis of industrial motion systems reveals several shortcomings that obstruct the widespread implementation of ILC algorithms. An overview of recently developed algorithms, including extensions using machine learning, aimed at facilitating broad industrial deployment, is outlined.
    Comment: 8 pages, 15 figures, IEEE 16th International Workshop on Advanced Motion Control, 202
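    The trial-to-trial learning mechanism can be sketched in a few lines. This is a minimal illustration under simplifying assumptions of my own (a known FIR plant, a scalar learning gain): the feedforward signal is updated each trial with a noncausally shifted copy of the tracking error, and the update contracts because the learning filter approximately inverts the plant's delay.

    ```python
    import numpy as np

    # Serial ILC on a toy FIR plant: u_{j+1} = u_j + L * e_j, with the
    # learning filter L (here: a gain plus a one-sample advance) chosen so
    # that the trial-to-trial error map is a contraction.
    h = np.array([0.0, 1.0, 0.5])            # plant impulse response (one-sample delay)
    N = 50
    r = np.sin(2 * np.pi * np.arange(N) / N) # reference trajectory

    def plant(u):
        return np.convolve(u, h)[:N]

    u = np.zeros(N)
    for trial in range(30):
        e = r - plant(u)
        # Noncausal update: advance the error one sample to undo the delay
        u += 0.5 * np.roll(e, -1)

    final_error = np.linalg.norm(r - plant(u))
    print(final_error)
    ```

    In the frequency domain the error map is 1 - L(z)G(z); with G(z) = z^-1 + 0.5 z^-2 and L(z) = 0.5 z, its magnitude is |0.5 - 0.25 e^{-jw}| <= 0.75 < 1 at every frequency, which is exactly the contraction-mapping condition the paper checks against measured frequency response functions.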

    Sensitivity study of generalised frequency response functions

    Get PDF
    The dependence and independence of Generalised Frequency Response Functions (GFRFs) on input signal amplitudes are discussed based on parametric modelling.

    Stochastic reaction networks with input processes: Analysis and applications to reporter gene systems

    Stochastic reaction network models are widely utilized in biology and chemistry to describe the probabilistic dynamics of biochemical systems in general, and gene interaction networks in particular. Most often, statistical analysis and inference of these systems are addressed by parametric approaches, where the laws governing exogenous input processes, if present, are themselves fixed in advance. Motivated by reporter gene systems, widely utilized in biology to monitor gene activation at the individual cell level, we address the analysis of reaction networks with state-affine reaction rates and arbitrary input processes. We derive a generalization of the so-called moment equations where the dynamics of the network statistics are expressed as a function of the input process statistics. In stationary conditions, we provide a spectral analysis of the system and elaborate on connections with linear filtering. We then apply the theoretical results to develop a method for the reconstruction of input process statistics, namely the gene activation autocovariance function, from reporter gene population snapshot data, and demonstrate its performance on a simulated case study.
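    The first-order moment equation behind the linear-filtering connection can be sketched concretely. For a birth-death reporter with state-affine rates, the mean obeys d<X>/dt = k(t) - gamma*<X>, i.e. the mean is a first-order low-pass filter of the activation input k(t). The input signal and rate constants below are illustrative placeholders, not values from the paper.

    ```python
    import numpy as np

    # Mean dynamics of a birth-death reporter driven by an input process:
    #   d<X>/dt = k(t) - gamma * <X>
    # The mean is the input k(t) passed through a first-order low-pass
    # filter with cutoff gamma (the degradation rate).
    gamma, dt, T = 0.5, 0.01, 40.0
    t = np.arange(0.0, T, dt)
    k = 2.0 + np.sin(0.2 * t)        # hypothetical gene-activation input
    m = np.zeros_like(t)
    for i in range(1, len(t)):
        m[i] = m[i - 1] + dt * (k[i - 1] - gamma * m[i - 1])   # Euler step
    print(m[-1])
    ```

    After the transient decays, m(t) oscillates around the DC gain k_mean/gamma = 4, with the sinusoidal component attenuated by 1/sqrt(gamma^2 + w^2); inverting this filtering relation, at the level of second-order statistics, is what allows the paper to reconstruct the input autocovariance from snapshot data.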

    Canonical time-frequency, time-scale, and frequency-scale representations of time-varying channels

    Mobile communication channels are often modeled as linear time-varying filters or, equivalently, as time-frequency integral operators with finite support in time and frequency. Such a characterization inherently assumes the signals are narrowband and may not be appropriate for wideband signals. In this paper, time-scale characterizations are examined that are useful for wideband time-varying channels, for which a time-scale integral operator is physically justifiable. A review of these time-frequency and time-scale characterizations is presented. Both the time-frequency and time-scale integral operators have a two-dimensional discrete characterization, which motivates the design of time-frequency or time-scale rake receivers. These receivers have taps for both time and frequency (or time and scale) shifts of the transmitted signal. A general theory of these characterizations, which generates the discrete time-frequency and time-scale models as specific cases, is presented here. The interpretation of these models, namely that they arise from processing assumptions on the transmit and receive waveforms, is discussed. Out of this discussion a third model arises: a frequency-scale continuous channel model with an associated discrete frequency-scale characterization.
    Comment: To appear in Communications in Information and Systems - special issue in honor of Thomas Kailath's seventieth birthday
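    The discrete time-frequency characterization and the rake-receiver idea can be sketched on a toy channel. The tap gains, delays, and Doppler bins below are illustrative choices of my own, not from the paper: the received signal is a sum of delayed, frequency-shifted copies of the transmitted waveform, and a rake-style receiver correlates against each shifted template to recover the tap gains.

    ```python
    import numpy as np

    # Discrete time-frequency (delay-Doppler) channel: y = sum of gains
    # times time- and frequency-shifted copies of the transmitted signal.
    N = 64
    n = np.arange(N)
    x = np.exp(2j * np.pi * 5 * n / N)       # transmitted tone

    def tf_shift(x, delay, doppler_bin):
        # circular delay followed by a Doppler (frequency) shift
        return np.roll(x, delay) * np.exp(2j * np.pi * doppler_bin * n / N)

    taps = [(1.0, 0, 0), (0.5, 3, 2)]        # (gain, delay, Doppler bin)
    y = sum(g * tf_shift(x, d, nu) for g, d, nu in taps)

    # Rake-style receiver: correlate y against each shifted template
    corr = [abs(np.vdot(tf_shift(x, d, nu), y)) / N for _, d, nu in taps]
    print(corr)                              # -> [1.0, 0.5]
    ```

    Each correlator output matches its tap gain because the two shifted templates are orthogonal here; the time-scale (wideband) characterization replaces the frequency shift with a dilation of the waveform.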

    Design and implementation of a multi-modal biometric system for company access control

    This paper is about the design, implementation, and deployment of a multi-modal biometric system to grant access to a company structure and to internal zones in the company itself. Face and iris have been chosen as biometric traits. Face is feasible for non-intrusive checking with minimum cooperation from the subject, while iris supports very accurate recognition at the cost of higher invasiveness. The recognition of the face trait is based on Local Binary Patterns histograms, and Daugman's method is implemented for the analysis of the iris data. The recognition process may require either the acquisition of the user's face only or the serial acquisition of both the user's face and iris, depending on the confidence level of the decision with respect to the set of security levels and requirements, stated in a formal way in the Service Level Agreement at a negotiation phase. The quality of the decision depends on the setting of proper, distinct thresholds in the decision modules for the two biometric traits. Any time the quality of the decision is not good enough, the system activates proper rules, which ask for new acquisitions (and decisions), possibly with different threshold values, resulting in a system whose behaviour is not fixed and predefined but adapts to the actual acquisition context. Rules are formalized as deduction rules and grouped together to represent "response behaviors" according to the previous analysis. Therefore, there are different possible working flows, since the actual response of the recognition process depends on the output of the decision making modules that compose the system. Finally, the deployment phase is described, together with the results from the testing, based on the AT&T Face Database and the UBIRIS database.
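    The serial acquisition flow described above can be sketched as a simple decision rule: face is checked first, and only when the face score falls in an uncertainty band between the two thresholds is an iris acquisition requested. The threshold values and score scale below are illustrative placeholders, not the thresholds negotiated in the paper's Service Level Agreement.

    ```python
    # Serial multi-modal decision sketch: the non-intrusive trait (face) is
    # tried first; the more invasive, more accurate trait (iris) is acquired
    # only when the face decision is uncertain. Thresholds are hypothetical.
    FACE_ACCEPT, FACE_REJECT, IRIS_ACCEPT = 0.80, 0.40, 0.90

    def decide(face_score, iris_score=None):
        if face_score >= FACE_ACCEPT:
            return "accept"
        if face_score < FACE_REJECT:
            return "reject"
        # Uncertain face match: escalate to the iris trait
        if iris_score is None:
            return "acquire_iris"
        return "accept" if iris_score >= IRIS_ACCEPT else "reject"

    print(decide(0.85))          # -> accept
    print(decide(0.60))          # -> acquire_iris
    print(decide(0.60, 0.95))    # -> accept
    ```

    In the paper this logic is expressed as deduction rules grouped into "response behaviors", so thresholds and re-acquisition requests can change with the acquisition context rather than being hard-coded as above.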