    Time and frequency domain analysis of sampled data controllers via mixed operation equations

    Specification of the mathematical equations required to define the dynamic response of a linear continuous plant, subject to sampled data control, is complicated by the fact that the digital components of the control system cannot be modeled via linear ordinary differential equations. This complication can be overcome by introducing two new mathematical operations, namely the operations of zero-order hold and digital delay. It is shown that by direct utilization of these operations, a set of linear mixed operation equations can be written and used to define the dynamic response characteristics of the controlled system. It is also shown how these linear mixed operation equations lead, in an automatable manner, directly to a set of finite difference equations in a format compatible with follow-on time and frequency domain analysis methods.
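The zero-order-hold operation described above can be illustrated with a minimal sketch: the held control input turns the continuous plant dynamics into an exact finite difference equation over each sampling interval. The scalar plant, gain, and sampling period below are illustrative choices, not values from the paper.

```python
import numpy as np

# Sketch of a zero-order-hold (ZOH) sampled-data loop for a scalar plant
# dx/dt = a*x + b*u, with digital proportional control u[k] = -K*x(kT).
# The constants a, b, K, T are illustrative, not from the paper.
a, b, K, T = -1.0, 1.0, 0.5, 0.1

# Exact discretization under ZOH (u constant over each interval):
# x[k+1] = Ad*x[k] + Bd*u[k]
Ad = np.exp(a * T)
Bd = (Ad - 1.0) / a * b

x = 1.0
for k in range(100):
    u = -K * x           # controller output, held constant over [kT, (k+1)T)
    x = Ad * x + Bd * u  # plant response to the held input

# Closed-loop factor |Ad - Bd*K| < 1, so the state decays toward 0
assert abs(x) < 1e-2
```

The finite difference form `x[k+1] = Ad*x[k] + Bd*u[k]` is exactly the kind of equation the abstract says the mixed operation formulation produces for follow-on analysis.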

    Long-Range Dependence in Financial Markets: a Moving Average Cluster Entropy Approach

    A perspective is taken on the intangible complexity of economic and social systems by investigating the underlying dynamical processes that produce, store and transmit information in financial time series, in terms of the moving average cluster entropy. An extensive analysis has evidenced market and horizon dependence of the moving average cluster entropy in real-world financial assets. The origin of this behavior is scrutinized by applying the moving average cluster entropy approach to long-range correlated stochastic processes such as the Autoregressive Fractionally Integrated Moving Average (ARFIMA) and Fractional Brownian Motion (FBM). To that end, an extensive set of series is generated with a broad range of values of the Hurst exponent H and of the autoregressive, differencing and moving average parameters p, d, q. A systematic relation between the moving average cluster entropy, the Market Dynamic Index and the long-range correlation parameters H and d is observed. This study shows that the characteristic behaviour exhibited by the horizon dependence of the cluster entropy is related to long-range positive correlation in financial markets. Specifically, long-range positively correlated ARFIMA processes with differencing parameter d ≈ 0.05, d ≈ 0.15 and d ≈ 0.25 are consistent with moving average cluster entropy results obtained for time series of the DJIA, S&P500 and NASDAQ.
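A minimal sketch of the cluster-entropy construction referenced above: partition a series into "clusters" (consecutive runs on one side of its moving average) and take the Shannon entropy of the cluster-duration distribution. The window size and the Brownian-like test series are illustrative assumptions, not the paper's data.

```python
import numpy as np

# Minimal sketch of moving-average cluster entropy: split a series into
# runs where it stays on one side of its moving average, then compute
# the Shannon entropy of the run-length (cluster duration) distribution.
rng = np.random.default_rng(0)
series = np.cumsum(rng.standard_normal(10_000))  # Brownian-like path (H ~ 0.5)

n = 50                                           # moving-average window (illustrative)
ma = np.convolve(series, np.ones(n) / n, mode="valid")
sign = np.sign(series[n - 1:] - ma)              # which side of the moving average

# Cluster durations: lengths of consecutive runs with the same sign
change = np.flatnonzero(np.diff(sign) != 0)
durations = np.diff(np.concatenate(([0], change + 1, [sign.size])))

counts = np.bincount(durations)[1:]              # histogram of durations >= 1
p = counts[counts > 0] / counts.sum()
entropy = -np.sum(p * np.log(p))                 # cluster entropy S(n)
```

Repeating this over a range of windows n and horizons, as the paper does for real assets and ARFIMA/FBM surrogates, is what exposes the horizon dependence tied to long-range correlation.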

    Preconditioning Markov Chain Monte Carlo Simulations Using Coarse-Scale Models

    We study the preconditioning of Markov chain Monte Carlo (MCMC) methods using coarse-scale models, with applications to subsurface characterization. The purpose of preconditioning is to reduce the fine-scale computational cost and increase the acceptance rate in the MCMC sampling. This goal is achieved by generating Markov chains based on two-stage computations. In the first stage, a new proposal is tested by the coarse-scale model based on multiscale finite volume methods. The full fine-scale computation is conducted only if the proposal passes the coarse-scale screening. For more efficient simulations, an approximation of the full fine-scale computation using precomputed multiscale basis functions can also be used. Compared with the regular MCMC method, the preconditioned MCMC method generates a modified Markov chain by incorporating the coarse-scale information of the problem. The conditions under which the modified Markov chain will converge to the correct posterior distribution are stated in the paper. The validity of these assumptions for our application and the conditions which would guarantee a high acceptance rate are also discussed. We note that the coarse-scale models used in the simulations need to be inexpensive but not necessarily very accurate, as our analysis and numerical simulations demonstrate. We present numerical examples for sampling permeability fields using two-point geostatistics. The Karhunen–Loève expansion is used to represent the realizations of the permeability field conditioned to the dynamic data, such as production data, as well as some static data. Our numerical examples show that the acceptance rate can be increased more than tenfold if MCMC simulations are preconditioned using coarse-scale models.
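The two-stage screening idea can be sketched in one dimension: a cheap surrogate density filters proposals, and a corrected second-stage test keeps the chain targeting the expensive density. The Gaussian "fine" and "coarse" densities below stand in for the paper's fine- and coarse-scale flow models and are purely illustrative.

```python
import numpy as np

# Sketch of two-stage (preconditioned) Metropolis-Hastings with a
# symmetric random-walk proposal. Stage 1 screens with a cheap coarse
# surrogate; stage 2 corrects with the fine target, so the modified
# chain still converges to the fine posterior.
rng = np.random.default_rng(1)

def log_fine(x):    # "expensive" target (illustrative): N(0, 1)
    return -0.5 * x**2

def log_coarse(x):  # cheap surrogate: N(0, 1.5^2), deliberately inexact
    return -0.5 * (x / 1.5) ** 2

x, chain, fine_evals = 0.0, [], 0
for _ in range(20_000):
    y = x + rng.normal(scale=1.0)
    # Stage 1: coarse-scale screening only
    if np.log(rng.uniform()) < log_coarse(y) - log_coarse(x):
        # Stage 2: fine evaluation with the coarse ratio divided out,
        # which restores detailed balance with respect to the fine target
        fine_evals += 1
        if np.log(rng.uniform()) < (log_fine(y) - log_fine(x)) - (
            log_coarse(y) - log_coarse(x)
        ):
            x = y
    chain.append(x)

chain = np.array(chain)
# The chain samples the fine target (mean ~ 0, variance ~ 1), while
# fine evaluations happen only for proposals that pass the screen
assert abs(chain.mean()) < 0.15 and abs(chain.var() - 1.0) < 0.25
```

Proposals rejected at stage 1 never touch the fine model, which is the source of the computational saving and higher fine-stage acceptance rate the abstract describes.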

    Making inferences with small numbers of training sets

    A potential methodological problem with empirical studies that assess project effort prediction systems is discussed. Frequently, a hold-out strategy is deployed, so that the data set is split into a training and a validation set. Inferences are then made concerning the relative accuracy of the different prediction techniques under examination. This is typically done on very small numbers of sampled training sets. It is shown that such studies can lead to almost random results (particularly where relatively small effects are being studied). To illustrate this problem, two data sets are analysed using a configuration problem for case-based prediction, with results generated from 100 training sets. This enables results to be produced with quantified confidence limits. From this it is concluded that in both cases using fewer than five training sets leads to untrustworthy results, and ideally more than 20 sets should be deployed. Unfortunately, this raises a question over a number of empirical validations of prediction techniques, and so it is suggested that further research is needed as a matter of urgency.

    How the instability of ranks under long memory affects large-sample inference

    Full text link
    Under long memory, the limit theorems for normalized sums of random variables typically involve a positive integer called the "Hermite rank". There is a different limit for each Hermite rank. From a statistical point of view, however, we argue that a rank other than one is unstable, whereas a rank equal to one is stable. We provide empirical evidence supporting this argument. This has important consequences. Assuming a higher-order rank when it is not really there usually results in underestimating the order of the fluctuations of the statistic of interest. We illustrate this through various examples involving the sample variance, empirical processes and the Whittle estimator.
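The rank-dependent limit behind this argument can be stated in the standard form (due to Dobrushin, Major and Taqqu); the notation here is the usual one and is not taken verbatim from the paper. Let $\{X_i\}$ be a stationary standard Gaussian sequence with autocovariance $r(k) = k^{-D}L(k)$, $L$ slowly varying, and let $G$ have Hermite rank $m$ with $0 < D < 1/m$. Then

```latex
% Non-central limit theorem under long memory (Hermite rank m):
\frac{1}{n^{\,1 - mD/2}\, L(n)^{m/2}}
  \sum_{i=1}^{n} \bigl( G(X_i) - \mathbb{E}\,G(X_1) \bigr)
  \;\xrightarrow{\ d\ }\; c_m\, Z_m(1),
```

where $Z_m$ is the Hermite process of order $m$. The fluctuations are of order $n^{1 - mD/2}$, which decreases in $m$; so assuming, say, rank $m=2$ when the effective rank is $1$ understates the order ($n^{1-D}$ instead of $n^{1-D/2}$), which is the underestimation the abstract warns about.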

    A review of sample and hold systems and design of a new fractional algorithm

    Digital systems require sample and hold (S&H) systems to perform the conversion from analog to digital and vice versa. Besides the standard zero- and first-order holds, we find in the literature other versions, namely the fractional and exponential order holds, involving parameters that can be tuned to produce superior performance. This paper reviews the fundamental concepts associated with the S&H and proposes a new fractional version. The systems are modeled both in the time and Laplace domains. The new S&H stemming from fractional calculus generalizes these devices. The different S&H systems are compared in the frequency domain, and their relationships are visualized by means of hierarchical clustering and multidimensional scaling representations. The novel strategy allows a better understanding of the possibilities and limitations of S&H systems.
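The frequency-domain comparison mentioned above starts from the zero-order hold, the baseline the fractional versions generalize. A quick sketch, with an illustrative sampling period: the ZOH transfer function H0(s) = (1 − e^(−sT))/s evaluated on the imaginary axis reproduces the well-known sinc-shaped low-pass magnitude.

```python
import numpy as np

# Sketch: frequency response of the zero-order hold (ZOH), the baseline
# generalized by fractional/exponential holds. For sampling period T,
# H0(jw) = (1 - exp(-jwT)) / (jw), with |H0(jw)| = T*|sinc(wT/2)| where
# sinc(x) = sin(x)/x. T and the frequency grid are illustrative.
T = 0.1
w = np.linspace(1e-6, 4 * np.pi / T, 2000)          # avoid division at w = 0

H0 = (1 - np.exp(-1j * w * T)) / (1j * w)           # ZOH transfer at s = jw
mag = np.abs(H0)
# np.sinc(x) = sin(pi*x)/(pi*x), so rescale the argument accordingly
closed_form = T * np.abs(np.sinc(w * T / (2 * np.pi)))

assert np.allclose(mag, closed_form, atol=1e-9)     # the two forms agree
# Low-pass behavior: gain T at DC, nulls at multiples of 2*pi/T
assert abs(mag[0] - T) < 1e-6
```

Tunable holds (fractional, exponential) reshape exactly this magnitude curve, which is why the paper's comparison and clustering of S&H variants is carried out in the frequency domain.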