Formal Abstraction of General Stochastic Systems via Noise Partitioning
Verifying the performance of safety-critical, stochastic systems with complex
noise distributions is difficult. We introduce a general procedure for the
finite abstraction of nonlinear stochastic systems with non-standard (e.g.,
non-affine, non-symmetric, non-unimodal) noise distributions for verification
purposes. The method uses a finite partitioning of the noise domain to
construct an interval Markov chain (IMC) abstraction of the system via
transition probability intervals. Noise partitioning allows for a general class
of distributions and structures, including multiplicative and mixture models,
and admits both known and data-driven systems. The partitions required for
optimal transition bounds are specified for systems that are monotonic with
respect to the noise, and explicit partitions are provided for affine and
multiplicative structures. By the soundness of the abstraction procedure,
verification on the IMC provides guarantees on the stochastic system against a
temporal logic specification. In addition, we present a novel refinement-free
algorithm that improves the verification results. Case studies on linear and
nonlinear systems with non-Gaussian noise, including a data-driven example,
demonstrate the generality and effectiveness of the method without introducing
excessive conservatism.
Comment: 6 pages, 6 figures, submitted jointly to IEEE Control Systems Letters and 2024 AC
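The core idea above — bounding the probability of moving between state regions to form an interval Markov chain — can be illustrated with a minimal sketch. The example below is an assumption-laden toy, not the paper's procedure: it uses a hypothetical 1D affine system x' = a*x + w with a bimodal Gaussian-mixture noise (a "non-standard" distribution in the abstract's sense), and estimates the transition-probability interval by sampling the source region, whereas the paper derives sound bounds from an explicit partition of the noise domain.

```python
import math

def mixture_cdf(w):
    """CDF of a bimodal Gaussian mixture: 0.3*N(-1, 0.2^2) + 0.7*N(0.5, 0.3^2)."""
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return 0.3 * phi((w + 1.0) / 0.2) + 0.7 * phi((w - 0.5) / 0.3)

def transition_interval(a, src, dst, cdf, samples=200):
    """Approximate lower/upper bounds on P(x' in dst | x in src) for x' = a*x + w.

    For a fixed x, the probability is cdf(d_hi - a*x) - cdf(d_lo - a*x).
    Here we sample x over src and take the extremes; this is illustrative
    only and, unlike the paper's noise-partition bounds, not sound."""
    (s_lo, s_hi), (d_lo, d_hi) = src, dst
    probs = []
    for t in range(samples + 1):
        x = s_lo + (s_hi - s_lo) * t / samples
        probs.append(cdf(d_hi - a * x) - cdf(d_lo - a * x))
    return min(probs), max(probs)

regions = [(-2.0, 0.0), (0.0, 2.0)]  # coarse partition of the state space
for i, src in enumerate(regions):
    for j, dst in enumerate(regions):
        lo, hi = transition_interval(0.8, src, dst, mixture_cdf)
        print(f"P(q{i} -> q{j}) in [{lo:.3f}, {hi:.3f}]")
```

The printed intervals are exactly the IMC transition bounds a verification tool would consume; tightening them (e.g., by refining the noise partition) is what reduces conservatism.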
Statistical Validation of Mutual Information Calculations: Comparison of Alternative Numerical Algorithms
Given two time series X and Y, their mutual information, I(X, Y) = I(Y, X), is the average number of bits of X that can be predicted by measuring Y, and vice versa. In the analysis of observational data, calculation of mutual information occurs in three contexts: identification of nonlinear correlation; determination of an optimal sampling interval, particularly when embedding data; and investigation of causal relationships with directed mutual information. In this contribution, a minimum description length argument is used to determine the optimal number of elements to use when characterizing the distributions of X and Y. However, even when using partitions of the X and Y axes indicated by minimum description length, mutual information calculations performed with a uniform partition of the XY plane can give misleading results. This motivated the construction of an algorithm for calculating mutual information that uses an adaptive partition. The algorithm also incorporates an explicit test of the statistical independence of X and Y in a calculation that returns an assessment of the corresponding null hypothesis.
The previously published Fraser-Swinney algorithm for calculating mutual information includes a sophisticated procedure for local adaptive control of the partitioning process. When the Fraser-Swinney algorithm and the algorithm constructed here are compared, they give very similar numerical results (less than 4% difference in a typical application). Detailed comparisons are possible when X and Y are correlated, jointly Gaussian distributed, because an analytic expression for I(X, Y) can be derived for that case.
Based on these tests, three conclusions can be drawn. First, the algorithm constructed here has an advantage over the Fraser-Swinney algorithm in providing an explicit calculation of the probability of the null hypothesis that X and Y are independent. Second, the Fraser-Swinney algorithm is marginally the more accurate of the two when large data sets are used; with smaller data sets, however, it reports structures that disappear when more data are available. Third, the algorithm constructed here requires about 0.5% of the computation time required by the Fraser-Swinney algorithm.
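The validation setting described above — correlated jointly Gaussian X and Y, where I(X, Y) = -0.5 * log2(1 - rho^2) bits in closed form — can be reproduced in a short sketch. The example below compares that analytic value against a plug-in estimate on a uniform partition of the XY plane (the naive baseline the abstract warns about); the sample size, bin count, and correlation are arbitrary choices, not values from the paper.

```python
import math
import random

def analytic_mi_bits(rho):
    """Exact MI of jointly Gaussian X, Y with correlation rho, in bits."""
    return -0.5 * math.log2(1.0 - rho * rho)

def histogram_mi_bits(xs, ys, bins):
    """Plug-in MI estimate from a uniform bins-by-bins partition of the XY plane."""
    n = len(xs)
    x_lo, x_hi = min(xs), max(xs)
    y_lo, y_hi = min(ys), max(ys)
    def idx(v, lo, hi):  # clamp the maximum value into the last bin
        return min(bins - 1, int((v - lo) / (hi - lo) * bins))
    joint, px, py = {}, [0] * bins, [0] * bins
    for x, y in zip(xs, ys):
        i, j = idx(x, x_lo, x_hi), idx(y, y_lo, y_hi)
        joint[(i, j)] = joint.get((i, j), 0) + 1
        px[i] += 1
        py[j] += 1
    return sum((c / n) * math.log2((c / n) / ((px[i] / n) * (py[j] / n)))
               for (i, j), c in joint.items())

random.seed(0)
rho = 0.8
xs, ys = [], []
for _ in range(20000):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    xs.append(z1)
    ys.append(rho * z1 + math.sqrt(1 - rho * rho) * z2)  # correlation rho with X
print(f"analytic: {analytic_mi_bits(rho):.3f} bits, "
      f"uniform-partition estimate: {histogram_mi_bits(xs, ys, 16):.3f} bits")
```

The residual gap between the two numbers is the kind of discretization and finite-sample bias that motivates the adaptive-partition algorithm and the explicit independence test described in the abstract.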
Online Bin Covering: Expectations vs. Guarantees
Bin covering is a dual version of classic bin packing. Thus, the goal is to
cover as many bins as possible, where covering a bin means packing items of
total size at least one in the bin.
For online bin covering, competitive analysis fails to distinguish between
most algorithms of interest; all "reasonable" algorithms have a competitive
ratio of 1/2. Thus, in order to get a better understanding of the combinatorial
difficulties in solving this problem, we turn to other performance measures,
namely relative worst order, random order, and max/max analysis, as well as
analyzing input with restricted or uniformly distributed item sizes. In this
way, our study also supplements the ongoing systematic studies of the relative
strengths of various performance measures.
Two classic algorithms for online bin packing that have natural dual versions
are Harmonic and Next-Fit. Even though the algorithms are quite different in
nature, the dual versions are not separated by competitive analysis. We make
the case that when guarantees are needed, even under restricted input
sequences, dual Harmonic is preferable. In addition, we establish quite robust
theoretical results showing that if items come from a uniform distribution or
even if just the ordering of items is uniformly random, then dual Next-Fit is
the right choice.
Comment: IMADA-preprint-c
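The two dual algorithms compared above can be sketched in a few lines. This is a minimal rendering of the standard textbook definitions (items assumed to have sizes in (0, 1], class parameter k chosen arbitrarily); the paper's exact variants may differ.

```python
def dual_next_fit(items):
    """Online dual Next-Fit: keep adding items to the open bin;
    once its total size reaches 1, the bin is covered and a new one opens."""
    covered, level = 0, 0.0
    for size in items:
        level += size
        if level >= 1.0:
            covered += 1
            level = 0.0
    return covered

def dual_harmonic(items, k=6):
    """Online dual Harmonic_k: an item of size in (1/(i+1), 1/i] has class i,
    and any i+1 class-i items are guaranteed to cover a bin; items of size
    <= 1/k are collected Next-Fit style in a separate bin."""
    covered, small_level = 0, 0.0
    pending = [0] * k                  # pending[i-1]: open class-i items
    for size in items:
        if size > 1.0 / k:
            i = int(1.0 // size)       # size in (1/(i+1), 1/i]  =>  class i
            pending[i - 1] += 1
            if pending[i - 1] == i + 1:
                covered += 1
                pending[i - 1] = 0
        else:
            small_level += size
            if small_level >= 1.0:
                covered += 1
                small_level = 0.0
    return covered

print(dual_next_fit([0.6, 0.6, 0.6, 0.6]),   # -> 2
      dual_harmonic([0.6, 0.6, 0.6, 0.6]))   # -> 2
```

On this input the two coincide, which matches the abstract's point: worst-case (competitive) analysis cannot separate them, and the distinction only emerges under the finer measures and distributional assumptions studied in the paper.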