Regularized adaptive long autoregressive spectral analysis
This paper is devoted to adaptive long autoregressive spectral analysis when
(i) very few data are available and (ii) prior information exists concerning
the spectral smoothness and time continuity of the analyzed signals.
The contribution is founded on two papers by Kitagawa and Gersch. The first one
deals with spectral smoothness, in the regularization framework, while the
second one is devoted to time continuity, in the Kalman formalism. The present
paper proposes an original synthesis of the two contributions: a new
regularized criterion is introduced that takes both pieces of prior information into account.
The criterion is efficiently optimized by a Kalman smoother. One of the major
features of the method is that it is entirely unsupervised: the problem of
automatically adjusting the hyperparameters that balance data-based versus
prior-based information is solved by maximum likelihood. The improvement is
quantified in the field of meteorological radar.
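As a rough illustration of the regularization idea only (not the paper's exact criterion, which additionally enforces time continuity through a Kalman smoother and tunes the hyperparameters by maximum likelihood), the sketch below fits a long AR model by least squares with a smoothness penalty on the coefficient sequence; the function name, penalty form, and fixed weight lam are assumptions.

import numpy as np

def regularized_ar_spectrum(x, order=20, lam=1.0, nfft=256):
    # Illustrative regularized long-AR spectral estimate: solve a
    # ridge-like least-squares problem for the AR coefficients with a
    # smoothness penalty on coefficient differences (a stand-in for a
    # Kitagawa-Gersch-type smoothness prior). lam is fixed here, whereas
    # the paper adjusts such hyperparameters by maximum likelihood.
    x = np.asarray(x, dtype=float)
    N = len(x)
    # Linear prediction system: x[n] ~ sum over lags 1..order of a[lag-1] * x[n-lag]
    X = np.column_stack([x[order - k - 1:N - k - 1] for k in range(order)])
    y = x[order:]
    D = np.diff(np.eye(order), axis=0)          # first-difference operator
    a = np.linalg.solve(X.T @ X + lam * D.T @ D, X.T @ y)
    sigma2 = np.mean((y - X @ a) ** 2)          # driving-noise variance
    freqs = np.linspace(0.0, 0.5, nfft)         # normalized frequencies
    z = np.exp(-2j * np.pi * np.outer(freqs, np.arange(1, order + 1)))
    return freqs, sigma2 / np.abs(1.0 - z @ a) ** 2

# Example: spectrum of a short, noisy sinusoid (64 samples only)
rng = np.random.default_rng(0)
n = np.arange(64)
sig = np.sin(2 * np.pi * 0.12 * n) + 0.5 * rng.standard_normal(64)
f, S = regularized_ar_spectrum(sig, order=20, lam=5.0)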
Formal Verification of Probabilistic SystemC Models with Statistical Model Checking
Transaction-level modeling with SystemC has been very successful in
describing the behavior of embedded systems by providing high-level executable
models, many of which have inherent probabilistic behaviors, e.g., random
data and unreliable components. It is thus crucial to have both
quantitative and qualitative analysis of the probabilities of system
properties. Such analysis can be conducted by constructing a formal model of
the system under verification and using Probabilistic Model Checking (PMC).
However, this method is infeasible for large systems, due to the state space
explosion. In this article, we demonstrate the successful use of Statistical
Model Checking (SMC) to carry out such analysis directly from large SystemC
models and allow designers to express a wide range of useful properties. The
first contribution of this work is a framework to verify properties expressed
in Bounded Linear Temporal Logic (BLTL) for SystemC models with both timed and
probabilistic characteristics. Second, the framework allows users to expose a
rich set of user-code primitives as atomic propositions in BLTL. Moreover,
users can define their own fine-grained time resolution rather than the
boundary of clock cycles in the SystemC simulation. The third contribution is
an implementation of a statistical model checker. It provides automatic
monitor generation for producing execution traces of the
model-under-verification (MUV), a mechanism for automatically instrumenting
the MUV, and the interaction with statistical model checking algorithms.
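As a rough sketch of the statistical model checking principle, independent of SystemC and of the framework described above, the snippet below estimates the probability that a bounded property holds by Monte Carlo sampling of execution traces, with the sample size taken from the Chernoff-Hoeffding bound; the toy model, monitor, and parameter names are assumptions.

import math
import random

def smc_estimate(simulate_trace, holds, epsilon=0.05, delta=0.01, seed=0):
    # Monte Carlo estimate of P(property holds) with an (epsilon, delta)
    # guarantee via the Chernoff-Hoeffding bound. simulate_trace produces
    # one random execution trace and holds is a monitor deciding a bounded
    # property on that trace; both stand in for whatever a real checker
    # would generate from the model-under-verification.
    n = math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))
    rng = random.Random(seed)
    successes = sum(holds(simulate_trace(rng)) for _ in range(n))
    return successes / n, n

# Toy model: an unreliable channel; property: "a delivery succeeds within
# the first 5 attempts" (a bounded reachability property).
def simulate_trace(rng, p_loss=0.3, max_steps=10):
    return [rng.random() >= p_loss for _ in range(max_steps)]

def holds(trace):
    return any(trace[:5])

prob, samples = smc_estimate(simulate_trace, holds)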
Bayesian interpretation of periodograms
The usual nonparametric approach to spectral analysis is revisited within the
regularization framework. Both usual and windowed periodograms are obtained as
the squared modulus of the minimizer of regularized least squares criteria.
Then, particular attention is paid to their interpretation within the Bayesian
statistical framework. Finally, the question of unsupervised hyperparameter and
window selection is addressed. It is shown that the maximum likelihood
solution is both formally achievable and practically useful.
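A minimal numerical sketch of this connection, under the simplest assumption of a quadratic (ridge, i.e., zero-mean Gaussian prior) penalty and a full DFT dictionary: the squared modulus of the regularized least-squares minimizer is proportional to the ordinary periodogram. The scaling and variable names below are illustrative, not the paper's notation.

import numpy as np

rng = np.random.default_rng(1)
N, mu = 128, 10.0
n = np.arange(N)
x = np.cos(2 * np.pi * 0.2 * n) + 0.3 * rng.standard_normal(N)

# Synthesis model x ~ W @ a with DFT columns W[n, k] = exp(2j*pi*k*n/N);
# minimize ||x - W a||^2 + mu * ||a||^2 over complex amplitudes a.
W = np.exp(2j * np.pi * np.outer(n, n) / N)
a_hat = np.linalg.solve(W.conj().T @ W + mu * np.eye(N), W.conj().T @ x)

# The squared modulus of the minimizer matches the periodogram up to a
# known scale factor (here (N + mu)^2 / N), because W'W = N * I.
periodogram = np.abs(np.fft.fft(x)) ** 2 / N
assert np.allclose(np.abs(a_hat) ** 2 * (N + mu) ** 2 / N, periodogram)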
Using Markov Models and Statistics to Learn, Extract, Fuse, and Detect Patterns in Raw Data
Many systems are partially stochastic in nature. We have derived data-driven
approaches for extracting stochastic state machines (Markov models) directly
from observed data. This chapter provides an overview of our approach with
numerous practical applications. We have used this approach for inferring
shipping patterns, exploiting computer system side-channel information, and
detecting botnet activities. For contrast, we include a related data-driven
statistical inference approach that detects and localizes radiation sources.
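As a toy sketch of the most basic step in extracting a Markov model from observed data (a stand-in, not the chapter's full approach; real data would also need state discretization and possibly higher-order or hidden-state models), the snippet below estimates a first-order transition matrix by maximum-likelihood counting; the example states and sequence are made up.

import numpy as np

def fit_markov_chain(sequence, states):
    # Maximum-likelihood first-order transition matrix from an observed
    # symbol sequence: T[i, j] = P(next state = j | current state = i).
    idx = {s: i for i, s in enumerate(states)}
    counts = np.zeros((len(states), len(states)))
    for cur, nxt in zip(sequence[:-1], sequence[1:]):
        counts[idx[cur], idx[nxt]] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    probs = np.zeros_like(counts)
    np.divide(counts, row_sums, out=probs, where=row_sums > 0)
    return probs

# Example: infer a two-state pattern from a sequence of activity labels
obs = list("AABABBBAABABBBAA")
T = fit_markov_chain(obs, states=["A", "B"])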
Survey of data mining approaches to user modeling for adaptive hypermedia
The ability of an adaptive hypermedia system to create tailored environments depends mainly on the amount and accuracy of information stored in each user model. Some of the difficulties that user modeling faces are the amount of data available to create user models, the adequacy of the data, the noise within that data, and the necessity of capturing the imprecise nature of human behavior. Data mining and machine learning techniques have the ability to handle large amounts of data and to process uncertainty. These characteristics make these techniques suitable for automatic generation of user models that simulate human decision making. This paper surveys different data mining techniques that can be used to efficiently and accurately capture user behavior. The paper also presents guidelines that show which techniques may be used more efficiently according to the task implemented by the application.
Video summarization by group scoring
In this paper, a new model for user-centered video summarization is presented. Its main use case is the involvement of more than one expert in generating the final video summary. The approach consists of three major steps. First, the video frames are scored by a group of operators. Next, these assigned scores are averaged to produce a single value for each frame. Lastly, the highest-scoring video frames, alongside the corresponding audio and textual contents, are extracted and inserted into the summary. The effectiveness of this approach has been evaluated by comparing the video summaries generated by this system against the results from a number of automatic summarization tools that use different modalities for abstraction.
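A schematic sketch of the three steps just described (group scoring, averaging, selection of the highest-scoring frames), leaving out the extraction of the corresponding audio and textual contents; the summary fraction and the example scores are assumptions.

import numpy as np

def group_score_summary(operator_scores, summary_fraction=0.2):
    # operator_scores: (n_operators, n_frames) array of per-frame scores.
    # Average across operators, then keep the indices of the top-scoring
    # frames (returned in temporal order) as the summary.
    scores = np.asarray(operator_scores, dtype=float)
    mean_scores = scores.mean(axis=0)
    n_keep = max(1, int(round(summary_fraction * scores.shape[1])))
    keep = np.sort(np.argsort(mean_scores)[-n_keep:])
    return keep, mean_scores

# Example: three operators scoring ten frames on a 0-5 scale
ratings = [[1, 4, 5, 2, 0, 3, 5, 1, 2, 4],
           [2, 5, 4, 1, 1, 2, 5, 0, 3, 5],
           [1, 3, 5, 2, 0, 3, 4, 1, 2, 5]]
frames, avg = group_score_summary(ratings, summary_fraction=0.3)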