Estimation and prediction of road traffic flow using particle filter for real-time traffic control
Real-data testing results of a real-time state estimator and predictor are presented, with particular focus on enabling detector fault alarms and on the estimator's relation to queue-length-based traffic control. A parameter and state estimator/predictor is developed using a particle filter. The simulation results are satisfactory and promising for further work on a hybrid model of traffic flow that captures the transition between low and high intensity. With such a hybrid model, it may be more feasible to achieve automatic adaptation to changing system conditions.
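The particle-filter approach mentioned above can be illustrated with a minimal bootstrap filter sketch. This is our own simplification, not the paper's traffic model: the state follows a random walk and observations are the state plus Gaussian noise; all parameter values are illustrative assumptions.

```python
import numpy as np

def bootstrap_particle_filter(observations, n_particles=500,
                              process_std=1.0, obs_std=2.0, seed=0):
    """Estimate a scalar latent state (e.g. a traffic quantity) from noisy
    observations. Illustrative model: x_t = x_{t-1} + process noise,
    y_t = x_t + observation noise."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 5.0, n_particles)  # diffuse initial prior
    estimates = []
    for y in observations:
        # Predict: propagate particles through the random-walk dynamics.
        particles = particles + rng.normal(0.0, process_std, n_particles)
        # Update: weight particles by the Gaussian observation likelihood.
        weights = np.exp(-0.5 * ((y - particles) / obs_std) ** 2)
        weights /= weights.sum()
        estimates.append(float(np.dot(weights, particles)))
        # Resample: multinomial resampling to avoid weight degeneracy.
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]
    return estimates
```

In a real-time setting the predict/update/resample loop would run once per detector reporting interval, with the predicted particles also providing the short-horizon forecast.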
Testing times: on model-driven test generation for non-deterministic real-time systems
Summary form only given. Although testing has always been the most important technique for the validation of software systems, it has only become a topic of serious academic research in the past decade or so. In this period, research on the use of formal methods for model-driven generation and execution of functional test cases has led to a number of promising methods and tools for systematic black-box testing of systems; examples include those of A. Belinfante et al. (1999), J. Tretmans and E. Brinksma (2003), J.-C. Fernandez et al. (1996), and J.-C. Fernandez et al. (1997). Most of these approaches are limited to the qualitative behaviour of systems and exclude quantitative aspects such as real-time properties. The explosive growth of embedded software, however, has caused a growing need to extend existing testing theories to real-time reactive systems. In our presentation we describe an extension of Tretmans' ioco theory for test generation, as stated in J. Tretmans (1996), for input/output transition systems that includes real-time behaviour.
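The basic idea behind model-driven test generation from input/output transition systems can be sketched as a walk over a model that alternates stimulating the implementation with inputs and checking that its outputs are allowed. The coffee-machine model, the "?"/"!" naming convention for inputs and outputs, and the function below are our own illustrative assumptions, not the ioco algorithm itself (which also handles quiescence and nondeterminism):

```python
import random

# Illustrative model: a coffee machine as an input/output transition system.
# Actions prefixed "?" are inputs to the implementation, "!" are outputs.
MODEL = {
    "idle": {"?coin": "paid"},
    "paid": {"?button": "busy"},
    "busy": {"!coffee": "idle"},
}

def generate_test(model, start="idle", length=6, seed=0):
    """Randomly derive one test case: a sequence of steps that either
    stimulate the implementation with an input or expect a model-allowed
    output from it."""
    rng = random.Random(seed)
    state, test = start, []
    for _ in range(length):
        actions = list(model.get(state, {}))
        if not actions:
            break
        act = rng.choice(actions)
        kind = "stimulate" if act.startswith("?") else "expect"
        test.append((kind, act))
        state = model[state][act]
    return test
```

Running such generated tests against a black-box implementation and failing whenever an observed output is not among the model's allowed outputs is the essence of the conformance check; the timed extension additionally constrains when outputs may occur.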
Stochastic simulation framework for the Limit Order Book using liquidity motivated agents
In this paper we develop a new form of agent-based model for limit order books based on heterogeneous trading agents whose motivations are liquidity driven. These agents are abstractions of real market participants, expressed in a stochastic model framework. We develop an efficient way to perform statistical calibration of the model parameters on Level 2 limit order book data from Chi-X, based on a combination of indirect inference and multi-objective optimisation. We then demonstrate how such an agent-based modelling framework can be of use in testing exchange regulations, as well as in informing brokerage decisions and other trading-based scenarios.
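A minimal sketch of the liquidity-motivated agent idea is given below. The class, the two agent types (a provider quoting around a reference price and a taker demanding immediacy), and every parameter value are our own illustrative assumptions, not the calibrated model from the paper:

```python
import heapq
import random

class LimitOrderBook:
    """Minimal price-time-priority book: bids and asks as heaps of
    (priority_key, arrival_seq, price, quantity)."""
    def __init__(self):
        self.bids, self.asks = [], []
        self.seq = 0

    def limit(self, side, price, qty):
        """Post a resting limit order (liquidity provision)."""
        self.seq += 1
        heap = self.bids if side == "buy" else self.asks
        key = -price if side == "buy" else price  # max-heap for bids
        heapq.heappush(heap, (key, self.seq, price, qty))

    def market(self, side, qty):
        """Consume liquidity from the opposite side; return volume filled."""
        book = self.asks if side == "buy" else self.bids
        filled = 0
        while qty > 0 and book:
            key, seq, price, avail = heapq.heappop(book)
            take = min(qty, avail)
            filled += take
            qty -= take
            if avail > take:  # partially filled order rests again
                heapq.heappush(book, (key, seq, price, avail - take))
        return filled

def simulate(steps=200, seed=0):
    """Two illustrative agent types interacting through the book."""
    rng = random.Random(seed)
    lob, mid, traded = LimitOrderBook(), 100.0, 0
    for _ in range(steps):
        if rng.random() < 0.7:  # provider: quote both sides near the mid
            lob.limit("buy", mid - rng.uniform(0.01, 0.5), rng.randint(1, 10))
            lob.limit("sell", mid + rng.uniform(0.01, 0.5), rng.randint(1, 10))
        else:                   # taker: demand immediacy via a market order
            traded += lob.market(rng.choice(["buy", "sell"]), rng.randint(1, 5))
    return traded
```

Calibration, in the spirit of the abstract, would then tune the agent-mix and order-size parameters so that simulated book statistics match those observed in Level 2 data.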
A Framework for Robust Assimilation of Potentially Malign Third-Party Data, and its Statistical Meaning
This paper presents a model-based method for fusing data from multiple sensors, with a hypothesis-test-based component for rejecting potentially faulty or otherwise malign data. Our framework is based on an extension of the classic particle filter algorithm for real-time state estimation of uncertain systems with nonlinear dynamics and partial, noisy observations. This extension, grounded in classical statistical theory, applies statistical tests against the system's observation model. We discuss the application of the two major statistical testing frameworks, Fisherian significance testing and Neyman-Pearsonian hypothesis testing, to the Monte Carlo and sensor fusion settings. The Monte Carlo Neyman-Pearson test we develop is useful when one has a reliable model of faulty data, while the Fisherian one is applicable when no fault model is available, as may be the case when dealing with third-party data such as GNSS data from transportation system users. These statistical tests can be combined with a particle filter to obtain a Monte Carlo state estimation scheme that is robust to faulty or outlier data. We present a synthetic freeway traffic state estimation problem in which the filters reject simulated faulty GNSS measurements. The fault-model-free Fisher filter, while underperforming the Neyman-Pearson filter when the latter has an accurate fault model, outperforms it when the assumed fault model is incorrect.
Comment: IEEE Intelligent Transportation Systems Magazine, special issue on GNSS-based positioning.
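The Fisherian, fault-model-free gate described above can be sketched as a single measurement-update step: compute an empirical p-value for the incoming measurement against the particle-predicted observation distribution, and discard the measurement when it is too surprising. The scalar additive-Gaussian observation model, the threshold, and the function itself are our own illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def robust_update(particles, weights, y, obs_std=1.0, alpha=0.01):
    """One particle-filter measurement update with a significance gate.

    Illustrative observation model: y = x + N(0, obs_std^2). A two-sided
    empirical p-value is computed for y under the particle-predicted
    observation distribution; if the measurement is too surprising it is
    rejected (fault alarm) and the prior weights are kept unchanged.
    """
    rng = np.random.default_rng(0)
    # Monte Carlo predictive samples of the observation.
    pred = particles + rng.normal(0.0, obs_std, particles.shape)
    # Two-sided empirical tail probability of y under the predictive samples.
    tail = min(np.mean(pred <= y), np.mean(pred >= y))
    p_value = 2.0 * tail
    if p_value < alpha:
        return weights, False  # reject: likely faulty/outlier measurement
    # Accept: standard likelihood weighting.
    lik = np.exp(-0.5 * ((y - particles) / obs_std) ** 2)
    w = weights * lik
    return w / w.sum(), True
```

A Neyman-Pearson variant would instead compare the likelihood of y under the nominal observation model against its likelihood under an explicit fault model, which is exactly the extra knowledge the Fisherian gate does without.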
Multiple testing correction in linear mixed models.
Background: Multiple hypothesis testing is a major issue in genome-wide association studies (GWAS), which often analyze millions of markers. The permutation test is considered the gold standard in multiple testing correction, as it accurately accounts for the correlation structure of the genome. Recently, the linear mixed model (LMM) has become standard practice in GWAS, addressing issues of population structure and insufficient power. However, none of the current multiple testing approaches are applicable to LMMs. Results: We were able to estimate per-marker thresholds as accurately as the gold-standard approach in real and simulated datasets, while reducing the time required from months to hours. We applied our approach to mouse, yeast, and human datasets to demonstrate its accuracy and efficiency. Conclusions: We provide an efficient and accurate multiple testing correction approach for linear mixed models. We further provide an intuition about the relationships between per-marker threshold, genetic relatedness, and heritability, based on our observations in real data.
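The gold-standard permutation approach referred to above can be sketched with the classic max-statistic construction. This is a plain-regression illustration of the general idea under our own assumptions (standardized markers, squared correlation as the test statistic); the paper's contribution is making such thresholds tractable inside an LMM:

```python
import numpy as np

def permutation_max_threshold(genotypes, phenotype, n_perm=200,
                              alpha=0.05, seed=0):
    """Family-wise-error per-marker threshold via the max-statistic
    permutation test.

    For each permutation of the phenotype, record the maximum squared
    marker-phenotype correlation across all markers; the (1 - alpha)
    quantile of these maxima is the per-marker significance threshold.
    Permuting the phenotype preserves the correlation structure among
    markers, which is why this controls the family-wise error rate.
    """
    rng = np.random.default_rng(seed)
    G = (genotypes - genotypes.mean(0)) / genotypes.std(0)
    maxima = np.empty(n_perm)
    for i in range(n_perm):
        y = rng.permutation(phenotype)
        y = (y - y.mean()) / y.std()
        r = G.T @ y / len(y)            # marker-phenotype correlations
        maxima[i] = np.max(r ** 2)      # max statistic across markers
    return float(np.quantile(maxima, 1 - alpha))
```

With millions of markers and many permutations this loop is what takes months in practice, which motivates the abstract's faster threshold-estimation approach.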