Importance Sampling for Objective Function Estimation in Neural Detector Training Driven by Genetic Algorithms
To train Neural Networks (NNs) in a supervised way, the value of an objective function must be estimated. This value decreases as training progresses, so the number of test observations needed for an accurate estimate has to grow. Consequently, conventional Monte Carlo estimation of very low objective function values becomes computationally unaffordable, and the use of Importance Sampling (IS) techniques becomes convenient. Three objective functions are studied, and IS-based estimators are proposed for each: the Mean-Square Error, the Cross-Entropy error, and the Misclassification error criteria. The values of these functions are estimated by IS, and the results are used to train NNs by the application of Genetic Algorithms. Results for binary detection in Gaussian noise are provided, showing the evolution of the parameters during training and the performance of the proposed detectors in terms of error probability and Receiver Operating Characteristic curves. The obtained results justify the convenience of using IS in the training.
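To make the variance issue concrete, here is a minimal sketch of how IS estimates a very small error probability for a threshold detector in Gaussian noise; the mean-shifted Gaussian proposal, the threshold, and the sample size are illustrative assumptions, not the paper's estimators.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def misclassification_is(threshold, n_samples=100_000):
    """Estimate P(X > threshold) for X ~ N(0, 1) by importance sampling.

    Samples are drawn from a mean-shifted proposal N(threshold, 1) so that
    the rare error region is hit often; each sample is reweighted by the
    likelihood ratio of the target density to the proposal density.
    """
    x = rng.normal(loc=threshold, size=n_samples)            # proposal draws
    w = np.exp(stats.norm.logpdf(x) - stats.norm.logpdf(x, loc=threshold))
    weighted_hits = (x > threshold) * w                      # error indicator * weight
    est = np.mean(weighted_hits)
    rel_err = np.std(weighted_hits) / (np.sqrt(n_samples) * est)
    return est, rel_err

est, rel_err = misclassification_is(threshold=5.0)
print(f"IS estimate: {est:.3e}  (exact {stats.norm.sf(5.0):.3e}, "
      f"relative error ~{rel_err:.1%})")
```

With crude Monte Carlo, an event of probability ~3e-7 would need billions of samples for the same relative error; the reweighted proposal achieves it with 1e5.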
Splitting for Rare Event Simulation: A Large Deviation Approach to Design and Analysis
Particle splitting methods are considered for the estimation of rare events.
The probability of interest is that a Markov process first enters a set B
before another set A, and it is assumed that this probability satisfies a
large deviation scaling. A notion of subsolution is defined for the related
calculus of variations problem, and two main results are proved under mild
conditions. The first is that the number of particles generated by the
algorithm grows subexponentially if and only if a certain scalar multiple of
the importance function is a subsolution. The second is that, under the same
condition, the variance of the algorithm is characterized (asymptotically) in
terms of the subsolution. The design of asymptotically optimal schemes is
discussed, and numerical examples are presented.
Comment: Submitted to Stochastic Processes and their Applications.
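As a complement to the abstract above, the following is a minimal sketch of fixed-effort multilevel splitting for a toy drifted random walk hitting a high level before returning to 0; the process, the equally spaced levels (corresponding to a linear importance function), and the population size are assumptions for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def step(x):
    """One step of a negatively drifted random walk (toy Markov process)."""
    return x + rng.normal(loc=-0.1, scale=1.0)

def splitting_estimate(levels, particles_per_level=1000):
    """Fixed-effort multilevel splitting: estimate P(reach top level before 0).

    At each stage, particles run until they either cross the next level
    (success) or fall back to 0 (failure); survivors are resampled to keep
    the population size constant, and the product of the stage-wise success
    fractions estimates the rare-event probability.
    """
    states = np.full(particles_per_level, levels[0])
    prob = 1.0
    for next_level in levels[1:]:
        survivors = []
        for x in states:
            while 0.0 < x < next_level:
                x = step(x)
            if x >= next_level:
                survivors.append(x)
        frac = len(survivors) / len(states)
        if frac == 0.0:
            return 0.0
        prob *= frac
        # Resample the surviving crossing states to restore the population.
        states = rng.choice(survivors, size=particles_per_level)
    return prob

print(splitting_estimate(levels=[0.5, 2.0, 4.0, 6.0]))
```

Whether the implied piecewise-linear importance function is a subsolution, and hence whether the particle count stays subexponential, is exactly the problem-specific question the paper's results address.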
Fast performance estimation of block codes
Importance sampling is used in this paper to address the classical yet important problem of performance estimation of block codes. Simulation distributions that comprise discrete- and continuous-mixture probability densities are motivated and used for this application. These mixtures are employed in concert with the so-called g-method, a conditional importance sampling technique that more effectively exploits knowledge of the underlying input distributions. For performance estimation, the emphasis is on bit-by-bit maximum a posteriori probability decoding, but message passing algorithms for certain codes have also been investigated. Considered here are single parity check codes, multidimensional product codes, and, briefly, low-density parity-check codes. Several error rate results are presented for these various codes, together with the performance of the simulation techniques.
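For intuition, here is a minimal mean-shifted importance sampling sketch for the bit error rate of uncoded BPSK over AWGN; it is a generic stand-in for the paper's mixture densities and g-method, and the SNR and proposal shift are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def bpsk_ber_is(snr_db, n_samples=200_000):
    """Estimate the BPSK bit error rate over AWGN by importance sampling.

    The transmitted symbol is +1, so an error occurs when the noise drops
    below -1. The proposal shifts the noise mean onto that decision
    boundary, and each sample is reweighted by the true-to-proposal
    density ratio.
    """
    sigma = np.sqrt(1.0 / (2.0 * 10.0 ** (snr_db / 10.0)))
    shift = -1.0                      # put the proposal mean at the boundary
    noise = rng.normal(loc=shift, scale=sigma, size=n_samples)
    w = np.exp(stats.norm.logpdf(noise, scale=sigma)
               - stats.norm.logpdf(noise, loc=shift, scale=sigma))
    return np.mean((noise < -1.0) * w)

sigma = np.sqrt(1.0 / (2.0 * 10.0 ** (8.0 / 10.0)))
print(f"IS estimate: {bpsk_ber_is(8.0):.3e}   "
      f"exact Q(1/sigma): {stats.norm.cdf(-1.0 / sigma):.3e}")
```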
An Efficient Uplink Multi-Connectivity Scheme for 5G mmWave Control Plane Applications
The millimeter wave (mmWave) frequencies offer the potential of orders of
magnitude increases in capacity for next-generation cellular systems. However,
links in mmWave networks are susceptible to blockage and may suffer from rapid
variations in quality. Connectivity to multiple cells - at mmWave and/or
traditional frequencies - is considered essential for robust communication. One
of the challenges in supporting multi-connectivity in mmWaves is the
requirement for the network to track the direction of each link in addition to
its power and timing. To address this challenge, we implement a novel uplink
measurement system that, with the joint help of a local coordinator operating
in the legacy band, guarantees continuous monitoring of the channel propagation
conditions and allows for the design of efficient control plane applications,
including handover, beam tracking and initial access. We show that an
uplink-based multi-connectivity approach enables cell selection and
scheduling decisions that consume fewer resources, perform better, and are
faster and more stable than under a traditional downlink-based standalone
scheme. Moreover, we argue
that the presented framework guarantees (i) efficient tracking of the user in
the presence of the channel dynamics expected at mmWaves, and (ii) fast
reaction to situations in which the primary propagation path is blocked or not
available.
Comment: Submitted for publication in IEEE Transactions on Wireless Communications (TWC).
Stochastic Model for Power Grid Dynamics
We introduce a stochastic model that describes the quasi-static dynamics of
an electric transmission network under perturbations introduced by random load
fluctuations, random removal of system components from service, random repair
times for the failed components, and random response times to implement optimal
system corrections for removing line overloads in a damaged or stressed
transmission network. We use a linear approximation to the network flow
equations and apply linear programming techniques that optimize the dispatching
of generators and loads in order to eliminate the network overloads associated
with a damaged system. We also provide a simple model for the operator's
response to various contingency events; this response is not always optimal,
owing either to failure of the state estimation system or to an incorrect
subjective assessment of the severity of these events. This further allows us
to use a game theoretic framework for casting the optimization of the
operator's response into the choice of the optimal strategy which minimizes the
operating cost. We use a simple strategy space: the degree of tolerance to
line overloads, an automatic control (optimization) parameter that can be
adjusted to trade off automatic load shedding without propagating cascades
against reduced load shedding with an increased risk of propagating cascades.
The tolerance parameter is chosen to describe a smooth transition from a
risk-averse to a risk-taking strategy...
Comment: a framework for system-level analysis of the power grid from the viewpoint of complex networks.
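To make the dispatch step concrete, here is a toy instance of the linear programming redispatch described above, using SciPy's linprog on a made-up two-generator radial network; all capacities, line ratings, and costs are illustrative assumptions, not the paper's model.

```python
import numpy as np
from scipy.optimize import linprog

# Toy redispatch LP: two generators feed one load bus over separate lines
# (a radial network, so under the DC approximation each line flow equals
# its generator's output). Decision variables: [g1, g2, shed].
demand = 150.0
cost = np.array([20.0, 35.0, 1000.0])   # $/MWh; load shedding heavily penalized

# Power balance: g1 + g2 + shed == demand.
A_eq = np.array([[1.0, 1.0, 1.0]])
b_eq = np.array([demand])

# Bounds encode generator capacities and line ratings; suppose line 1 is
# derated to 60 MW after a contingency, forcing redispatch onto unit 2.
bounds = [(0.0, 60.0),     # g1: min(capacity, line 1 rating)
          (0.0, 100.0),    # g2
          (0.0, demand)]   # shed

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
g1, g2, shed = res.x
print(f"g1={g1:.0f} MW, g2={g2:.0f} MW, shed={shed:.0f} MW, cost=${res.fun:.0f}")
```

Raising the penalty on `shed` relative to line-overload tolerance is the kind of knob the abstract's tolerance parameter represents.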
A Hierarchy of Scheduler Classes for Stochastic Automata
Stochastic automata are a formal compositional model for concurrent
stochastic timed systems, with general distributions and non-deterministic
choices. Measures of interest are defined over schedulers that resolve the
nondeterminism. In this paper we investigate the power of various theoretically
and practically motivated classes of schedulers, considering the classic
complete-information view and a restriction to non-prophetic schedulers. We
prove a hierarchy of scheduler classes w.r.t. unbounded probabilistic
reachability. We find that, unlike Markovian formalisms, stochastic automata
distinguish most classes even in this basic setting. Verification and strategy
synthesis methods thus face a tradeoff between powerful and efficient classes.
Using lightweight scheduler sampling, we explore this tradeoff and demonstrate
the concept of a useful approximate verification technique for stochastic
automata.
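To illustrate the idea behind lightweight scheduler sampling, here is a minimal sketch in which a single integer identifies a memoryless scheduler by hashing (scheduler id, state) to resolve each nondeterministic choice; the toy model and all names are hypothetical, and this is not the authors' tool.

```python
import random
import zlib

def scheduled_choice(scheduler_id, state, actions):
    """Resolve nondeterminism deterministically from (scheduler id, state).

    Hashing the pair means one integer identifies an entire memoryless
    scheduler, so schedulers can be sampled uniformly without ever storing
    a state-to-action map.
    """
    h = zlib.crc32(f"{scheduler_id}:{state}".encode())
    return actions[h % len(actions)]

def simulate(scheduler_id, transitions, goal, start=0, max_steps=50):
    """Run one path of a toy nondeterministic model; True if goal is reached."""
    state, rng = start, random.Random()
    for _ in range(max_steps):
        if state == goal:
            return True
        action = scheduled_choice(scheduler_id, state, sorted(transitions[state]))
        state = rng.choices(*zip(*transitions[state][action]))[0]
    return state == goal

# Hypothetical model: state -> action -> [(successor, probability), ...].
transitions = {
    0: {"a": [(1, 0.9), (0, 0.1)], "b": [(2, 0.5), (0, 0.5)]},
    1: {"a": [(2, 0.2), (0, 0.8)]},
    2: {"a": [(2, 1.0)]},
}

def estimate(sid, runs=200):
    return sum(simulate(sid, transitions, goal=2) for _ in range(runs)) / runs

# Sample scheduler ids and keep the best: an underapproximation of the
# maximum reachability probability over the sampled scheduler class.
best = max(range(100), key=estimate)
print(f"best scheduler id: {best}, estimated reachability ~ {estimate(best):.2f}")
```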
A review of R-packages for random-intercept probit regression in small clusters
Generalized Linear Mixed Models (GLMMs) are widely used to model clustered categorical outcomes. To tackle the intractable integration over the random-effects distribution, several approximation approaches have been developed for likelihood-based inference. As these seldom yield satisfactory results when analyzing binary outcomes from small clusters, estimation within the Structural Equation Modeling (SEM) framework is proposed as an alternative. We compare the performance of R-packages for random-intercept probit regression relying on the Laplace approximation, adaptive Gaussian quadrature (AGQ), Penalized Quasi-Likelihood (PQL), an MCMC implementation, and integrated nested Laplace approximation within the GLMM framework, and on robust diagonally weighted least squares estimation within the SEM framework. In terms of bias of the fixed- and random-effect estimators, SEM usually performs best for cluster size two, while AGQ prevails in terms of precision (mainly because of SEM's robust standard errors). As the cluster size increases, however, AGQ becomes the best choice for both bias and precision.
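To show the integral these packages approximate, here is a minimal non-adaptive Gauss-Hermite quadrature sketch of one cluster's marginal likelihood in a random-intercept probit model, written in Python rather than R; the data and parameter values are made up for illustration.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.stats import norm

def cluster_loglik(y, x, beta, sigma, n_nodes=20):
    """Marginal log-likelihood of one cluster in a random-intercept probit
    model, integrating out the normal random intercept with (non-adaptive)
    Gauss-Hermite quadrature:

        L = integral prod_i Phi((2*y_i - 1) * (x_i*beta + sigma*z)) phi(z) dz
    """
    nodes, weights = hermgauss(n_nodes)
    z = np.sqrt(2.0) * nodes                 # change of variables for N(0, 1)
    sign = 2.0 * y - 1.0
    # Shape (n_nodes, n_obs): per-node, per-observation probit probabilities.
    probs = norm.cdf(sign[None, :] * (x[None, :] * beta + sigma * z[:, None]))
    integrand = probs.prod(axis=1)
    return np.log(np.sum(weights * integrand) / np.sqrt(np.pi))

# Tiny made-up cluster of size two -- the setting where the review finds
# quadrature hardest and SEM-based estimation most attractive.
y = np.array([1.0, 0.0])
x = np.array([0.3, -1.2])
print(cluster_loglik(y, x, beta=0.8, sigma=1.0))
```

Adaptive quadrature additionally recenters and rescales the nodes around each cluster's posterior mode, which is what makes AGQ accurate with few nodes.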
Improved Cross-Entropy Method for Estimation
The cross-entropy (CE) method is an adaptive importance sampling procedure that has been successfully applied to a diverse range of complicated simulation problems. However, recent research has shown that in some high-dimensional settings, the likelihood ratio degeneracy problem becomes severe and the importance sampling estimator obtained from the CE algorithm becomes unreliable. We consider a variation of the CE method whose performance does not deteriorate as the dimension of the problem increases. We then illustrate the algorithm via a high-dimensional estimation problem in risk management.
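For reference, here is a minimal sketch of the standard CE method (not the paper's improved variant) estimating a Gaussian tail probability with a mean-parameterized proposal; the sample size, rarity parameter, and target level are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def ce_rare_event(gamma=6.0, n=10_000, rho=0.1, iters=20):
    """Standard cross-entropy method: estimate P(X >= gamma), X ~ N(0, 1).

    Each iteration draws from the current proposal N(mu, 1), raises the
    working level to the (1 - rho) sample quantile, and refits mu as the
    likelihood-ratio-weighted mean of the elite samples; once the true
    level gamma is reached, a final importance-sampling pass gives the
    estimate.
    """
    mu = 0.0
    for _ in range(iters):
        x = rng.normal(loc=mu, size=n)
        level = min(np.quantile(x, 1.0 - rho), gamma)
        elite = x[x >= level]
        w = np.exp(stats.norm.logpdf(elite) - stats.norm.logpdf(elite, loc=mu))
        mu = np.sum(w * elite) / np.sum(w)   # CE update for a Gaussian mean
        if level >= gamma:
            break
    x = rng.normal(loc=mu, size=n)           # final IS estimate
    w = np.exp(stats.norm.logpdf(x) - stats.norm.logpdf(x, loc=mu))
    return np.mean(w * (x >= gamma))

print(f"CE/IS estimate: {ce_rare_event():.3e}   exact: {stats.norm.sf(6.0):.3e}")
```

In high dimensions the likelihood ratio `w` in the final pass can degenerate to a few dominant samples, which is precisely the failure mode the paper's improved variant targets.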