An allocation scheme for estimating the reliability of a parallel-series system
We give a hybrid two-stage design that can be used to estimate the reliability of a parallel-series system (or, by duality, a series-parallel system) when both the component reliabilities and the total numbers of units allowed to be tested in each subsystem are unknown. When the total sample size is fixed and large, asymptotic optimality is proved systematically and validated via Monte Carlo simulation.
Comment: 16 pages, 4 figures
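To make the setting concrete (a rough sketch, not the paper's two-stage design), the snippet below computes the exact reliability of a small parallel-series system and a plug-in Monte Carlo estimate from per-component test data; the layout, reliabilities, and equal allocation are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parallel-series layout: two subsystems in parallel,
# each subsystem a series of components with the reliabilities below.
subsystems = [np.array([0.95, 0.90]), np.array([0.85, 0.92, 0.88])]

def system_reliability(subs):
    # Series within a subsystem (product), parallel across subsystems.
    return 1.0 - np.prod([1.0 - np.prod(s) for s in subs])

def estimate(subs, n_per_component):
    # Plug-in estimate: test n units of each component, use sample proportions.
    est = [rng.binomial(n_per_component, s) / n_per_component for s in subs]
    return system_reliability(est)

true_R = system_reliability(subsystems)
estimates = [estimate(subsystems, 200) for _ in range(1000)]
print(round(true_R, 4), round(float(np.mean(estimates)), 4))
```

The open question the abstract addresses is how to split a fixed total budget across subsystems instead of the naive equal allocation used here.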
Sequential Designs with Application in Software Engineering
Title from PDF of title page, viewed on March 31, 2014. Dissertation advisor: Kamel Rakab. Vita. Includes bibliographical references (pages 77-81). Thesis (Ph.D.)--Dept. of Mathematics and Statistics and Dept. of Computer Science and Electrical Engineering, University of Missouri, Kansas City, 2013.
Presented here is a Bayesian approach to test case allocation in software reliability estimation. Bayesian analysis allows us to update our beliefs about the reliability of a particular partition as we test, and thus dynamically refine our allocation of test cases during the reliability testing process. We started with a fully sequential sampling scheme to estimate the reliability of a software system using partition testing. We have shown, both theoretically and through simulation, that the proposed scheme always performs at least as well as fixed sampling approaches in which test case allocation is predetermined, and in all but the most unlikely circumstances outperforms them. Based on the sequential allocation, a multistage sampling scheme is established, which is less time consuming and more efficient. Meanwhile, an efficient sampling scheme is also developed to accommodate more situations. In the last chapter, we extend our study from parallel systems to series systems. We again use a Bayesian approach to allocate test cases to estimate the reliability of a series system with two components.
A second-order lower bound for the incurred Bayes risk is established theoretically, and Monte Carlo simulations with several proposed sequential designs are implemented to achieve this second-order lower bound.
Abstract -- List of tables -- List of notations -- Acknowledgement -- Introduction -- A fully sequential test allocation for software reliability estimation -- A multistage sequential test allocation for software reliability estimation -- An efficient test allocation for software reliability estimation -- Test allocation for estimating reliability of series systems with two components -- Summary and conclusion -- Appendix -- Tables -- References
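The fully sequential idea can be sketched as follows: keep a beta posterior per partition and send each new test case to the partition whose reliability is currently most uncertain. The two-partition pass rates, Beta(1, 1) priors, and variance-greedy rule below are illustrative assumptions, not the dissertation's exact allocation scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-partition software system with unknown pass rates.
true_rates = [0.98, 0.80]
# Beta(1, 1) (uniform) priors on each partition's reliability.
alpha = np.ones(2)
beta = np.ones(2)

def posterior_variance(a, b):
    # Variance of a Beta(a, b) distribution.
    return a * b / ((a + b) ** 2 * (a + b + 1))

for _ in range(500):
    # Allocate the next test case to the partition whose reliability
    # estimate is currently most uncertain (largest posterior variance).
    k = int(np.argmax(posterior_variance(alpha, beta)))
    passed = rng.random() < true_rates[k]
    alpha[k] += passed
    beta[k] += 1 - passed

posterior_means = alpha / (alpha + beta)
print(np.round(posterior_means, 3))
```

Because the near-perfect partition's posterior tightens quickly, most of the budget flows to the noisier partition, which is the intuition behind sequential schemes beating predetermined allocations.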
Bayesian sequential estimation of the reliability of a parallel-series system
We give a risk-averse solution to the problem of estimating the reliability of a parallel-series system. We adopt a beta-binomial model for the component reliabilities and assume that the total sample size for the experiment is fixed. The allocation at the subsystem or component level may be random. Based on the sampling schemes for parallel and series systems separately, we propose a hybrid sequential scheme for the parallel-series system. Asymptotic optimality of the Bayes risk associated with quadratic loss is proved with the help of martingale convergence properties.
Comment: 12 pages
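For a single component under the beta-binomial model, the Bayes estimator under quadratic loss is the posterior mean, and its Bayes risk after n Bernoulli tests has the closed form a*b / ((a+b)(a+b+1)(a+b+n)). The check below verifies this identity by simulation; the prior parameters and sample size are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Beta(a, b) prior on a component reliability, n Bernoulli tests.
a, b, n = 2.0, 1.0, 20

# Closed-form Bayes risk of the posterior mean under quadratic loss.
closed_form = a * b / ((a + b) * (a + b + 1) * (a + b + n))

# Monte Carlo check: draw p from the prior, simulate n tests,
# and average the squared error of the posterior mean.
p = rng.beta(a, b, size=200_000)
x = rng.binomial(n, p)
post_mean = (a + x) / (a + b + n)
mc_risk = float(np.mean((post_mean - p) ** 2))

print(round(closed_form, 5), round(mc_risk, 5))
```

The 1/(a+b+n) decay of this risk is what a sequential allocation trades off across components when the total sample size is fixed.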
Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks
Future wireless networks have substantial potential to support a broad range of complex, compelling applications in both military and civilian fields, where users are able to enjoy high-rate, low-latency, low-cost and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making because of the complex, heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have achieved great success in supporting big data analytics, efficient parameter estimation and interactive decision making. Hence, in this article, we review the thirty-year history of ML, elaborating on supervised learning, unsupervised learning, reinforcement learning and deep learning. Furthermore, we investigate their employment in the compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radios (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services and scenarios of future wireless networks.
Comment: 46 pages, 22 figures
Power quality and electromagnetic compatibility: special report, session 2
The scope of Session 2 (S2) has been defined as follows by the Session Advisory Group and the Technical Committee: Power Quality (PQ), with the more general concept of electromagnetic compatibility (EMC) and with some related safety problems in electricity distribution systems.
Special focus is put on voltage continuity (supply reliability, problem of outages) and voltage quality (voltage level, flicker, unbalance, harmonics). This session will also look at electromagnetic compatibility (mains frequency to 150 kHz), electromagnetic interferences and electric and magnetic fields issues. Also addressed in this session are electrical safety and immunity concerns (lightning issues, step, touch and transferred voltages).
The aim of this special report is to present a synthesis of the present concerns in PQ&EMC, based on all selected papers of Session 2 and related papers from other sessions (152 papers in total). The report is divided into the following four blocks:
Block 1: Electric and Magnetic Fields, EMC, Earthing systems
Block 2: Harmonics
Block 3: Voltage Variation
Block 4: Power Quality Monitoring
Two Round Tables will be organised:
- Power quality and EMC in the Future Grid (CIGRE/CIRED WG C4.24, RT 13)
- Reliability Benchmarking - why should we do it? What should be done in the future? (RT 15)
Extraction of the underlying structure of systematic risk from non-Gaussian multivariate financial time series using independent component analysis: Evidence from the Mexican stock exchange
To address the problems related to multivariate non-Gaussianity of financial time series, i.e., unreliable results in the extraction of underlying risk factors via Principal Component Analysis or Factor Analysis, we use Independent Component Analysis (ICA) to estimate the pervasive risk factors that explain the returns on stocks in the Mexican Stock Exchange. The extracted systematic risk factors are considered within a statistical definition of the Arbitrage Pricing Theory (APT), which is tested by means of a two-stage econometric methodology. Using the extracted factors, we find evidence of a suitable estimation via ICA and some results in favor of the APT.
Peer reviewed. Postprint (published version).
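As a toy stand-in for this setting (not the paper's data or estimation pipeline), the sketch below mixes two non-Gaussian "risk factors" linearly and recovers them with a minimal hand-rolled FastICA (whitening plus tanh fixed-point iterations with symmetric decorrelation); the sources, mixing matrix, and iteration count are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two non-Gaussian sources (uniform and Laplace) mixed into observed "returns".
n = 5000
S = np.vstack([rng.uniform(-1, 1, n), rng.laplace(0, 1, n)])
A = np.array([[1.0, 0.6], [0.4, 1.0]])   # unknown mixing matrix
X = A @ S

# Center and whiten the observations.
X = X - X.mean(axis=1, keepdims=True)
cov = X @ X.T / n
d, E = np.linalg.eigh(cov)
Z = E @ np.diag(d ** -0.5) @ E.T @ X

# FastICA fixed-point updates: W <- E[g(WZ) Z^T] - diag(E[g'(WZ)]) W,
# with g = tanh, followed by symmetric decorrelation W <- (W W^T)^{-1/2} W.
W = rng.normal(size=(2, 2))
for _ in range(200):
    G = np.tanh(W @ Z)
    W = (G @ Z.T) / n - np.diag((1 - G ** 2).mean(axis=1)) @ W
    u, s, vt = np.linalg.svd(W)
    W = u @ vt

S_hat = W @ Z

# Up to sign and permutation, each recovered row should match one source.
C = np.corrcoef(np.vstack([S_hat, S]))[:2, 2:]
print(np.round(np.abs(C), 2))
```

PCA would only decorrelate the mixtures; the ICA rotation additionally exploits non-Gaussianity, which is why it suits the heavy-tailed returns the abstract describes.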