Modelling and managing reliability growth during the engineering design process
[This is a keynote speech presented at the 2nd International Conference on Design Engineering and Science, discussing modelling and managing reliability growth during the engineering design process.] Reliability is vital for the safe and efficient operation of systems. Decisions about the configuration and selection of parts within a system, and the development activities undertaken to prove the chosen design, influence the inherent reliability. Modelling provides a mechanism for explicating the relationship between engineering activities and statistical measures of reliability, so that useful estimates of reliability can be obtained. Reliability modelling should be aligned to support the decisions taken during design and development. We examine why and how a reliability growth model can be structured, the types of data required and available to populate it, the selection of relevant summary measures, and the process for updating estimates and feeding them back into design to support planning decisions. The modelling process described is informed by our theoretical background in management science and our practical experience of working with UK industry.
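The abstract does not commit to a particular model form; as a hedged illustration of how test data can populate a growth model, the sketch below fits the classic Crow-AMSAA (power-law) model to a set of invented failure times and reports the implied current MTBF. The data and the helper name `fit_crow_amsaa` are hypothetical, not the authors' method.

```python
import numpy as np

def fit_crow_amsaa(failure_times, total_time):
    """MLE for the Crow-AMSAA (power-law NHPP) growth model.

    Failure intensity: lambda(t) = lam * beta * t**(beta - 1);
    beta < 1 indicates reliability growth (decreasing failure rate).
    """
    t = np.asarray(failure_times, dtype=float)
    n = len(t)
    beta = n / np.sum(np.log(total_time / t))   # shape (growth) parameter
    lam = n / total_time ** beta                # scale parameter
    return lam, beta

# Hypothetical failure times (hours) observed during development testing
failures = [40.0, 95.0, 180.0, 310.0, 490.0, 720.0]
T = 1000.0                                      # total accumulated test time
lam, beta = fit_crow_amsaa(failures, T)
mtbf_now = 1.0 / (lam * beta * T ** (beta - 1)) # instantaneous MTBF at time T
print(f"beta = {beta:.2f} (growth if < 1), current MTBF ~ {mtbf_now:.0f} h")
```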
On Power Allocation for Distributed Detection with Correlated Observations and Linear Fusion
We consider a binary hypothesis testing problem in an inhomogeneous wireless sensor network, where a fusion center (FC) makes a global decision on the underlying hypothesis. We assume that sensors' observations are correlated Gaussian and that sensors are unaware of this correlation when making decisions. Sensors send their modulated decisions over fading channels, subject to individual and/or total transmit power constraints. For parallel-access channel (PAC) and multiple-access channel (MAC) models, we derive the modified deflection coefficient (MDC) of the test statistic at the FC with coherent reception. We propose a transmit power allocation scheme that maximizes the MDC of the test statistic under three different sets of transmit power constraints: a total power constraint, individual and total power constraints, and individual power constraints only. When analytical solutions to our constrained optimization problems are elusive, we discuss how these problems can be converted to convex ones. We study how correlation among sensors' observations, the reliability of local decisions, the communication channel model, channel qualities and transmit power constraints affect the reliability of the global decision and the power allocation of inhomogeneous sensors.
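The paper derives the MDC in closed form for its PAC/MAC and correlation models, but those expressions are not given in the abstract, so the sketch below is only a generic stand-in: it numerically maximizes a deflection-style ratio of squared mean shift to variance of the fused statistic over per-sensor powers under a total power budget. The channel gains, correlation matrix and noise variance are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
K = 5                                   # number of sensors
h = rng.uniform(0.3, 1.0, size=K)       # assumed channel gains to the FC
mu = rng.uniform(0.5, 1.5, size=K)      # assumed per-sensor mean shift under H1
# Assumed correlation structure of the sensor contributions (not the paper's model)
C = 0.3 * np.ones((K, K)) + 0.7 * np.eye(K)
P_total = 10.0                          # total transmit power budget

def neg_mdc(p):
    """Deflection-style objective: (mean shift)^2 / variance of the fused statistic."""
    a = np.sqrt(p) * h                  # effective amplitude of each sensor at the FC
    shift = a @ mu
    var = a @ C @ a + 1.0               # plus assumed FC receiver noise variance of 1
    return -(shift ** 2) / var

cons = ({"type": "ineq", "fun": lambda p: P_total - p.sum()},)  # sum(p) <= P_total
bounds = [(0.0, None)] * K
res = minimize(neg_mdc, x0=np.full(K, P_total / K), bounds=bounds, constraints=cons)
print("power allocation:", np.round(res.x, 2), " MDC:", -res.fun)
```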
New England Overview: A Guide to Large-Scale Energy Infrastructure Issues in 2015
The report outlines how regional electricity and natural gas infrastructure decisions are made. It examines the current proposals to expand electricity transmission lines and natural gas pipelines into New England as solutions to electricity and gas price and reliability issues, and briefly discusses the major implications of both.
Using a Bayesian averaging model for estimating the reliability of decisions in multimodal biometrics
The issue of reliable authentication is of increasing importance in modern society. Corporations, businesses and individuals often wish to restrict access to logical or physical resources to those with relevant privileges. A popular method for authentication is the use of biometric data, but the uncertainty that arises from the lack of uniqueness in biometrics has led to a great deal of effort being invested in multimodal biometrics. These multimodal biometric systems can give rise to large, distributed data sets that are used to decide the authenticity of a user. Bayesian model averaging (BMA) methodology has been used to allow experts to evaluate the reliability of decisions made in data mining applications. The use of decision tree (DT) models within the BMA methodology gives experts additional information on how decisions are made. In this paper we discuss how DT models within the BMA methodology can be used for authentication in multimodal biometric systems.
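The abstract does not spell out the BMA formulation; as a rough sketch only, the code below approximates model averaging over a small set of decision trees by weighting each tree with a pseudo-posterior proportional to its held-out likelihood, then reporting the averaged probability that a probe is genuine. The data set and all parameters are invented.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Hypothetical fused multimodal biometric scores (e.g. face + voice + fingerprint)
X, y = make_classification(n_samples=2000, n_features=6, n_informative=4, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# A small set of candidate decision-tree models of different complexity
trees = [DecisionTreeClassifier(max_depth=d, random_state=0).fit(X_tr, y_tr)
         for d in (2, 4, 6, 8)]

# Pseudo-posterior weights: proportional to each tree's held-out likelihood
log_liks = []
for tree in trees:
    p = np.clip(tree.predict_proba(X_val)[:, 1], 1e-9, 1 - 1e-9)
    log_liks.append(np.sum(y_val * np.log(p) + (1 - y_val) * np.log(1 - p)))
log_liks = np.array(log_liks)
w = np.exp(log_liks - log_liks.max())
w /= w.sum()

# Model-averaged probability that a new probe belongs to a genuine user
x_new = X_val[:1]
p_genuine = sum(wi * t.predict_proba(x_new)[0, 1] for wi, t in zip(w, trees))
print("model weights:", np.round(w, 3), " averaged P(genuine):", round(p_genuine, 3))
```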
Implementing Snow Load Monitoring to Control Reliability of a Stadium Roof
This contribution shows how monitoring can be used to control the reliability of a structure that does not comply with the requirements of the Eurocodes. A general methodology for obtaining cost-optimal decisions using limit state design, probabilistic reliability analysis and cost estimates is applied in a full-scale case study dealing with the roof of a stadium located in Northern Italy. The results demonstrate the potential of monitoring systems and probabilistic reliability analysis to support decisions regarding safety measures such as snow removal or temporary closure of the stadium.
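The paper's probabilistic model is not reproduced in the abstract; the sketch below is a hedged illustration of the general idea only: a limit state g = R - S for roof resistance against snow load, with the load conditioned on a monitored value, a Monte Carlo estimate of the failure probability, and a comparison against a target to trigger a safety measure. All distributions, the monitored value and the target are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000

# Assumed lognormal roof resistance (kN/m^2)
R = rng.lognormal(mean=np.log(3.0), sigma=0.10, size=N)

# Snow load: monitored current load plus an assumed Gumbel forecast increment
s_monitored = 1.6                                    # kN/m^2 from sensors (hypothetical)
increment = rng.gumbel(loc=0.3, scale=0.15, size=N)  # additional load until next melt
S = s_monitored + np.clip(increment, 0.0, None)

g = R - S                                            # limit state: failure when g < 0
p_f = np.mean(g < 0.0)
p_target = 1e-4                                      # assumed target per decision interval

print(f"P(failure) = {p_f:.2e}")
if p_f > p_target:
    print("-> trigger safety measure (snow removal / temporary closure)")
```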
Cost-benefit modelling for reliability growth
Decisions during the reliability growth development process of engineering equipment involve trade-offs between cost and risk. However slight, there is always a chance that an item of equipment will not function as planned during its specified life, and consequently the producer can incur a financial penalty. To date, reliability growth research has focussed on the development of models to estimate the rate of failure from test data. Such models are used to support decisions about the effectiveness of options to improve reliability. The extension of reliability growth models to incorporate the financial costs associated with 'unreliability' has been much neglected. In this paper, we extend a Bayesian reliability growth model to include cost analysis. The rationale of the stochastic process underpinning the growth model and the cost structures are described. The ways in which this model can be used to support cost-benefit analysis during product development are discussed and illustrated through a simple case.
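The paper's Bayesian growth model and cost structure are not detailed in the abstract; as a hedged illustration of the cost-risk trade-off, the sketch below balances testing cost against the expected in-service failure penalty using a simple power-law intensity frozen at the end of testing. All parameter values and costs are invented.

```python
import numpy as np

# Hypothetical Crow-AMSAA style failure intensity reached after t hours of testing
lam, beta = 0.5, 0.6                    # assumed growth-model parameters

def expected_total_cost(t_test, service_hours=5000.0,
                        cost_per_test_hour=200.0, penalty_per_failure=5_000.0):
    """Testing cost plus expected penalty for in-service failures.

    The in-service intensity is frozen at the level reached at the end of
    testing (a simplifying assumption, not the paper's model).
    """
    intensity = lam * beta * t_test ** (beta - 1.0)   # failures/hour at end of test
    expected_failures = intensity * service_hours
    return cost_per_test_hour * t_test + penalty_per_failure * expected_failures

candidates = [200.0, 500.0, 1000.0, 2000.0, 4000.0]   # candidate test durations (h)
costs = {t: round(expected_total_cost(t)) for t in candidates}
best = min(costs, key=costs.get)
print(costs, "-> cheapest plan: test for", best, "hours")
```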
Solution of geometrically nonlinear statics problems by the p-version of the finite element method
This project is concerned with the possibility of using computers for the simulation of structural systems with the same degree of reliability as full-scale physical experiments. Reliable numerical simulation will make it possible to reduce the costs of engineering and improve the quality of engineering decisions based on computed information. An error of idealization is the error between the actual physical quantities on which engineering decisions are based (e.g., maximum principal stress, first natural frequency, etc.) and the same data corresponding to the exact solution of the mathematical model. An error of discretization is the error between the quantities of interest corresponding to the exact and approximate solutions of a mathematical model. A high degree of reliability can be achieved in numerical simulation only if both the errors of idealization and the errors of discretization can be shown to be small.
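As an illustration of the discretization-error idea only (not the project's code), the sketch below solves a one-dimensional model problem with a single-element p-version Galerkin discretisation and shows the error in a quantity of interest shrinking as the polynomial order p increases.

```python
import numpy as np

# Model problem: -u''(x) = pi^2 sin(pi x) on (0, 1), u(0) = u(1) = 0.
# Exact solution u(x) = sin(pi x); quantity of interest Q = u(0.5) = 1.
def f(x):
    return np.pi ** 2 * np.sin(np.pi * x)

Q_exact = 1.0

# Gauss-Legendre quadrature mapped to (0, 1)
xg, wg = np.polynomial.legendre.leggauss(30)
xg, wg = 0.5 * (xg + 1.0), 0.5 * wg

def solve_p_version(p):
    """Single-element Galerkin solution with polynomial basis x(1-x)x^(k-1), k = 1..p."""
    def phi(k, x):
        return x * (1.0 - x) * x ** (k - 1)
    def dphi(k, x):
        return k * x ** (k - 1) - (k + 1) * x ** k
    K = np.array([[np.sum(wg * dphi(i, xg) * dphi(j, xg)) for j in range(1, p + 1)]
                  for i in range(1, p + 1)])                                  # stiffness
    b = np.array([np.sum(wg * f(xg) * phi(i, xg)) for i in range(1, p + 1)])  # load
    c = np.linalg.solve(K, b)                                                 # coefficients
    return sum(ci * phi(k, 0.5) for ci, k in zip(c, range(1, p + 1)))

for p in (1, 2, 4, 6, 8):
    Q_p = solve_p_version(p)
    print(f"p = {p}:  Q_p = {Q_p:.8f}   discretization error = {abs(Q_p - Q_exact):.1e}")
```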
Combining Neuro-Fuzzy Classifiers for Improved Generalisation and Reliability
In this paper a combination of neuro-fuzzy classifiers for improved classification performance and reliability is considered. A general fuzzy min-max (GFMM) classifier with an agglomerative learning algorithm is used as the main building block. An alternative approach to combining individual classifier decisions, involving combination at the classifier model level, is proposed. The complexity and transparency of the resulting classifier are comparable with classifiers generated during a single cross-validation procedure, while the improved classification performance and reduced variance are comparable to an ensemble of classifiers with combined (averaged/voted) decisions. We also illustrate how combining at the model level can be used to speed up the training of GFMM classifiers for large data sets.
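The abstract does not describe the combination mechanics in detail; the toy sketch below (a stand-in, not the GFMM algorithm, with invented hyperboxes) contrasts decision-level combination, where each trained classifier votes, with model-level combination, where the hyperboxes of the individual classifiers are pooled into a single classifier.

```python
import numpy as np

def membership(x, V, W, gamma=4.0):
    """Fuzzy membership of point x in hyperboxes with min points V and max points W."""
    below = np.clip(V - x, 0.0, None)            # how far x falls below each box
    above = np.clip(x - W, 0.0, None)            # how far x exceeds each box
    return np.min(1.0 - np.clip(gamma * np.maximum(below, above), 0.0, 1.0), axis=1)

def predict(x, V, W, labels):
    return labels[int(np.argmax(membership(x, V, W)))]

# Hypothetical hyperboxes produced by two classifiers trained on different folds
V1 = np.array([[0.0, 0.0], [0.6, 0.6]]); W1 = np.array([[0.4, 0.4], [1.0, 1.0]])
y1 = np.array([0, 1])
V2 = np.array([[0.1, 0.0], [0.5, 0.55]]); W2 = np.array([[0.45, 0.5], [0.9, 1.0]])
y2 = np.array([0, 1])

x = np.array([0.48, 0.55])

# Decision-level combination: each trained model votes on the class
votes = [predict(x, V1, W1, y1), predict(x, V2, W2, y2)]

# Model-level combination: pool all hyperboxes into one classifier
V, W, y = np.vstack([V1, V2]), np.vstack([W1, W2]), np.concatenate([y1, y2])
combined = predict(x, V, W, y)

print("individual decisions:", votes, " model-level combined decision:", combined)
```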
Confidence in assessment decisions when using ICT
The central question addressed in this paper is: how can teachers and schools have confidence in their assessment decisions when using information and communication technologies (ICT)? The answer centres on what makes quality assessment. Assessing and evaluating children's achievement and progress is critical to the development of sound curriculum programmes that focus on student outcomes. With the increasing use of ICT in schools and classrooms for a range of assessment purposes such as recording, data analysis and online activities, teachers and school leaders must be assessment capable in order to make informed decisions about assessment design, selection and modification that utilises ICT. Based on an examination of assessment purpose and the three principles of quality assessment (validity, reliability and manageability), this paper offers guidelines for classroom teachers, those with responsibility for student achievement, and those who lead ICT policy and practice in schools to become critical consumers of ICT-based assessment tools, strategies and evidence. Vignettes of assessment practice using ICT are used to illustrate sound school and classroom practices in relation to validity, reliability and manageability. Drawing from the work of assessment writers such as Crooks, Sutton, and Darr, the guidelines will assist teachers in the effective use of ICT for both formal and informal information gathering, as well as for the analysis and interpretation of information for summative and formative purposes. This knowledge is needed to underpin teacher confidence in their assessment decisions when using ICT towards 'best fit' for purpose.
