Using graphical models and multi-attribute utility theory for probabilistic uncertainty handling in large systems, with application to nuclear emergency management
Although many decision-making problems involve uncertainty, uncertainty handling within large decision support systems (DSSs) is challenging. One domain where uncertainty handling is critical is emergency response management, in particular nuclear emergency response, where decision making takes place in an uncertain, dynamically changing environment. Assimilation and analysis of data can help to reduce these uncertainties, but it is critical to do this in an efficient and defensible way. After briefly introducing the structure of a typical DSS for nuclear emergencies, the paper sets up a theoretical structure that enables a formal Bayesian decision analysis to be performed for environments like this within a DSS architecture. In such probabilistic DSSs, many input conditional probability distributions are provided by different sets of experts overseeing different aspects of the emergency. These probabilities are then used by the decision maker (DM) to find her optimal decision. We demonstrate in this paper that unless due care is taken in such a composite framework, coherence and rationality may be compromised in a sense made explicit below. The technology we describe here builds a framework around which Bayesian data updating can be performed in a modular way, ensuring both coherence and efficiency, and provides sufficient unambiguous information to enable the DM to discover her expected-utility-maximizing policy.
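To make the decision step concrete, here is a minimal sketch of expected-utility maximization over a discrete decision set, assuming expert-supplied probabilities; all variable names, probabilities, and utilities are invented for illustration and are not from the paper:

    # Minimal sketch: pick the expected-utility-maximizing decision given
    # expert-supplied probabilities. All names and numbers are invented.
    p_severity = {"low": 0.7, "medium": 0.2, "high": 0.1}  # elicited from experts

    utility = {  # DM's utility for each (decision, severity) pair, scaled to [0, 1]
        ("no_action", "low"): 1.0, ("no_action", "medium"): 0.4, ("no_action", "high"): 0.0,
        ("shelter", "low"): 0.8, ("shelter", "medium"): 0.7, ("shelter", "high"): 0.5,
        ("evacuate", "low"): 0.6, ("evacuate", "medium"): 0.6, ("evacuate", "high"): 0.9,
    }

    def expected_utility(decision):
        return sum(p * utility[(decision, s)] for s, p in p_severity.items())

    decisions = ["no_action", "shelter", "evacuate"]
    best = max(decisions, key=expected_utility)
    print(best, round(expected_utility(best), 3))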
Expert Elicitation for Reliable System Design
This paper reviews the role of expert judgement to support reliability assessments within the systems engineering design process. Generic design processes are described to give the context, and a discussion is given about the nature of the reliability assessments required in the different systems engineering phases. It is argued that, as far as meeting reliability requirements is concerned, the whole design process is more akin to a statistical control process than to a straightforward statistical problem of assessing an unknown distribution. This leads to features of the expert judgement problem in the design context which are substantially different from those seen, for example, in risk assessment. In particular, the role of experts in problem structuring and in developing failure mitigation options is much more prominent, and there is a need to take into account the reliability potential for future mitigation measures downstream in the system life cycle. An overview is given of the stakeholders typically involved in large-scale systems engineering design projects, and this is used to argue the need for methods that expose potential judgemental biases in order to generate analyses that can be said to provide rational consensus about uncertainties. Finally, a number of key points are developed with the aim of moving toward a framework that provides a holistic method for tracking reliability assessment through the design process. Comment: This paper is commented in [arXiv:0708.0285], [arXiv:0708.0287], [arXiv:0708.0288]; rejoinder in [arXiv:0708.0293]. Published at http://dx.doi.org/10.1214/088342306000000510 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics.
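A common baseline for the kind of expert combination discussed here is a weighted linear opinion pool; the sketch below assumes calibration-derived weights, and all names and numbers are illustrative:

    # Sketch of a weighted linear opinion pool combining expert assessments
    # of a failure probability. Weights (e.g., from calibration scoring) and
    # estimates are illustrative placeholders.
    experts = [
        {"name": "A", "weight": 0.5, "p_fail": 0.02},
        {"name": "B", "weight": 0.3, "p_fail": 0.05},
        {"name": "C", "weight": 0.2, "p_fail": 0.01},
    ]

    assert abs(sum(e["weight"] for e in experts) - 1.0) < 1e-9  # weights sum to 1
    pooled = sum(e["weight"] * e["p_fail"] for e in experts)
    print(f"pooled failure probability: {pooled:.3f}")  # 0.027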
An Integrated Assessment approach to linking biophysical modelling and economic valuation tools
Natural resource management (NRM) typically involves complex decisions that affect a variety of stakeholder values. Efficient NRM, which achieves the greatest net environmental, social and financial benefits, needs to integrate the assessment of environmental impacts with the costs and benefits of investment. Integrated assessment (IA) is one approach that incorporates the several dimensions of catchment NRM by considering multiple issues and knowledge from various disciplines and stakeholders. Despite the need for IA, there are few studies that integrate biophysical modelling tools with economic valuation. In this paper, we demonstrate how economic non-market valuation tools can be used to support an IA of catchment NRM changes. We develop a Bayesian network model that integrates: a process-based water quality model; ecological assessments of native riparian vegetation; estimates of management costs; and non-market (intangible) values of changes in riparian vegetation. This modelling approach illustrates how information from different sources can be integrated in one framework to evaluate the environmental and economic impacts of NRM actions. It also shows the uncertainties associated with the estimated welfare effects. By estimating the marginal social costs and benefits, a cost-benefit analysis of alternative management interventions can be performed, bringing more economic rationality to NRM decisions. Keywords: Bayesian networks, bio-economic modelling, catchment management, cost-benefit analysis, environmental values, integrated assessment and modelling, non-market valuation, riparian vegetation, Environmental Economics and Policy, Research Methods/Statistical Methods
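As a toy illustration of this kind of integration, the sketch below chains a hypothetical management action through a vegetation-condition distribution to an expected net benefit; the structure, probabilities, costs, and dollar values are invented, not the paper's model:

    # Toy chain: management action -> vegetation condition -> net benefit.
    # All probabilities, costs, and dollar values are invented placeholders.
    p_veg = {  # P(vegetation condition | action), e.g. from a biophysical model
        "revegetate": {"good": 0.6, "poor": 0.4},
        "do_nothing": {"good": 0.2, "poor": 0.8},
    }
    cost = {"revegetate": 80_000, "do_nothing": 0}   # management cost ($)
    benefit = {"good": 300_000, "poor": 50_000}      # non-market value ($)

    def expected_net_benefit(action):
        gross = sum(p * benefit[v] for v, p in p_veg[action].items())
        return gross - cost[action]

    for action in ("revegetate", "do_nothing"):
        print(action, expected_net_benefit(action))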
A Survey of Prediction and Classification Techniques in Multicore Processor Systems
In multicore processor systems, being able to accurately predict the future provides new optimization opportunities which otherwise could not be exploited. For example, an oracle able to predict a certain application's behavior running on a smart phone could direct the power manager to switch to appropriate dynamic voltage and frequency scaling modes that would guarantee minimum levels of desired performance while saving energy consumption and thereby prolonging battery life. Using predictions enables systems to become proactive rather than continue to operate in a reactive manner. This prediction-based proactive approach has become increasingly popular in the design and optimization of integrated circuits and of multicore processor systems. Prediction has evolved from simple forecasting to sophisticated machine-learning-based prediction and classification that learns from existing data, employs data mining, and predicts future behavior. This can be exploited by novel optimization techniques that span all layers of the computing stack. In this survey paper, we present a discussion of the most popular techniques on prediction and classification in the general context of computing systems, with emphasis on multicore processors. The paper is far from comprehensive, but it will help the reader interested in employing prediction in the optimization of multicore processor systems.
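A minimal sketch of the prediction-driven DVFS idea described above, using an exponentially weighted moving average as the predictor; the smoothing factor, thresholds, modes, and utilization trace are invented placeholders:

    # Sketch: EWMA utilization predictor steering a DVFS mode choice.
    # The smoothing factor, thresholds, and trace are invented placeholders.
    ALPHA = 0.5  # higher = react faster to recent samples

    def choose_mode(predicted_util):
        if predicted_util > 0.75:
            return "high_freq"
        if predicted_util > 0.40:
            return "mid_freq"
        return "low_freq"

    prediction = 0.0
    for observed in [0.2, 0.3, 0.8, 0.9, 0.5, 0.1]:  # per-interval CPU utilization
        prediction = ALPHA * observed + (1 - ALPHA) * prediction
        print(f"predicted={prediction:.2f} -> {choose_mode(prediction)}")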
Prediction of survival probabilities with Bayesian Decision Trees
Practitioners use Trauma and Injury Severity Score (TRISS) models for predicting the survival probability of an injured patient. The accuracy of TRISS predictions is acceptable for patients with up to three typical injuries, but unacceptable for patients with a larger number of injuries or with atypical injuries. Because it is based on a regression model, the TRISS methodology does not provide the predictive density required for accurate assessment of risk; moreover, the regression model is difficult to interpret. We therefore consider Bayesian inference for estimating the predictive distribution of survival. The inference is based on decision tree models which recursively split data along explanatory variables, so practitioners can understand these models. We propose a Bayesian method for estimating the predictive density and show that it outperforms the TRISS method in terms of both goodness of fit and classification accuracy. The developed method has been made available for evaluation purposes as a stand-alone application.
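Schematically, the predictive density comes from averaging over a posterior sample of trees; in the sketch below each "tree" is a stand-in function, and all splits and survival rates are invented:

    # Sketch: Bayesian model averaging over decision trees. Each sampled tree
    # routes a patient to a leaf and returns that leaf's survival rate; the
    # predictive probability averages over the posterior sample of trees.
    # The trees, splits, and rates below are invented stand-ins.
    def tree_a(patient):
        return 0.95 if patient["injuries"] <= 3 else 0.60

    def tree_b(patient):
        return 0.90 if patient["age"] < 55 else 0.70

    def tree_c(patient):
        return 0.85

    posterior_trees = [tree_a, tree_b, tree_c]

    def predictive_survival(patient):
        probs = [tree(patient) for tree in posterior_trees]
        mean = sum(probs) / len(probs)
        spread = max(probs) - min(probs)  # crude summary of predictive uncertainty
        return mean, spread

    print(predictive_survival({"injuries": 5, "age": 60}))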
An Overview on Application of Machine Learning Techniques in Optical Networks
Today's telecommunication networks have become sources of enormous amounts of widely heterogeneous data. This information can be retrieved from network traffic traces, network alarms, signal quality indicators, users' behavioral data, etc. Advanced mathematical tools are required to extract meaningful information from these data and to take decisions pertaining to the proper functioning of the networks. Among these mathematical tools, Machine Learning (ML) is regarded as one of the most promising methodological approaches to perform network-data analysis and enable automated network self-configuration and fault management. The adoption of ML techniques in the field of optical communication networks is motivated by the unprecedented growth of network complexity faced by optical networks in the last few years. Such complexity increase is due to the introduction of a huge number of adjustable and interdependent system parameters (e.g., routing configurations, modulation format, symbol rate, coding schemes, etc.) that are enabled by the usage of coherent transmission/reception technologies, advanced digital signal processing and compensation of nonlinear effects in optical fiber propagation. In this paper we provide an overview of the application of ML to optical communications and networking. We classify and survey relevant literature dealing with the topic, and we also provide an introductory tutorial on ML for researchers and practitioners interested in this field. Although a good number of research papers have recently appeared, the application of ML to optical networks is still in its infancy: to stimulate further work in this area, we conclude the paper by proposing new possible research directions.
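As a toy instance of the network-data analysis surveyed here, the sketch below fits a classifier that flags degraded lightpaths from signal-quality indicators; the features, labels, and values are synthetic, and scikit-learn is assumed to be available:

    # Toy classifier flagging degraded lightpaths from signal-quality
    # indicators (OSNR in dB, pre-FEC BER). Data are synthetic stand-ins.
    from sklearn.linear_model import LogisticRegression

    X = [
        [22.0, 1e-5], [21.5, 2e-5], [20.8, 5e-5],  # healthy lightpaths
        [14.2, 9e-3], [13.5, 2e-2], [15.0, 6e-3],  # degraded lightpaths
    ]
    y = [0, 0, 0, 1, 1, 1]  # 1 = degraded

    clf = LogisticRegression().fit(X, y)
    print(clf.predict_proba([[16.0, 1e-3]]))  # [P(healthy), P(degraded)]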
A foundation for machine learning in design
This paper presents a formalism for considering the issues of learning in design. A foundation for machine learning in design (MLinD) is defined so as to provide answers to basic questions on learning in design, such as "What types of knowledge can be learnt?", "How does learning occur?", and "When does learning occur?". Five main elements of MLinD are presented: the input knowledge, knowledge transformers, output knowledge, goals/reasons for learning, and learning triggers. Using this foundation, published systems in MLinD were reviewed. The systematic review presents a basis for validating the presented foundation. The paper concludes that considerable work remains to be carried out in order to fully formalize the foundation of MLinD.
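One way to picture the five elements is as a simple record; the dataclass rendering below is our own illustration, not the paper's notation:

    # Illustrative rendering of the five MLinD elements as a record.
    from dataclasses import dataclass

    @dataclass
    class MLinDInstance:
        input_knowledge: list[str]         # what the learner starts from
        knowledge_transformers: list[str]  # processes that derive new knowledge
        output_knowledge: list[str]        # types of knowledge that can be learnt
        goals: list[str]                   # reasons for learning
        triggers: list[str]                # when learning occurs

    example = MLinDInstance(
        input_knowledge=["past design cases"],
        knowledge_transformers=["induction over case features"],
        output_knowledge=["design rules"],
        goals=["reduce redesign effort"],
        triggers=["a new requirement arrives"],
    )
    print(example)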
Trust in Vehicle-to-Vehicle Communication
In a traditional Pedestrian Automatic Emergency Braking (PAEB) system, a vehicle equipped with onboard sensors such as radar, cameras, and infrared detects pedestrians and alerts the driver and/or automatically takes action to prevent a vehicle-pedestrian collision. In some situations, a vehicle may not be able to detect a pedestrian due to blind spots. Such a vehicle could benefit from the sensor data of neighboring vehicles in making such safety-critical decisions. We propose a trust model for ensuring that shared data are valid and trustworthy for use in making safety-critical decisions. Simulation results of the proposed trust model show promise.
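A minimal sketch of one possible trust computation of this kind: each neighbor's pedestrian report is weighted by a maintained trust score, and the fused belief gates the safety action; the scores, reports, and threshold are our own illustration, not the paper's model:

    # Sketch: fuse neighbor pedestrian reports weighted by per-vehicle trust
    # scores; act only if the fused belief clears a threshold. Scores,
    # reports, and the threshold are invented placeholders.
    trust = {"veh_1": 0.9, "veh_2": 0.4, "veh_3": 0.8}  # learned trust in [0, 1]
    reports = {"veh_1": 1, "veh_2": 0, "veh_3": 1}      # 1 = "pedestrian detected"

    belief = sum(trust[v] * r for v, r in reports.items()) / sum(trust.values())
    print(f"fused belief: {belief:.2f}")

    if belief > 0.6:  # safety-critical threshold
        print("engage PAEB: brake and alert driver")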