Uncertainty explicit assessment of off-the-shelf software: A Bayesian approach
Assessment of software COTS components is an essential part of component-based software development. Poorly chosen components may lead to solutions of low quality that are difficult to maintain. The assessment may be based on incomplete knowledge about the COTS component itself and about other aspects (e.g. the vendor's credentials) that may affect the decision to select particular COTS component(s). We argue in favor of assessment methods in which uncertainty is explicitly represented ('uncertainty explicit' methods) using probability distributions. We provide details of a Bayesian model that can be used to capture the uncertainties in the simultaneous assessment of two attributes, thus also capturing the dependencies that might exist between them. We also provide empirical data from the use of this method for the assessment of off-the-shelf database servers, which illustrate the advantages of 'uncertainty explicit' methods over conventional methods of COTS component assessment, which assume that at the end of the assessment the values of the attributes become known with certainty.
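The joint, uncertainty-explicit assessment described above can be sketched with conjugate updating. The example below is an invented illustration, not the paper's actual model: a Dirichlet prior over the four joint outcomes of two binary attributes is updated by counting test results, so the posterior captures dependence between the attributes rather than treating them as independently known.

```python
# Hypothetical illustration of 'uncertainty explicit' joint assessment of two
# binary attributes of a COTS component (e.g. "passes performance test",
# "passes reliability test"). A Dirichlet prior over the four joint outcomes
# captures dependence between the attributes; observed test results update it
# by simple counting (Dirichlet-multinomial conjugacy). All data are invented.

OUTCOMES = [(0, 0), (0, 1), (1, 0), (1, 1)]  # (attribute A ok?, attribute B ok?)

def posterior_mean(prior_counts, observations):
    """Posterior mean of the joint distribution over the four outcomes."""
    counts = dict(prior_counts)
    for obs in observations:
        counts[obs] += 1
    total = sum(counts.values())
    return {o: counts[o] / total for o in OUTCOMES}

# Uniform Dirichlet(1,1,1,1) prior; illustrative, invented test data.
prior = {o: 1.0 for o in OUTCOMES}
data = [(1, 1)] * 8 + [(1, 0)] * 1 + [(0, 1)] * 1

post = posterior_mean(prior, data)
p_a = post[(1, 0)] + post[(1, 1)]      # marginal P(A ok)
p_b = post[(0, 1)] + post[(1, 1)]      # marginal P(B ok)
p_both = post[(1, 1)]                  # joint P(both ok)
# Dependence shows up as p_both differing from p_a * p_b.
```

The point of the sketch is that the joint posterior carries strictly more information than the two marginals: a conventional method that reports only point values for each attribute cannot express the dependence visible in `p_both`.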
Bayesian belief network model for the safety assessment of nuclear computer-based systems
The formalism of Bayesian Belief Networks (BBNs) is being increasingly applied to probabilistic modelling and decision problems in a widening variety of fields. This method provides the advantages of a formal probabilistic model, presented in an easily assimilated visual form, together with the ready availability of efficient computational methods and tools for exploring model consequences. Here we formulate one BBN model of a part of the safety assessment task for computer- and software-based nuclear systems important to safety. Our model is developed from the perspective of an independent safety assessor who is presented with the task of evaluating evidence from disparate sources: the requirement specification and verification documentation of the system licensee and of the system manufacturer; the previous reputation of the various participants in the design process; knowledge of commercial pressures; information about tools and resources used; and many other sources. Based on these multiple sources of evidence, the independent assessor is ultimately obliged to make a decision as to whether or not the system should be licensed for operation within a particular nuclear plant environment. Our BBN model is a contribution towards a formal model of this decision problem. We restrict attention to a part of this problem: the safety analysis of the Computer System Specification documentation. As with other BBN applications, we see this modelling activity as having several potential benefits. It employs a rigorous formalism as a focus for examination, discussion, and criticism of arguments about safety. It obliges the modeller to be very explicit about assumptions concerning probabilistic dependencies, correlations, and causal relationships. It allows sensitivity analyses to be carried out.
Ultimately we envisage this BBN, or some later development of it, forming part of a larger model, which might well take the form of a larger BBN model, covering all sources of evidence about pre-operational life-cycle stages. This could provide an integrated model of all aspects of the task of the independent assessor, leading up to the final judgement about system safety in a particular context. We expect to offer some results of this further work later in the DeVa project.
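The assessor's evidence-combination task described above can be illustrated with exact inference by enumeration in a toy BBN. The three-node network below is invented for the sketch (it is not the paper's model, and all probability values are made up): process quality influences specification quality, which influences the outcome of an independent documentation review; observing the review outcome lets the assessor update belief in specification quality.

```python
# Illustrative three-node BBN in the spirit of the safety-assessment model
# described above. Node names and all probability values are invented:
# process quality -> specification quality -> documentation review outcome.

P_PROC = {"good": 0.7, "poor": 0.3}                 # prior on process quality
P_SPEC = {"good": {"good": 0.9, "poor": 0.1},       # P(spec | process)
          "poor": {"good": 0.4, "poor": 0.6}}
P_REVIEW = {"good": {"pass": 0.95, "fail": 0.05},   # P(review | spec)
            "poor": {"pass": 0.30, "fail": 0.70}}

def joint(proc, spec, review):
    """Full joint probability, following the chain proc -> spec -> review."""
    return P_PROC[proc] * P_SPEC[proc][spec] * P_REVIEW[spec][review]

def posterior_spec(review):
    """P(spec quality | observed review outcome), by enumeration over proc."""
    scores = {}
    for spec in ("good", "poor"):
        scores[spec] = sum(joint(proc, spec, review) for proc in ("good", "poor"))
    z = sum(scores.values())
    return {s: v / z for s, v in scores.items()}

post = posterior_spec("pass")   # evidence: the independent review passed
```

Even this toy version shows the benefit the abstract claims: the modeller must state every conditional dependence explicitly, and sensitivity analysis amounts to perturbing the tables and re-running `posterior_spec`.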
Beyond subjective and objective in statistics
We argue that the words "objectivity" and "subjectivity" in statistics discourse are used in a mostly unhelpful way, and we propose to replace each of them with broader collections of attributes, with objectivity replaced by transparency, consensus, impartiality, and correspondence to observable reality, and subjectivity replaced by awareness of multiple perspectives and context dependence. The advantage of these reformulations is that the replacement terms do not oppose each other. Instead of debating over whether a given statistical method is subjective or objective (or normatively debating the relative merits of subjectivity and objectivity in statistical practice), we can recognize desirable attributes such as transparency and acknowledgment of multiple perspectives as complementary goals. We demonstrate the implications of our proposal with recent applied examples from pharmacology, election polling, and socioeconomic stratification.
Development of Interactive Support Systems for Multiobjective Decision Analysis under Uncertainty
This paper presents an interactive multiobjective decision analysis support system, called MIDASS, a newly developed interactive computer program for the strategic use of expected utility theory. Decision analysis based on the expected utility hypothesis is an established prescriptive approach for supporting business decisions under uncertainty, embodying an effective procedure for seeking the best choice among alternatives. It is usually difficult, however, for the decision maker (DM) to apply it strategically in realistic business situations. MIDASS provides an integrated interactive computer system for supporting multiobjective decision analysis under uncertainty, which assists the DM in deriving an acceptable business solution through the construction of his/her expected multiattribute utility function (EMUF). Keywords: expected multiobjective decision analysis, MIDASS, expected multiattribute utility function (EMUF), intelligent decision support systems (IDSS).
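The prescriptive core that a tool like MIDASS automates can be sketched in a few lines. The additive utility form, weights, attributes, and alternatives below are all invented for illustration (real multiattribute utility functions need not be additive, and MIDASS elicits them interactively): each alternative is a lottery over outcomes on two attributes, and alternatives are ranked by expected multiattribute utility.

```python
# Toy sketch of ranking risky business alternatives by expected multiattribute
# utility (EMUF). The additive form, the linear single-attribute utilities,
# the weights, and the alternatives are invented assumptions for illustration.

def u_cost(cost):          # lower cost is better; assumed linear on [0, 100]
    return 1.0 - cost / 100.0

def u_share(share):        # higher market share is better; linear on [0, 1]
    return share

WEIGHTS = (0.6, 0.4)       # assumed additive scaling constants, summing to 1

def emuf(lottery):
    """Expected multiattribute utility of a lottery [(prob, cost, share), ...]."""
    return sum(p * (WEIGHTS[0] * u_cost(c) + WEIGHTS[1] * u_share(s))
               for p, c, s in lottery)

alternatives = {
    "conservative": [(1.0, 40.0, 0.30)],                     # a sure outcome
    "risky":        [(0.5, 20.0, 0.60), (0.5, 80.0, 0.10)],  # a 50/50 lottery
}
best = max(alternatives, key=lambda a: emuf(alternatives[a]))
```

With these invented numbers the sure alternative wins; the interactive part of a system like MIDASS lies precisely in eliciting the utility functions and weights from the DM rather than assuming them as done here.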
Solving multiple-criteria R&D project selection problems with a data-driven evidential reasoning rule
In this paper, a likelihood based evidence acquisition approach is proposed to acquire evidence from experts' assessments as recorded in historical datasets. Then a data-driven evidential reasoning rule based model is introduced to the R&D project selection process by combining multiple pieces of evidence with different weights and reliabilities. As a result, the total belief degrees and the overall performance can be generated for ranking and selecting projects. Finally, a case study on R&D project selection for the National Science Foundation of China is conducted to show the effectiveness of the proposed model. The data-driven evidential reasoning rule based model for project evaluation and selection (1) utilizes experimental data to represent experts' assessments by using belief distributions over the set of final funding outcomes, and through these historical statistics it helps experts and applicants to understand the funding probability of a given assessment grade, (2) implies the mapping relationships between the evaluation grades and the final funding outcomes by using historical data, and (3) provides a way to make fair decisions by taking experts' reliabilities into account. In the data-driven evidential reasoning rule based model, experts play different roles in accordance with their reliabilities, which are determined by their previous review track records, and the selection process is made interpretable and fairer. The newly proposed model reduces the time-consuming panel review work for both managers and experts, and significantly improves the efficiency and quality of the project selection process. Although the model is demonstrated for project selection in the NSFC, it can be generalized to other funding agencies or industries. (Comment: 20 pages, forthcoming in International Journal of Project Management, 2019.)
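The idea of fusing experts' belief distributions while discounting for reliability can be sketched with a simplified stand-in for the evidential reasoning rule: Shafer discounting followed by Dempster's rule of combination. This is not the paper's ER rule (which additionally separates evidence weight from reliability); the grades, reliabilities, and belief values below are invented.

```python
# Simplified sketch of combining graded expert assessments, in the spirit of
# the evidential reasoning model above. Each expert's assessment is a belief
# distribution over funding grades, discounted by the expert's reliability
# (Shafer discounting) and fused with Dempster's rule. This is a stand-in for
# the full ER rule; all grades, reliabilities, and beliefs are invented.

GRADES = ("fund", "defer", "reject")

def discount(beliefs, reliability):
    """Move (1 - reliability) of the expert's mass to 'unknown' (the frame)."""
    m = {g: reliability * b for g, b in beliefs.items()}
    m["unknown"] = 1.0 - reliability * sum(beliefs.values())
    return m

def combine(m1, m2):
    """Dempster's rule for singleton grades plus the 'unknown' frame mass."""
    fused = {}
    for g in GRADES:
        fused[g] = (m1[g] * m2[g] + m1[g] * m2["unknown"]
                    + m1["unknown"] * m2[g])
    fused["unknown"] = m1["unknown"] * m2["unknown"]
    conflict = sum(m1[a] * m2[b] for a in GRADES for b in GRADES if a != b)
    z = 1.0 - conflict          # renormalize away the conflicting mass
    return {k: v / z for k, v in fused.items()}

expert1 = discount({"fund": 0.7, "defer": 0.2, "reject": 0.1}, reliability=0.9)
expert2 = discount({"fund": 0.5, "defer": 0.4, "reject": 0.1}, reliability=0.6)
fused = combine(expert1, expert2)
```

Note how the less reliable expert's opinion moves the fused result less: discounting shifts that expert's mass to "unknown", which is neutral under the combination.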
Uncertainty explicit assessment of off-the-shelf software: Selection of an optimal diverse pair
Assessment of software COTS components is an essential part of component-based software development. Sub-optimal selection of components may lead to solutions of low quality. The assessment is based on incomplete knowledge about the COTS components themselves and about other aspects that may affect the choice (e.g. the vendor's credentials). We argue in favor of assessment methods in which uncertainty is explicitly represented (`uncertainty explicit' methods) using probability distributions. We have adapted a model (developed elsewhere by Littlewood, B. et al. (2000)) for assessment of a pair of COTS components to take account of the fault (bug) logs that might be available for the COTS components being assessed. We also provide empirical data from a study we have conducted with off-the-shelf database servers, which illustrate the use of the method.
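Why the joint assessment matters for selecting a diverse 1-out-of-2 pair can be shown in a few lines. The sketch below is not the cited Littlewood et al. model; it is a generic illustration with invented numbers: the pair fails on a demand only when both components fail together, so positively correlated failures erode the benefit of diversity.

```python
# Hedged sketch: failure probability of a diverse 1-out-of-2 pair. The pair
# fails only when both components fail on the same demand, so the correlation
# between the two Bernoulli failure indicators matters as much as the
# marginals. All numbers are invented for illustration.

def pair_failure_prob(p_a, p_b, correlation=0.0):
    """P(both fail) given marginal failure probs and a correlation coefficient
    between the two Bernoulli failure indicators."""
    cov = correlation * (p_a * (1 - p_a) * p_b * (1 - p_b)) ** 0.5
    return p_a * p_b + cov

independent = pair_failure_prob(0.01, 0.02)        # 0.0002 when independent
correlated  = pair_failure_prob(0.01, 0.02, 0.3)   # markedly worse
```

A method that reports only the two marginal failure probabilities would rate both scenarios identically, which is exactly the information loss the uncertainty-explicit pair assessment avoids.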
Discriminative conditional restricted Boltzmann machine for discrete choice and latent variable modelling
Conventional methods of estimating latent behaviour generally use attitudinal questions, which are subjective, and such survey questions may not always be available. We hypothesize that an alternative approach to latent variable estimation can be based on undirected graphical models, for instance non-parametric artificial neural networks. In this study, we explore the use of generative non-parametric modelling methods to estimate latent variables from the prior choice distribution without the conventional use of measurement indicators. A restricted Boltzmann machine is used to represent latent behaviour factors by analyzing the relationship information between the observed choices and explanatory variables. The algorithm is adapted for latent behaviour analysis in a discrete choice scenario, and we use a graphical approach to evaluate and understand the semantic meaning of estimated parameter vector values. We illustrate our methodology on a financial instrument choice dataset and perform statistical analysis on parameter sensitivity and stability. Our findings show that, through non-parametric statistical tests, we can extract useful information on the behaviour of latent constructs through machine learning methods, and that these constructs have a strong and significant influence on the choice process. Furthermore, our modelling framework shows robustness to input variability through sampling and validation.
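The latent-variable read-out of a restricted Boltzmann machine can be sketched directly. This is only the inference side (not the training by contrastive divergence used in practice), and the weights and inputs below are invented: given observed binary-coded choice and explanatory variables v, the hidden units' activation probabilities sigmoid(W v + c) serve as estimates of the latent behaviour factors.

```python
import math

# Minimal sketch of the latent-factor read-out in an RBM: given an observed
# binary visible vector v (coded choices and explanatory variables), each
# hidden unit's activation probability P(h_j = 1 | v) = sigmoid(W_j . v + c_j)
# is read off as a latent behaviour factor. Weights and inputs are invented.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def hidden_probs(v, W, c):
    """P(h_j = 1 | v) for each hidden unit j of an RBM."""
    return [sigmoid(sum(W[j][i] * v[i] for i in range(len(v))) + c[j])
            for j in range(len(c))]

# 3 observed variables, 2 hidden "latent behaviour" units (all values made up).
W = [[1.5, -2.0, 0.5],
     [-0.5, 1.0, 2.0]]
c = [0.0, -1.0]
v = [1, 0, 1]   # e.g. chose instrument A, lacks attribute X, has attribute Y

latent = hidden_probs(v, W, c)   # each activation probability lies in (0, 1)
```

In the study's framing, interpreting the rows of W (which observed variables excite which hidden unit) is what gives the estimated latent constructs their semantic meaning.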
A contribution to supply chain design under uncertainty
In the current context of supply chains, complex business processes, and extended networks of partners, several factors can increase the likelihood of disruptions in supply chains, such as the loss of customers due to intensified competition, supply shortages due to supply uncertainty, the management of a large number of partners, and unpredictable failures and breakdowns. Anticipating and responding to the changes that affect supply chains sometimes requires dealing with uncertainty and incomplete information. Each entity in the chain must be chosen effectively in order to reduce disruption factors as much as possible. Configuring efficient supply chains can guarantee the continuity of the chain's activities despite the presence of disruptive events. The main objective of this thesis is the design of supply chains that withstand disruptions, through models for selecting reliable actors. The proposed models reduce vulnerability to disruptions that can affect the continuity of operations of the chain's entities: the suppliers, the production sites, and the distribution sites. The manuscript of this thesis is organized around three main chapters: 1 - Construction of a multi-objective model for selecting reliable actors, for the design of supply chains able to withstand disruptions. 2 - Review of the various concepts and types of risk related to supply chains, together with a presentation of an approach for quantifying risk. 3 - Development of a reliability optimization model to reduce the vulnerability of supply chains to disruptions under demand and supply uncertainty.
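A toy instance of the reliable-actor selection problem can make the idea concrete. The example below is invented (real models in such a thesis would be multi-objective and cover production and distribution sites as well): choose a set of suppliers, within a budget, maximizing the probability that at least one remains operational, assuming independent disruptions.

```python
from itertools import combinations

# Toy sketch of reliable supplier selection: pick a subset of suppliers,
# subject to a budget, maximizing the probability that at least one is not
# disrupted. Supplier data, the budget, and the independence assumption are
# all invented for illustration.

suppliers = {          # name: (cost, probability of disruption)
    "S1": (30, 0.10),
    "S2": (25, 0.20),
    "S3": (40, 0.05),
    "S4": (20, 0.30),
}
BUDGET = 60

def availability(chosen):
    """P(at least one chosen supplier is operational), assuming independence."""
    p_all_down = 1.0
    for name in chosen:
        p_all_down *= suppliers[name][1]
    return 1.0 - p_all_down

# Brute-force search over all feasible subsets (fine at this toy scale).
best = max(
    (c for r in range(1, len(suppliers) + 1)
       for c in combinations(suppliers, r)
       if sum(suppliers[n][0] for n in c) <= BUDGET),
    key=availability,
)
```

At realistic scale this becomes a combinatorial optimization problem, which is why the thesis develops dedicated multi-objective and reliability-optimization models rather than enumeration.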