Reliability Assessment of Legacy Safety-Critical Systems Upgraded with Fault-Tolerant Off-the-Shelf Software
This paper presents a new way of applying Bayesian assessment to systems that consist of many components. Full Bayesian inference with such systems is problematic because it is computationally hard and, far more seriously, one needs to specify a multivariate prior distribution with many counterintuitive dependencies between the probabilities of component failures. The approach taken here is one of decomposition: the system is decomposed into partial views of the system, or parts thereof, at different levels of detail, and a mechanism is then applied for propagating the knowledge obtained with the more refined views back to the coarser views (recalibration of coarse models). The paper describes the recalibration technique and then evaluates the accuracy of recalibrated models numerically on contrived examples using two techniques, the u-plot and the prequential likelihood, developed by others for software reliability growth models. The results indicate that the recalibrated predictions are often more accurate than the predictions obtained with the less detailed models, although this is not guaranteed. The techniques used to assess the accuracy of the predictions are accurate enough for one to choose the model giving the most accurate prediction.
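The prequential-likelihood idea used above to compare predictive models can be sketched in a few lines: each model issues a one-step-ahead failure probability for every demand, and the model assigning higher cumulative probability to what was actually observed is preferred. The demand sequence and the two models' predictions below are invented purely for illustration.

```python
import math

def prequential_log_likelihood(pred_probs, outcomes):
    """Sum of log predictive probabilities assigned to the observed
    outcomes (1 = failure, 0 = success), accumulated one step ahead."""
    total = 0.0
    for p, y in zip(pred_probs, outcomes):
        total += math.log(p if y == 1 else 1.0 - p)
    return total

# Contrived demand sequence: failures observed on demands 3 and 7.
outcomes = [0, 0, 1, 0, 0, 0, 1, 0, 0, 0]

# A coarse model predicting a constant low pfd, versus a "recalibrated"
# model whose predictions sit closer to the observed failure rate.
coarse = [0.05] * 10
recalibrated = [0.15] * 10

ll_coarse = prequential_log_likelihood(coarse, outcomes)
ll_recal = prequential_log_likelihood(recalibrated, outcomes)
# The model with the larger (less negative) prequential log-likelihood
# is preferred; here that is the recalibrated model.
```

In practice the comparison is done sequentially, with each prediction made before its outcome is seen; the toy above just evaluates two fixed prediction streams against one outcome sequence.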
Ellsberg Paradox: Ambiguity And Complexity Aversions Compared
We present a simple model in which preferences with complexity aversion, rather than ambiguity aversion, resolve the Ellsberg paradox. We test our theory in laboratory experiments where subjects choose among lotteries that range from a simple risky lottery, through risky but more complex lotteries, to one similar to Ellsberg's ambiguity urn. Our model ranks lotteries according to their complexity and makes different, at times contrasting, predictions from most models of ambiguity in response to manipulations of prizes. The results suggest that complexity-averse preferences play an important role, separate from ambiguity-averse beliefs, in explaining behavior under uncertainty.
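A model that ranks lotteries by complexity can be illustrated with a toy valuation that subtracts a complexity penalty from expected value. Both the penalty form and the complexity measure here (a crude count of outcome branches) are hypothetical stand-ins, not the paper's actual model; the point is only that lotteries with identical expected value are ordered purely by complexity.

```python
# name: (probabilities, prizes, complexity score). All three lotteries
# have expected value 50; the complexity scores are invented.
lotteries = {
    "simple_risky":   ([0.5, 0.5], [0, 100], 1),
    "compound_risky": ([0.25, 0.25, 0.25, 0.25], [0, 100, 0, 100], 3),
    "ellsberg_like":  ([0.25, 0.25, 0.25, 0.25], [0, 100, 0, 100], 5),
}

def complexity_averse_value(probs, prizes, complexity, k=4.0):
    """Expected value minus a linear complexity penalty (toy form)."""
    ev = sum(p * z for p, z in zip(probs, prizes))
    return ev - k * complexity

ranking = sorted(
    lotteries,
    key=lambda name: complexity_averse_value(*lotteries[name]),
    reverse=True,
)
# Equal expected values, so the agent ranks by complexity alone:
# simple_risky > compound_risky > ellsberg_like.
```

An ambiguity-aversion model would instead distinguish the Ellsberg-like urn by its unknown probabilities; the contrast in predictions under prize manipulations is what the experiments exploit.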
Assessing the reliability of diverse fault-tolerant software-based systems
We discuss a problem in the safety assessment of automatic control and protection systems. There is increasing dependence on software for performing safety-critical functions, such as the safety shut-down of dangerous plants. Software brings an increased risk of design defects and thus of systematic failures; redundancy with diversity between redundant channels is a possible defence. While diversity techniques can improve the dependability of software-based systems, they do not alleviate the difficulties of assessing whether such a system is safe enough for operation. We study this problem for a simple safety protection system consisting of two diverse channels performing the same function; the problem is evaluating its probability of failure on demand. Assuming independence between dangerous failures of the channels is unrealistic. One can instead use evidence from observing the whole system's behaviour under realistic test conditions. Standard inference procedures can then estimate system reliability, but they take no advantage of the system's fault-tolerant structure. We show how to extend these techniques to take account of fault tolerance by a conceptually straightforward application of Bayesian inference. Unfortunately, the method is computationally complex and requires the conceptually difficult step of specifying 'prior' distributions for the parameters of interest. This paper presents the correct inference procedure, exemplifies possible pitfalls in its application, and clarifies some non-intuitive issues about reliability assessment for fault-tolerant software.
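The shape of such a Bayesian update can be sketched with a Monte Carlo toy: put a prior on each channel's probability of failure on demand (pfd), and reweight prior samples by the likelihood of the observed system-level test evidence. The Beta priors, the test counts, and the use of a simple product for the system pfd are all illustrative assumptions, not the paper's procedure (the paper's point is precisely that naive independence is unrealistic and richer priors are needed).

```python
import random

random.seed(0)

def sample_beta(a, b):
    """Draw from Beta(a, b) via two gamma draws (stdlib only)."""
    x = random.gammavariate(a, 1.0)
    y = random.gammavariate(b, 1.0)
    return x / (x + y)

N = 20000
n_demands, n_failures = 1000, 0  # hypothetical system-level test evidence

samples, weights = [], []
for _ in range(N):
    p1 = sample_beta(1, 99)  # channel A pfd prior, mean ~0.01 (illustrative)
    p2 = sample_beta(1, 99)  # channel B pfd prior, mean ~0.01 (illustrative)
    p_sys = p1 * p2          # toy independence structure for the prior only
    # Importance weight = likelihood of the test evidence, i.e. a
    # Bayesian update of the prior samples on the system-level data.
    w = (1 - p_sys) ** n_demands * p_sys ** n_failures
    samples.append(p_sys)
    weights.append(w)

prior_mean = sum(samples) / N
post_mean = sum(p * w for p, w in zip(samples, weights)) / sum(weights)
# 1000 failure-free demands pull the posterior mean of the system pfd
# below the prior mean, since high-pfd prior samples are downweighted.
```

A prior capturing dependence between the channels' failures would replace the product line with a joint distribution over (p1, p2, and their correlation), which is exactly where the multivariate-prior difficulty mentioned above enters.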
Publishing Efficient On-device Models Increases Adversarial Vulnerability
Recent increases in the computational demands of deep neural networks (DNNs) have sparked interest in efficient deep learning mechanisms, e.g., quantization or pruning. These mechanisms enable the construction of small, efficient versions of commercial-scale models with comparable accuracy, accelerating their deployment to resource-constrained devices. In this paper, we study the security considerations of publishing on-device variants of large-scale models. We first show that an adversary can exploit on-device models to make attacking the large models easier. In evaluations across 19 DNNs, by exploiting the published on-device models as a transfer prior, the adversarial vulnerability of the original commercial-scale models increases by up to 100x. We then show that the vulnerability increases as the similarity between a full-scale model and its efficient variant increases. Based on these insights, we propose a defense that fine-tunes on-device models with the objective of reducing this similarity. We evaluated our defense on all 19 DNNs and found that it reduces the transferability by up to 90% and increases the number of queries an attacker requires by a factor of 10-100x. Our results suggest that further research is needed on the security (or even privacy) threats caused by publishing those efficient siblings.
Comment: Accepted to IEEE SaTML 202