PRISM: a tool for automatic verification of probabilistic systems
Probabilistic model checking is an automatic formal verification technique for analysing quantitative properties of systems which exhibit stochastic behaviour. PRISM is a probabilistic model checking tool which has already been successfully deployed in a wide range of application domains, from real-time communication protocols to biological signalling pathways. The tool has recently undergone a significant amount of development. Major additions include facilities to manually explore models, Monte-Carlo discrete-event simulation techniques for approximate model analysis (including support for distributed simulation) and the ability to compute cost- and reward-based measures, e.g. "the expected energy consumption of the system before the first failure occurs". This paper presents an overview of all the main features of PRISM. More information can be found on the website: www.cs.bham.ac.uk/~dxp/prism
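The approximate analysis mentioned above rests on Monte-Carlo simulation of execution paths. The sketch below is a toy illustration of that idea, not PRISM syntax or any real case study: the three-state chain and its transition probabilities are invented, and the property estimated is a simple bounded reachability probability ("the system fails within N steps").

```python
import random

# Toy discrete-time Markov chain: states 0 (working), 1 (degraded), 2 (failed).
# All transition probabilities are illustrative assumptions.
P = {
    0: [(0, 0.90), (1, 0.08), (2, 0.02)],
    1: [(0, 0.10), (1, 0.80), (2, 0.10)],
    2: [(2, 1.00)],  # absorbing failure state
}

def step(state):
    """Sample the next state from the transition distribution."""
    r, acc = random.random(), 0.0
    for nxt, p in P[state]:
        acc += p
        if r < acc:
            return nxt
    return P[state][-1][0]

def estimate_failure_prob(horizon, runs=20000, seed=1):
    """Monte-Carlo estimate of P(reach 'failed' within `horizon` steps)."""
    random.seed(seed)
    hits = 0
    for _ in range(runs):
        s = 0
        for _ in range(horizon):
            s = step(s)
            if s == 2:
                hits += 1
                break
    return hits / runs

print(estimate_failure_prob(horizon=10))
```

Unlike exact probabilistic model checking, such an estimate carries statistical error that shrinks with the number of sampled paths, which is the trade-off the tool's simulation engine makes for scalability.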
Designing sustainable medical devices
Stakeholders in the medical device manufacturing industry are becoming more concerned about the environmental impact of their products and processes. Consumers are also becoming more aware of the negative impact that manufacturers can have on the environment. Government initiatives continue to increase environmental awareness through the development of new policy and legislation, encouraging industry to become more accountable for the environmental impact of its products and operations. The ISO 14001 standard, Environmental Management Systems-Requirements with Guidance for Use, sets guidelines to enable businesses to recognize the environmental effects of their products and processes. Organizations can use the standard to set targets that lower environmental impact and to identify areas of high environmental concern when designing, purchasing, and marketing products. Research in these areas will be used to develop an environmental scoring tool to aid the design of future sustainable medical devices.
Global adaptation in networks of selfish components: emergent associative memory at the system scale
In some circumstances complex adaptive systems composed of numerous self-interested agents can self-organise into structures that enhance global adaptation, efficiency or function. However, the general conditions for such an outcome are poorly understood and present a fundamental open question for domains as varied as ecology, sociology, economics, organismic biology and technological infrastructure design. In contrast, sufficient conditions for artificial neural networks to form structures that perform collective computational processes such as associative memory/recall, classification, generalisation and optimisation, are well-understood. Such global functions within a single agent or organism are not wholly surprising since the mechanisms (e.g. Hebbian learning) that create these neural organisations may be selected for this purpose, but agents in a multi-agent system have no obvious reason to adhere to such a structuring protocol or produce such global behaviours when acting from individual self-interest. However, Hebbian learning is actually a very simple and fully-distributed habituation or positive feedback principle. Here we show that when self-interested agents can modify how they are affected by other agents (e.g. when they can influence which other agents they interact with) then, in adapting these inter-agent relationships to maximise their own utility, they will necessarily alter them in a manner homologous with Hebbian learning. Multi-agent systems with adaptable relationships will thereby exhibit the same system-level behaviours as neural networks under Hebbian learning. For example, improved global efficiency in multi-agent systems can be explained by the inherent ability of associative memory to generalise by idealising stored patterns and/or creating new combinations of sub-patterns. 
Thus distributed multi-agent systems can spontaneously exhibit adaptive global behaviours in the same sense, and by the same mechanism, as the organisational principles familiar in connectionist models of organismic learning
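The correspondence described above can be illustrated with a minimal Hopfield-style sketch (network size, noise level, and learning rate are all assumptions, not the paper's setup). Each agent i has utility u_i = s_i * sum_j w_ij * s_j, and the gradient of u_i with respect to w_ij is exactly the Hebbian product s_i * s_j, so agents that adapt their relationships out of pure self-interest end up storing visited system states as attractors that can be recalled from noisy initial conditions:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 40
pattern = rng.choice([-1, 1], size=N)  # a system state the agents settle into

# Each agent i raises its own utility u_i = s_i * sum_j w_ij * s_j by moving
# w_ij along du_i/dw_ij = s_i * s_j -- which is exactly the Hebbian rule.
W = np.zeros((N, N))
eta = 0.1
for _ in range(50):  # repeated visits to the same system state
    W += eta * np.outer(pattern, pattern)
np.fill_diagonal(W, 0.0)

# Associative recall: start from a corrupted system state and let each agent
# flip to whichever value raises its own utility given current relationships.
state = pattern.copy()
flipped = rng.choice(N, size=8, replace=False)
state[flipped] *= -1
for _ in range(10):
    for i in range(N):
        state[i] = 1 if W[i] @ state >= 0 else -1

print(int((state == pattern).all()))  # the stored state is restored
```

With a single stored pattern and a minority of agents perturbed, each self-interested flip moves the system back toward the stored configuration, which is the associative-memory behaviour the abstract attributes to adaptable inter-agent relationships.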
Who Said What: Modeling Individual Labelers Improves Classification
Data are often labeled by many different experts with each expert only
labeling a small fraction of the data and each data point being labeled by
several experts. This reduces the workload on individual experts and also gives
a better estimate of the unobserved ground truth. When experts disagree, the
standard approaches are to treat the majority opinion as the correct label or
to model the correct label as a distribution. These approaches, however, do not
make any use of potentially valuable information about which expert produced
which label. To make use of this extra information, we propose modeling the
experts individually and then learning averaging weights for combining them,
possibly in sample-specific ways. This allows us to give more weight to more
reliable experts and take advantage of the unique strengths of individual
experts at classifying certain types of data. Here we show that our approach
leads to improvements in computer-aided diagnosis of diabetic retinopathy. We
also show that our method performs better than competing algorithms by Welinder
and Perona (2010), and by Mnih and Hinton (2012). Our work offers an innovative
approach for dealing with the myriad real-world settings that use expert
opinions to define labels for training.

Comment: AAAI 201
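The core idea of weighting experts by reliability can be sketched with synthetic data and plain gradient descent; the expert accuracies, learning rate, and linear pooling below are invented for illustration and are much simpler than the paper's actual models. Softmax averaging weights over per-expert predictions are learned by minimizing log-loss of the combination:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
truth = rng.integers(0, 2, size=n)  # hidden ground-truth labels

# Three simulated "experts" of differing reliability (synthetic, for
# illustration only): each outputs a probability for class 1.
def expert(acc):
    noisy = np.where(rng.random(n) < acc, truth, 1 - truth)
    return np.clip(noisy + rng.normal(0, 0.1, n), 0.01, 0.99)

preds = np.stack([expert(0.95), expert(0.70), expert(0.55)])  # (3, n)

# Learn softmax averaging weights by gradient descent on the log-loss of
# the weighted combination -- the simplest form of per-expert weighting.
a = np.zeros(3)
for _ in range(500):
    w = np.exp(a) / np.exp(a).sum()
    p = w @ preds
    grad_p = (p - truth) / (p * (1 - p))   # d(log-loss)/dp per sample
    grad_w = preds @ grad_p / n            # d(loss)/dw_k
    grad_a = w * (grad_w - w @ grad_w)     # backprop through the softmax
    a -= 0.5 * grad_a

w = np.exp(a) / np.exp(a).sum()
print(w.round(2))  # most weight goes to the most reliable expert
```

The paper's sample-specific weighting would replace the fixed vector `w` with a function of the input, but the fixed version already shows why modeling labelers individually beats majority vote: the combination can discount systematically unreliable experts.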
Preasymptotic multiscaling in the phase-ordering dynamics of the kinetic Ising model
The evolution of the structure factor is studied during the phase-ordering
dynamics of the kinetic Ising model with conserved order parameter. A
preasymptotic multiscaling regime is found as in the solution of the
Cahn-Hilliard-Cook equation, revealing that the late stage of phase-ordering is
always approached through a crossover from multiscaling to standard scaling,
independently of the nature of the microscopic dynamics.

Comment: 11 pages, 3 figures, to be published in Europhys. Lett.
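The central quantity, the structure factor, can be illustrated numerically. The sketch below uses a thresholded low-pass-filtered noise field as a stand-in for a coarsening configuration (it is not a kinetic Ising simulation; lattice size and filter width are assumptions) and computes the spherically averaged S(k) via FFT:

```python
import numpy as np

rng = np.random.default_rng(0)
L = 64
# Toy +/-1 domain configuration: low-pass-filtered noise, sign-thresholded.
# Purely illustrative -- not a Kawasaki-dynamics simulation.
k = np.fft.fftfreq(L)
kx, ky = np.meshgrid(k, k, indexing="ij")
noise_hat = np.fft.fft2(rng.standard_normal((L, L)))
phi = np.sign(np.real(np.fft.ifft2(noise_hat * np.exp(-80 * (kx**2 + ky**2)))))

# Structure factor S(k) = |FFT(phi - <phi>)|^2 / N, spherically averaged
# over shells of |k|.
delta = phi - phi.mean()
S = np.abs(np.fft.fft2(delta)) ** 2 / phi.size
kmag = np.sqrt(kx**2 + ky**2).ravel()
edges = np.linspace(0.0, kmag.max(), 20)
which = np.digitize(kmag, edges)
S_avg = np.array([S.ravel()[which == b].mean() if (which == b).any() else 0.0
                  for b in range(1, len(edges))])
# Coarsened domains concentrate S(k) at small |k|, with a weak large-|k| tail.
```

In a scaling analysis one would track how such shell-averaged curves collapse when k is rescaled by the characteristic domain size; multiscaling shows up as a failure of that single-length collapse at preasymptotic times.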
Attitude determination of the spin-stabilized Project Scanner spacecraft
Attitude determination of spin-stabilized spacecraft using star mapping techniques
Inducing Language Networks from Continuous Space Word Representations
Recent advancements in unsupervised feature learning have developed powerful
latent representations of words. However, it is still not clear what makes one
representation better than another and how we can learn the ideal
representation. Understanding the structure of latent spaces attained is key to
any future advancement in unsupervised learning. In this work, we introduce a
new view of continuous space word representations as language networks. We
explore two techniques to create language networks from learned features by
inducing them for two popular word representation methods and examining the
properties of their resulting networks. We find that the induced networks
differ from other methods of creating language networks, and that they contain
meaningful community structure.

Comment: 14 pages
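One simple way to induce a network from word vectors, sketched below with tiny synthetic embeddings standing in for learned representations (the choice of k, the cluster layout, and cosine similarity as the metric are assumptions, not necessarily the paper's construction), is to connect each word to its k nearest neighbours and then look for community structure:

```python
import numpy as np

rng = np.random.default_rng(0)
# Tiny synthetic "embeddings": two clusters standing in for learned word
# vectors (word2vec-style vectors would be used in practice).
words = [f"w{i}" for i in range(20)]
vecs = np.vstack([rng.normal(0, 1, (10, 8)) + 3,
                  rng.normal(0, 1, (10, 8)) - 3])

# Induce a language network: connect each word to its k nearest neighbours
# under cosine similarity.
unit = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
sim = unit @ unit.T
np.fill_diagonal(sim, -np.inf)  # no self-edges
k = 3
edges = {(i, int(j)) for i in range(len(words))
         for j in np.argsort(sim[i])[-k:]}

# Community structure: count how many edges stay inside a cluster.
within = sum(1 for i, j in edges if (i < 10) == (j < 10))
print(within / len(edges))
```

On real embeddings the induced graph's degree distribution, clustering, and communities can then be compared against networks built from co-occurrence counts or curated lexical resources, which is the kind of comparison the abstract describes.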
ShapeCodes: Self-Supervised Feature Learning by Lifting Views to Viewgrids
We introduce an unsupervised feature learning approach that embeds 3D shape
information into a single-view image representation. The main idea is a
self-supervised training objective that, given only a single 2D image, requires
all unseen views of the object to be predictable from learned features. We
implement this idea as an encoder-decoder convolutional neural network. The
network maps an input image of an unknown category and unknown viewpoint to a
latent space, from which a deconvolutional decoder can best "lift" the image to
its complete viewgrid showing the object from all viewing angles. Our
class-agnostic training procedure encourages the representation to capture
fundamental shape primitives and semantic regularities in a data-driven
manner---without manual semantic labels. Our results on two widely-used shape
datasets show 1) our approach successfully learns to perform "mental rotation"
even for objects unseen during training, and 2) the learned latent space is a
powerful representation for object recognition, outperforming several existing
unsupervised feature learning methods.

Comment: To appear at ECCV 201
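The one-view-in, all-views-out structure of the objective can be caricatured in a few lines. The sketch below replaces the paper's convolutional encoder-decoder with a single linear map fit by least squares, on synthetic "viewgrids" generated by an invented linear rendering model (all shapes and sizes are assumptions); only the shape of the objective carries over:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in data: each "object" is a latent code z, and its
# viewgrid is a fixed linear rendering of z per viewpoint (the paper
# uses real images and a conv encoder-decoder instead).
n_obj, n_views, dim, latent = 200, 6, 16, 4
Z = rng.normal(size=(n_obj, latent))
render = rng.normal(size=(n_views, latent, dim))
grids = np.einsum("ol,vld->ovd", Z, render)  # (object, view, pixels)

# ShapeCodes-style objective: from ONE observed view, predict ALL views.
# Encoder + decoder collapse here into one linear map fit by least squares.
v_in = 0
X = grids[:, v_in, :]         # the single observed view per object
Y = grids.reshape(n_obj, -1)  # the full viewgrid as the target
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
pred = X @ W
err = np.mean((pred - Y) ** 2) / np.mean(Y ** 2)
print(err)  # near zero: one view determines the grid in this linear toy
```

Because the toy data are generated linearly from a low-dimensional latent code, a single view already determines the whole viewgrid; the paper's contribution is learning a nonlinear latent space in which the same lifting works for real images of unseen categories.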