Should We Learn Probabilistic Models for Model Checking? A New Approach and An Empirical Study
Many automated system analysis techniques (e.g., model checking, model-based
testing) rely on first obtaining a model of the system under analysis. System
modeling is often done manually, which is widely considered a hindrance to
adopting model-based system analysis and development techniques. To overcome this
problem, researchers have proposed to automatically "learn" models from
sample system executions and have shown that the learned models can sometimes be
useful. There are, however, many questions to be answered. For instance, how
much should we generalize from the observed samples, and how fast does learning
converge? Or, would the analysis result based on the learned model be more
accurate than the estimation we could have obtained by sampling many system
executions within the same amount of time? In this work, we investigate
existing algorithms for learning probabilistic models for model checking,
propose an evolution-based approach for better controlling the degree of
generalization and conduct an empirical study in order to answer the questions.
One of our findings is that the effectiveness of learning may sometimes be
limited.
Comment: 15 pages plus 2 reference pages; accepted at FASE 2017, part of ETAPS.
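To make the generalization question concrete, here is a minimal sketch of the simplest kind of model learner discussed above: a frequency-based estimator of a discrete-time Markov chain from sampled traces. It is not the evolution-based approach proposed in the paper; the `smoothing` pseudo-count is a hypothetical knob standing in for the degree of generalization beyond the observed samples.

```python
from collections import defaultdict

def learn_dtmc(traces, smoothing=0.0):
    """Estimate DTMC transition probabilities from sampled traces.

    traces: list of state sequences, e.g. [["init", "busy", "done"], ...]
    smoothing: additive (Laplace) pseudo-count; larger values generalize
               more aggressively beyond the observed samples.
    """
    counts = defaultdict(lambda: defaultdict(float))
    states = set()
    for trace in traces:
        states.update(trace)
        for s, t in zip(trace, trace[1:]):
            counts[s][t] += 1.0

    model = {}
    for s in states:
        total = sum(counts[s].values()) + smoothing * len(states)
        if total == 0:
            continue  # no outgoing transitions observed from s and no smoothing
        model[s] = {t: (counts[s][t] + smoothing) / total for t in states}
    return model

# Example: two sampled executions of a simple protocol
traces = [["init", "busy", "done"], ["init", "busy", "busy", "done"]]
print(learn_dtmc(traces, smoothing=0.1)["busy"])
```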
MIMO-aided near-capacity turbo transceivers: taxonomy and performance versus complexity
In this treatise, we first review the associated Multiple-Input Multiple-Output (MIMO) system theory and the family of hard-decision and soft-decision based detection algorithms in the context of Spatial Division Multiplexing (SDM) systems. Our discussions culminate in the introduction of a range of powerful novel MIMO detectors, such as the Markov Chain assisted Minimum Bit-Error Rate (MC-MBER) detectors, which are capable of reliably operating in the challenging, high-importance rank-deficient scenarios, where there are more transmitters than receivers and hence the resultant channel matrix becomes non-invertible. As a result, conventional detectors would exhibit a high residual error floor. We then invoke the Soft-Input Soft-Output (SISO) MIMO detectors for creating turbo-detected two- or three-stage concatenated SDM schemes and investigate their attainable performance in the light of their computational complexity. Finally, we introduce the powerful design tools of EXtrinsic Information Transfer (EXIT) charts and characterize the achievable performance of the diverse near-capacity SISO detectors with the aid of EXIT charts.
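For reference, the classical linear MMSE detector for an SDM system can be sketched in a few lines. This is the textbook baseline, not the MC-MBER or SISO detectors introduced in the treatise, but it makes the rank-deficiency problem concrete: only the noise-dependent regularization keeps the matrix invertible when there are more transmitters than receivers, and the resulting estimates are correspondingly poor there.

```python
import numpy as np

def mmse_detect(H, y, noise_var):
    """Linear MMSE detection for a spatial-multiplexing MIMO system y = H x + n.

    H: (n_rx, n_tx) channel matrix, y: (n_rx,) received vector.
    Returns soft symbol estimates; a real detector would then slice them to the
    constellation (hard decision) or convert them to LLRs for a SISO decoder.
    """
    n_tx = H.shape[1]
    # Regularized pseudo-inverse; the noise term keeps the matrix invertible
    # even in rank-deficient (n_tx > n_rx) scenarios, although detection
    # quality still degrades sharply there, motivating more powerful detectors.
    W = np.linalg.solve(H.conj().T @ H + noise_var * np.eye(n_tx), H.conj().T)
    return W @ y

# Toy example: 4 transmit antennas, 2 receive antennas (rank-deficient)
rng = np.random.default_rng(0)
H = (rng.standard_normal((2, 4)) + 1j * rng.standard_normal((2, 4))) / np.sqrt(2)
x = np.array([1, -1, 1, -1], dtype=complex)  # BPSK symbols
y = H @ x + 0.1 * rng.standard_normal(2)
print(mmse_detect(H, y, noise_var=0.01))
```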
AI Methods in Algorithmic Composition: A Comprehensive Survey
Algorithmic composition is the partial or total automation of the process of music composition
by using computers. Since the 1950s, different computational techniques related to
Artificial Intelligence have been used for algorithmic composition, including grammatical
representations, probabilistic methods, neural networks, symbolic rule-based systems, constraint
programming and evolutionary algorithms. This survey aims to be a comprehensive
account of research on algorithmic composition, presenting a thorough view of the field for
researchers in Artificial Intelligence.
This study was partially supported by a grant for the MELOMICS project (IPT-300000-2010-010) from the Spanish Ministerio de Ciencia e Innovación, and a grant for the CAUCE project (TSI-090302-2011-8) from the Spanish Ministerio de Industria, Turismo y Comercio. The first author was supported by a grant for the GENEX project (P09-TIC-5123) from the Consejería de Innovación y Ciencia de Andalucía.
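As a minimal illustration of one of the probabilistic techniques covered by the survey, the sketch below trains a first-order Markov chain over pitches and samples a new melody from it; the training melody and pitch names are purely illustrative and not drawn from any of the surveyed systems.

```python
import random
from collections import defaultdict

def train_markov(melody):
    """First-order Markov model over pitches: successors observed after each note."""
    table = defaultdict(list)
    for a, b in zip(melody, melody[1:]):
        table[a].append(b)
    return table

def generate(table, start, length, seed=0):
    """Sample a melody by repeatedly drawing a successor of the current note."""
    random.seed(seed)
    notes = [start]
    for _ in range(length - 1):
        choices = table.get(notes[-1])
        if not choices:          # dead end: restart from the opening note
            choices = [start]
        notes.append(random.choice(choices))
    return notes

# Toy training melody (pitch names are illustrative only)
melody = ["C4", "D4", "E4", "D4", "C4", "E4", "G4", "E4", "C4"]
print(generate(train_markov(melody), start="C4", length=8))
```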
Relaxing Synchronization in Distributed Simulated Annealing
Simulated annealing is an attractive, but expensive, heuristic for approximating the solution to combinatorial optimization problems. Since simulated annealing is a general-purpose method, it can be applied, with careful control of the cooling schedule, to a broad range of NP-complete problems such as the traveling salesman problem, problems from graph theory, and cell placement.
Attempts to parallelize simulated annealing, particularly on distributed-memory multicomputers, are hampered by the algorithm's requirement of a globally consistent system state. In a multicomputer, maintaining the global state S involves explicit message traffic and is a critical performance bottleneck. One way to mitigate this bottleneck is to amortize the overhead of these state updates over as many parallel state changes as possible. This technique, however, introduces errors in the actual cost C(S) of a particular state S into the annealing process.
This dissertation places analytically derived bounds on the cost error in order to assure convergence to the correct result. The resulting parallel simulated annealing algorithm dynamically changes the frequency of global updates as a function of the annealing control parameter, i.e., temperature. Implementation results on an Intel iPSC/2 are reported.
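A serial sketch of this control structure is shown below: a worker applies several annealing moves against a locally cached copy of the state and only then performs the expensive global update, with the number of local moves chosen as a function of temperature. The `sync_interval` schedule and its direction are illustrative assumptions only; the dissertation derives the actual error bound analytically.

```python
import math, random

def sync_interval(temperature, t_max, max_interval=64):
    """Hypothetical schedule: allow more local moves between global updates while
    the temperature (and hence the tolerable cost error) is high, and synchronize
    more often as the system cools. The linear ramp is purely illustrative."""
    return max(1, int(max_interval * temperature / t_max))

def anneal(cost, neighbor, state, t_max=10.0, t_min=0.01, alpha=0.95, seed=1):
    random.seed(seed)
    temperature = t_max
    global_state, global_cost = state, cost(state)   # authoritative "global" copy
    while temperature > t_min:
        local_state, local_cost = global_state, global_cost  # worker's cached copy
        for _ in range(sync_interval(temperature, t_max)):
            candidate = neighbor(local_state)
            delta = cost(candidate) - local_cost
            if delta < 0 or random.random() < math.exp(-delta / temperature):
                local_state, local_cost = candidate, local_cost + delta
        # Deferred global update: in a multicomputer this is the message exchange.
        global_state, global_cost = local_state, cost(local_state)
        temperature *= alpha
    return global_state, global_cost

# Toy 1-D example: minimize (x - 3)^2 over the integers
print(anneal(lambda x: (x - 3) ** 2, lambda x: x + random.choice([-1, 1]), state=0))
```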
Approximation Techniques for Stochastic Analysis of Biological Systems
There has been an increasing demand for formal methods in the design process
of safety-critical synthetic genetic circuits. Probabilistic model checking
techniques have demonstrated significant potential in analyzing the intrinsic
probabilistic behaviors of complex genetic circuit designs. However, their poor
scalability limits their applicability in practice. This chapter addresses
the scalability problem by presenting a state-space approximation method that
removes unlikely states, resulting in a reduced, finite-state representation of
the infinite-state continuous-time Markov chain that is amenable to
probabilistic model checking. The proposed method is evaluated on a design of a
genetic toggle switch. Comparisons with another state-of-the-art tool demonstrate
both the accuracy and the efficiency of the presented method.
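The flavor of such a state-space approximation can be sketched generically: explore the CTMC from its initial state, keep only states whose crude probability-mass estimate exceeds a threshold, and redirect transitions leaving the kept set to an absorbing sink. This is an illustrative sketch under those assumptions, not the chapter's algorithm or tool.

```python
def truncate_ctmc(initial, rate_fn, horizon_prob=1e-6, max_states=10_000):
    """Probability-guided truncation of an infinite-state CTMC (generic sketch).

    initial: the initial state (hashable); rate_fn(state) -> list of (successor, rate).
    States whose crude reachability estimate stays below `horizon_prob` are dropped;
    transitions into dropped states are redirected to a single absorbing 'SINK'
    state so the reduced model remains a well-formed, finite CTMC.
    """
    estimate = {initial: 1.0}          # rough probability-mass estimates
    frontier = [initial]
    kept = {initial}
    while frontier and len(kept) < max_states:
        state = frontier.pop()
        transitions = rate_fn(state)
        total_rate = sum(rate for _, rate in transitions) or 1.0
        for successor, rate in transitions:
            estimate[successor] = estimate.get(successor, 0.0) + estimate[state] * rate / total_rate
            if successor not in kept and estimate[successor] >= horizon_prob:
                kept.add(successor)
                frontier.append(successor)
    # Transitions leaving the kept set are redirected to the absorbing sink.
    reduced = {state: [(succ if succ in kept else "SINK", rate)
                       for succ, rate in rate_fn(state)]
               for state in kept}
    reduced["SINK"] = []
    return reduced

# Toy example: an unbounded birth-death chain (e.g. the count of one molecular species)
birth, death = 0.5, 1.0
rates = lambda n: [(n + 1, birth)] + ([(n - 1, death * n)] if n > 0 else [])
print(len(truncate_ctmc(0, rates)), "states kept")
```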
A survey on computational intelligence approaches for predictive modeling in prostate cancer
Predictive modeling in medicine involves the development of computational models which are capable of analysing large amounts of data in order to predict healthcare outcomes for individual patients. Computational intelligence approaches are suitable when the data to be modelled are too complex for conventional statistical techniques to process quickly and efficiently. These advanced approaches are based on mathematical models that have been especially developed for dealing with the uncertainty and imprecision typically found in clinical and biological datasets. This paper provides a survey of recent work on computational intelligence approaches that have been applied to prostate cancer predictive modeling, and considers the challenges which need to be addressed. In particular, the paper considers a broad definition of computational intelligence which includes evolutionary algorithms (also known as metaheuristic optimisation or nature-inspired optimisation algorithms), Artificial Neural Networks, Deep Learning, Fuzzy-based approaches, and hybrids of these, as well as Bayesian-based approaches and Markov models. Metaheuristic optimisation approaches, such as Ant Colony Optimisation, Particle Swarm Optimisation, and the Artificial Immune Network, have been utilised for optimising the performance of prostate cancer predictive models, and the suitability of these approaches is discussed.
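As a small, generic illustration of the metaheuristic optimisation approaches mentioned above, the sketch below implements a plain particle swarm optimiser over a toy objective standing in for a model's cross-validated error; none of the names, coefficients, or parameter values are taken from the surveyed work.

```python
import random

def pso(objective, bounds, n_particles=20, iters=50, seed=0):
    """Minimal particle swarm optimisation. `objective` could be, for example,
    the cross-validated error of a classifier as a function of its hyperparameters."""
    random.seed(seed)
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    gbest = pbest[min(range(n_particles), key=lambda i: pbest_val[i])][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                # Clamp each coordinate to its search bounds
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < objective(gbest):
                    gbest = pos[i][:]
    return gbest

# Toy stand-in for a validation-error surface over two hyperparameters
print(pso(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2, bounds=[(-5, 5), (-5, 5)]))
```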