Enforcing ω-Regular Properties in Markov Chains by Restarting
Restarts are used in many computer systems to improve performance. Examples include reloading a webpage, reissuing a request, or restarting a randomized search. The design of restart strategies has been extensively studied by the performance evaluation community. In this paper, we address the problem of designing universal restart strategies, valid for arbitrary finite-state Markov chains, that enforce a given ω-regular property while not knowing the chain. A strategy enforces a property φ if, with probability 1, the number of restarts is finite, and the run of the Markov chain after the last restart satisfies φ. We design a simple "cautious" strategy that solves the problem, and a more sophisticated "bold" strategy with an almost optimal number of restarts.
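As a toy illustration of the restart idea (not the paper's cautious or bold strategies, which enforce arbitrary ω-regular properties without knowing the chain), the sketch below reruns a finite Markov chain from its initial state, doubling a step budget after each restart, until some run reaches a designated goal state. The chain, the budget schedule, and all names are invented for this example.

```python
import random

def run_with_restarts(P, start, goal, budget0=4, seed=0):
    """Toy restart loop: rerun a finite Markov chain from `start`,
    doubling the step budget after each restart, until a run reaches
    `goal` within its budget.  Returns the number of restarts used."""
    rng = random.Random(seed)
    restarts, budget = 0, budget0
    while True:
        state = start
        for _ in range(budget):
            if state == goal:
                return restarts
            states, probs = zip(*P[state].items())
            state = rng.choices(states, probs)[0]
        restarts += 1
        budget *= 2   # a bolder budget means fewer expected restarts

# 3-state chain where the goal is reachable but not guaranteed quickly
P = {0: {0: 0.5, 1: 0.5}, 1: {0: 0.5, 2: 0.5}, 2: {2: 1.0}}
n = run_with_restarts(P, start=0, goal=2)
```

Because the goal is reachable from every non-goal state, the loop terminates with probability 1; the growing budget plays the role of the "bold" strategy's willingness to wait longer before giving up on a run.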
The distributed ASCI supercomputer project
The Distributed ASCI Supercomputer (DAS) is a homogeneous wide-area distributed system consisting of four cluster computers at different locations. DAS has been used for research on communication software, parallel languages and programming systems, schedulers, parallel applications, and distributed applications. The paper gives a preview of the most interesting research results obtained so far in the DAS project.
Evolutionary games on graphs
Game theory is one of the key paradigms behind many scientific disciplines
from biology to behavioral sciences to economics. In its evolutionary form and
especially when the interacting agents are linked in a specific social network
the underlying solution concepts and methods are very similar to those applied
in non-equilibrium statistical physics. This review gives a tutorial-type
overview of the field for physicists. The first three sections introduce the
necessary background in classical and evolutionary game theory from the basic
definitions to the most important results. The fourth section surveys the
topological complications implied by non-mean-field-type social network
structures in general. The last three sections discuss in detail the dynamic
behavior of three prominent classes of models: the Prisoner's Dilemma, the
Rock-Scissors-Paper game, and Competing Associations. The major theme of the
review is in what sense and how the graph structure of interactions can modify
and enrich the picture of long term behavioral patterns emerging in
evolutionary games. (Review, final version, 133 pages, 65 figures)
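As a minimal illustration of how graph structure enters such models, the sketch below runs synchronous imitation dynamics for the Prisoner's Dilemma on a ring, with the weak-dilemma payoffs popularized by Nowak and May (mutual cooperation pays 1, defection against a cooperator pays b > 1, everything else 0). It is a toy, not any specific model from the review.

```python
import random

def spatial_pd_step(strategies, b):
    """One synchronous round of the Prisoner's Dilemma on a ring.
    Each site plays its two neighbours, then imitates the strategy of
    the best-scoring site in its closed neighbourhood (ties broken in
    a fixed neighbour order)."""
    n = len(strategies)

    def payoff(i):
        total = 0.0
        for j in ((i - 1) % n, (i + 1) % n):
            if strategies[i] == 'C' and strategies[j] == 'C':
                total += 1.0
            elif strategies[i] == 'D' and strategies[j] == 'C':
                total += b
        return total

    scores = [payoff(i) for i in range(n)]
    return [strategies[max(((i - 1) % n, i, (i + 1) % n),
                           key=lambda j: scores[j])]
            for i in range(n)]

rng = random.Random(1)
s = [rng.choice('CD') for _ in range(50)]
for _ in range(20):
    s = spatial_pd_step(s, b=1.5)
coop_fraction = s.count('C') / len(s)
```

On a well-mixed population the same payoffs drive cooperation extinct; on the ring, clusters of cooperators can protect their interior members, which is the kind of graph-induced effect the review surveys.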
Fast Deterministic Consensus in a Noisy Environment
It is well known that the consensus problem cannot be solved
deterministically in an asynchronous environment, but that randomized solutions
are possible. We propose a new model, called noisy scheduling, in which an
adversarial schedule is perturbed randomly, and show that in this model
randomness in the environment can substitute for randomness in the algorithm.
In particular, we show that a simplified, deterministic version of Chandra's
wait-free shared-memory consensus algorithm (PODC, 1996, pp. 166-175) solves
consensus in time at most logarithmic in the number of active processes. The
proof of termination is based on showing that a race between independent
delayed renewal processes produces a winner quickly. In addition, we show that
the protocol finishes in constant time using quantum and priority-based
scheduling on a uniprocessor, suggesting that it is robust against the choice
of model over a wide range.
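The termination argument can be caricatured in a few lines: independent renewal processes with random (here exponential) delays race until one leads all the others by a fixed margin, and a winner emerges quickly with probability 1. The stopping rule and all parameters below are invented for this sketch and are not the paper's construction.

```python
import random

def renewal_race(num_procs=8, lead_needed=3, seed=2):
    """Toy race between independent renewal processes: each process
    'renews' after i.i.d. exponential delays, and the race ends as
    soon as one process leads every other by `lead_needed` renewals.
    Returns the winner's index and the finishing time."""
    rng = random.Random(seed)
    counts = [0] * num_procs
    next_renewal = [rng.expovariate(1.0) for _ in range(num_procs)]
    while True:
        # advance to the earliest pending renewal
        i = min(range(num_procs), key=lambda k: next_renewal[k])
        t = next_renewal[i]
        counts[i] += 1
        next_renewal[i] = t + rng.expovariate(1.0)
        lead = counts[i] - max(c for k, c in enumerate(counts) if k != i)
        if lead >= lead_needed:
            return i, t

winner, finish_time = renewal_race()
```

Random fluctuations guarantee that some process pulls ahead eventually, and in practice the required lead is reached after only a handful of renewals per process.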
A Survey of Monte Carlo Tree Search Methods
Monte Carlo tree search (MCTS) is a recently proposed search method that combines the precision of tree search with the generality of random sampling. It has received considerable interest due to its spectacular success in the difficult problem of computer Go, but has also proved beneficial in a range of other domains. This paper is a survey of the literature to date, intended to provide a snapshot of the state of the art after the first five years of MCTS research. We outline the core algorithm's derivation, impart some structure on the many variations and enhancements that have been proposed, and summarize the results from the key game and nongame domains to which MCTS methods have been applied. A number of open research questions indicate that the field is ripe for future work.
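The core loop (selection by UCB1, expansion, random rollout, backpropagation) fits in a few dozen lines. The sketch below applies it to one-pile Nim (take 1 to 3 stones; taking the last stone wins), a toy game chosen for brevity rather than anything from the survey.

```python
import math, random

def mcts_best_move(stones, iters=3000, c=1.4, seed=3):
    """Minimal negamax-style UCT.  N[s] and Q[s] hold visit counts and
    total reward from the perspective of the player to move at s."""
    rng = random.Random(seed)
    N, Q = {}, {}

    def moves(s):
        return [m for m in (1, 2, 3) if m <= s]

    def rollout(s):
        # value of s for its mover under uniformly random play
        turn = 1
        while s > 0:
            s -= rng.choice(moves(s))
            turn = -turn
        return -turn          # whoever just moved took the last stone

    def simulate(s):
        if s == 0:
            return -1         # player to move has already lost
        if s not in N:        # expansion: evaluate a new leaf by rollout
            N[s], Q[s] = 0, 0.0
            r = rollout(s)
        else:                 # selection: UCB1 over child states
            def ucb(m):
                ns = s - m
                if N.get(ns, 0) == 0:
                    return float('inf')
                return (-Q[ns] / N[ns]          # child value, negated
                        + c * math.sqrt(math.log(N[s] + 1) / N[ns]))
            m = max(moves(s), key=ucb)
            r = -simulate(s - m)                # backpropagate, flipping sign
        N[s] += 1
        Q[s] += r
        return r

    for _ in range(iters):
        simulate(stones)
    # robust-child rule: return the most-visited move
    return max(moves(stones), key=lambda m: N.get(stones - m, 0))

best = mcts_best_move(5)   # leaving a multiple of 4 is the winning reply
```

With 5 stones the winning move is to take 1, leaving the opponent a losing multiple of 4; the search converges to it after a few hundred iterations on this tiny tree.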
Reachability Analysis of Cyber-Physical Systems Using Symbolic-Numeric Techniques
In this thesis, we address the problem of reachability analysis in cyber-physical systems. These are systems engineered by interfacing computational components with the physical world. They provide partially or fully automated safety-critical services in the form of medical devices, autonomous vehicles, avionics and power systems.
We propose techniques to reason about the reachability of such systems, and provide methods for falsifying their safety properties. We model the cyber component as a software program and the physical component as a hybrid dynamical system. Unlike model-based analysis, which uses either a purely symbolic or a purely numerical approach, we argue in favor of combining the two. We justify this by noting that the software program running on a computer is completely specified and has precise semantics. In contrast, the model of the physical system is only an approximation. Hence, we treat the former as a white box but the latter as a black box. Using symbolic methods for the cyber components and numerical methods for the hybrid systems, we carefully capture the complex behaviors of software programs and circumvent the difficulty of analyzing complex models developed from first principles. To combine the two techniques, we use a Counterexample Guided Abstraction Refinement (CEGAR) framework. Furthermore, we explore learning techniques such as regression and piecewise affine modeling to estimate and represent black-box hybrid dynamical systems for the purpose of falsification.
We use prototype implementations to demonstrate the effectiveness of the presented ideas. Using non-trivial benchmarks, we compare their performance against the state of the art. We also comment on their applicability and discuss ideas for further improvement.
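To give a flavour of the learning step, the sketch below fits a scalar affine surrogate x' ≈ a·x + b to samples of a simulator treated as a black box, using closed-form least squares. The real pipeline uses piecewise affine models inside a CEGAR loop; the simulator and every name here are invented for the example.

```python
import random

def fit_affine(xs, ys):
    """Closed-form ordinary least squares fit of y ~ a*x + b.
    Stand-in for the regression step: estimate a surrogate model of a
    system we may only sample, never inspect."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# pretend this one-step simulator is a black box we can only sample
def blackbox_step(x):
    return 0.8 * x + 1.0

rng = random.Random(4)
xs = [rng.uniform(-5.0, 5.0) for _ in range(50)]
ys = [blackbox_step(x) for x in xs]
a, b = fit_affine(xs, ys)   # recovers a close to 0.8, b close to 1.0
```

Once such a surrogate is in hand, a falsifier can search it cheaply for trajectories that violate a safety property, then replay the candidates on the real simulator to confirm.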
Predictive Modelling of Tribological Systems using Movable Cellular Automata
In the science of tribology, where there is an enormous degree of uncertainty, mathematical models that convey state-of-the-art scientific knowledge are invaluable tools for unveiling the underlying phenomena. A well-structured modelling framework that guarantees a connection between mathematical representations and experimental observations can help in the systematic identification of the most realistic hypotheses among a pool of possibilities.
This thesis is concerned with identifying the most appropriate computational model for the prediction of friction and wear in tribological applications, and the development of a predictive model and simulation tool based on the identified method. Accordingly, a thorough review of the literature has been conducted to find the most appropriate approach for predicting friction and wear using computer simulations, with the multi-scale approach in mind. It was concluded that the Movable Cellular Automata (MCA) method is the most suitable method for multi-scale modelling of tribological systems.
It has been established from the state-of-the-art review in Chapter 2 of this thesis that, to simulate the first bodies and the third body simultaneously (also known as a multi-body) in a tribological system, it is essential to model both the continuous and the discontinuous behaviour of materials on a range of scales, from the atomistic to the micro scale. This can only be done using a multi-scale particle-based method, because continuum methods such as FEM are non-predictive and cannot describe the discontinuous nature of materials on the micro scale. The most important and well-known particle-based methods are molecular dynamics (MD) and the discrete element method (DEM). Although MD has been widely used to simulate elastic and plastic deformation of materials, it is limited to the atomistic and nano scales and cannot be used to simulate materials on the macro scale. On the other hand, DEM is capable of simulating materials on the meso and micro scales, and has been extended since the algorithm was first proposed by Cundall and Strack in 1979 and adopted by a number of scientific and engineering disciplines. However, it is limited to the simulation of granular materials and elastic brittle solids because of its contact configurations and laws. Even with the use of bond models to simulate cohesive and plastic materials, it shows major limitations in parameter estimation and in validation against experimental results, because its contact laws use parameters that cannot be obtained directly from material properties or from experiments.
The MCA method solves these problems using a hybrid technique that combines the advantages of the classical cellular automata method and molecular dynamics, forming a model for simulating elasticity, plasticity and fracture in ductile consolidated materials. It covers both the meso and micro scales, and could "theoretically" be used on the nano scale given sufficient computational power. A distinguishing feature of the MCA method is that the forces of interaction between automata are described in terms of stress tensor components. In this way, a direct relationship is established between the MCA model parameters of particle interactions and the tensor parameters of the material constitutive law. This makes it possible to simulate materials directly, to implement different models and criteria of elasticity, plasticity and fracture, and to describe elastic-plastic deformation using the theory of plastic flow. Hence, in MCA there is no need for parametric fitting, because all model parameters can be obtained directly from the mechanical properties of the material.
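For contrast with the MCA formulation above, the basic DEM ingredient (a linear-spring normal contact in the spirit of Cundall and Strack) can be written in a few lines. The stiffness k below is a bare model parameter of exactly the kind MCA avoids by working with stress tensor components, and the 1-D setup is invented purely for illustration.

```python
def contact_force(xi, xj, ri, rj, k=1.0e5):
    """Force on particle i from a linear-spring normal contact with
    particle j in 1-D.  Particles of radii ri, rj at positions xi, xj
    repel in proportion to their geometric overlap; no overlap, no
    force.  A bare-bones DEM contact law, not MCA's stress-tensor
    interaction."""
    overlap = (ri + rj) - abs(xj - xi)
    if overlap <= 0:
        return 0.0                        # particles are separated
    direction = 1.0 if xj > xi else -1.0  # from i towards j
    return -direction * k * overlap       # pushes i away from j

f = contact_force(0.0, 0.15, 0.1, 0.1)    # overlapping pair
```

The difficulty noted in the text follows directly: k has no unique mapping back to measurable material properties, so DEM parameterizations must be fitted, whereas MCA's interaction laws take elastic moduli and yield data as direct inputs.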
To model surfaces in contact and friction behaviour using MCA, the particle size can be chosen large enough to treat the contacting surface as a rough plane, which is the approach used in all MCA studies of contacting surfaces so far. The alternative is to specify a very small particle size so that a real surface can be simulated directly, allowing material behaviour and processes to be investigated explicitly on all three scale levels (atomic, meso and macro). This has so far proven difficult, because it is computationally expensive and only a small area of the contact can be simulated, owing to the large number of particles required to represent a real solid. Furthermore, until now no commercial software has been available for MCA simulations; there is only a 2D MCA demo version, developed in 2005 by the Laboratory of CAD of Materials at the Institute of Strength Physics and Materials Science in Tomsk, Russia. The developers of the MCA method use their own in-house codes.
This thesis presents the successful development of 3D MCA open-source software for the scientific and tribology communities to use. This was achieved by implementing the MCA method within the framework of the open-source code LIGGGHTS. The implementation follows the formulations of the 3D elastic-plastic model developed by Sergey G. Psakhie, Valentin L. Popov, Evgeny V. Shilko, and the external supervisor of this thesis, Alexey Yu. Smolin. Details of the mathematical formulations can be found in [1]–[3] and in Section 3.5 of this thesis.
The MCA model has been successfully implemented to simulate ductile consolidated materials. Specifically, new interaction laws were implemented, as well as features related to particle packing, particle interaction forces, bonding of particles, and others. The model has also been successfully verified, validated, and used in simulating indentation. The validation against experimental results showed that using the developed model, correct material mechanical response can be simulated using direct macroscopic mechanical material properties.
The implemented code still shows limitations in computational capacity, because its parallelization has not yet been completed. Nevertheless, this thesis extends the capabilities of the LIGGGHTS software to provide an open-source tool for simulating solid material deformation behaviour with the MCA method. It also significantly increases the potential of using MCA in an HPC environment, producing results that would otherwise be difficult to obtain.
Towards a legal definition of machine intelligence: the argument for artificial personhood in the age of deep learning.
The paper dissects the intricacies of Automated Decision Making (ADM) and argues for refining the current legal definition of AI by pinpointing the role of algorithms in the advent of ubiquitous computing, data analytics and deep learning. ADM relies upon a plethora of algorithmic approaches and has already found a wide range of applications in marketing automation, social networks, computational neuroscience, robotics, and other fields. Our main aim here is to explain how a thorough understanding of the layers of ADM could be a good first step in this direction: AI operates on a formula based on several degrees of automation employed in the interaction between the programmer, the user, and the algorithm; this can take various shapes and thus yield different answers to key questions regarding agency. The paper offers a fresh look at the concept of "Machine Intelligence", exposing certain vulnerabilities in its current legal interpretation. Most importantly, it helps us explore whether the argument for "artificial personhood" holds any water. To highlight this argument, the analysis proceeds in two parts. Part 1 provides a taxonomy of the various levels of automation that reflect distinct degrees of human–machine interaction and can thus serve as a point of reference for outlining the distinct rights and obligations of the programmer and the consumer; driverless cars are used as a case study to explore the several layers of human and machine interaction. These different degrees of automation reflect various levels of complexity in the underlying algorithms, and pose very interesting questions about agency and the dynamic tasks carried out by software agents. Part 2 discusses the intricate nature of the underlying algorithms and the artificial neural networks (ANN) that implement them, and considers how one can interpret and utilize patterns observed in acquired data.
Is "artificial personhood" a sufficient legal response to highly sophisticated machine learning techniques employed in decision making that successfully emulate or even enhance human cognitive abilities?
Methods for Structural Pattern Recognition: Complexity and Applications
Department of Cybernetics