Traditional Wisdom and Monte Carlo Tree Search Face-to-Face in the Card Game Scopone
We present the design of a competitive artificial intelligence for Scopone, a
popular Italian card game. We compare rule-based players using the most
established strategies (one for beginners and two for advanced players) against
players using Monte Carlo Tree Search (MCTS) and Information Set Monte Carlo
Tree Search (ISMCTS) with different reward functions and simulation strategies.
MCTS requires complete information about the game state and thus implements a
cheating player while ISMCTS can deal with incomplete information and thus
implements a fair player. Our results show that, as expected, the cheating MCTS
outperforms all the other strategies; ISMCTS is stronger than all the
rule-based players implementing the well-known and most advanced strategies, and
it also turns out to be a challenging opponent for human players.
Comment: Preprint. Accepted for publication in the IEEE Transactions on Games
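At the core of both the MCTS and ISMCTS players described above is the tree-selection step, commonly driven by the UCB1 formula. The following is a minimal, illustrative Python sketch of that selection and backpropagation logic; the class and function names are invented here and do not come from the paper's implementation.

```python
import math

class Node:
    """A node in a Monte Carlo search tree (illustrative sketch)."""
    def __init__(self, parent=None):
        self.parent = parent
        self.children = []
        self.visits = 0
        self.total_reward = 0.0

def uct_select(node, c=math.sqrt(2)):
    """Pick the child maximising the UCB1 score used in MCTS selection."""
    def ucb1(child):
        if child.visits == 0:
            return float("inf")  # always expand unvisited children first
        exploit = child.total_reward / child.visits
        explore = c * math.sqrt(math.log(node.visits) / child.visits)
        return exploit + explore
    return max(node.children, key=ucb1)

def backpropagate(node, reward):
    """Propagate a simulation reward from a leaf up to the root."""
    while node is not None:
        node.visits += 1
        node.total_reward += reward
        node = node.parent
```

The reward function plugged into `backpropagate` is exactly the design choice the paper varies; ISMCTS additionally samples a determinization of the hidden cards before each tree descent.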
Seesaw Scale in the Minimal Renormalizable SO(10) Grand Unification
Simple SO(10) Higgs models with the adjoint representation triggering the
grand-unified symmetry breaking, discarded long ago due to inherent
tree-level tachyonic instabilities in the physically interesting scenarios,
have been recently brought back to life by quantum effects. In this work we
focus on the variant with 45_H+126_H in the Higgs sector and show that there
are several regions in the parameter space of this model that can support
stable unifying configurations with the B-L breaking scale as high as 10^14
GeV, well above the previous generic estimates based on the minimal survival
hypothesis. This allows for a renormalizable implementation of the canonical
seesaw and makes the simplest potentially realistic scenario of this kind a
good candidate for a minimal SO(10) grand unification. Last, but not least,
this setting is likely to be extensively testable at future large-volume
facilities such as Hyper-Kamiokande.
Comment: 21 pages, 9 figures, 5 tables
Structure and prospects of the simplest SO(10) GUTs
We recapitulate the latest results on the class of the simplest SO(10) grand
unified models in which the GUT-scale symmetry breaking is triggered by an
adjoint Higgs representation. We argue that the minimal survival approximation
traditionally used in the GUT- and seesaw-scale estimates tends to be blind to
very interesting parts of the parameter space in which some of the
intermediate-scale states necessary for non-supersymmetric unification of the
SM gauge couplings can be as light as to leave their imprints in the TeV
domain. The stringent minimal-survival-based estimates of the B-L scale are
shown to be relaxed by as much as four orders of magnitude, thus allowing for
a consistent implementation of the standard seesaw mechanism even without
excessive fine-tuning implied by the previous studies. The prospects of the
minimal renormalizable SO(10) GUT as a potential candidate for a
well-calculable theory of proton decay are discussed in brief.
Comment: 9 pages, 6 figures; to appear in the proceedings of the CETUP'12 workshop
Towards formal models and languages for verifiable Multi-Robot Systems
Incorrect operations of a Multi-Robot System (MRS) may not only lead to
unsatisfactory results, but can also cause economic losses and threats to
safety. These threats may not always be apparent, since they may arise as
unforeseen consequences of the interactions between elements of the system.
This calls for tools and techniques that can help in providing guarantees about
MRSs behaviour. We think that, whenever possible, these guarantees should be
backed up by formal proofs to complement traditional approaches based on
testing and simulation.
We believe that tailored linguistic support to specify MRSs is a major step
towards this goal. In particular, reducing the gap between typical features of
an MRS and the level of abstraction of the linguistic primitives would simplify
both the specification of these systems and the verification of their
properties. In this work, we review different agent-oriented languages and
their features; we then consider a selection of case studies of interest and
implement them using the surveyed languages. We also evaluate and compare the
effectiveness of the proposed solutions, considering, in particular, the ease of
expressing non-trivial behaviour.
Comment: Changed formatting
Validation of a software dependability tool via fault injection experiments
This paper presents the validation of the strategies employed in the RECCO tool to analyze C/C++ software; the RECCO compiler scans C/C++ source code to extract information about the significance of the variables that populate the program and about the code structure itself. Experimental results gathered on an open-source router are used to derive two sets of critical variables, one obtained by fault injection experiments and the other by applying the RECCO tool. The two sets are then analyzed, compared, and correlated to prove the effectiveness of RECCO's methodology.
Software dependability techniques validated via fault injection experiments
The present paper proposes a C/C++ source-to-source compiler able to increase the dependability properties of a given application. The adopted strategy is based on two main techniques: variable duplication/triplication and control-flow checking. The validation of these techniques is based on the emulation of fault appearance by software fault injection. The chosen test case is a client-server application in charge of calculating and drawing a Mandelbrot fractal.
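The variable-duplication technique above keeps two copies of each protected variable and cross-checks them on every read. The paper applies this as a C/C++ source transformation; the following is only an illustrative Python sketch of the same hardening logic, with invented names.

```python
class DupVar:
    """Duplicated variable: two copies kept in sync, cross-checked on read.

    Illustrative sketch of variable-duplication-style data hardening;
    the paper's tool injects equivalent checks into C/C++ source code.
    """
    def __init__(self, value):
        self._a = value
        self._b = value

    def set(self, value):
        # Every write updates both replicas.
        self._a = value
        self._b = value

    def get(self):
        # A mismatch means one copy was corrupted between accesses.
        if self._a != self._b:
            raise RuntimeError("data corruption detected")
        return self._a
```

Triplication extends the same idea with a third copy and majority voting, which allows correction rather than mere detection.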
Data criticality estimation in software applications
In safety-critical applications it is often possible to exploit software techniques to increase a system's fault tolerance. Common approaches are based on data redundancy to prevent data corruption during software execution. Duplicating only the most critical variables can significantly reduce the memory and performance overheads, while still guaranteeing very good results in terms of fault-tolerance improvement. This paper presents a new methodology to compute the criticality of variables in target software applications. Instead of resorting to time-consuming fault injection experiments, the proposed solution is based on the run-time analysis of the variables' behavior logged during the execution of the target application under different workloads.
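The run-time analysis above ranks variables by their logged behavior rather than by fault injection. As a minimal sketch, assuming per-workload access traces of `(variable, access_kind)` pairs, one could score criticality by read frequency; the paper's actual metric is more elaborate, and all names here are illustrative.

```python
from collections import Counter

def criticality_scores(access_logs):
    """Rank variables by observed read frequency across workloads.

    access_logs: list of per-workload traces, each a list of
    (variable_name, access_kind) tuples logged at run time.
    Normalised read count serves as a simple criticality proxy
    (an illustrative simplification of the paper's analysis).
    """
    counts = Counter()
    for trace in access_logs:
        for var, kind in trace:
            if kind == "read":
                counts[var] += 1
    total = sum(counts.values()) or 1
    return {var: n / total for var, n in counts.items()}
```

The highest-scoring variables would then be the candidates for duplication, concentrating the redundancy overhead where corruption is most likely to propagate.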
Control-flow checking via regular expressions
The present paper explains a new approach to program control-flow checking. The check is inserted at source-code level using a signature methodology based on regular expressions. The signature checking is performed without a dedicated watchdog processor, resorting instead to the inter-process communication (IPC) facilities offered by most modern operating systems. The proposed approach allows very low memory overhead and a trade-off between fault latency and program execution time overhead.
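The idea of signature-based control-flow checking with regular expressions can be sketched as follows: each basic block emits a signature symbol, and a checker validates the accumulated trace against a regular expression encoding the program's legal control flow. This is an illustrative Python sketch with an invented example flow, not the paper's C/C++ instrumentation.

```python
import re

# Legal control flow of a hypothetical routine: entry block 'A',
# a loop executing blocks 'B' then 'C', and exit block 'D'.
LEGAL_FLOW = re.compile(r"A(BC)*D")

class FlowChecker:
    """Collects emitted block signatures and validates the trace (sketch)."""
    def __init__(self):
        self.trace = []

    def emit(self, signature):
        # In the paper's scheme this message crosses an IPC channel
        # to a separate checker process; here we just append locally.
        self.trace.append(signature)

    def check(self):
        return LEGAL_FLOW.fullmatch("".join(self.trace)) is not None
```

Checking the trace only at selected points (rather than on every `emit`) is precisely the fault-latency versus execution-time trade-off the abstract mentions.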
Differential gene expression graphs: A data structure for classification in DNA microarrays
This paper proposes an innovative data structure to be used as a backbone in designing microarray phenotype sample classifiers. The data structure is based on graphs and is built from a differential analysis of the expression levels of healthy and diseased tissue samples in a microarray dataset. The proposed data structure is built in such a way that, by construction, it exhibits a number of properties perfectly suited to address several problems such as feature extraction, clustering, and classification.
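A graph built from a differential analysis of the kind described could look like the following minimal Python sketch: genes whose mean expression shifts between the two classes become nodes, and same-direction shifts become edges. The construction and all names here are a simplified illustration, not the paper's exact definition.

```python
def build_diff_graph(healthy, diseased, threshold=1.0):
    """Build an undirected gene graph from a differential analysis.

    healthy / diseased map each gene to its expression levels across
    samples.  Genes whose mean expression shifts by more than
    `threshold` between the classes become nodes; nodes shifting in
    the same direction are connected.  A simplified sketch of the
    idea, not the paper's exact construction.
    """
    marked = {}
    for gene in healthy:
        h, d = healthy[gene], diseased[gene]
        shift = sum(d) / len(d) - sum(h) / len(h)
        if abs(shift) > threshold:
            marked[gene] = shift
    genes = sorted(marked)
    edges = set()
    for i, g1 in enumerate(genes):
        for g2 in genes[i + 1:]:
            if marked[g1] * marked[g2] > 0:  # same-direction shift
                edges.add((g1, g2))
    return set(genes), edges
```

Classification can then reduce to graph queries, e.g. scoring a new sample by how many of its over-expressed genes fall in a connected component characteristic of the diseased class.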