Analysing the behaviour of robot teams through relational sequential pattern mining
This report outlines the use of a relational representation in a multi-agent
domain to model the behaviour of the whole system. A desired property in these
systems is the ability of the team members to work together cooperatively to
achieve a common goal. The aim is to define a systematic method to verify
effective collaboration among the members of a team and to compare different
multi-agent behaviours. Using external observations of a multi-agent system to
analyse, model, and recognise agent behaviour can be very useful for directing
team actions. In particular, this report focuses on the challenge of
autonomous, unsupervised sequential learning of a team's behaviour from
observations. Our approach learns a symbolic, relational representation that
translates raw multi-agent, multi-variate observations of a dynamic, complex
environment into a set of sequential behaviours characteristic of the team in
question, represented as sequences of first-order logic atoms. We propose to
use a relational learning algorithm to mine meaningful frequent patterns among
the relational sequences in order to characterise team behaviours. We compared
the performance of two teams in the RoboCup four-legged league that take very
different approaches to the game: one uses case-based reasoning, the other a
purely reactive behaviour.
Comment: 25 pages
Succinct Representations for Abstract Interpretation
Abstract interpretation techniques can be made more precise by distinguishing
paths inside loops, at the expense of possibly exponential complexity.
SMT-solving techniques and sparse representations of paths and sets of paths
avoid this pitfall. We improve previously proposed techniques for guided static
analysis and the generation of disjunctive invariants by combining them with
techniques for succinct representations of paths and symbolic representations
for transitions based on static single assignment. Because of the
non-monotonicity of the results of abstract interpretation with widening
operators, it is difficult to conclude that some abstraction is more precise
than another based on theoretical local precision results. We thus conducted
extensive comparisons between our new techniques and previous ones, on a
variety of open-source packages.
Comment: Static Analysis Symposium (SAS), Deauville, France (2012)
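As background for the non-monotonicity remark, here is a toy Python sketch of interval analysis with the standard widening operator; the loop, domain, and transfer function are illustrative assumptions and do not reproduce the paper's techniques.

```python
# A toy sketch of interval analysis with the standard widening operator for
# the loop "x = 0; while ...: x = x + 1". The domain, transfer function, and
# loop are illustrative assumptions, not the techniques of the paper.

NEG_INF, POS_INF = float("-inf"), float("inf")

def join(a, b):
    """Least upper bound of two intervals; None denotes bottom (unreachable)."""
    if a is None:
        return b
    if b is None:
        return a
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(a, b):
    """Standard interval widening: any bound still growing jumps to infinity."""
    if a is None:
        return b
    lo = a[0] if b[0] >= a[0] else NEG_INF
    hi = a[1] if b[1] <= a[1] else POS_INF
    return (lo, hi)

def analyse(init, step, max_iters=100):
    """Kleene iteration with widening for the loop body x := x + step."""
    x = init
    for _ in range(max_iters):
        body = (x[0] + step, x[1] + step)   # abstract effect of the loop body
        nxt = widen(x, join(x, body))       # widen at the loop head
        if nxt == x:                        # post-fixpoint reached
            return x
        x = nxt
    return x

# Without distinguishing paths, the upper bound is immediately widened away:
print(analyse((0, 0), 1))  # -> (0, inf)
```

Because unstable bounds jump straight to infinity, a locally more precise abstraction can still produce a worse post-widening result, which is why the abstract argues for empirical comparison rather than theoretical local precision arguments.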
LLHD: A Multi-level Intermediate Representation for Hardware Description Languages
Modern Hardware Description Languages (HDLs) such as SystemVerilog or VHDL
are, due to their sheer complexity, insufficient to transport designs through
modern circuit design flows. Instead, each design automation tool lowers HDLs
to its own Intermediate Representation (IR). These tools are monolithic and
mostly proprietary, disagree in their implementation of HDLs, and while many
redundant IRs exist, no IR today can be used throughout the entire circuit design
flow. To solve this problem, we propose the LLHD multi-level IR. LLHD is
designed as a simple, unambiguous reference description of a digital circuit, yet
fully captures existing HDLs. We show this with our reference compiler on
designs as complex as full CPU cores. LLHD comes with lowering passes to a
hardware-near structural IR, which readily integrates with existing tools. LLHD
establishes the basis for innovation in HDLs and tools without redundant
compilers or disjoint IRs. For instance, we implement an LLHD simulator that
runs up to 2.4x faster than commercial simulators but produces equivalent,
cycle-accurate results. An initial vertically-integrated research prototype is
capable of representing all levels of the IR, implements lowering from the
behavioural to the structural IR, and covers a sufficient subset of
SystemVerilog to support a full CPU design.
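To illustrate what a lowering pass between IR levels can look like, here is a toy Python sketch that rewrites a behavioural multiplexer node into structural gates. The node kinds and the rewrite rule are illustrative assumptions; they do not reproduce LLHD's actual IR syntax or passes.

```python
# A toy sketch of lowering a behavioural IR node into structural gates,
# in the spirit of a multi-level IR. Node kinds and the rewrite rule are
# illustrative assumptions; this is not LLHD's actual syntax or passes.

from dataclasses import dataclass, field

@dataclass
class Node:
    op: str                                       # "mux" is behavioural; and/or/not are structural
    inputs: list = field(default_factory=list)    # names of driving wires
    name: str = ""                                # name of the wire this node drives

def lower_mux(node, fresh):
    """Rewrite a behavioural 2:1 mux, y = sel ? a : b, into structural gates:
       y = (sel AND a) OR (NOT sel AND b)."""
    sel, a, b = node.inputs
    n, t0, t1 = fresh("n"), fresh("t"), fresh("t")
    return [
        Node("not", [sel], n),
        Node("and", [sel, a], t0),
        Node("and", [n, b], t1),
        Node("or", [t0, t1], node.name),  # the design's interface (y) is preserved
    ]

def lower(ir):
    """One lowering pass: rewrite behavioural nodes, pass structural ones through."""
    counter = [0]
    def fresh(prefix):
        counter[0] += 1
        return f"{prefix}{counter[0]}"
    out = []
    for node in ir:
        out.extend(lower_mux(node, fresh) if node.op == "mux" else [node])
    return out

behavioural = [Node("mux", ["sel", "a", "b"], "y")]
for gate in lower(behavioural):
    print(gate.name, "=", gate.op, *gate.inputs)
```

Running the script prints the four structural gates that replace the single behavioural mux, mirroring in miniature how a behavioural-to-structural pass preserves a design's interface while refining its implementation.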
Top-down and bottom-up modulation in processing bimodal face/voice stimuli
Background: Processing of multimodal information is a critical capacity of the human brain, with classic studies showing that bimodal stimulation can either facilitate or interfere with perceptual processing. Comparing activity elicited by congruent and incongruent bimodal stimuli can reveal sensory dominance in particular cognitive tasks.
Results: We investigated audiovisual interactions driven by stimulus properties (bottom-up influences) or by task (top-down influences) on congruent and incongruent simultaneously presented faces and voices while ERPs were recorded. Subjects performed gender categorisation, directing attention either to the faces or to the voices, and also judged whether the face/voice stimuli were congruent in gender. Behaviourally, the unattended modality affected processing in the attended modality, and the disruption was greater for attended voices. ERPs revealed top-down modulations of early brain processing (30-100 ms) over unisensory cortices. No effects were found on the N170 or VPP components, but from 180-230 ms larger right frontal activity was seen for incongruent than for congruent stimuli.
Conclusions: Our data demonstrate that in a gender categorisation task the processing of faces dominates over the processing of voices. Brain activity was modulated differently by top-down and bottom-up information: top-down influences modulated early brain activity, whereas bottom-up interactions occurred relatively late.