Design for a Darwinian Brain: Part 1. Philosophy and Neuroscience
Physical symbol systems are needed for open-ended cognition. A good way to
understand physical symbol systems is to compare thought to chemistry.
Both have systematicity, productivity and compositionality. The state of the
art in cognitive architectures for open-ended cognition is critically assessed.
I conclude that a cognitive architecture that evolves symbol structures in the
brain is a promising candidate to explain open-ended cognition. Part 2 of the
paper presents such a cognitive architecture.
Comment: Darwinian Neurodynamics. Submitted as a two-part paper to Living
Machines 2013, Natural History Museum, London.
Automated Mapping of UML Activity Diagrams to Formal Specifications for Supporting Containment Checking
Business analysts and domain experts often sketch the behaviors of a
software system using high-level models that are technology- and
platform-independent. The developers will refine and enrich these high-level
models with technical details. As a consequence, the refined models can deviate
from the original models over time, especially when the two kinds of models
evolve independently. In this context, we focus on behavior models; that is, we
aim to ensure that the refined, low-level behavior models conform to the
corresponding high-level behavior models. Based on existing formal verification
techniques, we propose containment checking as a means to assess whether the
system's behaviors described by the low-level models satisfy what has been
specified in the high-level counterparts. One of the major obstacles is how to
lessen the burden of creating formal specifications of the behavior models as
well as consistency constraints, which is a tedious and error-prone task when
done manually. Our approach presented in this paper aims at alleviating the
aforementioned challenges by considering the behavior models as verification
inputs and devising automated mappings of behavior models onto formal
properties and descriptions that can be directly used by model checkers. We
discuss various challenges in our approach and show the applicability of our
approach in illustrative scenarios.
Comment: In Proceedings FESCA 2014, arXiv:1404.043
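To make the idea of such an automated mapping concrete, here is a minimal hypothetical sketch (ours, not the authors' actual translation rules): each sequence edge of an activity diagram is mapped onto an LTL "response" property that a model checker could verify against the refined model. The predicate naming scheme (`executed_<node>`) is an assumption for illustration.

```python
# Hypothetical sketch of mapping activity-diagram edges to LTL response
# properties, in the spirit of containment checking. The predicate names
# (executed_<node>) are illustrative assumptions, not the paper's notation.

def edge_to_ltl(source: str, target: str) -> str:
    """Map a control-flow edge source -> target onto the LTL response
    pattern: globally, whenever `source` executes, `target` eventually
    executes afterwards."""
    return f"G (executed_{source} -> F executed_{target})"

def diagram_to_ltl(edges):
    """Conjoin the per-edge properties into one specification string
    suitable as input to an LTL model checker."""
    return " & ".join(edge_to_ltl(s, t) for s, t in edges)
```

For example, `diagram_to_ltl([("ReceiveOrder", "CheckStock")])` yields a single response property; a real mapping would also have to handle decision, fork, and join nodes, which is where the challenges discussed in the abstract arise.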
Empirical Potential Function for Simplified Protein Models: Combining Contact and Local Sequence-Structure Descriptors
An effective potential function is critical for protein structure prediction
and folding simulation. Simplified protein models such as those requiring only
Cα or backbone atoms are attractive because they enable efficient
search of the conformational space. We show that residue-specific reduced discrete
state models can represent the backbone conformations of proteins with small
RMSD values. However, no potential functions exist that are designed for such
simplified protein models. In this study, we develop optimal potential
functions by combining contact interaction descriptors and local
sequence-structure descriptors. The form of the potential function is a
weighted linear sum of all descriptors, and the optimal weight coefficients are
obtained through optimization using both native and decoy structures. The
performance of the potential function in discriminating native protein
structures from decoys is evaluated using several benchmark decoy sets. Our
potential function, requiring only backbone atoms or Cα atoms, has
comparable or better performance than several residue-based potential functions
that require additional coordinates of side chain centers or coordinates of all
side chain atoms. By reducing the residue alphabet down to size 5 for the local
structure-sequence relationship, the performance of the potential function can
be further improved. Our results also suggest that local sequence-structure
correlation may play an important role in reducing the entropic cost of protein
folding.
Comment: 20 pages, 5 figures, 4 tables. In press, Protein
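The weighted-linear-sum form described above can be written out explicitly; the symbols here ($E$, $f_i$, $w_i$, $m$) are our notation for illustration, not necessarily the paper's:

```latex
E(s) = \sum_{i=1}^{m} w_i \, f_i(s)
```

where $f_i(s)$ counts occurrences of the $i$-th contact or local sequence-structure descriptor in structure $s$, and the weights $w_i$ are obtained by optimization so that $E(\text{native}) < E(\text{decoy})$ holds across the training set of native and decoy structure pairs.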
A short survey of discourse representation models
With the advancement of technology and the wide adoption of ontologies as knowledge representation formats, in the last decade a handful of models have been proposed for the externalization of the rhetoric and argumentation captured within scientific publications. Conceptually, most of these models share a similar representation of the scientific publication, i.e. as a series of interconnected elementary knowledge items. The main differences are the terminology used, the types of rhetorical and/or argumentation relations connecting the knowledge items, and the foundational theories supporting these relations. This paper analyzes the state of the art and provides a concise comparative overview of the five most prominent discourse representation models, with the goal of sketching a unified model for discourse representation.
Array languages and the N-body problem
This paper describes the contributions to the SICSA multicore challenge on many-body
planetary simulation made by a compiler group at the University of Glasgow. Our group is part of
the Computer Vision and Graphics research group and we have for some years been developing array
compilers because we think these are a good tool both for expressing graphics algorithms and for
exploiting the parallelism that computer vision applications require.
We shall describe experiments using two languages on two different platforms and we shall compare
the performance of these with reference C implementations running on the same platforms. Finally
we shall draw conclusions both about the viability of the array language approach as compared to
other approaches used in the challenge, and about the strengths and weaknesses of the two
very different processor architectures we used.
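For readers unfamiliar with the benchmark, the computational core of a many-body planetary simulation is the O(N²) pairwise gravity pass, which array languages express as whole-array operations. The following is our own minimal reference sketch of that pass (scaled units with G = 1, and a small softening term — both assumptions for illustration), not the paper's code:

```python
# Minimal reference sketch of the O(N^2) pairwise gravitational
# acceleration pass at the heart of an N-body planetary simulation.
# Units are scaled so that g = 1.0; `eps` softens close encounters.

def accelerations(masses, positions, g=1.0, eps=1e-9):
    """Return the acceleration vector of each body due to pairwise
    gravity. `positions` is a list of [x, y, z] coordinates."""
    n = len(masses)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue  # a body exerts no force on itself
            dx = [positions[j][k] - positions[i][k] for k in range(3)]
            r2 = sum(c * c for c in dx) + eps  # softened squared distance
            inv_r3 = r2 ** -1.5
            for k in range(3):
                acc[i][k] += g * masses[j] * dx[k] * inv_r3
    return acc
```

An array compiler of the kind the paper discusses would map the two nested loops onto data-parallel whole-array operations rather than executing them sequentially.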
Understanding Hidden Memories of Recurrent Neural Networks
Recurrent neural networks (RNNs) have been successfully applied to various
natural language processing (NLP) tasks and achieved better results than
conventional methods. However, the lack of understanding of the mechanisms
behind their effectiveness limits further improvements on their architectures.
In this paper, we present a visual analytics method for understanding and
comparing RNN models for NLP tasks. We propose a technique to explain the
function of individual hidden state units based on their expected response to
input texts. We then co-cluster hidden state units and words based on the
expected response and visualize co-clustering results as memory chips and word
clouds to provide more structured knowledge on RNNs' hidden states. We also
propose a glyph-based sequence visualization based on aggregate information to
analyze the behavior of an RNN's hidden state at the sentence level. The
usability and effectiveness of our method are demonstrated through case studies
and reviews from domain experts.
Comment: Published at IEEE Conference on Visual Analytics Science and
Technology (IEEE VAST 2017).
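The "expected response" idea above can be sketched as follows: for each word in a corpus, average the change it induces in the hidden-state vector whenever the RNN reads it. This is our own simplified rendering of the concept, assuming precomputed hidden-state trajectories rather than a live model:

```python
# Sketch (ours, not the paper's implementation) of estimating each word's
# expected response: the mean hidden-state change induced when the word is
# read, averaged over all its occurrences in a corpus.

from collections import defaultdict

def expected_response(sequences):
    """`sequences` is a list of (words, states) pairs, where states[0] is
    the initial hidden state and states[t + 1] is the state after reading
    words[t]. Returns {word: mean hidden-state delta vector}."""
    sums = {}
    counts = defaultdict(int)
    for words, states in sequences:
        for t, w in enumerate(words):
            delta = [b - a for a, b in zip(states[t], states[t + 1])]
            if w not in sums:
                sums[w] = delta
            else:
                sums[w] = [s + d for s, d in zip(sums[w], delta)]
            counts[w] += 1
    return {w: [s / counts[w] for s in v] for w, v in sums.items()}
```

Co-clustering these per-word response vectors together with the hidden units would then yield the memory-chip and word-cloud views the abstract describes.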