Dynamic Positional Trees for Structural Image Analysis
Dynamic positional trees are a significant extension of dynamic trees that incorporates movable nodes. This addition makes sequence tracking viable within the model, but requires a new formulation to incorporate the prior over positions. The model is implemented using a structured variational procedure and is illustrated on synthetic raytraced images and image sequences. We consider the problem of structural image analysis, and in particular the inference of scene properties from image data. We are especially concerned with image decomposition, that is, obtaining the characteristic parts of an image and the relationships between them. The components of an image are not independent of each other; certain objects are expected to occur together, and objects are made up of different subcomponents. One way of thinking about this problem is by analogy with parsing a language: we are interested in parsing images. However, the important characteristics and structure of an image are significantly different from those of linguistic data. Those familiar with work on dynamic trees will be aware that they have been developed in the context of single static images [15, 1, 13]. It would be desirable if the benefits of the dynamic tree approach could also be made available for image sequences. Introducing a sequence model into the basic dynamic tree formalism is not straightforward, as a change in the position of an object is reflected in a change in the connectivity structure of the dynamic tree. Such a change would be hard to predict from the previous time slice and would be an inelegant representation of the dynamics: the connectivity structure is supposed to represent the structural characteristics of an object, most of which are preserved during movement. Here the dynamic tree is modified to incorporate position variables, resulting in a model where object movement can be represented as a change in the position components of the nodes representing that object.
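The core idea, that object motion changes node positions while the connectivity structure persists, can be illustrated with a minimal sketch (a hypothetical data structure for illustration only, not the authors' probabilistic model, which places a prior over positions and performs structured variational inference):

```python
from dataclasses import dataclass, field

@dataclass
class PositionalNode:
    """Tree node carrying an explicit 2-D position (illustrative)."""
    label: str
    position: tuple          # (x, y) position component of the node
    children: list = field(default_factory=list)

    def translate(self, dx, dy):
        # Object motion = shifting position components of the subtree;
        # the parent/child connectivity is left untouched.
        self.position = (self.position[0] + dx, self.position[1] + dy)
        for child in self.children:
            child.translate(dx, dy)

# A small object: a root node with two subcomponents.
root = PositionalNode("object", (5.0, 5.0))
root.children = [PositionalNode("part-a", (4.0, 6.0)),
                 PositionalNode("part-b", (6.0, 6.0))]

root.translate(1.0, 0.0)  # the whole object moves right by one unit
```

Under a naive dynamic tree, the same motion would instead show up as a rewiring of parent-child connections between time slices, which is what the positional extension avoids.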
Desirable properties for XML update mechanisms
The adoption of XML as the default data interchange format and the standardisation of the XPath and XQuery languages have resulted in significant research in the development and implementation of XML databases capable of processing queries efficiently. The ever-increasing deployment of XML in industry and the real-world requirement to support efficient updates to XML documents have more recently prompted research in dynamic XML labelling schemes. In this paper, we provide an overview of the recent research in dynamic XML labelling schemes. Our motivation is to define a set of properties that represent a more holistic dynamic labelling scheme, and we present our findings through an evaluation matrix for most of the existing schemes that provide update functionality.
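As background for the schemes surveyed, a classic static labelling approach is Dewey-style prefix labelling, where each node's label encodes its root-to-node path and labels compare componentwise in document order; dynamic schemes extend such labels so that insertions avoid wholesale relabelling. A minimal sketch of plain Dewey labels (illustrative only, not any specific surveyed scheme):

```python
def dewey_label(path):
    """Dewey-style label: dot-separated child indices along the
    root-to-node path, e.g. (1, 2) -> '1.2'."""
    return ".".join(str(i) for i in path)

def doc_order_key(label):
    # Labels compare in document order when compared componentwise.
    return [int(c) for c in label.split(".")]

labels = [dewey_label(p) for p in [(1,), (1, 1), (1, 2), (1, 1, 1)]]
labels.sort(key=doc_order_key)
print(labels)  # → ['1', '1.1', '1.1.1', '1.2']
```

The weakness that motivates dynamic schemes is visible here: inserting a new first child under node `1` would force relabelling of the existing siblings `1.1` and `1.2` and all their descendants.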
Code Prediction by Feeding Trees to Transformers
We advance the state-of-the-art in the accuracy of code prediction (next
token prediction) used in autocomplete systems. First, we report that using the
recently proposed Transformer architecture even out-of-the-box outperforms
previous neural and non-neural systems for code prediction. We then show that
by making the Transformer architecture aware of the syntactic structure of
code, we further increase the margin by which a Transformer-based system
outperforms previous systems. With this, it outperforms an RNN-based system
(similar to Hellendoorn et al., 2018) in accuracy by 18.3%, the Deep3 system
(Raychev et al., 2016) by 14.1%, and an adaptation of Code2Seq (Alon et al.,
2018) for code prediction by 14.4%.
We present in the paper several ways of communicating the code structure to
the Transformer, which is fundamentally built for processing sequence data. We
provide a comprehensive experimental evaluation of our proposal, along with
alternative design choices, on a standard Python dataset, as well as on a
Facebook-internal Python corpus. Our code and data preparation pipeline will be
made available as open source.
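One simple way of exposing syntactic structure to a sequence model, in the spirit of this abstract, is to linearize the AST by pre-order traversal so that node types and leaf values become tokens. A sketch using Python's standard `ast` module (an illustrative serialization, not the paper's exact encoding):

```python
import ast

def linearize(node):
    """Pre-order traversal of a Python AST into a flat token sequence,
    one simple way of feeding tree structure to a Transformer."""
    tokens = [type(node).__name__]
    # Leaf values (identifiers, literals) become tokens as well.
    if isinstance(node, ast.Name):
        tokens.append(node.id)
    elif isinstance(node, ast.Constant):
        tokens.append(repr(node.value))
    for child in ast.iter_child_nodes(node):
        tokens.extend(linearize(child))
    return tokens

tokens = linearize(ast.parse("x = 1 + 2"))
print(tokens)
```

The resulting sequence interleaves structural tokens (`Module`, `Assign`, `BinOp`, ...) with lexical ones (`x`, `1`, `2`), so a sequence model sees nesting information that a plain token stream discards.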
Complete RNA inverse folding: computational design of functional hammerhead ribozymes
Nanotechnology and synthetic biology currently constitute one of the most
innovative, interdisciplinary fields of research, poised to radically transform
society in the 21st century. This paper concerns the synthetic design of
ribonucleic acid molecules, using our recent algorithm, RNAiFold, which can
determine all RNA sequences whose minimum free energy secondary structure is a
user-specified target structure. Using RNAiFold, we design ten cis-cleaving
hammerhead ribozymes, all of which are shown to be functional by a cleavage
assay. We additionally use RNAiFold to design a functional cis-cleaving
hammerhead as a modular unit of a synthetic larger RNA. Analysis of kinetics on
this small set of hammerheads suggests that cleavage rate of computationally
designed ribozymes may be correlated with positional entropy, ensemble defect,
structural flexibility/rigidity and related measures. Artificial ribozymes have
been designed in the past either manually or by SELEX (Systematic Evolution of
Ligands by Exponential Enrichment); however, this appears to be the first
purely computational design and experimental validation of novel functional
ribozymes. RNAiFold is available at
http://bioinformatics.bc.edu/clotelab/RNAiFold/.
Comment: 17 pages, 2 tables, 7 figures, final version to appear in Nucleic
Acids Research.
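A small sketch of one necessary condition any such design must satisfy: every base pair demanded by the target dot-bracket structure must be realizable as a Watson-Crick or wobble pair in the candidate sequence (illustrative only; RNAiFold itself searches over sequences under a full thermodynamic minimum-free-energy model):

```python
def pairs_compatible(seq, dotbracket):
    """Check that every base pair implied by a dot-bracket target
    structure is a valid Watson-Crick or wobble pair in the candidate
    sequence. A necessary condition only: true inverse folding also
    requires that the target be the sequence's MFE structure."""
    valid = {("A", "U"), ("U", "A"), ("G", "C"),
             ("C", "G"), ("G", "U"), ("U", "G")}
    stack = []
    for i, ch in enumerate(dotbracket):
        if ch == "(":
            stack.append(i)
        elif ch == ")":
            j = stack.pop()          # j pairs with i
            if (seq[j], seq[i]) not in valid:
                return False
    return not stack                 # unbalanced structure fails too

print(pairs_compatible("GGGAAACCC", "(((...)))"))  # → True
```

A full inverse-folding search would enumerate or constrain candidate sequences and retain only those whose predicted MFE structure equals the target, which is the computation RNAiFold performs exhaustively.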
Tensor Product Generation Networks for Deep NLP Modeling
We present a new approach to the design of deep networks for natural language
processing (NLP), based on the general technique of Tensor Product
Representations (TPRs) for encoding and processing symbol structures in
distributed neural networks. A network architecture, the Tensor Product
Generation Network (TPGN), is proposed which is capable in principle of
carrying out TPR computation, but which uses unconstrained deep learning to
design its internal representations. Instantiated in a model for image-caption
generation, TPGN outperforms LSTM baselines when evaluated on the COCO dataset.
The TPR-capable structure enables interpretation of internal representations
and operations, which prove to contain considerable grammatical content. Our
caption-generation model can be interpreted as generating sequences of
grammatical categories and retrieving words by their categories from a plan
encoded as a distributed representation.
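The basic TPR operation, binding filler (symbol) vectors to role vectors by outer products and recovering a filler by contracting with its role, can be sketched as follows (toy orthonormal vectors chosen for illustration; this is the underlying TPR algebra, not the TPGN architecture itself):

```python
import numpy as np

# Toy filler (symbol) and role vectors; orthonormal so unbinding is exact.
fillers = {"cat": np.array([1.0, 0.0]), "sat": np.array([0.0, 1.0])}
roles = {"subject": np.array([1.0, 0.0, 0.0]),
         "verb":    np.array([0.0, 1.0, 0.0])}

# Binding: encode the structure {subject: cat, verb: sat} as one tensor,
# the sum of filler-role outer products.
T = (np.outer(fillers["cat"], roles["subject"])
     + np.outer(fillers["sat"], roles["verb"]))

# Unbinding: contract the tensor with a role vector to retrieve its filler.
recovered = T @ roles["subject"]
print(recovered)  # → [1. 0.], i.e. the filler bound to "subject"
```

Because the roles are orthonormal, each contraction returns exactly the filler bound to that role; with merely linearly independent roles, unbinding uses the dual basis instead.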