Hybrid robust deep and shallow semantic processing for creativity support in document production
The research performed in the DeepThought project (http://www.project-deepthought.net) aims at demonstrating the potential of deep linguistic processing when added to existing shallow methods that ensure robustness. Classical information retrieval is extended by high-precision concept indexing and relation detection. We use this approach to demonstrate the feasibility of three ambitious applications, one of which is a tool for creativity support in document production and collective brainstorming. This application is described in detail in this paper. Common to all three applications, and the basis for their development, is a platform for integrated linguistic processing. This platform is based on a generic software architecture that combines multiple NLP components and on Robust Minimal Recursion Semantics (RMRS) as a uniform representation language.
On the high-Q^2 behavior of the pion form factor for the transitions \gamma^* \gamma -> \pi^0 and \gamma^* \gamma^* -> \pi^0 within the nonlocal quark-pion model
The behavior of the transition pion form factor for processes \gamma^*\gamma
-> \pi^0 and \gamma^* \gamma^* -> \pi^0 at large values of space-like photon
momenta is estimated within the nonlocal covariant quark-pion model. It is
shown that, in general, the coefficient of the leading asymptotic term depends
dynamically on the ratio of the constituent quark mass and the average
virtuality of quarks in the vacuum and kinematically on the ratio of photon
virtualities. The kinematic dependence of the transition form factor allows us
to obtain the relation between the pion light-cone distribution amplitude and
the quark-pion vertex function. The dynamic dependence indicates that the
transition form factor \gamma^* \gamma -> \pi^0 at high momentum transfers is
very sensitive to the nonlocality size of nonperturbative fluctuations in the
QCD vacuum.
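Schematically, the large-Q^2 behavior described in this abstract takes the standard power-suppressed form (a sketch for orientation only; the coefficient function J and the symbols m_q and \lambda are illustrative names, not taken from the paper):

\[
Q^2\, F_{\gamma^*\gamma^*\pi^0}(Q_1^2, Q_2^2)
\;\xrightarrow{\;Q^2 \to \infty\;}\;
J\!\left(\omega;\, \frac{m_q^2}{\lambda^2}\right),
\qquad
\omega = \frac{Q_1^2 - Q_2^2}{Q_1^2 + Q_2^2},
\]

where \omega encodes the kinematic dependence on the ratio of photon virtualities, while the dynamic dependence enters through the ratio of the constituent quark mass m_q to the average virtuality \lambda of quarks in the vacuum.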
Light-cone distribution amplitudes of the baryon octet
We present results of the first ab initio lattice QCD calculation of the
normalization constants and first moments of the leading twist distribution
amplitudes of the full baryon octet, corresponding to the small transverse
distance limit of the associated S-wave light-cone wave functions. The P-wave
(higher twist) normalization constants are evaluated as well. The calculation is done using dynamical (clover) fermions on lattices of different volumes, with pion masses down to 222 MeV. Significant SU(3) flavor symmetry violation effects in the shape of the distribution amplitudes are observed.
Armillifer armillatus Pentastomiasis in African Immigrant, Germany
Emergent mechanisms for long timescales depend on training curriculum and affect performance in memory tasks
Recurrent neural networks (RNNs) in the brain and in silico excel at solving tasks with intricate temporal dependencies. The long timescales required for solving such tasks can arise from properties of individual neurons (the single-neuron timescale τ, e.g., the membrane time constant in biological neurons) or from recurrent interactions among them (the network-mediated timescale). However, the contribution of each mechanism to optimally solving memory-dependent tasks remains poorly understood. Here, we train RNNs to solve N-parity and N-delayed match-to-sample tasks with memory requirements controlled by N, simultaneously optimizing the recurrent weights and the τs. We find that for both tasks RNNs develop longer timescales with increasing N but, depending on the learning objective, use different mechanisms. Two distinct curricula define the learning objectives: sequential learning of a single N (single-head) or simultaneous learning of multiple Ns (multi-head). Single-head networks increase their τ with N and are able to solve tasks for large N, but they suffer from catastrophic forgetting. In contrast, multi-head networks, which are explicitly required to hold multiple concurrent memories, keep τ constant and develop longer timescales through recurrent connectivity. Moreover, we show that the multi-head curriculum increases training speed and network stability to ablations and perturbations, and allows RNNs to generalize better to tasks beyond their training regime. This curriculum also significantly improves training GRUs and LSTMs for large-N tasks. Our results suggest that adapting timescales to task requirements via recurrent interactions allows learning more complex objectives and improves the RNN's performance.
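To make the task and curriculum structure in this abstract concrete, the following minimal sketch (hypothetical helper names; the actual training code is not from the paper) generates N-parity targets and the two curriculum schedules, sequential single-head versus simultaneous multi-head:

```python
def n_parity_targets(bits, N):
    """Target at step t is the parity (sum mod 2) of the last N input bits."""
    return [sum(bits[max(0, t - N + 1): t + 1]) % 2 for t in range(len(bits))]

def single_head_curriculum(max_N):
    """Sequential curriculum: one N per training stage, increasing N.
    Each stage overwrites the last, which risks catastrophic forgetting."""
    return [[N] for N in range(2, max_N + 1)]

def multi_head_curriculum(max_N):
    """Simultaneous curriculum: each stage trains all Ns up to the current one,
    so the network must hold multiple concurrent memories."""
    return [list(range(2, N + 1)) for N in range(2, max_N + 1)]

print(n_parity_targets([1, 0, 1, 1, 0, 1], N=2))  # parity over a sliding window of 2
print(single_head_curriculum(4))   # [[2], [3], [4]]
print(multi_head_curriculum(4))    # [[2], [2, 3], [2, 3, 4]]
```

The point of the contrast is visible in the schedules alone: a single-head stage contains exactly one N, while a multi-head stage accumulates all previous Ns, which is what forces the network to develop timescales through recurrent connectivity rather than by growing τ.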
Regulation of the Activity in the p53 Family Depends on the Organization of the Transactivation Domain.
Despite high sequence homology among the p53 family members, the regulation of their transactivation potential is based on strikingly different mechanisms. Previous studies revealed that the activity of TAp63α is regulated via an autoinhibitory mechanism that keeps inactive TAp63α in a dimeric conformation. While all p73 isoforms are constitutive tetramers, their basal activity is much lower compared with tetrameric TAp63. We show that the dimeric state of TAp63α not only reduces DNA binding affinity, but also suppresses interaction with the acetyltransferase p300. Exchange of the transactivation domains is sufficient to transfer the regulatory characteristics between p63 and p73. Structure determination of the transactivation domains of p63 and p73 in complex with the p300 Taz2 domain further revealed that, in contrast to p53 and p73, p63 has a single transactivation domain. Sequences essential for stabilizing the closed dimer of TAp63α have evolved into a second transactivation domain in p73 and p53. The research was funded by the DFG (DO 545/8 and DO 545/13), the Center for Biomolecular Magnetic Resonance (BMRZ), and the Cluster of Excellence Frankfurt (Macromolecular Complexes). M.T. was supported by a fellowship from the Fonds of the Chemical Industry.