The Goldman symplectic form on the PGL(V)-Hitchin component
This article is the second of a pair of articles about the Goldman symplectic
form on the PGL(V)-Hitchin component. We show that any ideal triangulation on
a closed connected surface of genus at least 2, and any compatible bridge
system determine a symplectic trivialization of the tangent bundle to the
Hitchin component. Using this, we prove that a large class of flows defined in
the companion paper [SWZ17] are Hamiltonian. We also construct an explicit
collection of Hamiltonian vector fields on the Hitchin component that give a
symplectic basis at every point. These are used to show that the global
coordinate system on the Hitchin component defined in the companion paper is a
global Darboux coordinate system.
Comment: 95 pages, 24 figures, citations updated
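For context, this is the standard meaning of "global Darboux coordinate system" (a textbook definition, not a claim from the abstract): coordinates $(x_1, y_1, \ldots, x_n, y_n)$ are Darboux for a symplectic form $\omega$ when

```latex
\omega = \sum_{i=1}^{n} \mathrm{d}x_i \wedge \mathrm{d}y_i ,
```

so a global Darboux system puts the Goldman form in this constant standard shape over the whole Hitchin component; the Hamiltonian vector field $X_f$ of a function $f$ is then characterized by $\omega(X_f, \cdot) = \mathrm{d}f$.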
Exploring compression techniques for ROOT IO
ROOT provides a flexible file format used throughout the HEP community. The
number of use cases - from an archival data format to end-stage analysis - has
required a number of tradeoffs to be exposed to the user. For example, a high
"compression level" in the traditional DEFLATE algorithm will result in a
smaller file (saving disk space) at the cost of slower decompression (costing
CPU time when read). At the scale of an LHC experiment, poor design choices
can result in terabytes of wasted space or wasted CPU time. We explore and
attempt to quantify some of these tradeoffs. Specifically, we explore: the use
of alternate compression algorithms to optimize for read performance; an
alternate method of compressing individual events to allow efficient random
access; and a new approach to whole-file compression. Quantitative results are
given, as well as guidance on how to make compression decisions for different
use cases.
Comment: Proceedings for the 22nd International Conference on Computing in High
Energy and Nuclear Physics (CHEP 2016)
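As a rough illustration of the compression-level tradeoff the abstract describes, the sketch below times DEFLATE (via Python's `zlib`) at several levels on a synthetic payload. The payload and levels are illustrative stand-ins, not ROOT's actual on-disk layout or settings.

```python
import time
import zlib

# Synthetic, compressible payload standing in for serialized event data
# (illustrative only -- not ROOT's actual basket format).
payload = (b"event-data-" * 4096) + bytes(range(256)) * 64

for level in (1, 6, 9):  # DEFLATE levels: fastest, default, smallest output
    t0 = time.perf_counter()
    compressed = zlib.compress(payload, level)
    elapsed_ms = (time.perf_counter() - t0) * 1e3
    ratio = len(compressed) / len(payload)
    print(f"level={level}: ratio={ratio:.3f}, compress time={elapsed_ms:.2f} ms")

# Round-trip check: decompression recovers the original bytes.
assert zlib.decompress(compressed) == payload
```

Higher levels typically shrink the output at the cost of compression time; differences in *decompression* speed across algorithms (e.g. LZ4 versus DEFLATE) are what motivate the alternate-algorithm study described above.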
A Hybrid Neural Network Framework and Application to Radar Automatic Target Recognition
Deep neural networks (DNNs) have found applications in diverse signal
processing (SP) problems. Most efforts either directly adopt the DNN as a
black-box approach to perform certain SP tasks without taking into account
any known properties of the signal models, or insert a pre-defined SP operator
into a DNN as an add-on data processing stage. This paper presents a novel
hybrid-NN framework in which one or more SP layers are inserted into the DNN
architecture in a coherent manner to enhance the network capability and
efficiency in feature extraction. These SP layers are properly designed to make
good use of the available models and properties of the data. The network
training algorithm of hybrid-NN is designed to actively involve the SP layers
in the learning goal, by simultaneously optimizing both the weights of the DNN
and the unknown tuning parameters of the SP operators. The proposed hybrid-NN
is tested on a radar automatic target recognition (ATR) problem. It achieves
high validation accuracy of 96% with 5,000 training images in radar ATR.
Compared with an ordinary DNN, hybrid-NN can markedly reduce the required amount
of training data and improve the learning performance.
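A minimal sketch of the joint-training principle, assuming a made-up SP layer: an exponential window whose decay rate `alpha` plays the role of an unknown tuning parameter of the signal model, feeding a linear layer with weights `W`. Both names and the architecture are hypothetical; only the idea of updating the SP parameter and the network weights together matches the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical SP layer: an exponential window with tunable decay `alpha`
# (illustrative, not the paper's actual SP operators).
def sp_layer(x, alpha):
    n = np.arange(x.shape[-1])
    return x * np.exp(-alpha * n)

def forward(x, alpha, W):
    return sp_layer(x, alpha) @ W  # SP layer feeds a dense layer

def loss(x, y, alpha, W):
    return float(np.mean((forward(x, alpha, W) - y) ** 2))

# Toy data generated with a "true" decay of 0.1.
x = rng.normal(size=(64, 16))
y = forward(x, 0.1, rng.normal(size=(16, 1)))

alpha, W = 0.5, rng.normal(size=(16, 1))
lr, eps = 0.02, 1e-6
initial_loss = loss(x, y, alpha, W)
for _ in range(300):
    err = forward(x, alpha, W) - y
    grad_W = 2 * sp_layer(x, alpha).T @ err / x.shape[0]    # analytic gradient
    grad_a = (loss(x, y, alpha + eps, W)
              - loss(x, y, alpha - eps, W)) / (2 * eps)     # finite difference
    W -= lr * grad_W        # network weights ...
    alpha -= lr * grad_a    # ... and SP tuning parameter, updated jointly

print(f"loss: {initial_loss:.4f} -> {loss(x, y, alpha, W):.4f}")
```

The point of the sketch is the last two update lines: the SP operator's parameter participates in the same gradient-descent loop as the network weights, rather than being fixed in advance.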