Silicon compilation
Silicon compilation is a term used for many different purposes. In this paper we define silicon compilation as a mapping from some higher-level description into layout. We define the basic issues in structural and behavioral silicon compilation and some possible solutions to those issues. Finally, we define the concept of an intelligent silicon compiler, in which the compiler evaluates the quality of the generated design and attempts to improve it if it is not satisfactory.
Merlin: A Language for Provisioning Network Resources
This paper presents Merlin, a new framework for managing resources in
software-defined networks. With Merlin, administrators express high-level
policies using programs in a declarative language. The language includes
logical predicates to identify sets of packets, regular expressions to encode
forwarding paths, and arithmetic formulas to specify bandwidth constraints. The
Merlin compiler uses a combination of advanced techniques, including a
constraint solver that allocates bandwidth using parameterizable heuristics, to
translate these policies into code that can be executed on network elements. To
facilitate dynamic adaptation, Merlin provides mechanisms for delegating
control of sub-policies and for verifying that modifications made to
sub-policies do not violate global constraints. Experiments demonstrate the
expressiveness and scalability of Merlin on real-world topologies and
applications. Overall, Merlin simplifies network administration by providing
high-level abstractions for specifying network policies and scalable
infrastructure for enforcing them.
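The delegation mechanism described above can be illustrated with a toy model (hypothetical names and structure for illustration only; this is not Merlin's actual syntax or API): a parent policy owns a bandwidth cap, and a modification to delegated sub-policies is accepted only if the guarantees it hands out stay within that cap.

```python
from dataclasses import dataclass

@dataclass
class SubPolicy:
    """A delegated sub-policy (hypothetical model, not Merlin's real types)."""
    name: str
    min_guarantee: float  # Mbps guaranteed to this traffic class

def respects_global_cap(subs: list[SubPolicy], parent_cap: float) -> bool:
    # A modification to the sub-policies is safe if the bandwidth they
    # collectively guarantee never exceeds what the parent policy owns.
    return sum(s.min_guarantee for s in subs) <= parent_cap

subs = [SubPolicy("http", 400.0), SubPolicy("dns", 50.0)]
print(respects_global_cap(subs, 500.0))                            # -> True
print(respects_global_cap(subs + [SubPolicy("ftp", 100.0)], 500.0))  # -> False
```

The real Merlin verifier checks richer properties (paths and packet predicates, not just bandwidth sums), but the shape of the check is the same: a local modification is validated against the global constraint it was delegated from.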
S+Net: extending functional coordination with extra-functional semantics
This technical report introduces S+Net, a compositional coordination language
for streaming networks with extra-functional semantics. Compositionality
simplifies the specification of complex parallel and distributed applications;
extra-functional semantics allow the application designer to reason about and
control resource usage, performance and fault handling. The key feature of
S+Net is that functional and extra-functional semantics are defined
orthogonally to each other. S+Net can be seen as a simultaneous
simplification and extension of the existing coordination language S-Net, which
gives control of extra-functional behavior to the S-Net programmer. S+Net can
also be seen as a transitional research step between S-Net and AstraKahn,
another coordination language currently being designed at the University of
Hertfordshire. In contrast with AstraKahn which constitutes a re-design from
the ground up, S+Net preserves the basic operational semantics of S-Net and
thus provides an incremental introduction of extra-functional control in an
existing language.
Comment: 34 pages, 11 figures, 3 tables
Kevoree Modeling Framework (KMF): Efficient modeling techniques for runtime use
The creation of Domain-Specific Languages (DSLs) counts as one of the main
goals in the field of Model-Driven Software Engineering (MDSE). The main
purpose of these DSLs is to facilitate the manipulation of domain specific
concepts, by providing developers with specific tools for their domain of
expertise. A natural approach to create DSLs is to reuse existing modeling
standards and tools. In this area, the Eclipse Modeling Framework (EMF) has
rapidly become the de facto standard in MDSE for building DSLs and tools
based on generative techniques. However, the use of
EMF-generated tools in domains like the Internet of Things (IoT), Cloud Computing,
or Models@Runtime runs into several limitations. In this paper, we identify
several properties the generated tools must comply with to be usable in domains
other than desktop-based software systems. We then challenge EMF on these
properties and describe our approach to overcome the limitations. Our approach,
implemented in the Kevoree Modeling Framework (KMF), is finally evaluated
according to the identified properties and compared to EMF.
Comment: ISBN 978-2-87971-131-7; N° TR-SnT-2014-11 (2014)
Natural Compression for Distributed Deep Learning
Modern deep learning models are often trained in parallel over a collection
of distributed machines to reduce training time. In such settings,
communication of model updates among machines becomes a significant performance
bottleneck and various lossy update compression techniques have been proposed
to alleviate this problem. In this work, we introduce a new, simple yet
theoretically and practically effective compression technique: natural
compression (NC). Our technique is applied individually to all entries of the
to-be-compressed update vector and works by randomized rounding to the nearest
(negative or positive) power of two, which can be computed in a "natural" way
by ignoring the mantissa. We show that compared to no compression, NC increases
the second moment of the compressed vector by at most the tiny factor
9/8, which means that the effect of NC on the convergence speed
of popular training algorithms, such as distributed SGD, is negligible.
However, the communication savings enabled by NC are substantial, leading to
an improvement in overall theoretical running time. For
applications requiring more aggressive compression, we generalize NC to
natural dithering, which we prove is exponentially better than the
common random dithering technique. Our compression operators can be used on
their own or in combination with existing operators for a more aggressive
combined effect, and offer a new state of the art in both theory and practice.
Comment: 8 pages, 20 pages of appendix, 6 tables, 14 figures
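The core NC operator described above, randomized rounding to the nearest power of two, can be sketched in a few lines (a minimal illustration based on the abstract, not the authors' implementation; in practice the rounding exploits the floating-point exponent directly, while this sketch uses `log2` for clarity):

```python
import math
import random

def natural_compress(x: float) -> float:
    """Randomized rounding of x to the nearest power of two, sign preserved.

    For |x| between 2^p and 2^(p+1), round up with probability
    (|x| - 2^p) / 2^p, else round down. This makes the operator
    unbiased: E[C(x)] = x.
    """
    if x == 0.0:
        return 0.0
    sign = 1.0 if x > 0 else -1.0
    a = abs(x)
    p = math.floor(math.log2(a))
    low, high = 2.0 ** p, 2.0 ** (p + 1)
    if random.random() < (a - low) / low:
        return sign * high
    return sign * low
```

For example, 3.0 is mapped to 2.0 or 4.0 with equal probability, so the expected output is exactly 3.0; an exact power of two like 8.0 is always returned unchanged. Only the sign and exponent of the result need to be communicated, which is where the bandwidth savings come from.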