FPGA-accelerated machine learning inference as a service for particle physics computing
New heterogeneous computing paradigms on dedicated hardware with increased
parallelization, such as Field Programmable Gate Arrays (FPGAs), offer exciting
solutions with large potential gains. The growing applications of machine
learning algorithms in particle physics for simulation, reconstruction, and
analysis are naturally deployed on such platforms. We demonstrate that the
acceleration of machine learning inference as a web service represents a
heterogeneous computing solution for particle physics experiments that
potentially requires minimal modification to the current computing model. As
examples, we retrain the ResNet-50 convolutional neural network to demonstrate
state-of-the-art performance for top quark jet tagging at the LHC and apply a
ResNet-50 model with transfer learning for neutrino event classification. Using
Project Brainwave by Microsoft to accelerate the ResNet-50 image classification
model, we achieve average inference times of 60 (10) milliseconds with our
experimental physics software framework using Brainwave as a cloud (edge or
on-premises) service, representing an improvement by a factor of approximately
30 (175) in model inference latency over traditional CPU inference in current
experimental hardware. A single FPGA service accessed by many CPUs achieves a
throughput of 600--700 inferences per second using an image batch of one,
comparable to large batch-size GPU throughput and significantly better than
small batch-size GPU throughput. Deployed as an edge or cloud service for the
particle physics computing model, coprocessor accelerators can have a higher
duty cycle and are potentially much more cost-effective. Comment: 16 pages, 14 figures, 2 tables
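The reported throughput and latency are linked by Little's law (average in-flight requests = throughput × latency), which indicates how many concurrent CPU clients one FPGA service can keep busy. A small sketch of that arithmetic, using round numbers assumed from the reported ranges (~650 inferences/s; 60 ms cloud, 10 ms edge latency):

```python
# Little's law: L = lambda * W, where lambda is throughput and W is latency.
# The figures below are round numbers assumed from the abstract's ranges.

def concurrent_requests(throughput_per_s: float, latency_s: float) -> float:
    """Average number of requests that must be in flight to sustain the throughput."""
    return throughput_per_s * latency_s

cloud = concurrent_requests(650, 0.060)  # cloud service at ~60 ms per inference
edge = concurrent_requests(650, 0.010)   # edge/on-premises service at ~10 ms

print(f"cloud: ~{cloud:.1f} in-flight requests")
print(f"edge:  ~{edge:.1f} in-flight requests")
```

The lower edge latency means far fewer outstanding requests are needed to saturate the same accelerator, which is one way to read the claimed factor-of-30 versus factor-of-175 latency improvements.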
The EPICS Software Framework Moves from Controls to Physics
The Experimental Physics and Industrial Control System (EPICS) is an open-source software framework for high-performance distributed control, and is at the heart of many of the world's large accelerators and telescopes. Recently, EPICS has undergone a major revision, aimed at better supporting the computing needs of the next generation of machines and analytical tools. Many new data types, such as matrices, tables, images, and statistical descriptions, plus user-defined data types, now supplement the simple scalar and waveform types of the former EPICS. New computational architectures for scientific computing have been added for high-performance data-processing services and pipelining. Python and Java bindings have enabled powerful new user interfaces. As a result, controls are now being integrated with modelling and simulation, machine learning, enterprise databases, and experiment DAQs. We introduce this new EPICS (version 7) from the perspective of accelerator physics and review early adoption cases in accelerators around the world.
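The structured types mentioned above follow the EPICS 7 Normative Types convention (e.g. an NTTable carries column labels plus equal-length value columns). A rough Python mock of that shape, purely for illustration — this mirrors the naming convention, not the actual pvData/p4p API:

```python
# Mock of an EPICS 7 NTTable-style structured value: column labels plus
# equal-length value columns. Field names ("labels", "value") follow the
# Normative Types convention, but the dict itself is an illustration only.
nttable = {
    "labels": ["element", "s_m", "beta_x_m"],
    "value": {
        "element":  ["Q1", "Q2", "B1"],
        "s_m":      [1.0, 2.5, 4.0],
        "beta_x_m": [12.3, 8.7, 15.1],
    },
}

def rows(table):
    """Iterate an NTTable-like dict row by row, in label order."""
    cols = [table["value"][label] for label in table["labels"]]
    return list(zip(*cols))

for row in rows(nttable):
    print(row)
```

This is the kind of richer-than-scalar payload that lets controls data flow directly into modelling, machine-learning, and database tools.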
Architecture, design and source code comparison of ns-2 and ns-3 network simulators
Ns-2 and its successor ns-3 are discrete-event simulators. Ns-3 is still under development, but offers some interesting characteristics for developers, while ns-2 still has a large user base. This paper highlights the current differences between the two tools from a developer's point of view. Leaving performance and resource consumption aside, the technical issues described in this paper may help in choosing one or the other alternative, depending on simulation and project-management requirements.
Ministerio de Educación y Ciencia TIN2006-15617-C03-03; Junta de Andalucía P06-TIC-229
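Both tools share the same core abstraction: a discrete-event kernel that drains a time-ordered event queue. A minimal sketch of that kernel in Python (the real simulators are C++/OTcl and C++, respectively; this only illustrates the shared scheduling model):

```python
import heapq

class Simulator:
    """Minimal discrete-event kernel: the abstraction shared by ns-2 and ns-3."""
    def __init__(self):
        self._queue = []   # heap of (time, seq, callback)
        self._seq = 0      # insertion counter breaks ties at equal timestamps
        self.now = 0.0

    def schedule(self, delay, callback):
        heapq.heappush(self._queue, (self.now + delay, self._seq, callback))
        self._seq += 1

    def run(self):
        # Pop events in timestamp order; simulated time jumps event to event.
        while self._queue:
            self.now, _, callback = heapq.heappop(self._queue)
            callback()

# Toy usage: a packet "sent" at t=1.0 "arrives" after a 0.5 s link delay.
sim = Simulator()
log = []

def send():
    log.append(("send", sim.now))
    sim.schedule(0.5, receive)

def receive():
    log.append(("recv", sim.now))

sim.schedule(1.0, send)
sim.run()
print(log)  # [('send', 1.0), ('recv', 1.5)]
```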
ClovaCall: Korean Goal-Oriented Dialog Speech Corpus for Automatic Speech Recognition of Contact Centers
Automatic speech recognition (ASR) via call is essential for various
applications, including AI for contact center (AICC) services. Despite the
advancement of ASR, however, most publicly available call-based speech corpora
such as Switchboard are old-fashioned. Also, most existing call corpora are in
English and mainly focus on open domain dialog or general scenarios such as
audiobooks. Here we introduce a new large-scale Korean call-based speech corpus
under a goal-oriented dialog scenario from more than 11,000 people, i.e.,
ClovaCall corpus. ClovaCall includes approximately 60,000 pairs of a short
sentence and its corresponding spoken utterance in a restaurant reservation
domain. We validate the effectiveness of our dataset with intensive experiments
using two standard ASR models. Furthermore, we release our ClovaCall dataset
and baseline source code at
https://github.com/ClovaAI/ClovaCall. Comment: 5 pages, 2 figures, 4 tables. The first two authors
contributed equally to this work.
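Validating ASR models on a corpus like this is typically done with word error rate (WER): the Levenshtein distance between the reference and hypothesis word sequences, normalized by the reference length. A self-contained sketch:

```python
# Word error rate (WER), the standard ASR evaluation metric: minimum number of
# word substitutions, insertions, and deletions, divided by reference length.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# One deletion ("a") plus one substitution ("two" -> "too") over 5 words = 0.4
print(wer("book a table for two", "book table for too"))
```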
Uncertainty Analysis for Data-Driven Chance-Constrained Optimization
In this contribution, our previously developed framework for data-driven chance-constrained optimization is extended with an uncertainty-analysis module. The module quantifies uncertainty in the output variables of rigorous simulations. It chooses the most accurate parametric continuous probability distribution model, minimizing the deviation between model and data. A constraint is added to favour less complex models subject to a minimal required goodness of fit. The module builds on more than 100 probability distribution models provided by the SciPy package in Python; a rigorous case study is conducted to select the four most relevant models for the application at hand. The applicability and precision of the uncertainty-analysis module are investigated for an impact-factor calculation in life cycle impact assessment, quantifying the uncertainty in the results. Furthermore, the extended framework is verified with data from a first-principles process model of a chlor-alkali plant, demonstrating the increased precision of the uncertainty description of the output variables and resulting in a 25% increase in accuracy in the chance-constraint calculation.
BMWi, 0350013A, ChemEFlex - feasibility analysis for load flexibilization of electrochemical processes in industry; subproject: modelling of chlor-alkali electrolysis and other processes and their assessment with regard to economic viability and possible barriers. DFG, 414044773, Open Access Publizieren 2019 - 2020 / Technische Universität Berlin
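The core idea of such a module can be sketched with SciPy: fit several candidate distributions to sampled output data, score each by its Kolmogorov-Smirnov deviation, and penalize extra shape parameters to favour simpler models. The candidate set and penalty weight below are illustrative choices, not the paper's exact configuration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=2000)  # stand-in simulation output

candidates = ["norm", "lognorm", "gamma", "weibull_min"]
penalty = 0.005  # score penalty per shape parameter (favours simpler models)

best_name, best_score = None, np.inf
for name in candidates:
    dist = getattr(stats, name)
    params = dist.fit(data)                      # maximum-likelihood fit
    ks = stats.kstest(data, name, args=params).statistic
    score = ks + penalty * dist.numargs          # numargs = no. of shape params
    if score < best_score:
        best_name, best_score = name, score

print(best_name, round(best_score, 4))
```

On the normally distributed test data above, the zero-shape-parameter normal model should be competitive once the complexity penalty is applied.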
SYGMA: Stellar Yields for Galactic Modeling Applications
The stellar yields for galactic modeling applications (SYGMA) code is an
open-source module that models the chemical ejecta and feedback of simple
stellar populations (SSPs). It is intended for use in hydrodynamical
simulations and semi-analytic models of galactic chemical evolution. The module
includes the enrichment from asymptotic giant branch (AGB) stars, massive
stars, SNIa and neutron-star mergers. An extensive and extendable stellar
yields library includes the NuGrid yields with all elements and many isotopes
up to Bi. Stellar feedback from mechanical and frequency-dependent radiative
luminosities are computed based on NuGrid stellar models and their synthetic
spectra. The module further allows for customizable initial-mass functions and
supernova Ia (SNIa) delay-time distributions to calculate time-dependent ejecta
based on stellar yield input. A variety of r-process sites can be included. A
comparison of SSP ejecta based on NuGrid yields with those from Portinari et
al. (1998) and Marigo (2001) reveals up to a factor of 3.5 and 4.8 less C and N
enrichment from AGB stars at low metallicity, a result we attribute to NuGrid's
modeling of hot-bottom burning. Different core-collapse supernova explosion and
fallback prescriptions may lead to substantial variations for the accumulated
ejecta of C, O, and Si at early times. An online
interface of the open-source SYGMA module enables interactive simulations,
analysis and data extraction of the evolution of all species formed by the
evolution of simple stellar populations. Comment: 18 pages, 10 figures, 3 tables, published in ApJ
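At its core, computing SSP ejecta means convolving a stellar yield table with an initial-mass function (IMF). A toy sketch of that convolution, using the standard Salpeter IMF slope; the two-column "yield table" is made up for illustration and is not NuGrid data:

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoidal integration (kept local to avoid NumPy naming differences)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def salpeter_imf(m):
    """Salpeter IMF, dN/dm proportional to m^-2.35 (unnormalized)."""
    return m ** -2.35

# Hypothetical carbon yields y_C(m): solar masses of C ejected per star of
# initial mass m. Illustrative numbers only, not NuGrid values.
grid_m  = np.array([1.0, 3.0, 8.0, 15.0, 25.0])
grid_yc = np.array([1e-3, 3e-3, 2e-2, 1e-1, 3e-1])

m = np.linspace(1.0, 25.0, 5000)
imf = salpeter_imf(m)
# Normalize so the SSP forms 1 Msun of stars in the 1-25 Msun range.
imf /= trapezoid(m * salpeter_imf(m), m)

yc = np.interp(m, grid_m, grid_yc)
total_c = trapezoid(imf * yc, m)  # Msun of C ejected per Msun of stars formed
print(f"{total_c:.2e} Msun of C per Msun of stars formed")
```

Swapping in a different IMF or yield table is exactly where the factor-of-several differences between yield sets (NuGrid vs. Portinari et al. vs. Marigo) enter the accumulated ejecta.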