Benchmarking CPUs and GPUs on embedded platforms for software receiver usage
Smartphones containing multi-core central processing units (CPUs) and powerful many-core graphics processing units (GPUs) bring supercomputing technology into our pockets and embedded devices. This can be exploited to produce power-efficient, customized receivers with flexible correlation schemes and more advanced positioning techniques. For example, promising techniques such as the Direct Position Estimation paradigm or tracking solutions based on particle filtering seem very appealing in challenging environments but are also computationally quite demanding. This article sheds some light on recent embedded processor developments, benchmarks Fast Fourier Transform (FFT) and correlation algorithms on representative embedded platforms, and relates the results to their use in GNSS software radios. The use of embedded CPUs for signal tracking appears straightforward, but more research is required to fully achieve the nominal peak performance of an embedded GPU for FFT computation. Electrical power consumption is also measured at several load levels.
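The core operation such benchmarks exercise can be sketched in a few lines. The following is an illustrative example, not code from the article: circular correlation of a received signal with a local code replica computed through the FFT, the O(N log N) formulation that a GNSS software receiver would offload to a CPU or GPU FFT library. The code length, delay, and noise level are invented for the demonstration.

```python
import numpy as np

def fft_correlate(received, replica):
    """Circular cross-correlation via the frequency domain:
    O(N log N) instead of O(N^2) for direct correlation."""
    R = np.fft.fft(received)
    C = np.fft.fft(replica)
    return np.fft.ifft(R * np.conj(C))

rng = np.random.default_rng(0)
code = rng.choice([-1.0, 1.0], size=1024)   # toy PRN-like spreading code
delay = 37                                   # injected code delay (samples)
received = np.roll(code, delay) + 0.1 * rng.standard_normal(1024)

corr = np.abs(fft_correlate(received, code))
print(int(np.argmax(corr)))  # correlation peak recovers the delay: 37
```

The three transforms inside `fft_correlate` are exactly the FFT workload whose throughput on embedded CPUs and GPUs the article measures.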
Constraining compressed versions of MUED and MSSM using soft tracks at the LHC
A compressed spectrum is an anticipated hideout for many beyond standard
model scenarios. Such a spectrum naturally arises in the minimal universal
extra dimension framework and also in supersymmetric scenarios. Soft
(low-momentum) leptons and jets are characteristic features of such situations.
Hence, a monojet accompanied by large missing transverse momentum has been the
conventional signal at the Large Hadron Collider (LHC). However, we stress that
the inclusion of momentum-binned track observables built from such soft objects
provides very efficient discrimination of new
physics signals against various SM backgrounds. We consider two benchmark
points each for minimal universal extra dimension (MUED) and minimal
supersymmetric standard model (MSSM) scenarios. We perform a detailed cut-based
and multivariate analysis (MVA) to show that the new physics parameter space
can be probed in the ongoing run of the LHC at 13 TeV center-of-mass energy
with an integrated luminosity of 20-50 fb^-1. When studied in conjunction with
the dark matter relic density constraint assuming standard cosmology, we find
that compressed MUED can already be excluded from the existing data. Also,
MVA turns out to be a better technique than a regular cut-based analysis,
since tracks provide uncorrelated observables that
extract more information from an event. Comment: 26 pages, 7 figures. Minor
modifications in the text, references added, accepted for publication in JHEP
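A toy numerical example, entirely invented and not the paper's analysis, shows why combining several (roughly uncorrelated) soft-track observables can beat a single hard cut. Two Gaussian observables are drawn per event for "signal" and "background", and the significance S/sqrt(B) of one cut is compared against a cut on their sum, a stand-in for a multivariate discriminant.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
# background centred at 0; signal shifted in both (uncorrelated) observables
bkg = rng.standard_normal((n, 2))
sig = rng.standard_normal((n, 2)) + 1.0

def significance(sig_pass, bkg_pass, xsec_ratio=0.01):
    """S/sqrt(B) for events passing a selection; xsec_ratio rescales
    the signal to a (made-up) small production cross section."""
    s, b = sig_pass.sum() * xsec_ratio, bkg_pass.sum()
    return s / np.sqrt(b)

# single-observable cut
z_cut = significance(sig[:, 0] > 1.5, bkg[:, 0] > 1.5)
# combined discriminant: sum of both observables (optimal for this toy)
z_mva = significance(sig.sum(axis=1) > 3.0, bkg.sum(axis=1) > 3.0)
print(z_mva > z_cut)  # combining observables improves significance: True
```

Each extra uncorrelated observable sharpens the separation between the two populations, which is the intuition behind the paper's claim that track observables feed extra information into the MVA.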
2016 Annual Impact Investor Survey
The sixth edition of the Annual Impact Investor Survey is based on an analysis of the activities of 158 of the world's leading impact investing organizations, including fund managers, foundations, banks, development finance institutions, family offices, pension funds, and insurance companies. The survey provides detailed insight into investor perceptions and a number of key market variables such as types of investors, the number and size of investments made, target returns, attitudes towards liquidity and responsible exits, and impact measurement practices. This "State of the Market" analysis explores how investments continue to be made across different geographies, a range of sectors, and multiple asset classes, signaling continued market growth and an increasing interest in impact investing opportunities. J.P. Morgan is an anchor sponsor of the 2016 survey. The study was also produced with support from the U.K. Government through the Department for International Development's Impact Programme
A Two-Tiered Correlation of Dark Matter with Missing Transverse Energy: Reconstructing the Lightest Supersymmetric Particle Mass at the LHC
We suggest that non-trivial correlations between the dark matter particle
mass and collider based probes of missing transverse energy H_T^miss may
facilitate a two-tiered approach to the initial discovery of supersymmetry and
the subsequent reconstruction of the LSP mass at the LHC. These correlations
are demonstrated via extensive Monte Carlo simulation of seventeen benchmark
models, each sampled at five distinct LHC center-of-mass beam energies,
spanning the parameter space of No-Scale F-SU(5). This construction is defined
in turn by the union of the Flipped SU(5) Grand Unified Theory, two pairs of
hypothetical TeV scale vector-like supersymmetric multiplets with origins in
F-theory, and the dynamically established boundary conditions of No-Scale
Supergravity. In addition, we consider a control sample comprised of a standard
minimal Supergravity benchmark point. Led by a striking similarity between the
H_T^miss distribution and the familiar power spectrum of a black body radiator
at various temperatures, we implement a broad empirical fit of our simulation
against a Poisson distribution ansatz. We advance the resulting fit as a
theoretical blueprint for deducing the mass of the LSP, utilizing only the
missing transverse energy in a statistical sampling of >= 9 jet events.
Cumulative uncertainties central to the method subsist at a satisfactory 12-15%
level. The fact that the supersymmetric particle spectrum of No-Scale F-SU(5)
has survived the withering onslaught of early LHC data that is steadily
decimating the Constrained Minimal Supersymmetric Standard Model and minimal
Supergravity parameter spaces is a prime motivation for augmenting more
conventional LSP search methodologies with the presently proposed alternative.
Comment: JHEP version, 17 pages, 9 figures, 2 tables
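The mechanics of fitting a Poisson-shaped ansatz to a binned spectrum can be illustrated in a few lines. This sketch is not the authors' fit: for a pure Poisson shape, the maximum-likelihood estimate of the single parameter lambda is simply the sample mean, so the histogram itself recovers the temperature-like parameter directly. The event counts and lambda below are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
lam_true = 6.0                      # stand-in for the temperature-like parameter
events = rng.poisson(lam_true, size=50_000)

# bin the "spectrum" and recover lambda from the histogram's first moment
bins = np.arange(events.max() + 2)
counts, _ = np.histogram(events, bins=bins)
centers = bins[:-1]
lam_fit = (counts * centers).sum() / counts.sum()

print(abs(lam_fit - lam_true) < 0.1)  # recovered parameter is close: True
```

In the paper's approach the analogous fitted parameter of the H_T^miss distribution is what correlates with, and hence reconstructs, the LSP mass.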
Garbage collection auto-tuning for Java MapReduce on Multi-Cores
MapReduce has been widely accepted as a simple programming pattern that can form the basis for efficient, large-scale, distributed data processing. The success of the MapReduce pattern has led to a variety of implementations for different computational scenarios. In this paper we present MRJ, a MapReduce Java framework for multi-core architectures. We evaluate its scalability on a four-core, hyperthreaded Intel Core i7 processor, using a set of standard MapReduce benchmarks. We investigate the significant impact that Java runtime garbage collection has on the performance and scalability of MRJ. We propose the use of memory management auto-tuning techniques based on machine learning. With our auto-tuning approach, we are able to achieve MRJ performance within 10% of optimal on 75% of our benchmark tests.
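A minimal sketch of the auto-tuning idea follows. All details here are assumptions for illustration, not MRJ's actual model: from programs benchmarked offline, a new program's garbage-collection configuration is chosen by nearest-neighbour lookup on simple runtime features.

```python
import math

# hypothetical training data gathered offline:
# (allocation rate in MB/s, live set in MB) -> best heap size observed
training = {
    (200.0, 50.0): "512m",
    (800.0, 300.0): "2g",
    (1500.0, 900.0): "4g",
}

def predict_heap(features):
    """Pick the GC/heap configuration of the closest known benchmark
    (1-nearest-neighbour in Euclidean feature space)."""
    nearest = min(training, key=lambda k: math.dist(k, features))
    return training[nearest]

print(predict_heap((750.0, 280.0)))  # -> 2g
```

Real auto-tuners use richer feature sets and learned models, but the workflow is the same: measure once per configuration offline, then predict a configuration for unseen workloads.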
MLPerf Inference Benchmark
Machine-learning (ML) hardware and software system demand is burgeoning.
Driven by ML applications, the number of different ML inference systems has
exploded. Over 100 organizations are building ML inference chips, and the
systems that incorporate existing models span at least three orders of
magnitude in power consumption and five orders of magnitude in performance;
they range from embedded devices to data-center solutions. Fueling the hardware
are a dozen or more software frameworks and libraries. The myriad combinations
of ML hardware and ML software make assessing ML-system performance in an
architecture-neutral, representative, and reproducible manner challenging.
There is a clear need for industry-wide standard ML benchmarking and evaluation
criteria. MLPerf Inference answers that call. In this paper, we present our
benchmarking method for evaluating ML inference systems. Driven by more than 30
organizations as well as more than 200 ML engineers and practitioners, MLPerf
prescribes a set of rules and best practices to ensure comparability across
systems with wildly differing architectures. The first call for submissions
garnered more than 600 reproducible inference-performance measurements from 14
organizations, representing over 30 systems that showcase a wide range of
capabilities. The submissions attest to the benchmark's flexibility and
adaptability. Comment: ISCA 2020
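The measurement style such a benchmark standardises can be sketched as follows. This is an illustrative harness, not MLPerf's LoadGen: in a single-stream scenario, queries are issued one at a time and the score is a tail-latency percentile (the single-stream metric is a 90th-percentile latency). The dummy "model" is a placeholder.

```python
import time
import statistics

def dummy_model(x):
    time.sleep(0.001)  # stand-in for actual inference work
    return x

# issue queries back-to-back, timing each one individually
latencies = []
for q in range(50):
    t0 = time.perf_counter()
    dummy_model(q)
    latencies.append(time.perf_counter() - t0)

p90 = statistics.quantiles(latencies, n=10)[-1]  # 90th-percentile latency
print(p90 >= 0.001)  # tail latency cannot beat the work itself: True
```

Pinning down the scenario (single-stream, multi-stream, server, offline) and the percentile reported is precisely what makes results comparable across wildly different systems.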