EEG-Based User Reaction Time Estimation Using Riemannian Geometry Features
Riemannian geometry has been successfully used in many brain-computer
interface (BCI) classification problems and demonstrated superior performance.
In this paper, for the first time, it is applied to BCI regression problems, an
important category of BCI applications. More specifically, we propose a new
feature extraction approach for Electroencephalogram (EEG) based BCI regression
problems: a spatial filter is first used to increase the signal quality of the
EEG trials and also to reduce the dimensionality of the covariance matrices,
and then Riemannian tangent space features are extracted. We validate the
performance of the proposed approach in reaction time estimation from EEG
signals measured in a large-scale sustained-attention psychomotor vigilance
task, and show that compared with the traditional powerband features, the
tangent space features can reduce the root mean square estimation error by
4.30-8.30%, and increase the estimation correlation coefficient by 6.59-11.13%.Comment: arXiv admin note: text overlap with arXiv:1702.0291
Indoor Positioning for Monitoring Older Adults at Home: Wi-Fi and BLE Technologies in Real Scenarios
This paper presents our experience with a real case of applying an indoor localization system for monitoring older adults in their own homes. Since the system is designed to be used by real users, there are many situations that cannot be controlled by system developers and can be a source of errors. This paper presents some of the problems that arise when real non-expert users use localization systems and discusses some strategies to deal with such situations. Two technologies were tested to provide indoor localization: Wi-Fi and Bluetooth Low Energy. The results shown in the paper suggest that the Bluetooth Low Energy-based system is preferable for the proposed task.
MLPerf Inference Benchmark
Machine-learning (ML) hardware and software system demand is burgeoning.
Driven by ML applications, the number of different ML inference systems has
exploded. Over 100 organizations are building ML inference chips, and the
systems that incorporate existing models span at least three orders of
magnitude in power consumption and five orders of magnitude in performance;
they range from embedded devices to data-center solutions. Fueling the hardware
are a dozen or more software frameworks and libraries. The myriad combinations
of ML hardware and ML software make assessing ML-system performance in an
architecture-neutral, representative, and reproducible manner challenging.
There is a clear need for industry-wide standard ML benchmarking and evaluation
criteria. MLPerf Inference answers that call. In this paper, we present our
benchmarking method for evaluating ML inference systems. Driven by more than 30
organizations as well as more than 200 ML engineers and practitioners, MLPerf
prescribes a set of rules and best practices to ensure comparability across
systems with wildly differing architectures. The first call for submissions
garnered more than 600 reproducible inference-performance measurements from 14
organizations, representing over 30 systems that showcase a wide range of
capabilities. The submissions attest to the benchmark's flexibility and
adaptability.
Comment: ISCA 202