A Dual-Stage Attention-Based Recurrent Neural Network for Time Series Prediction
The nonlinear autoregressive exogenous (NARX) model, which predicts the
current value of a time series based upon its previous values as well as the
current and past values of multiple driving (exogenous) series, has been
studied for decades. Despite the fact that various NARX models have been
developed, few of them can capture the long-term temporal dependencies
appropriately and select the relevant driving series to make predictions. In
this paper, we propose a dual-stage attention-based recurrent neural network
(DA-RNN) to address these two issues. In the first stage, we introduce an input
attention mechanism to adaptively extract relevant driving series (a.k.a.,
input features) at each time step by referring to the previous encoder hidden
state. In the second stage, we use a temporal attention mechanism to select
relevant encoder hidden states across all time steps. With this dual-stage
attention scheme, our model can not only make predictions effectively, but can
also be easily interpreted. Thorough empirical studies based upon the SML 2010
dataset and the NASDAQ 100 Stock dataset demonstrate that the DA-RNN can
outperform state-of-the-art methods for time series prediction.
Comment: International Joint Conference on Artificial Intelligence (IJCAI), 201
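As a rough illustration of the first stage, the input attention step can be sketched as below. This is a simplified stand-in, not the authors' exact formulation: the paper scores each driving series' whole history with a tanh-based alignment model over the encoder's hidden and cell states, whereas here the scoring function and the projection matrices `W` and `U` (and their shapes) are invented for illustration.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def input_attention(x_t, h_prev, W, U):
    """Weight the n driving series at time t by attention scores computed
    against the previous encoder hidden state h_prev (simplified sketch).

    x_t    : (n,) current values of the n driving series
    h_prev : (m,) previous encoder hidden state
    W, U   : hypothetical projection matrices, shapes (n, m) and (n, n)
    """
    scores = W @ h_prev + U @ x_t   # one scalar score per driving series
    alpha = softmax(scores)         # attention weights over series, sum to 1
    return alpha * x_t, alpha       # reweighted input fed to the encoder

rng = np.random.default_rng(0)
n, m = 4, 8
x_t, h_prev = rng.standard_normal(n), rng.standard_normal(m)
W, U = rng.standard_normal((n, m)), rng.standard_normal((n, n))
x_tilde, alpha = input_attention(x_t, h_prev, W, U)
print(alpha)  # which driving series the encoder attends to at this step
```

The second-stage temporal attention has the same shape, only applied across encoder hidden states of all time steps rather than across input series.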
Cerberus: A Deep Learning Hybrid Model for Lithium-Ion Battery Aging Estimation and Prediction Based on Relaxation Voltage Curves
The degradation process of lithium-ion batteries is intricately linked to
their entire lifecycle as power sources and energy storage devices,
encompassing aspects such as performance delivery and cycling utilization.
Consequently, the accurate and expedient estimation or prediction of the aging
state of lithium-ion batteries has garnered extensive attention. Nonetheless,
prevailing research predominantly concentrates on either aging estimation or
prediction, neglecting the dynamic fusion of both facets. This paper proposes a
hybrid model for capacity aging estimation and prediction based on deep
learning, wherein salient features highly pertinent to aging are extracted from
charge and discharge relaxation processes. By amalgamating historical capacity
decay data, the model dynamically furnishes estimations of the present capacity
and forecasts of future capacity for lithium-ion batteries. Our approach is
validated against a novel dataset involving charge and discharge cycles at
varying rates. Specifically, under a charging condition of 0.25C, a mean
absolute percentage error (MAPE) of 0.29% is achieved. This outcome underscores
the model's adeptness in harnessing relaxation processes commonly encountered
in the real world and synergizing with historical capacity records within
battery management systems (BMS), thereby affording estimations and
prognostications of capacity decline with heightened precision.
Comment: 3 figures, 1 table, 9 pages
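The MAPE figure quoted above follows the standard definition; a minimal implementation is sketched below. The capacity values are invented for illustration and are not taken from the paper's dataset.

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

# Invented capacity values in Ah, purely for illustration:
true_cap = [2.00, 1.98, 1.95, 1.93]
pred_cap = [2.01, 1.97, 1.96, 1.92]
print(f"MAPE = {mape(true_cap, pred_cap):.2f}%")
```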
Security Games with Information Leakage: Modeling and Computation
Most models of Stackelberg security games assume that the attacker only knows
the defender's mixed strategy, but is not able to observe (even partially) the
instantiated pure strategy. Such partial observation of the deployed pure
strategy -- an issue we refer to as information leakage -- is a significant
concern in practical applications. While previous research on patrolling games
has considered the attacker's real-time surveillance, our setting, and hence
our models and techniques, are fundamentally different. More specifically, after
describing the information leakage model, we start with an LP formulation to
compute the defender's optimal strategy in the presence of leakage. Perhaps
surprisingly, we show that a key subproblem to solve this LP (more precisely,
the defender oracle) is NP-hard even for the simplest of security game models.
We then approach the problem from three possible directions: efficient
algorithms for restricted cases, approximation algorithms, and heuristic
sampling algorithms that improve upon the status quo. Our experiments
confirm the necessity of handling information leakage and the advantage of our
algorithms.
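For context on the no-leakage baseline the paper starts from: in a standard security game the defender's maximin coverage can be computed by binary search on the enforceable defender value, because the coverage needed at each target is linear in that value. The sketch below shows that standard computation (ORIGAMI-style reasoning) under invented payoffs; it is not the paper's leakage-aware LP, nor its NP-hard defender oracle.

```python
import numpy as np

def maximin_coverage(u_unc, u_cov, k, iters=60):
    """Binary-search the best enforceable defender value v.

    u_unc[t] : defender payoff if target t is attacked while uncovered
    u_cov[t] : defender payoff if target t is attacked while covered
    k        : number of defender resources (total coverage budget)

    Target t needs coverage c_t = (v - u_unc[t]) / (u_cov[t] - u_unc[t])
    to push the defender's payoff at t up to v; a value v is feasible
    iff the required coverages sum to at most k.
    """
    u_unc, u_cov = np.asarray(u_unc, float), np.asarray(u_cov, float)
    lo, hi = u_unc.min(), u_cov.min()   # v can never exceed min_t u_cov[t]
    for _ in range(iters):
        v = 0.5 * (lo + hi)
        c = np.clip((v - u_unc) / (u_cov - u_unc), 0.0, 1.0)
        if c.sum() <= k:
            lo = v                      # feasible: try a higher value
        else:
            hi = v                      # infeasible: back off
    c = np.clip((lo - u_unc) / (u_cov - u_unc), 0.0, 1.0)
    return lo, c

# Invented 3-target instance with one defender resource:
v, c = maximin_coverage(u_unc=[-5.0, -3.0, -1.0], u_cov=[2.0, 1.0, 1.0], k=1.0)
print(v, c)
```

Under leakage, the attacker conditions on partially observed coverage, so the defender's problem no longer decomposes target by target, which is where the hardness shown in the paper comes from.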
Ultralow frequency noise laser by locking to an optical fiber delay line
We report the frequency stabilization of an erbium-doped fiber distributed-feedback laser using an all-fiber Michelson interferometer with a large arm imbalance. The interferometer uses a 1 km SMF-28 optical fiber spool and an acousto-optic modulator allowing heterodyne detection. The frequency noise power spectral density is reduced by more than 40 dB for Fourier frequencies ranging from 1 Hz to 10 kHz, corresponding to a level well below 1 Hz^2/Hz over the whole range. It reaches 10^{-2} Hz^2/Hz at 1 kHz. Between 40 Hz and 30 kHz, the frequency noise is shown to be comparable to that obtained by Pound-Drever-Hall locking to a high-finesse Fabry-Perot cavity. Locking to a fiber delay line could consequently represent a reliable, simple and compact alternative to cavity stabilization for short-term linewidth reduction.
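Two quick sanity checks on the quoted numbers (illustrative arithmetic only, not taken from the paper): a 40 dB suppression of a power spectral density is a factor of 10^4, and if the 10^{-2} Hz^2/Hz floor were white frequency noise across the band, the resulting line shape would be Lorentzian with FWHM pi * S_nu, i.e. well below 1 Hz.

```python
import math

def db_to_linear(db):
    """Convert a power ratio in dB to a linear factor: 10^(dB/10)."""
    return 10 ** (db / 10)

def white_fm_lorentzian_fwhm(s_nu):
    """Lorentzian FWHM linewidth (Hz) that white frequency noise with
    one-sided PSD s_nu (Hz^2/Hz) would produce: Delta_nu = pi * s_nu."""
    return math.pi * s_nu

print(db_to_linear(40))                # 40 dB = factor 1e4 reduction in PSD
print(white_fm_lorentzian_fwhm(1e-2))  # ~0.03 Hz if the floor were white
```

The real noise spectrum is of course not white, so this linewidth is only an order-of-magnitude check, but it is consistent with the sub-hertz regime the abstract implies.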
High-resolution optical frequency dissemination on a telecommunication network with data traffic
We transferred the frequency of an ultra-stable laser over a 108 km urban
fiber link comprising 22 km of optical communications network fiber
simultaneously carrying Internet data traffic. The metrological signal and the
digital data signal are transferred on two different frequency channels in a
dense wavelength division multiplexing scheme. The metrological signal is
inserted into and extracted from the communications network by using
bidirectional off-the-shelf optical add-drop multiplexers. The link-induced
phase noise is measured and cancelled with a round-trip technique using an
all-fiber-based interferometer. The compensated link shows an Allan deviation
of a few 10^-16 at one second and below 10^-19 at 10,000 seconds. This opens the
way to a wide dissemination of ultra-stable optical clock signals between
distant laboratories via the Internet network.
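The Allan deviation quoted above is the standard stability measure for such links. A minimal overlapping-Allan-deviation estimator can be sketched as below; the function name is mine and the data are synthetic white frequency noise, not the link's measurements.

```python
import numpy as np

def overlapping_adev(y, m):
    """Overlapping Allan deviation of fractional-frequency samples y for
    averaging factor m (averaging time tau = m * tau0, where tau0 is the
    sampling interval of y)."""
    y = np.asarray(y, dtype=float)
    # Averages over every length-m window (overlapping start indices).
    avgs = np.convolve(y, np.ones(m) / m, mode="valid")
    d = avgs[m:] - avgs[:-m]           # differences of adjacent averages
    return np.sqrt(0.5 * np.mean(d ** 2))

# Synthetic white frequency noise with sigma_y(tau0) ~ 1e-15:
rng = np.random.default_rng(1)
y = 1e-15 * rng.standard_normal(10_000)
a1, a100 = overlapping_adev(y, 1), overlapping_adev(y, 100)
print(a1, a100)  # white FM noise averages down as 1/sqrt(tau)
```

The 1/sqrt(tau) roll-off shown here is the white-frequency-noise case; the link's reported slope from a few 10^-16 at 1 s to below 10^-19 at 10^4 s falls off faster, as expected when the round-trip compensation removes the dominant fiber noise.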