Characterizing Intermittency of 4-Hz Quasi-periodic Oscillation in XTE J1550-564 using Hilbert-Huang Transform
We present the time-frequency analysis results based on the Hilbert-Huang
transform (HHT) for the evolution of a 4-Hz low-frequency quasi-periodic
oscillation (LFQPO) around the black hole X-ray binary XTE J1550-564. The
origin of LFQPOs, which appear as broadened peaks in the power spectrum, is
still debated. To understand the cause of this peak broadening, we applied a
recently developed time-frequency analysis technique, the HHT, to track the
evolution of the 4-Hz LFQPO from XTE J1550-564. By adaptively
decomposing the ~4-Hz oscillatory component from the light curve and acquiring
its instantaneous frequency, the Hilbert spectrum illustrates that the LFQPO is
composed of a series of intermittent oscillations appearing occasionally
between 3 Hz and 5 Hz. We further characterized this intermittency by computing
the confidence limits of the instantaneous amplitudes of the intermittent
oscillations, and constructed the distributions of both the QPO's high- and
low-amplitude durations, i.e., the time intervals with and without significant
~4-Hz oscillations, respectively. The mean high-amplitude duration is 1.45 s,
and 90% of the oscillation segments have lifetimes below 3.1 s. The mean
low-amplitude duration is 0.42 s, and 90% of these segments are shorter than 0.73 s.
In addition, these intermittent oscillations exhibit a correlation between the
oscillation's rms amplitude and the mean count rate. This correlation could be
analogous to the linear rms-flux relation found in the 4-Hz LFQPO through
Fourier analysis. We conclude that the LFQPO peak in the power spectrum is
broadened owing to intermittent oscillations with varying frequencies, which
could be explained by the Lense-Thirring precession model.
Comment: 27 pages, 9 figures; accepted for publication in The Astrophysical Journal.
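The HHT pipeline the abstract describes (adaptive decomposition followed by instantaneous-frequency extraction) can be sketched in a few lines. The snippet below is a minimal illustration, assuming the PyEMD package (pip install EMD-signal) for the empirical mode decomposition and a synthetic stand-in light curve; the sampling rate, variable names, and signal model are assumptions, not taken from the paper.

```python
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD

fs = 250.0                                  # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
# Synthetic stand-in for the X-ray light curve: a tone drifting around
# ~4 Hz plus white noise, mimicking an intermittent LFQPO.
lc = np.sin(2 * np.pi * (4 + 0.5 * np.sin(0.2 * np.pi * t)) * t)
lc += 0.5 * np.random.randn(t.size)

# 1) Adaptively decompose the light curve into intrinsic mode functions.
imfs = EMD()(lc)

# 2) Hilbert-transform each IMF to obtain instantaneous amplitude and
#    instantaneous frequency (derivative of the unwrapped phase).
analytic = hilbert(imfs, axis=1)
inst_amp = np.abs(analytic)
phase = np.unwrap(np.angle(analytic), axis=1)
inst_freq = np.gradient(phase, axis=1) * fs / (2 * np.pi)

# 3) Pick the IMF whose mean instantaneous frequency is closest to 4 Hz;
#    its (time, frequency, amplitude) triples form the Hilbert spectrum,
#    on which intermittent high-amplitude segments can be identified.
qpo = np.argmin(np.abs(inst_freq.mean(axis=1) - 4.0))
print(f"IMF {qpo}: mean frequency = {inst_freq[qpo].mean():.2f} Hz")
```

Thresholding `inst_amp[qpo]` against a confidence limit, as the authors do, would then segment the data into the high- and low-amplitude durations whose distributions are reported above.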
Unifying and Merging Well-trained Deep Neural Networks for Inference Stage
We propose a novel method for merging convolutional neural networks for the
inference stage. Given two well-trained networks that may have different
architectures and handle different tasks, our method aligns the layers of the
original networks and merges them into a unified model by sharing the
representative codes of weights. The shared weights are further re-trained to
fine-tune the performance of the merged model. The proposed method effectively
produces a compact model that can run the original tasks simultaneously on
resource-limited devices. Because it preserves the general architectures and
leverages the co-used weights of the well-trained networks, substantial
training overhead is avoided, shortening the system development time. Experimental
results demonstrate satisfactory performance and validate the effectiveness
of the method.
Comment: To appear in the 27th International Joint Conference on Artificial
Intelligence and the 23rd European Conference on Artificial Intelligence,
2018 (IJCAI-ECAI 2018).
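One way to read "sharing the representative codes of weights" is as a joint codebook over the weights of aligned layers, with each layer storing only integer indices into that codebook. The sketch below illustrates that idea under stated assumptions: it uses k-means (via scikit-learn) as a stand-in for the paper's code construction, assumes the two aligned layers have equal kernel shapes, and the function `merge_by_codebook` is hypothetical, not from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def merge_by_codebook(w_a, w_b, n_codes=256):
    """Cluster the pooled weights of two aligned layers and return one
    shared codebook plus per-layer index maps (the 'codes')."""
    pooled = np.concatenate([w_a.ravel(), w_b.ravel()])[:, None]
    km = KMeans(n_clusters=n_codes, n_init=4).fit(pooled)
    codebook = km.cluster_centers_.ravel()
    idx_a = km.predict(w_a.ravel()[:, None]).reshape(w_a.shape)
    idx_b = km.predict(w_b.ravel()[:, None]).reshape(w_b.shape)
    return codebook, idx_a, idx_b

# Two already-trained conv kernels of equal shape (illustrative random data).
w_a = np.random.randn(32, 32, 3, 3).astype(np.float32)
w_b = np.random.randn(32, 32, 3, 3).astype(np.float32)

codebook, idx_a, idx_b = merge_by_codebook(w_a, w_b)

# Both tasks now reconstruct their weights from the same table of
# representative values; only the small codebook needs to be fine-tuned,
# while the integer index maps stay fixed.
w_a_merged, w_b_merged = codebook[idx_a], codebook[idx_b]
```

In this reading, the subsequent re-training mentioned in the abstract would update only the shared codebook entries, which is what lets the merged model stay compact while recovering the accuracy of both original networks.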