Average E<sub>SNR</sub> obtained on the EEG test sets.
<p><i>n</i>: chunk dimension. %<i>n</i>: sparsity expressed as a percentage of <i>n</i>. <i>k</i>: sparsity level. <i>D</i><sub>K-SVD</sub>: E<sub>SNR</sub> obtained with the dictionary learnt by K-SVD. <i>D</i><sub>R-SVD</sub>: E<sub>SNR</sub> obtained with the dictionary learnt by R-SVD.</p>
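The captions repeatedly report reconstruction quality as E<sub>SNR</sub>. As a reading aid, here is a minimal sketch of this metric, assuming E<sub>SNR</sub> is the standard reconstruction signal-to-noise ratio in dB between the test data and its sparse reconstruction (the function name and this exact definition are our assumption, not the authors' code):

```python
import numpy as np

def e_snr(Y, Y_hat):
    """Reconstruction quality in dB (higher is better).

    Assumes E_SNR = 10 * log10(||Y||^2 / ||Y - Y_hat||^2), i.e. the
    usual signal-to-noise ratio of the reconstruction Y_hat of Y.
    """
    err = np.linalg.norm(Y - Y_hat) ** 2
    return 10.0 * np.log10(np.linalg.norm(Y) ** 2 / err)
```

For example, reconstructing a signal with a uniform 10% amplitude error gives an E<sub>SNR</sub> of 20 dB.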
Average reconstruction error E<sub>SNR</sub> for 9 × 9 and 16 × 16 patches using dictionaries learnt by K-SVD (dashed lines) and R-SVD (solid lines).
<p>Averages are calculated over 50 trials and plotted versus update iteration count. Considered sparsity levels: <i>k</i> = 5, 10, 20, 30.</p>
Gap between final (<i>T</i> = 200) E<sub>SNR</sub> of K-SVD and R-SVD obtained with all parameter combinations <i>L</i> = 2000, 4000, 6000, 8000, 10000 and SNR = 0, 10, 20, 30, 40, 50, 60, ∞ (no noise).
<p>Results are averages over 100 trials; points are interpolated with a coloured piecewise-planar surface for the sake of readability. <i>Left</i>: with sparsity <i>k</i> = 5. <i>Right</i>: with sparsity <i>k</i> = 10.</p>
Average E<sub>SNR</sub> obtained on the image test sets.
<p><i>n</i>: linear patch dimension. <i>k</i>: sparsity level. The last three columns report the E<sub>SNR</sub> achieved with the initial untrained dictionary (<i>D</i><sub>init</sub>), the dictionary learnt by the K-SVD method (<i>D</i><sub>K-SVD</sub>) and the dictionary learnt by the R-SVD method (<i>D</i><sub>R-SVD</sub>).</p>
Average reconstruction error E<sub>SNR</sub> for EEG signal chunks of length <i>n</i> = 150 and <i>n</i> = 300 using dictionaries of dimensions <i>n</i> × 2<i>n</i> learnt by K-SVD (dashed lines) and R-SVD (solid lines).
<p>Averages are calculated over 50 trials and plotted versus update iteration count. Considered sparsity levels: <i>k</i> = 5%, 10%, 20%, 30% of <i>n</i>.</p>
R-SVD’s dependency on the group size parameter <i>s</i>.
<p>Other experiment parameters: training size <i>L</i> = 8000, dictionary size 50 × 100, additive noise of SNR = 30 dB, number of iterations <i>T</i> = 200. The lines (connecting points, for the sake of readability) represent: average final E<sub>SNR</sub> of the reconstructed dictionary w.r.t. the generating dictionary (solid blue curve), and computational time of R-SVD (dashed red curve) and K-SVD (dotted red line) in the dictionary learning task.</p>
Comparison of K-SVD and R-SVD errors when <i>k</i>-Limaps (top) or SL0 (bottom) sparse decomposition methods are used.
<p>The surface represents the gap between final (<i>T</i> = 200) E<sub>SNR</sub> of K-SVD and R-SVD obtained with all parameter combinations <i>L</i> = 2000, 4000, 6000, 8000, 10000 and SNR = 0, 10, 20, 30, 40, 50, 60, ∞ (no noise). Results are averages over 100 trials; points are interpolated with a coloured piecewise-planar surface for the sake of readability. <i>Left</i>: with sparsity <i>k</i> = 5. <i>Right</i>: with sparsity <i>k</i> = 10.</p>
Average number of atoms correctly recovered (matched) by K-SVD and R-SVD algorithms at various SNR levels of additive noise on dictionary <i>D</i> of size 50 × 100 and 100 × 200.
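The notion of an atom being "correctly recovered (matched)" can be made concrete with a short sketch. This is our assumption of the matching criterion, not the authors' code: both dictionaries have unit-norm columns, and a generating atom counts as recovered when some learnt atom has absolute inner product above a threshold with it (0.99 is a common choice in the K-SVD literature):

```python
import numpy as np

def matched_atoms(D_true, D_learnt, threshold=0.99):
    """Count how many atoms (columns) of D_true are recovered in D_learnt.

    Assumes unit-norm columns, so D_true.T @ D_learnt holds pairwise
    cosine similarities; sign ambiguity is handled via the absolute value.
    The 0.99 threshold is an assumption, not taken from the paper.
    """
    C = np.abs(D_true.T @ D_learnt)        # pairwise |cosine similarities|
    return int(np.sum(C.max(axis=1) > threshold))
```

Note that this counts recoveries regardless of column order or sign, since a learnt dictionary is only defined up to a signed permutation of its atoms.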
<p><i>L</i> = 10000, and remaining parameter values as in <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0169663#pone.0169663.g003" target="_blank">Fig 3</a>.</p>
Average reconstruction error E<sub>SNR</sub> in sparse representation using dictionaries learnt by K-SVD (non-solid lines) and R-SVD (solid lines), for <i>L</i> = 10000 synthetic vectors, varying the additive noise power (reported in the legend).
<p>Averages are calculated over 100 trials and plotted versus update iteration count. <i>Left</i>: with sparsity <i>k</i> = 5. <i>Right</i>: with sparsity <i>k</i> = 10.</p>
Experiments on the ECG recordings s20011 and s20031 taken from the Long-Term ST Database.
<p>(Upper plots) E<sub>SNR</sub> vs CR achieved by the sparsity-based OMP compressor on dictionaries learnt by R-SVD or K-SVD, or on a random untrained dictionary. (Lower plots) Computational time spent by the two techniques in the learning stage.</p>
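The "sparsity-based OMP compressor" in the caption above codes each signal chunk with Orthogonal Matching Pursuit, so that only <i>k</i> indices and coefficients need be stored instead of <i>n</i> samples. A minimal textbook OMP sketch (not the authors' implementation; it assumes a dictionary <code>D</code> with unit-norm columns):

```python
import numpy as np

def omp(D, y, k):
    """k-sparse code of signal y over dictionary D via Orthogonal
    Matching Pursuit: greedily pick the atom most correlated with the
    current residual, then re-fit the coefficients on the whole support.
    """
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        cols = D[:, support]
        coef, *_ = np.linalg.lstsq(cols, y, rcond=None)  # least-squares re-fit
        residual = y - cols @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x
```

With a dictionary of <i>m</i> atoms, the compression ratio then follows from storing <i>k</i> (index, coefficient) pairs per length-<i>n</i> chunk, which is how an E<sub>SNR</sub>-vs-CR curve like the one plotted above can be traced by sweeping <i>k</i>.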
