
    Average E<sub>SNR</sub> obtained on the EEG test sets.

    <p><i>n</i>: chunk dimension. %<i>n</i>: sparsity expressed as a percentage of <i>n</i>. <i>k</i>: sparsity level. <i>D</i><sub>K-SVD</sub>: E<sub>SNR</sub> obtained with the dictionary learnt by K-SVD. <i>D</i><sub>R-SVD</sub>: E<sub>SNR</sub> obtained with the dictionary learnt by R-SVD.</p>

    Average reconstruction error E<sub>SNR</sub> for 9 × 9 and 16 × 16 patches, using dictionaries learnt by K-SVD (dashed lines) and R-SVD (solid lines).

    <p>Averages are calculated over 50 trials and plotted versus update-iteration count. Considered sparsity levels: <i>k</i> = 5, 10, 20, 30.</p>

    Gap between final (<i>T</i> = 200) E<sub>SNR</sub> of K-SVD and R-SVD obtained with all parameter combinations <i>L</i> = 2000, 4000, 6000, 8000, 10000 and SNR = 0, 10, 20, 30, 40, 50, 60, ∞ (no noise).

    <p>Results are averages over 100 trials; points are interpolated with a coloured piecewise-planar surface for readability. <i>Left</i>: sparsity <i>k</i> = 5. <i>Right</i>: sparsity <i>k</i> = 10.</p>

    Average E<sub>SNR</sub> obtained on the image test sets.

    <p><i>n</i>: linear patch dimension. <i>k</i>: sparsity level. Last three columns are E<sub>SNR</sub> achieved with the initial untrained dictionary (<i>D</i><sub>init</sub>), the dictionary learnt by the K-SVD method (<i>D</i><sub>K-SVD</sub>) and the dictionary learnt by the R-SVD method (<i>D</i><sub>R-SVD</sub>).</p>
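    All of these tables and plots report the E<sub>SNR</sub> reconstruction-quality metric. Assuming the usual definition — the signal-to-reconstruction-error ratio in decibels (the paper may normalise differently) — a minimal sketch:

```python
import numpy as np

def esnr(y, y_hat):
    """Reconstruction quality in dB: 20*log10(||y|| / ||y - y_hat||).

    Higher values mean the reconstruction y_hat is closer to the
    original signal y; a perfect reconstruction gives +inf.
    """
    err = np.linalg.norm(y - y_hat)
    if err == 0:
        return np.inf
    return 20.0 * np.log10(np.linalg.norm(y) / err)
```

    Under this definition, a uniform 10% amplitude error yields exactly 20 dB, which is a convenient sanity check for an implementation.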

    R-SVD’s dependency on the group size parameter <i>s</i>.

    <p>Other experiment parameters: training size <i>L</i> = 8000, dictionary size 50 × 100, additive noise of SNR = 30 dB, number of iterations <i>T</i> = 200. The lines (connecting points for readability) represent the average final E<sub>SNR</sub> of the reconstructed dictionary w.r.t. the generating dictionary (solid blue curve), and the computational time of R-SVD (dashed red curve) and K-SVD (dotted red line) in the dictionary-learning task.</p>

    Comparison of K-SVD and R-SVD errors when <i>k</i>-Limaps (top) or SL0 (bottom) sparse decomposition methods are used.

    <p>The surface represents the gap between the final (<i>T</i> = 200) E<sub>SNR</sub> of K-SVD and R-SVD obtained with all parameter combinations <i>L</i> = 2000, 4000, 6000, 8000, 10000 and SNR = 0, 10, 20, 30, 40, 50, 60, ∞ (no noise). Results are averages over 100 trials; points are interpolated with a coloured piecewise-planar surface for readability. <i>Left</i>: sparsity <i>k</i> = 5. <i>Right</i>: sparsity <i>k</i> = 10.</p>

    Average number of atoms correctly recovered (matched) by K-SVD and R-SVD algorithms at various SNR levels of additive noise on dictionary <i>D</i> of size 50 × 100 and 100 × 200.

    <p><i>L</i> = 10000; remaining parameter values as in <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0169663#pone.0169663.g003" target="_blank">Fig 3</a>.</p>
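    The atom-recovery count above is typically computed by matching each atom of the generating dictionary to its most correlated learnt atom, up to sign. A minimal sketch under that assumption (the `tol` threshold is a hypothetical choice, not taken from the paper):

```python
import numpy as np

def count_recovered_atoms(D_true, D_learnt, tol=0.01):
    """Count atoms of D_true matched (up to sign) by some atom of D_learnt.

    An atom is considered recovered when 1 - |<d_i, d_j>| < tol for its
    best-correlated learnt atom, each learnt atom being used at most once.
    """
    # Normalise columns so inner products are correlations.
    Dt = D_true / np.linalg.norm(D_true, axis=0, keepdims=True)
    Dl = D_learnt / np.linalg.norm(D_learnt, axis=0, keepdims=True)
    C = np.abs(Dt.T @ Dl)  # |correlation| between every atom pair
    matched, used = 0, set()
    for i in range(Dt.shape[1]):
        j = int(np.argmax(C[i]))
        if 1.0 - C[i, j] < tol and j not in used:
            matched += 1
            used.add(j)
    return matched
```

    Taking the absolute correlation makes the match invariant to sign flips and column permutations, which dictionary learning cannot resolve.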

    Experiments on the ECG recordings s20011 and s20031 taken from the Long-Term ST Database.

    <p>(Upper plots) E<sub>SNR</sub> vs CR achieved by the sparsity-based OMP compressor on dictionary learnt by R-SVD, K-SVD or on a random untrained dictionary. (Lower plots) Computational time spent by the two techniques in the learning stage.</p
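    The ECG compressor above is built on OMP (Orthogonal Matching Pursuit) sparse coding over the learnt dictionary. As an illustration of the greedy selection-and-refit loop (not the paper's implementation), a minimal NumPy sketch:

```python
import numpy as np

def omp(D, y, k):
    """Greedy k-sparse coding of y over dictionary D.

    Each step picks the atom most correlated with the current residual,
    then re-fits the coefficients of all selected atoms by least squares.
    """
    support = []
    residual = y.copy()
    x = np.zeros(D.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x[support] = coeffs
    return x
```

    With an orthonormal dictionary this recovers an exactly k-sparse signal in k steps; with a learnt overcomplete dictionary it gives the approximate k-sparse code used by the compressor.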