Results for synthetic data. (a) Four synthetic ICs (b) Observed signals.
Results for image separation. (The individual in this manuscript has given written informed consent, as outlined in the PLOS consent form, to publish these case details.)
(a) Four original images; (b) Four mixture images; (c) Recovered images by the new ICA-R; (d) Recovered images by FastICA; (e) Recovered images by EFICA; (f) Recovered images by JADE; (g) Recovered images by NGFICA.
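As context for the mixture-and-recovery pipeline described above, here is a minimal sketch of blind source separation with FastICA. It uses scikit-learn's implementation rather than the paper's own ICA-R algorithm, and the two synthetic sources, mixing matrix, and signal length are illustrative assumptions (the paper separates four sources):

```python
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 8, 2000)
# Two hypothetical synthetic sources: a sinusoid and a square wave
s1 = np.sin(2 * t)
s2 = np.sign(np.sin(3 * t))
S = np.c_[s1, s2]                       # true sources, shape (2000, 2)

A = np.array([[1.0, 0.5],
              [0.4, 1.0]])              # illustrative mixing matrix
X = S @ A.T                             # observed mixtures

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)            # recovered sources, up to scale/sign/permutation
```

Note that ICA recovers the sources only up to scale, sign, and permutation, which is why the captions above compare algorithms by SNR after matching components rather than by raw error.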
Comparison of the results for all recovered images.
The SNR values indicate that the new ICA-R algorithm substantially outperforms classical FastICA.
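For reference, the SNR of a recovered component is conventionally computed after an optimal least-squares rescaling against the true source, so that ICA's inherent scale and sign ambiguity does not penalize the score. This helper is a sketch of that convention; the function name `snr_db` is illustrative, not from the paper:

```python
import numpy as np

def snr_db(source, recovered):
    """SNR in dB of a recovered signal against its true source.
    The recovered signal is first rescaled by least squares so that
    ICA's scale/sign ambiguity does not affect the score."""
    s = np.asarray(source, dtype=float)
    r = np.asarray(recovered, dtype=float)
    alpha = np.dot(s, r) / np.dot(r, r)   # optimal scale factor
    noise = s - alpha * r                 # residual after rescaling
    return 10.0 * np.log10(np.sum(s**2) / np.sum(noise**2))
```

A sign-flipped, rescaled copy of the source scores very high under this measure, while a heavily contaminated signal scores near 0 dB.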
Comparison of faulty convergences (two defective learning behaviors) and mean time consumption (s) between the previous and new ICA-R algorithms on synthetic data.
Each algorithm was run 1000 times with the learning rate set to 0.1.
Illustration of two typical learning traces of the demixing vector under the previous ICA-R algorithm.
(a) An example of accurate convergence to the desired demixing vector (left), along with the evolution of vs the number of iterations (right). On the plane, each step of is marked by a small circle, and the circles are linked by a line; the ellipse is the boundary defined by the equality constraint. Below the plane, the curve gives the values of for projections as a function of the ellipse. The red line highlights the region satisfying the inequality constraint. (b) An example of misconvergence (left), where the algorithm is trapped near the border of the inequality constraint, along with the corresponding evolution of (right). (c) 2-D illustration of the misconvergence example on the plane. (d) Magnification of the black box in (c), showing that the learning trace oscillates and stops at the red dot (the blue circles are removed for visual clarity).
SCI of coding vectors under different levels of corruptions.
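Assuming SCI here denotes the Sparsity Concentration Index used in sparse-representation classification (Wright et al.), the following is a minimal sketch of computing it from a coding vector and the class labels of its entries; the function and variable names are illustrative:

```python
import numpy as np

def sci(x, labels):
    """Sparsity Concentration Index of a coding vector x whose entries
    belong to the classes given in `labels`.

    SCI = (k * max_c ||x_c||_1 / ||x||_1 - 1) / (k - 1), with k classes.
    Ranges from 0 (energy spread evenly over all classes) to 1 (all
    energy concentrated on a single class)."""
    x = np.abs(np.asarray(x, dtype=float))
    labels = np.asarray(labels)
    classes = np.unique(labels)
    k = len(classes)
    best = max(x[labels == c].sum() for c in classes)  # largest per-class l1 mass
    return (k * best / x.sum() - 1.0) / (k - 1.0)
```

A coding vector whose nonzero coefficients all fall on one class yields SCI = 1, while coefficients spread evenly across classes yield SCI = 0, which is why a falling SCI under increasing corruption signals less reliable codings.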
Recognition rates for test samples with different levels of random corruption.
(A) 10% random corruption. (B) 20% random corruption. (C) 30% random corruption. (D) 40% random corruption.