
    Reconstruction of AE with shared synapse architecture.

    Reprinted from [41] under a CC BY license, with permission from Springer Nature, original copyright 2016 (S1 File).

    Stacked autoencoder.


    Relationship between accuracy and bit width.


    Equations for the number of clock cycles.


    Training results represented with cross entropy errors.


    Performance and resource comparison of various structures of AEs.


    A shared synapse architecture for efficient FPGA implementation of autoencoders

    This paper proposes a shared synapse architecture for autoencoders (AEs) and implements an AE with the proposed architecture as a digital circuit on a field-programmable gate array (FPGA). In the proposed architecture, the synapse weights are shared between the input-to-hidden synapses and the hidden-to-output synapses. This architecture consumes less of an FPGA's limited resources than one that does not share the synapse weights, halving the number of synapse modules required. So that the circuit can implement various types of AEs, it takes three kinds of parameters: one that sets the number of units in each layer, one that sets the bit width of internal values, and a learning rate. By altering the network configuration through these parameters, the proposed architecture can also be used to construct a stacked AE. The proposed circuits are logically synthesized and their resource usage is determined. Our experimental results show that single and stacked AE circuits utilizing the proposed shared synapse architecture operate as regular AEs and regular stacked AEs. The scalability of the proposed circuit and the relationship between bit width and learning results are also examined. The number of clock cycles the proposed circuits require is formulated, and this formula is used to estimate the theoretical performance of the circuit when it is configured as an arbitrary network.
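
    The weight sharing described in the abstract corresponds to a tied-weight AE: the decoder reuses the encoder's weight matrix transposed, so only one set of synapse weights has to be stored and the synapse modules can serve both layers. The following is a minimal floating-point sketch of that scheme in NumPy; the names (TiedAutoencoder, n_visible, n_hidden, lr) are illustrative rather than taken from the paper, and the actual circuit works in fixed-point arithmetic with a configurable bit width.

        import numpy as np

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        class TiedAutoencoder:
            """Autoencoder whose decoder reuses the encoder weights, transposed."""

            def __init__(self, n_visible, n_hidden, lr=0.1, seed=0):
                rng = np.random.default_rng(seed)
                # Single shared weight matrix: rows index hidden units,
                # columns index visible (input/output) units.
                self.W = rng.normal(0.0, 0.1, size=(n_hidden, n_visible))
                self.b_h = np.zeros(n_hidden)    # hidden-layer bias
                self.b_v = np.zeros(n_visible)   # output-layer bias
                self.lr = lr

            def encode(self, x):
                return sigmoid(self.W @ x + self.b_h)

            def decode(self, h):
                return sigmoid(self.W.T @ h + self.b_v)  # same W, transposed

            def train_step(self, x):
                # One gradient-descent step on the cross-entropy
                # reconstruction error; the shared W accumulates gradient
                # from both its encoder and decoder uses.
                h = self.encode(x)
                y = self.decode(h)
                d_v = y - x                            # output pre-activation grad
                d_h = (self.W @ d_v) * h * (1.0 - h)   # hidden pre-activation grad
                self.W -= self.lr * (np.outer(d_h, x) + np.outer(h, d_v))
                self.b_h -= self.lr * d_h
                self.b_v -= self.lr * d_v
                eps = 1e-12  # avoid log(0)
                return -np.sum(x * np.log(y + eps) + (1.0 - x) * np.log(1.0 - y + eps))

        # Toy usage: learn to reconstruct a fixed 8-bit pattern.
        ae = TiedAutoencoder(n_visible=8, n_hidden=3)
        x = (np.arange(8) == 2).astype(float)
        for _ in range(2000):
            err = ae.train_step(x)

    A stacked AE under this scheme would train one such tied AE per layer, feeding the hidden activations of each trained layer to the next as its input, which matches how the parameterized circuit is reconfigured to build the stacked variant.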

    Comparison of the implementations of the AEs.


    Comparison of learning results.


    Stacked AE with shared synapse architecture.
