Designing Semantic Dropout Noise and Enforcing Sparsity

Abstract

We examine a deep learning technique called the stacked denoising autoencoder (SDA). SDA stacks several denoising autoencoders and concatenates the output of each layer as the learned representation. Each denoising autoencoder in SDA is trained to recover the input data from a corrupted version of it. We develop a new text representation model based on a variant of SDA: marginalized stacked denoising autoencoders (mSDA), which adopts linear instead of nonlinear projections to accelerate training and marginalizes the infinite noise distribution in order to learn more robust representations. We exploit semantic information to extend mSDA and develop Semantic-enhanced Marginalized Stacked Denoising Autoencoders (smSDA). The semantic information consists of bullying words.
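To make the construction concrete, the following is a minimal NumPy sketch of one marginalized denoising autoencoder layer in the style of mSDA, using the closed-form mapping W = PQ⁻¹ obtained by marginalizing dropout corruption. The per-feature corruption probabilities `p`, the list of "bullying" feature indices, and the toy data are illustrative assumptions meant to suggest how semantic dropout noise could raise the corruption rate on bullying-word features; they are not the paper's exact smSDA formulation.

```python
# Sketch of an mSDA-style marginalized denoising layer with per-feature
# dropout noise (assumed illustration of "semantic dropout noise").
import numpy as np

def mda_layer(X, p):
    """X: (d, n) feature-by-document matrix; p: (d,) per-feature dropout probabilities.
    Returns the linear mapping W (d, d+1) and the hidden representation tanh(W @ [X; 1])."""
    d, n = X.shape
    Xb = np.vstack([X, np.ones((1, n))])       # append a constant bias feature
    q = np.append(1.0 - p, 1.0)                # keep-probabilities; bias is never corrupted
    S = Xb @ Xb.T                              # scatter matrix of the (bias-augmented) data
    Q = S * np.outer(q, q)                     # expected corrupted-corrupted co-occurrence
    np.fill_diagonal(Q, np.diag(S) * q)        # diagonal: a feature always co-occurs with itself
    P = S[:d, :] * q                           # expected clean-corrupted co-occurrence
    # Closed-form reconstruction mapping W = P Q^{-1}, with a small ridge for stability
    W = np.linalg.solve(Q.T + 1e-5 * np.eye(d + 1), P.T).T
    return W, np.tanh(W @ Xb)

# Stacking: each layer's hidden output feeds the next layer, and all layers'
# representations are concatenated, as described in the abstract.
X = np.random.rand(50, 200)                    # toy data: 50 features, 200 documents
p = np.full(50, 0.3)
p[:5] = 0.8                                    # assumed bullying-word features get heavier noise
reps = [X]
for _ in range(3):
    _, h = mda_layer(reps[-1], p)
    reps.append(h)
features = np.vstack(reps)                     # concatenated learned representation
```

The closed-form solve replaces the iterative, nonlinear training of a standard SDA layer, which is what makes the marginalized variant fast; the heavier dropout on selected features forces the mapping to reconstruct those words from the remaining context.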
