2 research outputs found

    Between-Domain Instance Transition Via the Process of Gibbs Sampling in RBM

    In this paper, we present a new idea for Transfer Learning (TL) based on Gibbs sampling. Gibbs sampling is an algorithm in which instances tend to move to states of higher probability under a given probability distribution. We find that such an algorithm can be employed to transfer instances between domains. A Restricted Boltzmann Machine (RBM) is an energy-based model that is well suited both to being trained to represent a data distribution and to performing Gibbs sampling. We use an RBM to capture the data distribution of the source domain and then use it to cast target instances into new data whose distribution is similar to that of the source data. Using datasets commonly used for the evaluation of TL methods, we show that our method can enhance target classification by a considerable margin. Additionally, unlike common domain adaptation (DA) methods, the proposed method requires no target data while the models are trained.
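
    The sketch below illustrates the general idea described in this abstract: train an RBM on source-domain data and then run a few steps of Gibbs sampling starting from target instances, so that the chain drifts toward the source distribution. It is a minimal, assumed implementation (Bernoulli RBM, CD-1 training, and all hyperparameters are illustrative choices), not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Minimal Bernoulli RBM for illustration (assumes binary features)."""
    def __init__(self, n_visible, n_hidden, lr=0.05):
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible biases
        self.b_h = np.zeros(n_hidden)    # hidden biases
        self.lr = lr

    def sample_h(self, v):
        p = sigmoid(v @ self.W + self.b_h)
        return p, (rng.random(p.shape) < p).astype(float)

    def sample_v(self, h):
        p = sigmoid(h @ self.W.T + self.b_v)
        return p, (rng.random(p.shape) < p).astype(float)

    def fit(self, X, epochs=10):
        """Train with one-step contrastive divergence (CD-1) on source data."""
        for _ in range(epochs):
            ph0, h0 = self.sample_h(X)
            pv1, v1 = self.sample_v(h0)
            ph1, _ = self.sample_h(v1)
            self.W += self.lr * (X.T @ ph0 - v1.T @ ph1) / len(X)
            self.b_v += self.lr * (X - v1).mean(axis=0)
            self.b_h += self.lr * (ph0 - ph1).mean(axis=0)

    def gibbs_transfer(self, X_target, steps=5):
        """Run Gibbs chains initialized at target instances; the chain moves
        toward the source distribution captured by the trained RBM."""
        v = X_target.copy()
        pv = v
        for _ in range(steps):
            _, h = self.sample_h(v)
            pv, v = self.sample_v(h)
        return pv  # mean-field probabilities as the transferred representation

# Toy usage: transfer synthetic binary target data toward the source distribution.
X_source = (rng.random((500, 32)) < 0.7).astype(float)
X_target = (rng.random((200, 32)) < 0.3).astype(float)
rbm = RBM(n_visible=32, n_hidden=16)
rbm.fit(X_source, epochs=20)
X_target_transferred = rbm.gibbs_transfer(X_target, steps=5)
```

    In a TL pipeline, a classifier trained on the source domain would then be applied to `X_target_transferred` rather than to the raw target data; no target data is needed while the RBM itself is trained.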

    Air-Writing Translater: A Novel Unsupervised Domain Adaptation Method for Inertia-Trajectory Translation of In-air Handwriting

    As a new mode of human-computer interaction, inertial-sensor-based in-air handwriting provides a natural and unconstrained way to express more complex and richer information in 3D space. However, most existing in-air handwriting work focuses on handwritten character recognition and therefore suffers from the poor readability of inertial signals and the lack of labeled samples. To address these two problems, we use an unsupervised domain adaptation method to reconstruct trajectories from inertial signals and to generate inertial samples from online handwritten trajectories. In this paper, we propose an Air-Writing Translater model that learns bidirectional translation between the trajectory domain and the inertial domain in the absence of paired inertial and trajectory samples. Through semantic-level adversarial training and a latent classification loss, the proposed model learns to extract content that is invariant between inertial signals and trajectories while preserving semantic consistency during translation across the two domains. We carefully design the architecture so that the proposed framework can accept inputs of arbitrary length and translate between different sampling rates. We also conduct experiments on two public datasets: 6DMG (an in-air handwriting dataset) and CT (a handwritten trajectory dataset). The results on both datasets demonstrate that the proposed network succeeds in both Inertia-to-Trajectory and Trajectory-to-Inertia translation tasks.
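
    The sketch below shows one possible realization of the ingredients named in this abstract: two sequence encoders mapping unpaired inertial and trajectory samples into a shared latent space, a domain discriminator trained adversarially (here via a gradient reversal layer) so the latents become domain-invariant, and a latent classifier that preserves the semantic content (the written character). The module names, GRU choice, dimensions, and loss weighting are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

class SeqEncoder(nn.Module):
    """Encode a variable-length sequence into a fixed-size latent code."""
    def __init__(self, in_dim, latent_dim):
        super().__init__()
        self.rnn = nn.GRU(in_dim, latent_dim, batch_first=True)
    def forward(self, x):
        _, h = self.rnn(x)            # h: (1, batch, latent_dim)
        return h.squeeze(0)

class SeqDecoder(nn.Module):
    """Decode a latent code into a sequence of any requested length,
    which allows the two domains to use different sampling rates."""
    def __init__(self, latent_dim, out_dim):
        super().__init__()
        self.rnn = nn.GRU(latent_dim, latent_dim, batch_first=True)
        self.out = nn.Linear(latent_dim, out_dim)
    def forward(self, z, length):
        y, _ = self.rnn(z.unsqueeze(1).expand(-1, length, -1))
        return self.out(y)

class GradReverse(torch.autograd.Function):
    """Identity forward, negated gradient backward: minimizing the domain
    loss trains the discriminator while adversarially training the encoders."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -grad

enc_inertia = SeqEncoder(in_dim=6, latent_dim=64)   # e.g. accel + gyro channels
enc_traj    = SeqEncoder(in_dim=2, latent_dim=64)   # e.g. (x, y) pen trajectory
dec_inertia = SeqDecoder(latent_dim=64, out_dim=6)
dec_traj    = SeqDecoder(latent_dim=64, out_dim=2)
domain_disc = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))
latent_clf  = nn.Linear(64, 26)                     # e.g. 26 character classes

bce, ce = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss()

def translator_loss(x_inertia, x_traj, y_inertia, y_traj):
    z_i, z_t = enc_inertia(x_inertia), enc_traj(x_traj)
    # Cross-domain translation: no paired samples are required for these losses.
    traj_from_inertia = dec_traj(z_i, x_traj.size(1))
    inertia_from_traj = dec_inertia(z_t, x_inertia.size(1))
    # Domain-adversarial loss on the latents (gradient reversal makes the
    # encoders produce domain-indistinguishable codes).
    adv = bce(domain_disc(GradReverse.apply(z_i)), torch.ones(len(z_i), 1)) + \
          bce(domain_disc(GradReverse.apply(z_t)), torch.zeros(len(z_t), 1))
    # Latent classification loss keeps the semantic content in the latent code.
    sem = ce(latent_clf(z_i), y_inertia) + ce(latent_clf(z_t), y_traj)
    return adv + sem, traj_from_inertia, inertia_from_traj

# Toy usage with unpaired mini-batches of different lengths / sampling rates.
x_i = torch.randn(8, 120, 6)   # inertial sequences, 120 time steps
x_t = torch.randn(8, 60, 2)    # trajectory sequences, 60 time steps
loss, _, _ = translator_loss(x_i, x_t,
                             torch.randint(0, 26, (8,)),
                             torch.randint(0, 26, (8,)))
loss.backward()
```

    A full training loop would add reconstruction or cycle-style terms for the decoders and tune the relative weights of the losses; the point of the sketch is only how domain-invariant latents plus a semantic classifier enable translation without paired samples.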