
    Fluctuation Theorem for Hidden Entropy Production

    In the general process of eliminating dynamic variables in Markovian models, there exists a difference in irreversible entropy production between the original and reduced dynamics. We call this difference the hidden entropy production, since it is invisible when only the reduced system's view is provided. We show that this hidden entropy production obeys a new integral fluctuation theorem in the generic case where all variables are time-reversal invariant, thereby supporting the intuition that entropy production should decrease under coarse graining. We find, however, that when the condition for our theorem does not hold, entropy production may instead increase under the reduction. The extended multibaker map is investigated as an example of this case. (Comment: 5 pages, 1 figure)
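    An integral fluctuation theorem of this kind typically takes the standard form sketched below; the symbol for the trajectory-wise hidden entropy production is assumed notation for illustration, not taken from the abstract:

    ```latex
    % Standard integral-fluctuation-theorem form for the hidden
    % entropy production (\Delta s_{\mathrm{hid}} is assumed notation):
    \left\langle e^{-\Delta s_{\mathrm{hid}}} \right\rangle = 1
    % Jensen's inequality then yields the second-law-like bound
    % \langle \Delta s_{\mathrm{hid}} \rangle \ge 0,
    % i.e. on average, coarse graining cannot increase entropy production
    % when the theorem's conditions hold.
    ```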

    Conversation Clustering Based on PLCA Using Within-cluster Sparsity Constraints

    Publication in the conference proceedings of EUSIPCO, Bucharest, Romania, 201

    Streaming Active Learning for Regression Problems Using Regression via Classification

    One of the challenges in deploying a machine learning model is that its performance degrades as the operating environment changes. To maintain performance, streaming active learning is used: the model is retrained by adding a newly annotated sample to the training dataset whenever the prediction for that sample is not certain enough. Although many streaming active learning methods have been proposed for classification, few efforts have addressed regression problems, which arise frequently in industrial settings. In this paper, we propose to use the regression-via-classification framework for streaming active learning on regression problems. Regression via classification transforms a regression problem into a classification problem, so that streaming active learning methods developed for classification can be applied directly. Experimental validation on four real data sets shows that the proposed method achieves higher regression accuracy at the same annotation cost.
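    A minimal sketch of the regression-via-classification idea applied to a stream: targets are discretized into bins, a classifier predicts a bin with a confidence score, and a sample is sent for annotation when confidence is low. All names, the toy k-NN stand-in classifier, and the confidence threshold are illustrative assumptions, not the paper's implementation:

    ```python
    import numpy as np

    def to_classes(y, bin_edges):
        """Discretize continuous targets into bins (regression via classification)."""
        return np.digitize(y, bin_edges)

    def class_probs(X_lab, c_lab, x, n_classes, k=5):
        """Toy k-NN class-probability estimate standing in for a real classifier."""
        dists = np.linalg.norm(X_lab - x, axis=1)
        neighbors = np.argsort(dists)[:k]
        counts = np.bincount(c_lab[neighbors], minlength=n_classes)
        return counts / counts.sum()

    def stream_active_learn(X_lab, y_lab, stream, bin_edges, threshold=0.8):
        """Return indices of streamed samples whose class prediction is too
        uncertain, i.e. the samples that would be sent for annotation."""
        n_classes = len(bin_edges) + 1
        c_lab = to_classes(y_lab, bin_edges)
        queried = []
        for i, x in enumerate(stream):
            p = class_probs(X_lab, c_lab, x, n_classes)
            if p.max() < threshold:  # low confidence -> request a label
                queried.append(i)
        return queried
    ```

    In the actual streaming setting, each queried sample would be annotated and appended to the labeled set before the next prediction; this sketch omits the annotation oracle and retraining step.
    
    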

    Zero-shot domain adaptation of anomalous samples for semi-supervised anomaly detection

    Semi-supervised anomaly detection (SSAD) is a task where normal data and a limited number of anomalous data are available for training. In practical situations, SSAD methods struggle to adapt to domain shifts, since anomalous data are unlikely to be available for the target domain during training. To solve this problem, we propose a domain adaptation method for SSAD in which no anomalous data are available for the target domain. First, we introduce a domain-adversarial network into a variational autoencoder-based SSAD model to obtain domain-invariant latent variables. Since the decoder cannot reconstruct the original data solely from domain-invariant latent variables, we condition the decoder on the domain label. To compensate for the missing anomalous data of the target domain, we introduce an importance-sampling-based weighted loss function that approximates the ideal loss function. Experimental results indicate that the proposed method helps adapt SSAD models to the target domain when no anomalous data are available for that domain.
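    The importance-sampling idea, reweighting source-domain anomalous samples by a density ratio so that their expected loss approximates the unavailable target-domain loss, can be sketched as follows. The function names and the assumption that per-sample densities under each domain are available are illustrative, not the paper's formulation:

    ```python
    import numpy as np

    def importance_weights(p_target, p_source, eps=1e-12):
        """Density ratio w(x) = p_T(x) / p_S(x): upweights source-domain
        anomalies that are likely under the target domain."""
        return p_target / np.clip(p_source, eps, None)

    def weighted_anomaly_loss(losses, p_target, p_source):
        """Importance-weighted mean loss over source-domain anomalous samples,
        approximating the expected loss under the target domain."""
        w = importance_weights(p_target, p_source)
        return float(np.mean(w * losses))
    ```

    As a sanity check, when the two domains coincide the weights are all one and the weighted loss reduces to the plain mean loss.
    
    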