
    Measuring Immediate Effect and Carry-over Effect of Multi-channel Online Ads

    Faced with a variety of online ads, firms find it hard to choose the advertising channels with the best effects. Online advertising has both an immediate effect and a carry-over effect. We constructed a comprehensive evaluation model of multi-channel online advertising effects that captures not only the immediate effect but also the carry-over effect, based on lag-effect factors. We then conducted a restricted grid search combined with multiple linear regression to estimate the immediate and carry-over effects of paid-search ads, mobile phone message ads, and e-mail ads, using user-behavior and transaction data from an e-commerce website. The results show that paid-search ads have the strongest immediate effect, while e-mail ads have both the longest carry-over duration and the strongest cumulative carry-over effect. This study offers suggestions for evaluating the effects of multi-channel online ads more accurately, which can guide the e-commerce website toward a better advertising strategy for online marketing.
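The lag-effect estimation described above can be sketched as a geometric (Koyck-style) carry-over model: each channel's cumulative ad effect decays by a lag factor, the lag factor is chosen by a restricted grid search, and the effect intensities come from an ordinary least-squares fit. The single-channel setup and function names below are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def adstock(spend, lam):
    """Geometric carry-over: effect_t = spend_t + lam * effect_{t-1}."""
    out = np.zeros(len(spend))
    carry = 0.0
    for t, v in enumerate(spend):
        carry = v + lam * carry
        out[t] = carry
    return out

def fit_carryover(spend, response, grid=np.linspace(0.0, 0.9, 10)):
    """Restricted grid search over the lag factor lam; OLS at each grid
    point; return the (lam, coefficients) pair with the lowest SSE."""
    best = None
    for lam in grid:
        X = np.column_stack([np.ones(len(response)), adstock(spend, lam)])
        beta, *_ = np.linalg.lstsq(X, response, rcond=None)
        sse = float(np.sum((response - X @ beta) ** 2))
        if best is None or sse < best[0]:
            best = (sse, lam, beta)
    return best[1], best[2]
```

Here `beta[1]` plays the role of the immediate-effect intensity, and the chosen `lam` governs how long the carry-over persists.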

    A Stop-Probability Approach for O-D Service Frequency on High-Speed Railway Lines

    Train stop planning provides appropriate service for travel demand and stations and plays a significant role in railway operation. This paper formulates stop planning from the point of view of direct travel between origin-destination (O-D) stations and proposes an analytical method to theoretically derive optimal service frequencies for O-D demand on different levels. Considering different O-D demand characteristics and train service types, we introduce the concept of stop probability to present a mathematical formulation for stop planning with the objective of minimizing per capita travel time, which is solved by an iterative algorithm combined with local search. The resulting optimal stop probabilities can be used to calculate the required service frequency for each train type serving different demand categories. Numerical examples, based on three real-life high-speed railway lines, demonstrate the validity of the proposed method. The proposed approach provides a more flexible and practical way of stop planning that explicitly takes into account the importance of different stations and passenger travel characteristics.
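A toy instance can illustrate the stop-probability idea with local search. The objective below is an assumed simplification, not the paper's formulation: a train serves an O-D pair directly with expected frequency proportional to the product of the stop probabilities at both endpoints, waiting time is half the resulting headway, and each expected intermediate stop adds a fixed dwell penalty.

```python
def per_capita_time(p, demand, runtime, total_trains, dwell=2.0):
    """Per-capita travel time under stop probabilities p (one per station).
    Assumed model: direct frequency for (o, d) is total_trains * p[o] * p[d]
    trains per hour; average wait is half the headway; in-vehicle time adds
    a dwell penalty for each expected intermediate stop."""
    total_t = total_pax = 0.0
    for (o, d), q in demand.items():
        freq = total_trains * p[o] * p[d]
        wait = 0.5 * (60.0 / freq)                       # minutes
        stops = dwell * sum(p[s] for s in range(o + 1, d))
        total_t += q * (wait + runtime[(o, d)] + stops)
        total_pax += q
    return total_t / total_pax

def local_search(p, demand, runtime, total_trains, step=0.05, iters=200):
    """Greedy coordinate search over discretized stop probabilities."""
    best = per_capita_time(p, demand, runtime, total_trains)
    for _ in range(iters):
        improved = False
        for s in range(len(p)):
            for delta in (step, -step):
                q = p[:]
                q[s] = min(1.0, max(step, p[s] + delta))
                val = per_capita_time(q, demand, runtime, total_trains)
                if val < best - 1e-9:
                    p, best = q, val
                    improved = True
        if not improved:
            break
    return p, best
```

In this toy objective, raising the stop probability at a terminal station only reduces waiting time, so the search drives those probabilities to 1; intermediate stations trade waiting time for through-passenger dwell, mirroring the station-importance trade-off the paper describes.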

    A Comprehensive Survey of Forgetting in Deep Learning Beyond Continual Learning

    Forgetting refers to the loss or deterioration of previously acquired information or knowledge. While existing surveys on forgetting have primarily focused on continual learning, forgetting is a prevalent phenomenon in various other research domains within deep learning. For example, forgetting manifests in generative models due to generator shifts, and in federated learning due to heterogeneous data distributions across clients. Addressing forgetting encompasses several challenges, including balancing the retention of old task knowledge with fast learning of new tasks, managing task interference with conflicting goals, and preventing privacy leakage. Moreover, most existing surveys on continual learning implicitly assume that forgetting is always harmful. In contrast, our survey argues that forgetting is a double-edged sword and can be beneficial and desirable in certain cases, such as privacy-preserving scenarios. By exploring forgetting in a broader context, we aim to present a more nuanced understanding of this phenomenon and highlight its potential advantages. Through this comprehensive survey, we aspire to uncover potential solutions by drawing upon ideas and approaches from fields that have dealt with forgetting. By examining forgetting beyond its conventional boundaries, we hope to encourage the development of novel strategies for mitigating, harnessing, or even embracing forgetting in real applications. A comprehensive list of papers about forgetting in various research fields is available at \url{https://github.com/EnnengYang/Awesome-Forgetting-in-Deep-Learning}.

    Volatile metabolites profiling of a Chinese mangrove endophytic Pestalotiopsis sp. strain

    Pestalotiopsis JCM2A4, an endophytic fungus originally isolated from leaves of the Chinese mangrove plant Rhizophora mucronata, produces a mixture of volatile metabolites. As determined by gas chromatography and gas chromatography/mass spectrometry (GC and GC-MS), 18 compounds representing all of the hexane extract were identified. Straight-chain alkyl (mono- and di-methyl) esters and fatty acids composed the major volatile chemotypes, accounting for 78.65% and 14.52% of the extract, respectively. The main components were demonstrated to be pentadecanoic acid, 14-methyl-, methyl ester (35.92%); octadecanoic acid, methyl ester (13.10%); nonanedioic acid, dimethyl ester (11.21%); and n-hexadecanoic acid (10.54%). Two of these components were isolated and determined to be n-hexadecanoic acid and elaidic acid by 1H NMR and 1H-1H COSY spectroscopy. The antioxidant activity of the hexane extract and isolated compounds was screened using the 2,2-diphenyl-1-picrylhydrazyl (DPPH) free radical scavenging method. This is the first report to describe the volatile metabolites of a mangrove endophytic Pestalotiopsis sp. strain; its specific fatty acid methyl ester (FAME) profile can be used as a tool for microbial source tracking.

    Keywords: Mangrove endophytic fungus, Pestalotiopsis sp., volatile metabolites, fatty acid methyl esters (FAME) profile

    African Journal of Biotechnology Vol. 12(24), pp. 3802-380

    Laugh Betrays You? Learning Robust Speaker Representation From Speech Containing Non-Verbal Fragments

    The success of automatic speaker verification shows that discriminative speaker representations can be extracted from neutral speech. However, as a kind of non-verbal voice, laughter should intuitively also carry speaker information. Thus, this paper explores speaker verification on utterances containing non-verbal laughter segments. We collect a set of clips with laughter components by running a laughter detection script on VoxCeleb and part of the CN-Celeb dataset. To filter out untrusted clips, probability scores are calculated by our binary laughter detection classifier, which is pre-trained on pure laughter and neutral speech. Based on the clips whose scores exceed the threshold, we construct trials under two evaluation scenarios: Laughter-Laughter (LL) and Speech-Laughter (SL). We then propose a novel method called the Laughter-Splicing based Network (LSN), which significantly boosts performance in both scenarios while maintaining performance on neutral speech, such as the VoxCeleb1 test set. Specifically, our system achieves relative improvements of 20% and 22% on the Laughter-Laughter and Speech-Laughter trials, respectively. The metadata and sample clips have been released at https://github.com/nevermoreLin/Laugh_LSN

    Comment: Submitted to ICASSP202
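The score-based filtering and trial construction described above can be sketched as follows. The clip representation, the score function, and the threshold value are hypothetical; the laughter classifier itself is assumed and not shown.

```python
import random

def filter_clips(clips, score_fn, threshold=0.8):
    """Keep only clips whose laughter-detector probability exceeds the
    threshold; score_fn stands in for the pre-trained binary classifier."""
    return [c for c in clips if score_fn(c) >= threshold]

def build_trials(laugh_clips, speech_clips, n_trials, seed=0):
    """Build Laughter-Laughter (LL) and Speech-Laughter (SL) trial lists.
    Each trial is (enroll_path, test_path, same_speaker_label)."""
    rng = random.Random(seed)
    ll, sl = [], []
    for _ in range(n_trials):
        a, b = rng.choice(laugh_clips), rng.choice(laugh_clips)
        ll.append((a["path"], b["path"], int(a["spk"] == b["spk"])))
        s, l = rng.choice(speech_clips), rng.choice(laugh_clips)
        sl.append((s["path"], l["path"], int(s["spk"] == l["spk"])))
    return ll, sl
```

A verification system is then scored on the LL and SL trial lists exactly as on ordinary speaker-verification trials.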

    Architecture, Dataset and Model-Scale Agnostic Data-free Meta-Learning

    The goal of data-free meta-learning is to learn useful prior knowledge from a collection of pre-trained models without accessing their training data. However, existing works only solve the problem in parameter space, which (i) ignores the fruitful data knowledge contained in the pre-trained models; (ii) cannot scale to large-scale pre-trained models; and (iii) can only meta-learn pre-trained models with the same network architecture. To address these issues, we propose a unified framework, dubbed PURER, which contains: (1) ePisode cUrriculum inveRsion (ECI) during data-free meta training; and (2) invErsion calibRation following inner loop (ICFIL) during meta testing. During meta training, we propose ECI to perform pseudo episode training for learning to adapt quickly to new unseen tasks. Specifically, we progressively synthesize a sequence of pseudo episodes by distilling the training data from each pre-trained model. ECI adaptively increases the difficulty level of the pseudo episodes according to the real-time feedback of the meta model. We formulate the optimization process of meta training with ECI as an adversarial form in an end-to-end manner. During meta testing, we further propose a simple plug-and-play supplement, ICFIL, used only during meta testing to narrow the gap between the meta-training and meta-testing task distributions. Extensive experiments in various real-world scenarios show the superior performance of our method.
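The adaptive-difficulty idea behind ECI can be illustrated with a minimal curriculum scheduler: raise the episode difficulty when the meta model's real-time accuracy is high, lower it when the model struggles. The thresholds and the notion of a discrete "level" (e.g., number of ways or inversion steps) are assumptions for illustration, not the paper's actual rule.

```python
def next_difficulty(level, meta_acc, up=0.85, down=0.6, max_level=10):
    """Adaptive curriculum step for pseudo-episode difficulty.

    level    -- current difficulty level (1 = easiest)
    meta_acc -- real-time accuracy feedback from the meta model
    up/down  -- assumed thresholds for raising/lowering difficulty
    """
    if meta_acc >= up:
        return min(level + 1, max_level)   # model is comfortable: harder episodes
    if meta_acc <= down:
        return max(level - 1, 1)           # model struggles: easier episodes
    return level                           # stay put in the middle band
```

In PURER this feedback loop is coupled with the inversion process that synthesizes the pseudo episodes, so difficulty and data generation evolve together.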

    Learning to Learn from APIs: Black-Box Data-Free Meta-Learning

    Data-free meta-learning (DFML) aims to enable efficient learning of new tasks by meta-learning from a collection of pre-trained models without access to the training data. Existing DFML work can only meta-learn from (i) white-box and (ii) small-scale pre-trained models (iii) with the same architecture, neglecting the more practical setting where users only have inference access to APIs backed by arbitrary model architectures and scales. To solve this issue, we propose a Bi-level Data-free Meta Knowledge Distillation (BiDf-MKD) framework to transfer more general meta knowledge from a collection of black-box APIs to a single meta model. Specifically, by merely querying the APIs, we invert each API to recover its training data via a zero-order gradient estimator and then perform meta-learning via a novel bi-level meta knowledge distillation structure, in which we design a boundary query set recovery technique to recover a more informative query set near the decision boundary. In addition, to encourage better generalization under limited API budgets, we propose task memory replay to diversify the underlying task distribution by covering more interpolated tasks. Extensive experiments in various real-world scenarios show the superior performance of our BiDf-MKD framework.
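The zero-order gradient estimator mentioned above is a standard black-box technique that can be sketched independently of the paper: probe the loss along random directions and average the two-point finite differences, so only forward queries are needed and no backpropagation through the API. This is a generic sketch, not the paper's exact estimator.

```python
import numpy as np

def zero_order_grad(f, x, eps=1e-3, n_dirs=1000, seed=0):
    """Two-point zeroth-order gradient estimate of f at x.

    Samples random unit directions u and averages the finite differences
    (f(x + eps*u) - f(x - eps*u)) / (2*eps) * u. The factor x.size makes
    the estimate approximately unbiased for smooth f, since E[u u^T] = I/d
    for u uniform on the unit sphere."""
    rng = np.random.default_rng(seed)
    g = np.zeros_like(x)
    for _ in range(n_dirs):
        u = rng.standard_normal(x.shape)
        u /= np.linalg.norm(u)
        g += (f(x + eps * u) - f(x - eps * u)) / (2 * eps) * u
    return g * (x.size / n_dirs)
```

In the BiDf-MKD setting, `f` would be a loss computed from an API's returned predictions, and the estimate drives the inversion that recovers pseudo training data.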