
    Performance Modeling and Evaluation of Distributed Deep Learning Frameworks on GPUs

    Deep learning frameworks are widely deployed on GPU servers for deep learning applications in both academia and industry. Training deep neural networks (DNNs) relies on standard processes and algorithms such as convolution and stochastic gradient descent (SGD), yet different frameworks can deliver different running performance even when training the same model on the same GPU hardware. In this study, we evaluate the running performance of four state-of-the-art distributed deep learning frameworks (i.e., Caffe-MPI, CNTK, MXNet, and TensorFlow) in single-GPU, multi-GPU, and multi-node environments. We first build performance models of the standard processes in training DNNs with SGD, then benchmark these frameworks on three popular convolutional neural networks (i.e., AlexNet, GoogleNet, and ResNet-50), and finally analyze which factors cause the performance gaps among the four frameworks. Through both analytical and experimental analysis, we identify bottlenecks and overheads that could be further optimized. The main contribution is that the proposed performance models and the accompanying analysis point to further optimization directions in both algorithmic design and system configuration.
    Comment: Published at DataCom'2017
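    The abstract above describes performance models for SGD training; the following is a minimal sketch of what such a per-iteration time decomposition could look like. It assumes a simple additive model over I/O, forward, backward, communication, and update time with optional communication/backward overlap; the component names, the overlap rule, and the example numbers are illustrative assumptions, not the paper's actual formulas or measurements.

```python
# Illustrative sketch (not the paper's exact model): a per-iteration time
# decomposition for synchronous SGD across several GPUs.

from dataclasses import dataclass


@dataclass
class IterationTimes:
    t_io: float        # data loading / host-to-device copy (s)
    t_forward: float   # forward pass (s)
    t_backward: float  # backward pass (s)
    t_comm: float      # gradient exchange, e.g. all-reduce (s)
    t_update: float    # SGD parameter update (s)


def iteration_time(t: IterationTimes, overlap_comm: bool = False) -> float:
    """Total time of one SGD iteration.

    If the framework overlaps gradient communication with the backward
    pass, only the part of communication that cannot be hidden adds time.
    """
    compute = t.t_io + t.t_forward + t.t_backward + t.t_update
    if overlap_comm:
        hidden = min(t.t_comm, t.t_backward)
        return compute + (t.t_comm - hidden)
    return compute + t.t_comm


def throughput(batch_per_gpu: int, num_gpus: int, t: IterationTimes,
               overlap_comm: bool = False) -> float:
    """Training throughput in samples per second."""
    return batch_per_gpu * num_gpus / iteration_time(t, overlap_comm)


if __name__ == "__main__":
    # Hypothetical component times for a ResNet-50-sized model on 4 GPUs.
    t = IterationTimes(t_io=0.005, t_forward=0.060, t_backward=0.120,
                       t_comm=0.040, t_update=0.010)
    print(f"{throughput(32, 4, t):.1f} images/s without overlap")
    print(f"{throughput(32, 4, t, overlap_comm=True):.1f} images/s with overlap")
```

    Comparing the two printed throughputs shows why overlapping gradient communication with the backward pass is one of the framework-level optimizations such a model can expose.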

    Theory-driven Bilateral Dynamic Preference Learning for Person and Job Match: A Process-oriented Multi-step Multi-objective Method

    Person-job matching is a typical dynamic process with bilateral interactions between job seekers and jobs, and it also suffers from sample-imbalance issues. These characteristics pose significant challenges for designing an intelligent person-job match method. In this paper, we propose a novel process-oriented view of the person-job matching problem and formulate it as a multi-step, multi-objective bilateral match learning problem. Our method combines profile features and historical sequential behaviors to learn bilateral attributes and dynamic preferences, integrating multimodal data through attention mechanisms such as orthogonal multi-head attention and gating. A sequence update module learns the bilateral preferences and keeps them up to date in response to feedback. Furthermore, the multi-step constraint effectively addresses the sample-imbalance problem through partial relationships and information transmission between the multiple objectives. Extensive experiments show that our method outperforms state-of-the-art methods in producing successful matches and improving recruitment efficiency.
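    To make the gated fusion of static profiles and dynamic behaviors more concrete, here is a minimal sketch in plain NumPy. It assumes a simple sigmoid gate over concatenated profile and behavior vectors and a cosine-similarity bilateral match score; all function names, shapes, and weights are hypothetical and do not reproduce the paper's architecture or its multi-step, multi-objective training.

```python
# Illustrative sketch only: gated fusion of profile and behavior embeddings
# followed by a bilateral match score. All parameters are random placeholders.

import numpy as np


def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))


def gated_fusion(profile: np.ndarray, behavior: np.ndarray,
                 w_gate: np.ndarray, b_gate: np.ndarray) -> np.ndarray:
    """Blend a static profile vector with a dynamic behavior vector.

    The gate decides, per dimension, how much of the representation comes
    from the profile versus the recent behavior sequence.
    """
    gate = sigmoid(np.concatenate([profile, behavior]) @ w_gate + b_gate)
    return gate * profile + (1.0 - gate) * behavior


def match_score(seeker: np.ndarray, job: np.ndarray) -> float:
    """Bilateral match score as cosine similarity of the fused embeddings."""
    denom = np.linalg.norm(seeker) * np.linalg.norm(job) + 1e-8
    return float(seeker @ job / denom)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 8
    w = rng.normal(size=(2 * d, d))   # gate projection (hypothetical)
    b = np.zeros(d)

    seeker = gated_fusion(rng.normal(size=d), rng.normal(size=d), w, b)
    job = gated_fusion(rng.normal(size=d), rng.normal(size=d), w, b)
    print(f"match score: {match_score(seeker, job):.3f}")
```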