With the success of deep learning techniques in a broad range of application domains, many deep learning software frameworks have been developed and are updated frequently to adapt to new hardware features and software libraries, which poses a significant challenge for end users and system administrators.
To address this problem, container techniques are widely used to simplify the
deployment and management of deep learning software. However, it remains
unknown whether container techniques bring any performance penalty to deep
learning applications. The purpose of this work is to systematically evaluate the impact of Docker containers on the performance of deep learning
applications. We first benchmark the performance of system components (IO, CPU, and GPU) in a Docker container and on the host system, and compare the results to identify any differences. According to our results, computationally intensive jobs, whether running on the CPU or the GPU, incur only a small overhead, indicating that Docker containers can be applied to deep learning programs. We then
evaluate the performance of several popular deep learning tools deployed in a Docker container and on the host system. It turns out that the Docker container causes no noticeable performance degradation when running these deep learning tools, so
encapsulating deep learning tools in containers is a feasible solution.

Comment: Conference: BigCom 2017, 9 pages
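
To make the container-versus-host comparison concrete, below is a minimal sketch of how such a benchmark could be run unchanged in both environments; the NumPy workload, matrix size, and Docker image name are illustrative assumptions, not details taken from the paper's actual benchmark suite.

# benchmark.py -- minimal CPU benchmark sketch; illustrative only, not the
# paper's actual workload.
# Run on the host:
#   python3 benchmark.py
# Run inside a Docker container (image name and mount are assumptions):
#   docker run --rm -v "$PWD":/work -w /work python:3.11 \
#       sh -c "pip install numpy && python3 benchmark.py"
import time
import numpy as np

def matmul_benchmark(n=2048, repeats=5):
    """Time an n x n matrix multiplication; return the best wall-clock seconds."""
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        _ = a @ b
        best = min(best, time.perf_counter() - start)
    return best

if __name__ == "__main__":
    print(f"best of 5 runs: {matmul_benchmark():.3f} s")

Comparing the time reported by the host run with that of the containerized run gives the kind of host-versus-container measurement the abstract describes.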