Fog and Edge computing extend cloud services to the proximity of end users, enabling many Internet of Things (IoT) use cases, particularly latency-critical
applications. Smart devices, such as traffic and surveillance cameras, often do
not have sufficient resources to process computation-intensive and
latency-critical services. Hence, the constituent parts of services can be
offloaded to nearby Edge/Fog resources for processing and storage. However,
making offloading decisions for complex services in highly stochastic and dynamic environments is an important yet difficult task. Recently, Deep
Reinforcement Learning (DRL) has been used in many complex service offloading
problems; however, existing techniques are mainly suited to centralized environments and converge slowly to suitable solutions. In
addition, constituent parts of services often have predefined data dependencies and quality of service constraints, which further compound the complexity of
service offloading. To address these issues, we propose a distributed DRL technique that follows the actor-critic architecture and builds on Asynchronous Proximal Policy Optimization (APPO) to generate diverse experience trajectories efficiently across distributed actors. Moreover, we employ the PPO clipping and V-trace techniques for off-policy correction, enabling faster convergence to the most suitable service offloading solutions. The obtained results demonstrate that
our technique converges quickly, offers high scalability and adaptability, and
outperforms its counterparts by reducing the execution time of heterogeneous services.
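For concreteness, the two off-policy ingredients named above follow their standard definitions from the PPO and IMPALA literature; the sketch below restates those standard forms rather than our exact implementation, with $\pi_\theta$ the target policy, $\mu$ the behavior policy that generated a trajectory, $\hat{A}_t$ an advantage estimate, $\epsilon$ the clip range, and $\bar{\rho}$, $\bar{c}$ truncation thresholds. The PPO clipped surrogate objective is
\[
L^{\mathrm{CLIP}}(\theta) = \mathbb{E}_t\!\left[\min\!\Big(r_t(\theta)\,\hat{A}_t,\ \operatorname{clip}\big(r_t(\theta),\,1-\epsilon,\,1+\epsilon\big)\,\hat{A}_t\Big)\right],
\qquad r_t(\theta) = \frac{\pi_\theta(a_t \mid x_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid x_t)},
\]
and the $n$-step V-trace target that corrects value estimates computed from stale behavior-policy trajectories is
\[
v_s = V(x_s) + \sum_{t=s}^{s+n-1} \gamma^{\,t-s} \Bigg(\prod_{i=s}^{t-1} c_i\Bigg)\, \rho_t\,\big(R_t + \gamma V(x_{t+1}) - V(x_t)\big),
\]
where $R_t$ is the reward, $\rho_t = \min\!\big(\bar{\rho},\, \pi_\theta(a_t \mid x_t)/\mu(a_t \mid x_t)\big)$, and $c_i = \min\!\big(\bar{c},\, \pi_\theta(a_i \mid x_i)/\mu(a_i \mid x_i)\big)$. Truncating the importance weights bounds the variance of the correction, which is what allows asynchronously generated trajectories to be reused without destabilizing training.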