Testing autonomous driving algorithms for mobile systems in simulation is an essential step for validating the model and preparing the vehicle for a wide range of potentially unexpected and critical conditions. Transferring a model from simulation to reality, however, can be challenging because of the reality gap. Mixed-reality environments enable the evaluation of models on actual vehicles with limited financial and safety risks; by allowing quicker testing and debugging of mobile robots, they can also reduce development costs. This paper presents preliminary work on an autonomous navigation framework based on RGB-D cameras. We use an augmentation approach to represent objects from two contexts, real and virtual, within a single environment. The first experiments use the KITTI dataset; we then test the capabilities of our system on real data by extracting depth maps from a ZED2 camera. Finally, we assess our fusion process using a pre-trained object detection model.
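To make the depth-based fusion idea concrete, the sketch below illustrates one common way such a composite can be formed: a per-pixel depth comparison (z-buffer test) between a real RGB-D frame and a rendered virtual frame. This is an assumption-based illustration rather than the paper's actual pipeline; the function name `fuse_depth_composite` and the toy arrays are hypothetical, and the snippet assumes the real and virtual frames are already registered to the same viewpoint and resolution.

```python
import numpy as np


def fuse_depth_composite(real_rgb: np.ndarray,
                         real_depth: np.ndarray,
                         virt_rgb: np.ndarray,
                         virt_depth: np.ndarray):
    """Composite a virtual render into a real camera frame using per-pixel depth.

    Assumes all inputs share the same resolution and camera intrinsics; depths
    are in metres, with np.inf where a source has no geometry (e.g. pixels not
    covered by any virtual object).
    """
    # A virtual pixel wins wherever its surface is closer to the camera
    # than the real scene at the same pixel (standard z-buffer test).
    virtual_in_front = virt_depth < real_depth

    fused_rgb = np.where(virtual_in_front[..., None], virt_rgb, real_rgb)
    fused_depth = np.minimum(virt_depth, real_depth)
    return fused_rgb, fused_depth


if __name__ == "__main__":
    # Toy example: a 4x4 frame where a virtual object occupies the centre
    # and sits closer to the camera than the real background.
    real_rgb = np.full((4, 4, 3), 100, dtype=np.uint8)   # grey real background
    real_depth = np.full((4, 4), 10.0)                   # real scene 10 m away
    virt_rgb = np.zeros((4, 4, 3), dtype=np.uint8)
    virt_rgb[1:3, 1:3] = (255, 0, 0)                     # red virtual box
    virt_depth = np.full((4, 4), np.inf)
    virt_depth[1:3, 1:3] = 5.0                           # virtual box 5 m away

    rgb, depth = fuse_depth_composite(real_rgb, real_depth, virt_rgb, virt_depth)
    print(depth)  # centre pixels take the virtual depth (5.0); the rest stay at 10.0
```

The fused RGB image produced this way is the kind of mixed-reality frame that could then be passed to a pre-trained object detector to check whether both real and virtual objects are recognised consistently.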