In multi-objective problems, the key is to find compromise solutions that balance the different objectives. The linear scalarization function is often used to translate the multi-objective nature of a problem into a standard, single-objective problem. However, such a linear combination can only find solutions in convex regions of the Pareto front, which makes the method unsuitable when the shape of the front is not known beforehand. We propose a non-linear scalarization function, the Chebyshev scalarization function, for multi-objective reinforcement learning. We show that the Chebyshev scalarization method overcomes the flaws of the linear scalarization function and is able to discover all Pareto optimal solutions in non-convex environments.
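For concreteness, the two scalarization functions can be sketched as follows. The notation here is assumed, following common multi-objective conventions rather than quoted from the text: m objectives, an objective vector x, a weight vector w, and a utopian reference point z* for the Chebyshev variant.

```latex
% Linear scalarization (maximized); reaches only convex parts of the Pareto front:
f_{\mathrm{lin}}(\mathbf{x}) = \sum_{i=1}^{m} w_i \, x_i,
\qquad w_i \ge 0, \quad \sum_{i=1}^{m} w_i = 1.

% Weighted Chebyshev scalarization (minimized) with respect to a utopian
% reference point z^*; can also reach non-convex parts of the front:
f_{\mathrm{cheb}}(\mathbf{x}) = \max_{1 \le i \le m} \, w_i \left| x_i - z_i^* \right|.
```

Intuitively, the linear function collapses the objective vector onto a single hyperplane, so points lying in concave regions of the front are never optimal for any weight choice, whereas the Chebyshev function measures a weighted distance to the utopian point and can therefore single out such points.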