A significant amount of research in artificial intelligence is currently
devoted to improving the explainability and interpretability of deep learning
models. It has been shown that end-users find a system easier to trust when
they understand why it produced a given output. Recommender systems are one
example of systems for which considerable effort has been devoted to making
the output more explainable. One method for producing more explainable output
is counterfactual reasoning, which alters a minimal set of input features to
generate a counterfactual item that changes the system's output. This process
identifies the input features that have the greatest impact on the output,
leading to effective explanations. In this paper, we present a method
for generating counterfactual explanations for both tabular and textual
features. We evaluate our proposed method on three real-world datasets and
demonstrate a +5\% improvement in finding effective features (based on
model-based measures) over the baseline method.