A Knowledge Graph (KG) represents real-world information in the form of triplets (head, relation, tail). However, most KGs are generated manually or semi-automatically, which leaves an enormous amount of information missing from a KG. The goal of the Knowledge-Graph Completion task is to predict these missing links in a given Knowledge Graph. Various approaches exist for predicting a missing link in a KG; the most prominent are based on tensor factorization and Knowledge-Graph embeddings, such as RotatE and SimplE. The RotatE model represents each relation as a rotation from the source entity (head) to the target entity (tail) in a complex vector space. In RotatE, the head and tail entities are derived from a single embedding-generation class, which results in relatively low prediction scores. SimplE is based primarily on Canonical Polyadic (CP) decomposition. SimplE enhances the CP approach by adding inverse relations, so that the head and tail embeddings are taken from different embedding-generation classes, although they remain dependent on each other. However, SimplE cannot model composition patterns very well. This paper presents a new, hybridized variant (HRotatE) of the existing RotatE approach. Essentially, HRotatE is a hybrid of RotatE and SimplE. We use the principle of inverse embedding (from the SimplE model) to improve the prediction scores of HRotatE. Consequently, our results surpass those of the native RotatE, and HRotatE outperforms several state-of-the-art models on different datasets. Moreover, HRotatE is relatively efficient: it requires only half the number of training steps needed by RotatE while achieving approximately the same results.
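The rotation-based scoring that RotatE uses (and that HRotatE builds on) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the function name, embedding dimension, and random embeddings below are illustrative assumptions. The key idea is that a relation is a vector of phases, each phase defining a unit-modulus complex rotation, and a triplet scores highest when rotating the head embedding lands exactly on the tail embedding:

```python
import numpy as np

def rotate_score(head, rel_phase, tail):
    """Distance-based RotatE-style score (illustrative sketch).

    The relation is a vector of phases; exp(i * phase) is a unit-modulus
    complex rotation applied element-wise to the head embedding.
    Higher (less negative) score => more plausible triplet.
    """
    rotation = np.exp(1j * rel_phase)                  # |rotation_k| = 1 for all k
    return -np.linalg.norm(head * rotation - tail, ord=1)

# Toy complex embeddings (hypothetical dimension and values).
rng = np.random.default_rng(0)
dim = 8
head = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
phase = rng.uniform(-np.pi, np.pi, dim)

# A tail that equals the rotated head attains the maximum score of 0;
# an unrelated random tail scores strictly lower.
perfect_tail = head * np.exp(1j * phase)
random_tail = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
print(rotate_score(head, phase, perfect_tail))
print(rotate_score(head, phase, random_tail))
```

In the full models, these embeddings are learned by gradient descent so that observed triplets score higher than corrupted (negative-sampled) ones; the inverse-embedding idea borrowed from SimplE amounts to scoring each triplet with two embedding sets (one for the relation and one for its inverse) rather than one.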