Machine learning (ML) based systems have long suffered from a lack of
interpretability. To address this problem, counterfactual explanations (CEs)
have been proposed. CEs are unique in that they not only explain why a certain
outcome was predicted but also provide actionable suggestions to users. However,
the application of CEs has been hindered by two main challenges: general
user preferences and variable ML systems. In particular, user preferences tend
to be expressed as general notions rather than specific feature values. Moreover,
CEs need to be tailored to the variability of ML models while remaining
robust even when the validating models change. To overcome these
challenges, we propose several general user preferences, validated through
user studies, and map them to the properties of CEs. We also
introduce a novel method, \uline{T}ree-based \uline{C}onditions
\uline{O}ptional \uline{L}inks (T-COL), which comprises two optional structures and
several groups of conditions for generating CEs that can be adapted to general
user preferences. Moreover, one of these groups of conditions enables T-COL to generate more
robust CEs that retain higher validity when the ML model is replaced. We experimentally compared
the properties of CEs generated by T-COL under different user preferences and
demonstrated that T-COL accommodates general user preferences and variable ML
systems better than baseline methods, including large language models.