Competency questions (CQs) are used in ontology development to demarcate an ontology's scope, provide insight into its content, and support verification. Their use has been impeded by the difficulty of authoring good CQs. A controlled natural language (CNL) may assist with this, but developing one manually is time-consuming. A recent study on data-driven CNL design learned templates from a set of CQs, resulting in CLaRO, which achieved somewhat better coverage but also contained some noise due to grammar errors in the source CQs. In this paper, we investigate this bottom-up approach to CNL development for CQs with respect to 1) the effect of improving the quality of the source data, 2) whether CQs from additional domains induce more templates, and 3) whether the structure of knowledge in subject domains plays a role in the matching of patterns to templates, which might indicate that the structure of knowledge in a subject domain could continue to affect bottom-up CNL creation. Cleaning the CQs increased the number of templates from 93 to 120 main templates, with an additional 12 variants. The new dataset of 92 CQs generated 27 new templates and 7 more variants. Thus, increasing the domain coverage had the greatest effect on the CNL. The resulting CLaRO v2, with all generated templates, has 147 templates and 59 variants thereof, and showed 94.1% coverage.