Developing a controller for an autonomous intelligent agent given a single, simple task is relatively straightforward, and basic techniques suffice. However, as agents are given more than one task, developing effective controllers with those basic techniques quickly becomes impractical. State and action abstraction are frequently used to counter this explosion of complexity and to make the development of effective controllers for complex problems practical. Unfortunately, most work in the literature has focused on complex tasks composed of sequences of simpler tasks, while the more complex tasks composed of many concurrent, interfering, and non-episodic (CINE) tasks have received little attention. This dissertation addresses that deficiency by providing the first known empirical investigation into the effects of each of these types of abstraction on CINE tasks. The results of this investigation demonstrate that, for the single-agent and multi-agent problem domains used, abstraction of the controller's actions provides greater benefits in the development and performance of effective controllers than abstraction of the agent's state.

Because little prior work has focused on complex CINE tasks, advances in the implementation and development of controllers capable of addressing such tasks were required. First, we demonstrate that the adaptive fuzzy behavior hierarchy control architecture used in this dissertation has issues when scaled to hierarchies of more than two levels. To address these issues, we introduce a modification to the architecture's implementation that significantly improves the performance of controllers using the same behavior hierarchy. Second, we demonstrate that one of the few known reinforcement learning approaches specifically designed to handle complex CINE tasks is unable to converge to an effective policy for the tasks used here. We therefore introduce a new reinforcement learning approach that leverages the hierarchical implementation of the controller and provides statistically significantly better performance in significantly fewer learning experiences. Next, we demonstrate that controllers using adaptive fuzzy behavior hierarchies are able to reuse, without modification, controllers developed for simple tasks within hierarchical controllers developed for a more complex task. Lastly, we demonstrate that, because adaptive fuzzy behavior hierarchies effectively use action abstraction, the agent's state can be significantly abstracted in the higher levels of the controller using adaptive priorities that reflect the applicability of lower-level behaviors to the agent's current state.
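To make the final point concrete, the following is a minimal sketch of how a higher level of a behavior hierarchy might modulate lower-level behaviors through adaptive priorities weighted by each behavior's applicability to the current state. The names (Behavior, applicability, fuse) and the weighted-average fusion rule are illustrative assumptions, not the dissertation's actual implementation.

    # Illustrative sketch only; the dissertation's architecture and weighting
    # scheme may differ. All names here are hypothetical.
    from dataclasses import dataclass
    from typing import Callable, Dict, List

    State = Dict[str, float]   # abstracted state features visible to a behavior
    Action = Dict[str, float]  # actuator commands expressed as floats

    @dataclass
    class Behavior:
        """A lower-level behavior: recommends an action for its (possibly
        abstracted) state and reports how applicable it is to that state."""
        name: str
        act: Callable[[State], Action]
        applicability: Callable[[State], float]  # value in [0, 1]

    def fuse(behaviors: List[Behavior],
             priorities: Dict[str, float],
             state: State) -> Action:
        """Combine lower-level recommendations, weighting each behavior by the
        product of its higher-level priority and its own applicability."""
        fused: Action = {}
        total_weight = 0.0
        for b in behaviors:
            weight = priorities.get(b.name, 0.0) * b.applicability(state)
            if weight <= 0.0:
                continue
            for actuator, value in b.act(state).items():
                fused[actuator] = fused.get(actuator, 0.0) + weight * value
            total_weight += weight
        # Normalize so the result is a weighted average of the recommendations.
        if total_weight > 0.0:
            fused = {k: v / total_weight for k, v in fused.items()}
        return fused

    # Hypothetical usage: a higher-level behavior sets priorities adaptively,
    # so it needs only enough state to judge which lower-level behaviors apply:
    #   priorities = {"avoid_obstacles": 0.9, "seek_goal": 0.4}
    #   command = fuse([avoid_obstacles, seek_goal], priorities, current_state)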