Though the experiences of life exhibit unceasing variety, people are able to find constancy and deal with their world in a regular and predictable manner. This thesis promotes the hypothesis that the necessary abstractions can be learned. The specific task studied is inducing a concept description from examples. A model is presented that relies on a weighted, symbolic description of concepts. Though the description is distributed, novel examples are classified holistically by combining each portion's contribution. Each new example also refines the concept description: internal weights are updated and new symbolic structures are introduced. These actions improve description quality as measured by classification accuracy on novel examples.

Initially the concept description is highly distributed, being composed of many simple components. As learning progresses, sophisticated descriptive structures are added, and eventually the description coalesces into a few highly predictive components. This qualitative change in the representation of the concept is a unique feature of the model.

The model extends previous work by allowing for noisy examples, unknown values, and concept change over time. To bolster claims of robustness, several experiments illustrating the model's behavior are reported. Key results suggest that the model should scale up to larger tasks than those studied and has a number of potential applications.
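The learning scheme described above — weighted symbolic components whose contributions are combined to classify, and whose weights and structures are refined by each example — can be illustrated with a minimal sketch. This is an assumed, simplified rendering for intuition only, not the thesis's actual algorithm; the class name, threshold, and error-driven update rule are all hypothetical choices.

```python
class WeightedConcept:
    """Hypothetical sketch: a concept as weighted symbolic components."""

    def __init__(self, learning_rate=0.1):
        self.weights = {}          # symbolic component -> weight
        self.rate = learning_rate

    def score(self, example):
        # Classify holistically by combining each component's contribution.
        return sum(self.weights.get(f, 0.0) for f in example)

    def classify(self, example):
        # Assumed decision threshold of zero (an illustrative choice).
        return self.score(example) > 0.0

    def learn(self, example, is_member):
        # Each example refines the description: unseen features become
        # new symbolic structures, and weights are updated on errors.
        target = 1.0 if is_member else -1.0
        wrong = self.classify(example) != is_member
        for f in example:
            self.weights.setdefault(f, 0.0)   # introduce new structure
            if wrong:
                self.weights[f] += self.rate * target
```

With repeated training on labeled examples, predictive components accumulate weight, loosely mirroring how the description coalesces around a few highly predictive parts.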