Through the 1960s and early 1970s, many cattle feeders formulated feedlot rations composed primarily of grains. Grain was relatively cheap, so high-grain rations were economical. For example, Scott and Broadbent [20] constructed a programming model in 1972 that used the California net energy system, developed by Lofgreen and Garrett [16] and adopted by the National Research Council (NRC) of the National Academy of Sciences (NAS), to estimate economical rations (a stylized least-cost formulation of this kind is sketched below). They concluded, "In most feedlot operations, it appears that the maximum possible rate of gain will be most profitable under usual price relationships" [20, p. 24]. Although maximizing rate of gain is a biological objective, it was congruent with the economic objective of maximizing profits.

There were thus several reasons for the lack of interest in investigating the trade-off, or substitution rates, between roughages and concentrates in the beef feeding ration. First, concentrates were relatively inexpensive. Second, adding roughages to a ration generally reduces the rate of gain; the longer time on feed increases nonfeed costs, such as labor, yardage fees, and carrying charges, and reduces a lot's annual volume. Third, roughages are generally bulkier and more difficult to handle than concentrates and may require more expensive equipment and larger long-term capital investments. Fourth, many feedlots were designed and constructed to provide high-concentrate rations. Hence, little effort was exerted toward investigating the rate of substitution between roughages and concentrates.
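Scott and Broadbent's actual model is not reproduced here, but ration formulation of this kind is commonly posed as a linear program: choose feed quantities that minimize cost subject to nutrient requirements and intake capacity. The sketch below is a minimal, hypothetical illustration; the two feeds, their prices, and the net energy, protein, and intake figures are invented for the example, not taken from [20] or from NRC tables.

```python
# Minimal least-cost ration sketch as a linear program (hypothetical numbers;
# not Scott and Broadbent's model). Decision variables: lb/day of corn
# (a concentrate) and alfalfa hay (a roughage).
from scipy.optimize import linprog

cost = [0.045, 0.030]  # $/lb of corn and alfalfa (assumed prices)

# linprog expects A_ub @ x <= b_ub, so ">=" requirements are negated.
A_ub = [
    [-0.90, -0.55],   # net energy, Mcal/lb: require at least 18 Mcal/day
    [-0.09, -0.17],   # crude protein, lb/lb: require at least 2.4 lb/day
    [1.00,  1.00],    # dry-matter intake capacity: at most 24 lb/day
]
b_ub = [-18.0, -2.4, 24.0]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(f"corn: {res.x[0]:.1f} lb/day, alfalfa: {res.x[1]:.1f} lb/day, "
      f"cost: ${res.fun:.2f}/head/day")
```

Even this toy version exhibits the substitution question the passage raises: raising the assumed price of corn relative to alfalfa shifts the least-cost solution toward the roughage, so the interesting economics lies precisely in the roughage-concentrate trade-off that, for the reasons listed above, received little attention at the time.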