We discuss inequalities holding between the vocabulary size, i.e., the number
of distinct nonterminal symbols in a grammar-based compression for a string,
and the excess length of the respective universal code, i.e., the code-based
analog of algorithmic mutual information. The aim is to strengthen inequalities
which, though discussed only in a weaker form in linguistics, shed some light on
the redundancy of efficiently computable codes. The main contribution of the paper
is a construction of universal grammar-based codes for which the excess lengths
can be bounded easily.

Comment: 5 pages, accepted to ISIT 2007 and corrected
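To make the notion of vocabulary size concrete, the following is a minimal sketch of a grammar-based compression of a string as a straight-line grammar, where each nonterminal rewrites to a fixed pair of symbols and the vocabulary size is the number of distinct nonterminals. The grammar, rule names, and string here are hypothetical illustrations, not taken from the paper's construction.

```python
def expand(symbol, rules, memo=None):
    """Expand a symbol of a straight-line grammar into the string it derives."""
    if memo is None:
        memo = {}
    if symbol not in rules:          # a terminal character expands to itself
        return symbol
    if symbol not in memo:
        memo[symbol] = "".join(expand(s, rules, memo) for s in rules[symbol])
    return memo[symbol]

# A toy straight-line grammar compressing the string "abababab":
rules = {
    "S": ("A", "A"),   # S -> A A
    "A": ("B", "B"),   # A -> B B
    "B": ("a", "b"),   # B -> a b
}

vocabulary_size = len(rules)         # number of distinct nonterminals
print(expand("S", rules))            # abababab
print(vocabulary_size)               # 3
```

Note that the grammar derives a string of length 8 with only 3 nonterminals; it is this trade-off between vocabulary size and code length that the paper's inequalities quantify.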