How would you estimate the size of an author's vocabulary? Suppose you have analyzed the author's available works and found *n* words, *x* of which are unique. Then you know the author's vocabulary was at least *x*, but it's reasonable to assume that the author may have known words he never used in writing, or at least not in works you have access to.

Brainerd [1] suggested the following estimator based on a Markov chain model of language. The estimated vocabulary is the number *N* satisfying the equation

$$\sum_{j=0}^{x-1} \left(1 - \frac{j}{N}\right)^{-1} = n$$
The left side is a decreasing function of *N*, so you could solve the equation by finding values of *N* that make the sum smaller and larger than *n*, then using a bisection algorithm.
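A minimal sketch of that approach, assuming the equation above: the function name `brainerd_estimate` is my own, and the bracketing strategy (start just above *x* − 1, where the sum blows up, and double the upper bound until the sum drops below *n*) is one of several reasonable choices.

```python
def brainerd_estimate(n, x):
    """Estimate vocabulary size N from n total words, x of them unique,
    by solving sum_{j=0}^{x-1} (1 - j/N)^{-1} = n with bisection."""
    if x >= n:
        raise ValueError("no repetition observed; the estimate is infinite")

    def f(N):
        # Left side of the estimator equation; decreasing in N.
        return sum(1.0 / (1.0 - j / N) for j in range(x))

    # The sum tends to infinity as N decreases toward x - 1,
    # so x - 1 is a safe lower bracket (never evaluated).
    lo = float(x - 1)
    hi = float(x)
    while f(hi) > n:      # expand upper bracket until f(hi) < n
        hi *= 2
    for _ in range(100):  # bisection on the bracketed root
        mid = (lo + hi) / 2
        if f(mid) > n:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For example, with *n* = 200 total words and *x* = 100 unique words, the routine returns an estimate somewhat above 100; increasing the repetition (say *n* = 300 with the same *x*) drives the estimate down, as the model predicts.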

We can see that the model is qualitatively reasonable. If every word is unique, i.e. *x* = *n*, then the solution is *N* = ∞. If you haven't seen any repetition, the author could keep writing new words indefinitely. As the amount of repetition increases, the estimate of *N* decreases.

Brainerdâ€™s model is simple, but it tends to underestimate vocabulary. More complicated models might do a better job.

Problems analogous to estimating vocabulary size come up in other applications. For example, an ecologist might want to estimate the number of new species left to be found based on the number of species seen so far. In my work in data privacy I occasionally have to estimate diversity in a population based on diversity in a sample. Both of these examples are analogous to estimating potential new words based on the words youâ€™ve seen.

[1] Brainerd, B. On the relation between types and tokens in literary text. J. Appl. Prob. 9, pp. 507-5