Is your answer correct?

You have examined a sample of N items, looking for some specified feature of interest, and you find that k items exhibit this feature. This gives you a point estimate, p = k/N, for the proportion of the total, unobserved population that exhibits the feature. It can be shown that, given only this one sample, p is the maximum-likelihood (ML) estimate of the true, usually unknown proportion.
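That p = k/N maximizes the likelihood can be checked numerically. A minimal sketch, assuming Python with only the standard library (the function name binom_likelihood is illustrative):

```python
from math import comb

def binom_likelihood(p, k, n):
    """Probability of observing k successes in n trials if the true proportion is p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

k, n = 3, 10
grid = [i / 1000 for i in range(1001)]          # candidate proportions 0.000 .. 1.000
best = max(grid, key=lambda p: binom_likelihood(p, k, n))
print(best)  # prints 0.3, i.e., k/n
```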


Pi = 3.141592653...

Of the first 10 digits of Pi, only 3 are even. According to theory, the even and odd digits of a normal number are equally probable in the long run, and Pi is widely believed (though not yet proven) to be normal. Does this little experiment mean that the theory is invalid?

If you try this experiment again (e.g., look at ten digits further along), you will probably get a different result. Moreover, if you examine a sequence with an odd number of digits, then you can never get the "correct" answer. What is going on here?
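The little experiment above is easy to reproduce. A quick sketch, using a hard-coded digit string rather than computing Pi:

```python
PI_DIGITS = "3141592653"  # first 10 digits of Pi

evens = sum(1 for d in PI_DIGITS if int(d) % 2 == 0)
print(evens, "of", len(PI_DIGITS), "->", evens / len(PI_DIGITS))  # 3 of 10 -> 0.3
```

Trying the same count on a different stretch of digits will generally give a different proportion, which is exactly the point.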

Clearly, there is a range of possible results from experiments such as these, and you should not take any single result too seriously, even if it is the ML result -- the one under which the observed data are most probable.


If you desire that your probability of being wrong be no greater than X, then what range should you report for an observed proportion, p?


It depends.

First of all, the question above does not completely define the goal and there are different kinds of answers. Even more important, it says nothing about whether you have done this experiment before. For instance, if this is a routine task that you have carried out twice a day for the past ten years, then you already know the degree to which this point estimate is representative of the population comprising all such results. This prior information would have a very strong influence on the range that you reported. In fact, if you are a real expert, then you could report a valid range without even looking at this sample!

The range described above is called a confidence interval.1 Most often cited is the central confidence interval for which the probability of being wrong is divided equally into a range of proportions below the interval and another range (usually of different size) above the interval. Alternatively, the shortest (narrowest) such interval is sometimes desired. In either case, the corresponding confidence limits define the boundaries of the interval.
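The shortest interval can be found numerically. A hypothetical sketch, assuming SciPy and the Beta posterior that this page uses (see Technical Details below): slide a window holding a fixed amount of probability mass along the distribution and minimize its width. The function name shortest_interval is illustrative, not this page's actual code.

```python
from scipy.stats import beta
from scipy.optimize import minimize_scalar

def shortest_interval(k, n, confidence):
    """Shortest interval containing `confidence` posterior probability mass.

    Assumes a Uniform(0, 1) prior, so the posterior is Beta(k + 1, n - k + 1).
    """
    a, b = k + 1, n - k + 1
    alpha = 1 - confidence

    def width(t):
        # t = probability mass below the interval; the interval then runs
        # from the t quantile to the (t + confidence) quantile.
        return beta.ppf(t + confidence, a, b) - beta.ppf(t, a, b)

    t = minimize_scalar(width, bounds=(0.0, alpha), method="bounded").x
    return beta.ppf(t, a, b), beta.ppf(t + confidence, a, b)

lo, hi = shortest_interval(3, 10, 0.95)
```

By construction this interval holds the same total mass as the central one but is never wider; for a skewed posterior the two can differ noticeably.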

A Bayesian Calculator

The calculator on this page computes both a central confidence interval and the shortest such interval for an observed proportion, based on the assumption that you have no prior information whatsoever. In other words, as far as you know, the true proportion in the parent population could be any number in the range [0, 1], with all possibilities being equally likely. The less valid this assumption, the less reliable will be the confidence limits computed here. If this seems a bit severe, remember that virtually all statistical tests make assumptions, often hidden.

If the stated assumption is true, then the confidence limits computed by this calculator are exact (to the precision shown), not an approximation. Some Technical Details are described below.


Enter your values, then click Compute. Any previous results will be immediately erased. When the computation is finished, the new results will be displayed.2

# Successes =
  Proportion =
# Examined =
Confidence =

Central Confidence Interval:
Lower limit =
Upper limit =
Shortest Confidence Interval:
Lower limit =
Upper limit =

If you try this with 3-out-of-10, as in the experiment above, you will find that the theoretical answer, 0.5, lies within either of the 95-percent confidence intervals. Hence, you can be "95% sure" that the observed 3/10 does not contradict theory. If you need to be only 90% sure, enter 0.9 for the Confidence. Does the theory still hold up? How big must you allow your chance of being wrong, X, to be before you can assert that these 10 data points contradict the theory?

Technical Details

The calculation performed here is Bayesian because it combines an explicit prior distribution, pdf0(), summarizing all information known before looking at the sample, with a likelihood, pdf1(), describing the observed sample, to yield a posterior distribution, pdf2(), for the desired proportion given both the data and the prior information. In this case, where pdf0() is Uniform(0, 1)3 and pdf1() is Binomial(),3 it can be shown that pdf2() is Beta(),3 with parameters k + 1 and N - k + 1.

For each confidence limit, this calculator computes the inverse of the appropriate cumulative Beta distribution, with parameters determined by the values that you enter.
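That inverse-CDF computation can be sketched in a few lines, assuming SciPy is available (the function name central_interval is illustrative, not this page's actual code):

```python
from scipy.stats import beta

def central_interval(k, n, confidence):
    """Central credible interval for a proportion, assuming a Uniform(0, 1) prior.

    The posterior is Beta(k + 1, n - k + 1); equal tail mass is cut off each side.
    """
    a, b = k + 1, n - k + 1
    alpha = 1 - confidence
    return beta.ppf(alpha / 2, a, b), beta.ppf(1 - alpha / 2, a, b)

lo, hi = central_interval(3, 10, 0.95)
print(round(lo, 4), round(hi, 4))
```

For 3-out-of-10 at 95% confidence, the interval contains both the point estimate 0.3 and the theoretical value 0.5, matching the discussion above.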

What if I'm not so ignorant?

Then you need more than this webpage :-)

The short answer is that the calculation done here is extensible and relevant procedures are described in many textbooks and papers on Bayesian statistics. In general, as your prior information increases, the confidence interval on a new observation gets narrower, other things being equal. For the full story, with minimal mathematics, check out the ebook Data, Uncertainty and Inference, available for free here.

1Some Bayesian statisticians would prefer the term "credible interval."
2This calculator might fail when k*N is very large.
3See A Compendium of Common Probability Distributions.

Nicholson, B. J., 1985, "On the F-Distribution for Calculating Bayes Credible Intervals for Fraction Nonconforming", IEEE Transactions on Reliability, vol. R-34 (3), pp. 227-228.

N.B. The calculator on this webpage does not use the F-distribution; it computes the inverse Beta directly.

Jaynes, E. T., 1976, "Confidence Intervals vs. Bayesian Intervals", in Foundations of Probability Theory, Statistical Inference, and Statistical Theories of Science, W. L. Harper and C. A. Hooker (eds.), D. Reidel, Dordrecht, pg. 175.
