Probability is the branch of mathematics that studies the possible outcomes of given events together with the outcomes' relative likelihoods and distributions. In common usage, the word "probability" means the chance that a particular event (or set of events) will occur, expressed on a linear scale from 0 (impossibility) to 1 (certainty), or equivalently as a percentage between 0% and 100%. The analysis of events governed by probability is called statistics.

There are several competing interpretations of the actual "meaning" of probabilities. Frequentists view probability simply as a measure of the frequency of outcomes (the more conventional interpretation), while Bayesians treat probability more subjectively as a statistical procedure that endeavors to estimate parameters of an underlying distribution based on the observed distribution.

A properly normalized function that assigns a probability "density" to each possible outcome within some interval is called a probability density function (or probability distribution function), and its cumulative value (integral for a continuous distribution or sum for a discrete distribution) is called a distribution function (or cumulative distribution function).
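The density/distribution-function relationship can be sketched numerically. The following is a minimal Python illustration (not part of the original article) using the continuous uniform density on [0, 1] as a toy example; the function names `pdf` and `cdf` are hypothetical.

```python
def pdf(x):
    """Uniform probability density on [0, 1]: constant 1 inside, 0 outside."""
    return 1.0 if 0.0 <= x <= 1.0 else 0.0

def cdf(x, n=10_000):
    """Cumulative distribution function, approximated as a midpoint
    Riemann sum of the density over [0, x]."""
    if x <= 0.0:
        return 0.0
    h = x / n
    return sum(pdf((i + 0.5) * h) for i in range(n)) * h
```

Here `cdf(0.5)` is approximately 0.5, and `cdf(x)` saturates at 1 for x above 1, reflecting the normalization of the density.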

A variate is defined as the set of all random variables that obey a given probabilistic law. It is common practice to denote a variate with a capital letter (most commonly X). The set of all values that X can take is then called the range, denoted R_X (Evans et al. 2000, p. 5). Specific elements in the range of X are called quantiles and denoted x, and the probability that a variate X assumes the element x is denoted P(X=x).
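These definitions can be made concrete with a small Python sketch (an illustration added here, not from the original article), taking a fair six-sided die as the variate:

```python
from fractions import Fraction

# Hypothetical discrete variate: a fair six-sided die.
# R_X, the range, is the set of all values the variate can take.
range_X = {1, 2, 3, 4, 5, 6}

def P(x):
    """P(X = x): the probability that the variate assumes the element x."""
    return Fraction(1, 6) if x in range_X else Fraction(0)

# Proper normalization: probabilities over the whole range sum to 1.
assert sum(P(x) for x in range_X) == 1
```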

Probabilities are defined to obey certain assumptions, called the probability axioms. Let a sample space $S$ contain the union of all possible events $E_i$, so

$$S = \bigcup_{i=1}^N E_i,$$

and let $E$ and $F$ denote subsets of $S$. Further, let $F'$ be the complement of $F$, so that

$$F \cup F' = S.$$

Then the set $E$ can be written as

$$E = E \cap S = E \cap (F \cup F') = (E \cap F) \cup (E \cap F'),$$

where $\cap$ denotes the intersection. Then

$$\begin{aligned}
P(E) &= P(E \cap F) + P(E \cap F') - P[(E \cap F) \cap (E \cap F')] \\
&= P(E \cap F) + P(E \cap F') - P[(F \cap F') \cap (E \cap E)] \\
&= P(E \cap F) + P(E \cap F') - P(\emptyset \cap E) \\
&= P(E \cap F) + P(E \cap F') - P(\emptyset) \\
&= P(E \cap F) + P(E \cap F'),
\end{aligned}$$

where $\emptyset$ is the empty set.
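This decomposition can be checked by exhaustive enumeration. The following Python sketch (an illustration added here, with hypothetical event choices) uses one roll of a fair die:

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}   # sample space: one roll of a fair die
E = {2, 4, 6}            # event: the roll is even
F = {1, 2, 3}            # event: the roll is at most 3
F_c = S - F              # the complement F'

def P(A):
    # Equally likely outcomes, so P(A) = |A| / |S|.
    return Fraction(len(A), len(S))

# P(E) decomposes as P(E ∩ F) + P(E ∩ F').
assert P(E) == P(E & F) + P(E & F_c)
```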

Let $P(E|F)$ denote the conditional probability of $E$ given that $F$ has already occurred; then

$$\begin{aligned}
P(A \cap B) &= P(A)\,P(B|A) \\
P(A' \cap B) &= P(A')\,P(B|A') \\
P(E|F) &= \frac{P(E \cap F)}{P(F)}.
\end{aligned}$$
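A small Python sketch (an illustration added here, with hypothetical event choices) shows both the defining ratio and the multiplication rule on a fair die:

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}   # sample space: one roll of a fair die
E = {2, 4, 6}            # event: the roll is even
F = {1, 2, 3}            # event: the roll is at most 3

def P(A):
    # Equally likely outcomes, so P(A) = |A| / |S|.
    return Fraction(len(A), len(S))

def P_given(E, F):
    """P(E|F) = P(E ∩ F) / P(F)."""
    return P(E & F) / P(F)

# Only 2 is even among {1, 2, 3}, so P(E|F) = 1/3.
assert P_given(E, F) == Fraction(1, 3)

# Multiplication rule: P(E ∩ F) = P(F) P(E|F).
assert P(E & F) == P(F) * P_given(E, F)
```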

The relationship

$$P(A \cap B) = P(A)\,P(B)$$

holds if $A$ and $B$ are independent events. A very important result states that

$$P(E \cup F) = P(E) + P(F) - P(E \cap F),$$

which can be generalized to the inclusion-exclusion principle

$$P\left(\bigcup_{i=1}^n A_i\right) = \sum_i P(A_i) - \sum_{i<j} P(A_i \cap A_j) + \sum_{i<j<k} P(A_i \cap A_j \cap A_k) - \cdots + (-1)^{n-1} P\left(\bigcap_{i=1}^n A_i\right).$$
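The alternating sum can be verified directly by enumeration. The following Python sketch (an illustration added here; the events chosen are hypothetical) checks the formula for three events on the sample space {1, ..., 100}:

```python
from fractions import Fraction
from itertools import combinations

S = set(range(1, 101))                                   # sample space {1, ..., 100}
A = [{x for x in S if x % p == 0} for p in (2, 3, 5)]    # divisible by 2, 3, 5

def P(event):
    # Equally likely outcomes, so P(A) = |A| / |S|.
    return Fraction(len(event), len(S))

def P_union(events):
    """Inclusion-exclusion: alternating sum over intersections of each size k."""
    total = Fraction(0)
    for k in range(1, len(events) + 1):
        for combo in combinations(events, k):
            total += (-1) ** (k - 1) * P(set.intersection(*combo))
    return total

# The formula agrees with computing the union directly.
assert P_union(A) == P(A[0] | A[1] | A[2])
```

Here the union counts 50 + 33 + 20 - 16 - 10 - 6 + 3 = 74 outcomes, so both sides equal 74/100.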

See also

Bayes' Theorem, Conditional Probability, Countable Additivity Probability Axiom, Distribution Function, Independent Statistics, Likelihood, Probability Axioms, Probability Density Function, Probability Inequality, Statistical Distribution, Statistics, Uniform Distribution

References

Evans, M.; Hastings, N.; and Peacock, B. Statistical Distributions, 3rd ed. New York: Wiley, 2000.
Everitt, B. Chance Rules: An Informal Guide to Probability, Risk, and Statistics. Copernicus, 1999.
Goldberg, S. Probability: An Introduction. New York: Dover, 1986.
Keynes, J. M. A Treatise on Probability. London: Macmillan, 1921.
Mises, R. von. Mathematical Theory of Probability and Statistics. New York: Academic Press, 1964.
Mises, R. von. Probability, Statistics, and Truth, 2nd rev. English ed. New York: Dover, 1981.
Mosteller, F. Fifty Challenging Problems in Probability with Solutions. New York: Dover, 1987.
Mosteller, F.; Rourke, R. E. K.; and Thomas, G. B. Probability: A First Course, 2nd ed. Reading, MA: Addison-Wesley, 1970.
Nahin, P. J. Duelling Idiots and Other Probability Puzzlers. Princeton, NJ: Princeton University Press, 2000.
Neyman, J. First Course in Probability and Statistics. New York: Holt, 1950.
Rényi, A. Foundations of Probability. San Francisco, CA: Holden-Day, 1970.
Ross, S. M. A First Course in Probability, 5th ed. Englewood Cliffs, NJ: Prentice-Hall, 1997.
Ross, S. M. Introduction to Probability and Statistics for Engineers and Scientists. New York: Wiley, 1987.
Ross, S. M. Applied Probability Models with Optimization Applications. New York: Dover, 1992.
Ross, S. M. Introduction to Probability Models, 6th ed. New York: Academic Press, 1997.
Székely, G. J. Paradoxes in Probability Theory and Mathematical Statistics, rev. ed. Dordrecht, Netherlands: Reidel, 1986.
Todhunter, I. A History of the Mathematical Theory of Probability from the Time of Pascal to that of Laplace. New York: Chelsea, 1949.
Weaver, W. Lady Luck: The Theory of Probability. New York: Dover, 1963.

Cite this as:

Weisstein, Eric W. "Probability." From MathWorld--A Wolfram Web Resource.
