
Normal Sum Distribution


Amazingly, the distribution of the sum of two independent normally distributed variates X and Y with means and variances (\mu_x, \sigma_x^2) and (\mu_y, \sigma_y^2), respectively, is another normal distribution,

 P_{X+Y}(u) = \frac{1}{\sqrt{2\pi(\sigma_x^2 + \sigma_y^2)}}\, e^{-[u - (\mu_x + \mu_y)]^2/[2(\sigma_x^2 + \sigma_y^2)]},
(1)

which has mean

 \mu_{X+Y} = \mu_x + \mu_y
(2)

and variance

 \sigma_{X+Y}^2 = \sigma_x^2 + \sigma_y^2.
(3)

By induction, analogous results hold for the sum of n normally distributed variates.
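As a quick numerical check, the following sketch (in Python with NumPy; the parameter values are illustrative assumptions) draws a large sample of X + Y and compares its sample mean and variance against (2) and (3):

    import numpy as np

    rng = np.random.default_rng(0)
    mu_x, sigma_x = 1.0, 2.0   # illustrative parameters (assumed)
    mu_y, sigma_y = -3.0, 0.5

    n_samples = 1_000_000
    s = rng.normal(mu_x, sigma_x, n_samples) + rng.normal(mu_y, sigma_y, n_samples)

    print(s.mean(), mu_x + mu_y)             # both ~ -2.0, matching (2)
    print(s.var(), sigma_x**2 + sigma_y**2)  # both ~ 4.25, matching (3)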

An alternate derivation proceeds by noting that

 P_n(x) = \mathcal{F}_t^{-1}\{[\phi(t)]^n\}(x)
(4)
 = \frac{e^{-(x - n\mu)^2/(2n\sigma^2)}}{\sqrt{2\pi n\sigma^2}},
(5)

where \phi(t) is the characteristic function of a single normal variate with mean \mu and variance \sigma^2, \mathcal{F}_t^{-1}[f](x) is the inverse Fourier transform taken with parameters a = b = 1, and P_n(x) is therefore the distribution of the sum of n such independent variates.
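This route can be verified numerically. The sketch below (a minimal FFT-based check; the grid size and parameters are assumptions chosen for illustration) discretizes a single normal density, raises its transform to the nth power, and inverts it, recovering (5):

    import numpy as np

    # Sum of n independent N(0, sigma^2) variates; mu = 0 keeps the density
    # centered on the periodic FFT grid and avoids wraparound (assumed setup).
    n, sigma = 4, 1.0
    L, m = 40.0, 4096
    x = np.linspace(-L/2, L/2, m, endpoint=False)
    dx = x[1] - x[0]

    pdf = np.exp(-x**2 / (2*sigma**2)) / np.sqrt(2*np.pi*sigma**2)

    # Convolution theorem: each of the n - 1 convolutions contributes a factor dx.
    phi = np.fft.fft(np.fft.ifftshift(pdf))
    p_n = np.fft.fftshift(np.fft.ifft(phi**n)).real * dx**(n - 1)

    target = np.exp(-x**2 / (2*n*sigma**2)) / np.sqrt(2*np.pi*n*sigma**2)
    print(np.abs(p_n - target).max())  # tiny (~1e-12), matching (5) with mu = 0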

More generally, if x is normally distributed with mean \mu and variance \sigma^2, then a linear function of x,

 y = ax + b,
(6)

is also normally distributed. The new distribution has mean a\mu + b and variance a^2\sigma^2, as can be derived using the moment-generating function

 M(t) = \langle e^{t(ax + b)} \rangle
(7)
 = e^{tb} \langle e^{atx} \rangle
(8)
 = e^{tb} e^{\mu a t + \sigma^2 (at)^2/2}
(9)
 = e^{tb + \mu a t + \sigma^2 a^2 t^2/2}
(10)
 = e^{(b + a\mu)t + a^2\sigma^2 t^2/2},
(11)

which is of the standard form with

 \mu' = b + a\mu
(12)
 \sigma'^2 = a^2\sigma^2.
(13)

For a weighted sum of independent normally distributed variates x_i with means \mu_i and variances \sigma_i^2,

 y = \sum_{i=1}^n a_i x_i,
(14)

the moment-generating function is given by

 M(t) = \langle e^{yt} \rangle
(15)
 = \left\langle \exp\left(t \sum_{i=1}^n a_i x_i\right) \right\rangle
(16)
 = \langle e^{a_1 t x_1} e^{a_2 t x_2} \cdots e^{a_n t x_n} \rangle
(17)
 = \prod_{i=1}^n \langle e^{a_i t x_i} \rangle
(18)
 = \prod_{i=1}^n \exp\left(a_i \mu_i t + \tfrac{1}{2} a_i^2 \sigma_i^2 t^2\right),
(19)

where the factorization in (18) is justified by the independence of the x_i.

Setting this equal to the moment-generating function of a single normal variate with mean \mu and variance \sigma^2,

 \exp\left(\mu t + \tfrac{1}{2}\sigma^2 t^2\right),
(20)

gives

 \mu = \sum_{i=1}^n a_i \mu_i
(21)
 \sigma^2 = \sum_{i=1}^n a_i^2 \sigma_i^2.
(22)

Therefore, a weighted sum of independent normal variates is itself normal, with mean given by the weighted sum of the individual means and variance given by the sum of the individual variances weighted by the squared coefficients.
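The same kind of simulation (again with assumed illustrative weights and parameters) confirms (21) and (22) for a weighted sum of three independent normal variates:

    import numpy as np

    rng = np.random.default_rng(2)
    a = np.array([0.5, -1.0, 2.0])      # illustrative weights (assumed)
    mus = np.array([1.0, 0.0, -2.0])
    sigmas = np.array([1.0, 2.0, 0.5])

    xs = rng.normal(mus, sigmas, size=(1_000_000, 3))
    y = xs @ a

    print(y.mean(), a @ mus)                  # ~ -3.5, matching (21)
    print(y.var(), np.sum(a**2 * sigmas**2))  # ~ 5.25, matching (22)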

If the x_i are independent and normally distributed with mean 0 and variance \sigma^2, define

 y_i = \sum_j c_{ij} x_j,
(23)

where c obeys the orthogonality condition

 \sum_k c_{ik} c_{jk} = \delta_{ij},
(24)

with \delta_{ij} the Kronecker delta. Then the y_i are also independent and normally distributed with mean 0 and variance \sigma^2.
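This can be seen numerically as well. The sketch below builds a random orthogonal matrix c via a QR decomposition (an assumed construction, for illustration only) and checks that the sample covariance of the transformed variates is approximately \sigma^2 I, i.e., that the y_i are uncorrelated with unchanged variance:

    import numpy as np

    rng = np.random.default_rng(3)
    n, sigma = 4, 1.5

    # Random orthogonal c (QR of a Gaussian matrix): sum_k c_ik c_jk = delta_ij.
    c, _ = np.linalg.qr(rng.normal(size=(n, n)))

    x = rng.normal(0.0, sigma, size=(1_000_000, n))
    y = x @ c.T                              # y_i = sum_j c_ij x_j

    print(np.cov(y, rowvar=False).round(2))  # ~ sigma^2 * I = 2.25 * I

Since the y_i are jointly normal, zero correlation here is equivalent to independence.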

Cramér proved the converse of this result in 1936: if X and Y are independent variates and X + Y has a normal distribution, then both X and Y must be normal. This result is known as Cramér's theorem.


See also

Cramer's Theorem, Normal Difference Distribution, Normal Distribution, Normal Product Distribution, Normal Ratio Distribution


Cite this as:

Weisstein, Eric W. "Normal Sum Distribution." From MathWorld--A Wolfram Web Resource. https://mathworld.wolfram.com/NormalSumDistribution.html
