
Parallel Computing


Parallel computing is the execution of a computer program using multiple processors (CPUs) concurrently rather than a single processor. Let T(n,1) be the run-time of the fastest known sequential algorithm and let T(n,p) be the run-time of the parallel algorithm executed on p processors, where n is the size of the input.

The speedup is then defined as

 S(p) = T(n,1)/T(n,p),

i.e., the ratio of the sequential execution time to the parallel execution time. Ideally, one would like S(p)=p, which is called perfect speedup, although in practice this is rarely achieved. (In some situations, superlinear speedup, S(p)>p, is observed due to memory hierarchy effects, for example when each processor's share of the data fits in cache.)
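
For example, if a program takes T(n,1)=12 seconds to run sequentially and T(n,4)=4 seconds on p=4 processors, then S(4)=12/4=3, short of the perfect speedup of 4.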

Another metric to measure the performance of a parallel algorithm is efficiency, E(p), defined as

 E(p) = T(n,1)/(p T(n,p)) = S(p)/p.
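
Continuing the example above, E(4)=3/4=0.75, i.e., the four processors are used at 75% efficiency.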

One can use speedup and efficiency to analyze algorithms either theoretically, using asymptotic run-time complexity, or empirically, by measuring program execution times. When p is fixed, speedup and efficiency are equivalent measures, differing only by the constant factor p.
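
As an illustration of the empirical approach, the following is a minimal Python sketch that estimates S(p) and E(p) for a toy sum-of-squares task. The task, chunking scheme, and processor count are assumptions chosen for illustration, and T(n,1) is approximated by the single-process run-time of the same code rather than by the fastest known sequential algorithm.

import time
from multiprocessing import Pool

def work(bounds):
    # CPU-bound toy task: sum of squares over the half-open interval [lo, hi)
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def split(n, p):
    # Partition [0, n) into p roughly equal half-open intervals
    step = (n + p - 1) // p
    return [(lo, min(lo + step, n)) for lo in range(0, n, step)]

if __name__ == "__main__":
    n, p = 20_000_000, 4

    t0 = time.perf_counter()
    seq = work((0, n))
    t_seq = time.perf_counter() - t0   # stands in for T(n,1)

    t0 = time.perf_counter()
    with Pool(p) as pool:
        par = sum(pool.map(work, split(n, p)))
    t_par = time.perf_counter() - t0   # T(n,p)

    assert seq == par
    s = t_seq / t_par   # speedup S(p)
    e = s / p           # efficiency E(p)
    print(f"S({p}) = {s:.2f}, E({p}) = {e:.2f}")

On a machine with at least four cores, one would typically observe S(4) somewhat below 4, reflecting parallel overheads such as process start-up and combining the partial results.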


See also

Queuing Theory

This entry contributed by Jonathan Bentz


References

Scott, L. R.; Clark, T.; and Bagheri, B. Scientific Parallel Computing. Princeton, NJ: Princeton University Press, 2005.


Cite this as:

Bentz, Jonathan. "Parallel Computing." From MathWorld--A Wolfram Web Resource, created by Eric W. Weisstein. https://mathworld.wolfram.com/ParallelComputing.html
