Copyright © National Academy of Sciences. All rights reserved.
The Future of Computing Performance:   Game Over or Next Level?
Early in the 21st century, improvements in single-processor performance slowed, as measured in instructions executed per second, and such performance now improves at a very modest pace, if at all. This abrupt shift is due to fundamental limits in the power efficiency of complementary metal-oxide-semiconductor (CMOS) integrated circuits (used in virtually all computer chips today) and apparent limits in the efficiencies that can be exploited in single-processor architectures. Reductions in transistor size continue apace, and so more transistors can still be packed onto chips, albeit without the speedups seen in the past. As a result, the computer-hardware industry has commenced building chips with multiple processors. Current chips range from several complex processors to hundreds of simpler processors, and future generations will keep adding more. Unfortunately, that change in hardware requires a concomitant change in the software programming model. To use chip multiprocessors, applications must use a parallel programming model, which divides a program into parts that are then executed in parallel on distinct processors. However, much software today is written according to a sequential programming model, and applications written this way cannot easily be sped up by using parallel processors.
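The contrast between the two programming models can be illustrated with a minimal sketch (not from the report): the same summation written once as a single sequential instruction stream and once divided into independent parts that can be mapped onto distinct processors. The function names and chunking scheme here are illustrative assumptions, not a prescribed method.

```python
# Minimal sketch: sequential vs. parallel programming models.
# The names and the chunking strategy are illustrative, not canonical.
from concurrent.futures import ProcessPoolExecutor


def sequential_sum(data):
    # Sequential model: one instruction stream on one processor.
    total = 0
    for x in data:
        total += x
    return total


def parallel_sum(data, workers=4):
    # Parallel model: divide the program into independent parts,
    # execute the parts on distinct processors, combine the results.
    chunk = (len(data) + workers - 1) // workers
    parts = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(sum, parts))


if __name__ == "__main__":
    data = list(range(1_000_000))
    assert sequential_sum(data) == parallel_sum(data)
```

A sum parallelizes cleanly because the parts are independent and the combining step is associative; most sequential programs lack such an obvious decomposition, which is why they cannot easily be sped up by parallel processors.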
The only foreseeable way to continue advancing performance is to match parallel hardware with parallel software and ensure that the new software is portable across generations of parallel hardware. There has been genuine progress on the software front in specific fields, such as some scientific applications and commercial searching and transactional applications. Heroic programmers can exploit vast amounts of parallelism, domain-specific languages flourish, and powerful abstractions hide complexity. However, none of those developments comes close to the ubiquitous support for programming parallel hardware that is required to ensure that IT's effect on society over the next two decades will be as stunning as it has been over the last half-century.
For those reasons, the Committee on Sustaining Growth in Computing Performance recommends that our nation place a much greater emphasis on IT and computer-science research and development focused on improvements and innovations in parallel processing, and on making the transition to computing centered on parallelism. The following should have high priority:
·  Algorithms that can exploit parallel processing;
·  New computing "stacks" (applications, programming languages, compilers, runtime/virtual machines, operating systems, and architectures) that execute parallel rather than sequential programs and that effectively manage software parallelism, hardware parallelism, power, memory, and other resources;