The Mystery of the Massively Parallel Processor

Posted on Fri, July 2, 2010

Several months ago, according to statistics that measure the public’s access to the museum’s collections via our web site, the one artifact on exhibit at the Udvar-Hazy Center that our online users visited the most was… the Massively Parallel Processor.

Massively Parallel Processor


The what? The Massively Parallel Processor, or MPP, is a pair of large blue boxes crammed full of circuit boards, tucked away in the northwest corner of the McDonnell Space hangar at the Udvar-Hazy Center. It is admittedly not much to look at, compared to, say, the Enola Gay, which is currently the most queried artifact online. While the web and new media people try to figure out whether the MPP’s exalted online status was an anomaly, let me explain what the MPP is. Perhaps after I describe it, you may feel that it deserves more recognition.

We all know how fast computer technology has advanced in the past few decades—many of us carry hand-held devices that have more computing power than the supercomputers of an earlier era, never mind the Apollo Guidance Computer that took astronauts to the Moon between 1968 and 1972. But in spite of those advances, the basic design of computers has not changed that much.

Nearly all of them follow a design first sketched by the Hungarian mathematician John von Neumann, in a report he wrote for the U.S. Army in 1945. In that report, he argued that an optimal computer would have a single processor, which performed basic operations on a single piece of data at a time, transferring each piece to and from a high-speed memory. He argued that such a design was the only way that human beings could manage the complexity of computer design, especially the complexity of programming. Over the succeeding decades, computer circuits have gotten much faster, and memories have gotten much larger. And of course computers have gotten much smaller and use far less power. But the basic “von Neumann architecture,” with its single instruction stream and single channel to memory, has remained.
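To make the idea concrete, here is a minimal sketch (in Python, purely for illustration) of the von Neumann model the report describes: a single processor fetches one value at a time from memory, operates on it, and writes the result back, so every step passes through the same single channel to memory.

```python
# Toy illustration of the von Neumann model: one processor, one
# instruction stream, one datum moved to and from memory per step.
memory = [3, 1, 4, 1, 5, 9, 2, 6]

for address in range(len(memory)):   # single instruction stream
    value = memory[address]          # fetch one piece of data
    value = value * 2                # operate on it
    memory[address] = value          # store it back

print(memory)  # [6, 2, 8, 2, 10, 18, 4, 12]
```

However large the memory grows, the processor still touches it one location at a time — the channel in the middle of this loop is the “bottleneck” discussed below.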

John von Neumann Credit: United States Department of Energy


The Massively Parallel Processor was an experimental machine intended to break what has been called the “von Neumann bottleneck,” by having a program manipulate not one but thousands of pieces of data at a time—in this case, over 16,000 memory locations, each with its own associated processor. That was especially important for computers that processed images, which consist of thousands of picture elements, or “pixels,” each of which needs to be manipulated, but each of which also bears a close relationship to its immediate neighbors.

The MPP was built for the Goddard Space Flight Center in Greenbelt, Maryland, by the Goodyear Aerospace Corporation of Akron, Ohio—a division of Goodyear well known for its lighter-than-air craft, but a company that also was a pioneer in supplying advanced computers to military and aerospace customers. It was designed in the late 1970s, delivered to Goddard in 1983, and operated into the 1990s.

Was the MPP a success? It worked well, and it demonstrated that a parallel architecture was feasible, and that it was indeed possible to program it. It did not lead to a line of “non-von-Neumann” computers; the laptops and hand-held devices we use employ advanced versions of the classic architecture. But in many current high-performance computer installations, such as those used by Google to search the Internet, parallel architectures are heavily used. Perhaps the large number of Internet queries is coming from Google’s server farms, which are visiting the National Air and Space Museum’s website to check up on their grandfather.
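The data-parallel style the MPP pioneered can be sketched with a modern array library. This is not the MPP’s own programming environment, just a rough analogue (in Python with NumPy, with an assumed 128×128 image size) of issuing one logical instruction that updates every pixel at once, instead of looping over pixels one at a time.

```python
import numpy as np

# A 128x128 "image" -- 16,384 pixels, matching the scale of the
# MPP's array of processing elements.
image = np.arange(128 * 128, dtype=np.int64).reshape(128, 128)

# One logical instruction, applied to all 16,384 data elements
# "simultaneously": brighten every pixel by 10.
brightened = image + 10

# Each pixel also relates to its immediate neighbors, so image
# operations like this 4-neighbor average map naturally onto a
# grid of processors -- here expressed without a per-pixel loop.
padded = np.pad(image, 1, mode="edge")
neighbor_avg = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                padded[1:-1, :-2] + padded[1:-1, 2:]) // 4
```

On the MPP the parallelism was real hardware, one simple processor per element; in NumPy it is a library convention, but the program structure — whole-array instructions rather than element-by-element loops — is the same idea.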

Related Topics