Thinking in Pictures
General-purpose computing on graphics processing units — a brief history

Thinking in Pictures
In the mid-1970s, filmmaker George Lucas sought to capture his vision for a space opera called The Star Wars on motion picture film. The difficulty was that the technology did not yet exist to create the vast special-effects sequences required to tell an ancient story set in a galaxy far, far away. To solve this problem, Lucas acquired space in a vacant warehouse next to the Van Nuys Airport near Los Angeles, CA and assembled an interdisciplinary team of special-effects artists, model makers and engineers that later became known as Industrial Light and Magic (ILM).

Through much trial and error, ILM settled on the process of optical compositing to produce the required images. In this process, two or more objects are filmed separately and later combined into a single composite image. While this approach had been used in film for over 70 years, The Star Wars called for the position of the camera to be moved during the scene, a feat that required that the exact path of the camera be duplicated during each of the individual shots. If the motions of the camera were not controlled precisely, the objects in the composite scene would appear to jitter and jerk relative to each other during the sequence.

An ILM team led by John Dykstra developed a digital computer-controlled robotic camera system, later known as the “Dykstraflex,” that provided precise camera motion control in addition to programmable lens focus, film drive and shutter exposure time. The programmable Dykstraflex allowed a scene to be filmed multiple times running the same motion program, synchronizing the performance of different objects, models and actors. The resulting composite images appeared as smooth, sweeping sequences filmed from an impossible camera floating hundreds of miles above a planet or in the deepest reaches of space.

While ILM was assembling custom circuitry to develop the Dykstraflex, Intel was developing a mass-produced general-purpose 16-bit microprocessor named the “8086” as an upgrade to its successful 8-bit “8080.” The 8086 executed programs written in assembly language and also included instructions designed to support popular high-level languages such as Pascal. In 1979, a modified version of this chip with an 8-bit external data bus, known as the “8088,” became the central processing unit (CPU) of the original IBM PC. The fundamental design of the 8086 has been iterated through a long line of “x86” processors and is still in use today.

Intel introduced the “80286” in 1982 and, although its performance was more than double that of its 8086 predecessor, Intel decided to create a separate floating point unit (FPU) chip, dubbed the “80287,” to serve as a math coprocessor. The FPU contained specialized instructions optimized to perform mathematical calculations on real-valued numbers. These calculations ran concurrently with the CPU’s integer instructions, so the combined system was faster than either chip working alone.

Soon afterward, in 1983, Intel produced the “82720” Graphics Display Controller, which was optimized to calculate the display patterns of primitive geometric shapes and bitmaps. Although not directly influenced by the design of the Dykstraflex, this use of separate processors synchronized to a common system clock produced a composite result that could easily be mistaken for magic.

Advances that increased the density of integrated-circuit components permitted the design of the Intel “80486” to include the CPU and FPU on the same chip in 1989, and roughly two decades later the Intel Core i3/i5/i7 line began integrating all three processors — CPU, FPU and graphics processing unit (GPU) — onto the same die. In addition to Intel’s integrated GPUs, chip manufacturers NVIDIA and AMD (formerly ATI Technologies) currently lead the development of GPU technology with advancements in multi-core hardware and algorithms.

In 2006, Ian Buck published his Ph.D. dissertation at Stanford University, in which he demonstrated that the over-60-GFLOPS peak performance of then-available GPU hardware could be harnessed to solve complex computations in parallel using an approach known as “stream processing.” Rather than relying solely on the GPU’s built-in graphics algorithms, Buck developed an extension of ANSI C known as the Brook stream language that allowed programmers to enlist the computational power of the GPU in solving problems involving large data sets. Upon graduation, he was hired by NVIDIA, where the concepts presented in Brook were realized in a product named Compute Unified Device Architecture (CUDA). Various computer languages have since been extended to run algorithms on GPU processors using CUDA, an approach known collectively as general-purpose computing on graphics processing units (GPGPU).
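The core idea of stream processing can be sketched without any GPU at all: a single “kernel” function is applied uniformly to every element of a data stream, and because each element is computed independently, the work can be spread across thousands of GPU threads. The sketch below is an illustration of that idea in plain Python, not Brook or CUDA code; the names `saxpy_kernel` and `stream_map` are invented for this example.

```python
# Illustrative sketch of the stream-processing model (assumed names, not a
# real Brook or CUDA API): one kernel function is mapped over whole streams.

def saxpy_kernel(a, x, y):
    """Per-element kernel: the classic scalar multiply-add (SAXPY), a*x + y."""
    return a * x + y

def stream_map(kernel, a, xs, ys):
    """Apply the kernel to corresponding elements of two input streams.
    Each element is independent, so on a GPU every iteration of this loop
    could run on its own thread in parallel."""
    return [kernel(a, x, y) for x, y in zip(xs, ys)]

result = stream_map(saxpy_kernel, 2.0, [1.0, 2.0, 3.0], [10.0, 20.0, 30.0])
print(result)  # [12.0, 24.0, 36.0]
```

The key constraint — that the kernel sees only its own element and no shared mutable state — is what lets the hardware execute the stream in parallel without synchronization.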

Continuing Innovation
Many computationally intensive problems can benefit from the parallel architecture of modern multi-core GPU processors, including image processing for diagnostic instruments such as telescopes, medical imagers and flow-visualization systems. Perhaps the most exciting are hard problems that can be abstracted visually, such as evaluating error values throughout a multidimensional parameter space, traversing the nodes of a directed network tree, and spatial clustering and agent tracking in swarm applications. ILM continues to innovate in digital optical effects, as does its spin-off, Pixar Animation Studios. The revenue generated by the gaming and motion-picture industries is driving the advancement of GPU processors, while scientific advances enable ever-increasing speed and capability in a compositing process we call innovation.


William Weaver is an associate professor in the Department of Integrated Science, Business and Technology at La Salle University. He may be contacted at