
Why a Parallel and Distributed Computing Curriculum?


A short history of computing:

In the beginning, there was the von Neumann architecture. The first digital computers comprised a functional unit (the processor) that communicated with a memory unit. Humans interfaced with a computer using highly artificial “languages.” Humans seldom interfaced with more than one computer at a time, and computers never interfaced with one another. Much of the evolution of digital computers over the roughly seven decades of their existence has consisted of technical improvements: functional units and memories have become dramatically faster; languages have become dramatically more congenial to the human user.

In the earliest days, it seemed as though one could speed up digital computers almost without limit by improving technology: in rapid succession, vacuum tubes gave way to solid-state devices, and these were in turn replaced by integrated circuits, most notably through very-large-scale integration (VLSI). One could speed up VLSI circuits impressively by shrinking “feature sizes” within the circuits and by increasing clock rates. Despite these impressive improvements, the handwriting was on the wall: the fastest integrated circuits ran “hot,” presaging that power-related issues (e.g., heat dissipation) would become significant before long, and shrinking feature sizes would ultimately run up against the immutable sizes of atoms.

As early as the 1960s, visionaries working along one branch of digital computers’ evolutionary tree began envisioning an alternative road toward faster digital computers: the replication of computer components, together with the development of tools that allow multiple components to cooperate in the activity of digital computing.

The first digital computers that deviated from the von Neumann architecture can be viewed as hydras (in analogy with the many-headed mythical beast): they were essentially von-Neumann-esque computers endowed with multiple processors. This development enabled faster computing - several instructions could be executed simultaneously - but it also forced the human user (by now called the programmer) to pay attention to coordination among the processors.
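
To make the coordination concern concrete, here is a minimal sketch of our own (not part of the original history) in C++, whose standard library gained threads in C++11. Two threads stand in for two processors sharing one memory and increment a single counter; the std::mutex is the coordination that keeps their interleaved read-modify-write steps from losing updates.

    // A sketch of the shared-memory coordination problem described above
    // (illustrative code, not part of the curriculum text). Two threads play
    // the role of two processors sharing one memory. Without the mutex, their
    // interleaved read-modify-write updates to `counter` can be lost.
    #include <iostream>
    #include <mutex>
    #include <thread>

    int main() {
        long counter = 0;
        std::mutex counter_lock;           // the coordination mechanism

        auto work = [&counter, &counter_lock] {
            for (int i = 0; i < 1000000; ++i) {
                std::lock_guard<std::mutex> guard(counter_lock);  // coordinate access
                ++counter;
            }
        };

        std::thread t1(work), t2(work);    // two instruction streams run simultaneously
        t1.join();
        t2.join();

        std::cout << counter << '\n';      // 2000000 only because access is coordinated
        return 0;
    }

Compiled with a C++11 (or later) compiler and a threading flag such as -pthread, the program prints 2000000; deleting the lock_guard line typically yields a smaller, run-dependent total - precisely the kind of bug that coordination is meant to prevent.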

An important offspring of the hydra-like shared-memory computers had multiple memory boxes in addition to multiple processors. For efficiency, certain processors had preferential - that is, faster - access to certain memory boxes, which introduced locality to the growing list of concerns the programmer had to deal with. Additionally, since each processor-memory box pair could function as an independent von Neumann computer, the programmer now had to orchestrate communication among the computers, which “talked” to one another across an interconnection network.
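
The message-passing style that such machines demand can be illustrated with another small sketch; we use the standard MPI library’s C interface from C++ purely as an example - the proposal itself is not tied to MPI. Process 0 owns a value in its private memory and must explicitly send it across the network before process 1 can see it.

    // Illustrative message passing between two processor-memory pairs
    // (an example of ours, not a prescription of the curriculum).
    #include <cstdio>
    #include <mpi.h>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);

        int rank = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   // which computer am I?

        if (rank == 0) {
            int payload = 42;                   // exists only in rank 0's memory
            MPI_Send(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            int payload = 0;
            MPI_Recv(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            std::printf("rank 1 received %d\n", payload);
        }

        MPI_Finalize();
        return 0;
    }

Launched as two processes (e.g., with mpirun -np 2), the ranks behave exactly like the independent von Neumann computers described above: nothing is shared unless it is explicitly communicated.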

It was a short conceptual leap from the preceding “multiple computers in a box” computing platform to clusters whose computers resided “close” to one another and intercommunicated over a local area network (LAN). Among the added concerns arising from the evolution of clusters was the need to account for the greater variability in the latency of inter-computer communications. So-called “parallel” computing was beginning to sport many of the characteristics of distributed computing, wherein computers share no physical proximity at all.

Perhaps the ultimate step in this evolution has been the development, under a variety of names, of Internet-based collaborative computing, wherein geographically dispersed (multi-)computers intercommunicate over the Internet in order to cooperatively solve individual computing problems. Issues such as trust and temporal predictability now join the panoply of other concerns that a programmer must deal with.

Into all of these advances, architects have mixed technical mechanisms such as multithreading, pipelining, superscalar instruction issue, and short-vector instructions. All of this heterogeneous parallelism is now wrapped into commonly encountered computing platforms - in addition to the growing use of vector-threaded co-processors (such as GPUs) for graphics and scientific computing.

Programming languages have tended to follow an evolutionary path not unlike that of hardware. There have been many attempts to create languages that support abstract models of parallelism, or that are tailored to specific parallel architectures, but most have met with only limited success. Even so, popular languages have gradually incorporated parallelism, and languages that focus on various modalities of parallelism have gained a modicum of popularity, so that today it is difficult to ignore parallel computing even in the core of a CS or CE undergraduate programming curriculum. Indeed, we propose that it is a disservice to students not to build a substantial dose of parallel computing into this core.
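
C++ offers one concrete illustration of this drift (an example chosen by us, not an endorsement by the proposal): since C++17, the standard algorithms accept an execution policy, so parallelizing a sort is a one-token change - provided the implementation supplies a parallel backend (GCC, for instance, relies on Intel TBB).

    // A mainstream language absorbing parallelism: C++17 parallel algorithms.
    // Illustrative only; requires a standard library with a parallel backend.
    #include <algorithm>
    #include <execution>
    #include <random>
    #include <vector>

    int main() {
        std::vector<double> data(1000000);
        std::mt19937 gen(42);
        std::uniform_real_distribution<double> dist(0.0, 1.0);
        for (double& x : data) x = dist(gen);

        // std::sort(data.begin(), data.end());                    // sequential
        std::sort(std::execution::par, data.begin(), data.end());  // parallel
        return std::is_sorted(data.begin(), data.end()) ? 0 : 1;
    }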

In the past, it was possible to relegate issues regarding parallelism - such as coordination and locality - to advanced courses that treat subjects such as operating systems, databases, and high-performance computing: the issues could safely be ignored in the first years of a computing curriculum. But current-day changes in architecture are driving advances in languages that necessitate new problem-solving skills and knowledge of parallel and distributed processing algorithms at even the earliest stages of an undergraduate career. This work is our response to these changes.


What should every (computer science/engineering) student know about computing?

It has been decades since it was “easy” to supply undergraduates with everything that they need to know about computing as they venture forth into the workforce. This challenge has become ever more daunting with each successive stage of the evolution described in the preceding section. In addition to enabling undergraduates to understand the fundamentals of “von Neumann computing,” we must now prepare them for the very dynamic world of parallel and distributed computing.

This curriculum proposal seeks to address this challenge in a manner that is flexible and broad, always allowing for local variations in emphasis. The field of parallel and distributed computing (PDC) is changing too rapidly for a rigid proposal to remain valuable to the community for a useful length of time. But it is essential that curricula begin the process of incorporating parallel thinking into the core courses. Thus, the proposal attempts to identify basic concepts and learning goals that are likely to retain their relevance for the foreseeable future.

We see PDC topics as being most appropriately sprinkled throughout a CS/CE curriculum in a way that enhances what is already taught and that melds parallel and distributed computing with existing material in whatever ways are most natural for a given institution or program. While we hold that relegating PDC subjects to a separate course is not the best way to shift students’ mindset away from purely sequential thinking, we recognize that the separate-course route may work better for some programs.