Introduction to Parallel Processing
This course introduces graduate students to modern computer architecture design, with an emphasis on speedup and parallel processing techniques. It is a comprehensive study of parallel processing and its applications, from basic concepts to state-of-the-art parallel computer systems. The course begins with the need for parallel processing and the limitations of uniprocessors, then gives a broad overview of parallel processing concepts and their impact on computer architecture, covering the major paradigms: pipelining, superscalar and superpipelined execution, vector processing, multithreading, multi-core, multiprocessing, multicomputing, and massively parallel processing. We then address architectural support for parallel processing, including:
1. parallel memory organization and design;
2. cache design;
3. cache coherence strategies;
4. shared-memory versus distributed-memory systems;
5. symmetric multiprocessors (SMPs), distributed shared-memory (DSM) multiprocessors, multicomputers, and distributed systems;
6. processor design (RISC, superscalar, superpipelined, multithreaded, multi-core, and speculative designs);
7. the communication subsystem;
8. computer networks: routing algorithms and protocols, flow control, and reliable communication;
9. emerging technologies such as optical computing, optical interconnection networks, and optical memories;
10. parallel algorithm design, parallel programming, and software requirements; and
11. case studies of several commercial parallel computers from the TOP500 list of supercomputers.
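As a taste of the speedup analysis stressed in the course, the limitation of uniprocessors is often quantified with Amdahl's law: if only a fraction p of a program's execution time is parallelizable, the speedup on n processors is bounded by 1 / ((1 - p) + p/n). The sketch below is illustrative only and is not part of the course materials.

```python
# Illustrative sketch of Amdahl's law (not part of the course materials).
# A fraction p of execution time is parallelizable; the rest is serial.

def amdahl_speedup(p: float, n: int) -> float:
    """Maximum speedup with parallel fraction p on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

if __name__ == "__main__":
    # Even with 95% of the work parallelizable, speedup saturates
    # well below n as the processor count grows.
    for n in (2, 8, 64, 1024):
        print(f"n={n:5d}  speedup={amdahl_speedup(0.95, n):6.2f}")
```

Running the loop shows why massively parallel machines demand highly parallel algorithms: the serial fraction, however small, caps the achievable speedup.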
- Homework: 3-5 assignments
- Project: 1 term paper
- Exams: 2 midterm exams
- Typical grading policy: 50% midterms, 20% project, 25% homework, 5% participation