CSCI 338

Parallel Processing


Program 3: MPI

Assigned Tuesday, February 25, 2025
Final Due Date Thursday, March 6

Overview

This assignment has two parts. The first part gives you some experience parallelizing an existing piece of code using MPI. The second part asks you to plan how you will parallelize your graph algorithms using MPI.

How the Honor Code Applies to This Assignment

This is a group-eligible assignment: you should work with your class-approved team of students to produce your assignments, evaluation, and final reports. You are also encouraged to ask non-team members questions about clarification, language syntax, and error-message interpretation, but you are not permitted to view or share other teams' code or written design notes for Part 1. Please see the syllabus for more details.

Part One: MPI

The first part of this assignment gives you hands-on experience parallelizing an algorithm using MPI. A description of the algorithm you are parallelizing can be found in this Project Description. The starter code will be provided via GitLab.

Part Two: Planning MPI Implementations of Graph Algorithms

In this part of the assignment, you will come up with a plan for parallelizing your graph algorithms. This is only a design and may change when you implement the MPI versions in the next programming assignment. Your deliverable is a PDF document in which, for each graph algorithm, you describe the following information:

For each section of code you plan to parallelize, specify:

  1. How you will divide the work among processes. The granularity will likely differ from how the parallelization is currently done with OpenMP.
  2. What data will be distributed to each process and how you plan to perform that distribution (e.g., which MPI function)
  3. What new data structures will need to be created or copied so that processes can work on private data
  4. What data needs to be communicated between processes and how you plan to perform that communication (e.g., which MPI function)
  5. What synchronization needs to happen between processes and how you plan to perform that synchronization (e.g., which MPI function)

You should also provide some brief justification for your overall parallelization design choices.

Evaluation

Each part of the assignment will be worth half of your grade. Part 1 will be based on the correctness and brief write-up of your MPI parallelization. Part 2 will be based on the design document you have created, in particular the thought process behind your choices.

Submitting Your Work

Please submit your code, write-up, and Makefile for Part 1 via GitLab. For Part 2, please submit the PDF of your design write-up via GitLab.