CSCI 338
Parallel Processing
Program 5: Pthreads
| Assigned | Thursday, April 10, 2025 |
|---|---|
| Final Due Date | Thursday, April 17 |
Overview
In this assignment, you will write a parallel shared memory program using pthreads that counts the number of pixels in an image with each rgb color saturation value. You will also revise one of the algorithms you implemented with MPI to use pthreads.
How the Honor Code Applies to This Assignment
This is a group-eligible assignment, meaning you should work with your class-approved team of students to produce your assignments, evaluation, and final reports. You are also encouraged to ask non-team members questions about clarification, language syntax, and error message interpretation, but you are not permitted to view or share other teams' code or written design notes for Part One. Please see the syllabus for more details.
Part One: Counting rgb saturation values
The first part of this assignment has you gaining familiarity with parallelizing an algorithm using pthreads. A description of the algorithm you are parallelizing can be found in this Project Description. The starter code will be provided via gitlab.
Part Two: Rewriting Your Graph Algorithms
For this part, you will rewrite one of the graph algorithms you implemented using MPI to use pthreads instead. To do this, you'll want to look for the places where OpenMP pragmas are currently used to parallelize the code.
Some of your algorithms may be using parallel versions of library code written using OpenMP. If this is the case, you'll need to either use a serial version of that library code or implement a version using pthreads. Please reach out to me for help with this issue.
Enabling Pthreads and Disabling OpenMP
To get started, remove the `-fopenmp` flag from the compilation line in the top-level `CMakeLists.txt` and rebuild your code. You will also need to add the pthreads library to the `CMakeLists.txt` files. Once you do this, compiler errors should help you figure out what changes need to be made.
One way of doing this is to take the codebase I created for the MPI project and change the top-level `CMakeLists.txt` to use a `find_package(Threads REQUIRED)` line instead of the one with `MPI::MPI_CXX`, and to change the `target_link_libraries()` command to `target_link_libraries(${bench_name} roaring Threads::Threads)`. In the `examples` directory, you'll need to change the `target_link_libraries` call to use `Threads::Threads` instead of `MPI::MPI_CXX`.
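Putting those changes together, the relevant top-level `CMakeLists.txt` lines would look roughly like the sketch below (the `${bench_name}` and `roaring` names come from the MPI project codebase; verify them against your own files):

```cmake
# Replace the find_package line that located MPI with:
find_package(Threads REQUIRED)

# Link the benchmark target against pthreads instead of MPI::MPI_CXX:
target_link_libraries(${bench_name} roaring Threads::Threads)
```

The `Threads::Threads` imported target handles the compiler and linker flags (such as `-pthread`) for you, so you shouldn't need to add them by hand.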
Please reach out to me with questions as you work on getting your code to compile with pthreads.
Evaluating Your Graph Algorithms
You are not expected to come up with the absolutely most efficient pthread version of your algorithms. However, you are expected to make smart design decisions, and your runtimes shouldn't be too far off from those for OpenMP unless there is a very good reason that you can explain.
For evaluation of the performance of your code, please collect timing information for your code when it is run with 1 to 16 threads using both pthreads and OpenMP. If you find that more threads result in diminished performance, investigate the cause of this reduced performance and try to fix the bottleneck if possible.
Evaluation
Each part of the assignment will be worth half of your grade. Part 1 will be graded on the correctness of your pthread parallelization and a brief write-up. Part 2 will be graded on the design of your coding approach as well as your presentation of your design choices and performance results. When presenting your approach, explain why you made your choices regarding computational grouping and communication. You'll also want to explain any performance losses you observe, both across platforms and as your number of pthreads increases.
Submitting Your Work
Please submit your code and your presentation slides as PDF via gitlab.