Parallel programs execute multiple instructions simultaneously to increase their overall processing throughput compared to sequential programs, which only execute a single series of instructions. In this video, learn about the advantages of parallel programming and its limitations, such as the critical path.
- Let's start by looking at what parallel computing means and why it's useful, why it's worth the extra effort to write parallel code. A computer program is just a list of instructions that tells a computer what to do, like the steps in a recipe that tell me what to do when I'm cooking. Like a computer, I simply follow those instructions to execute the program. So, to execute the program, or recipe, to make a salad, I'll start by chopping some lettuce and putting it on a plate. Then I'll slice up a cucumber and add it. Next, I'll slice and add a few chunks of tomato. I'll try not to cry while I slice the onion. And finally, I add the dressing. Done. As a single cook working alone in the kitchen, I'm a single processor executing this program in a sequential manner. The program is broken down into a sequence of discrete instructions that I execute one after another, and I can only execute one instruction at any given moment. There's no overlap between them. This type of serial, or sequential, programming is how software has traditionally been written, and it's how new programmers are usually taught to code, because it's easy to understand. But it has its limitations. The time it takes for a sequential program to run is limited by the speed of the processor and how fast it can execute that series of instructions. I'll slice and chop ingredients as fast as I can, but there's a limit to how quickly I can complete all of those tasks by myself. Each step takes some amount of time, and in total, it takes me about three minutes to execute this program and make a salad. That's my personal speed record, and I can't make a salad any faster than that without help. - That's my cue. Two cooks in the kitchen represent a system with multiple processors. Now we can break down the salad recipe and execute some of those steps in parallel. - While I chop the lettuce... - I'll slice the cucumber. - And when I'm done chopping lettuce, I'll slice the tomatoes. - And I'll chop the onion. 
- And finally, I'll add some dressing. - Hold on. Now it's ready. - Finally, the dressing. - Working together, we broke the recipe into independent parts that can be executed simultaneously by different processors. While I was slicing cucumbers and onions, Barron was chopping lettuce and tomatoes. That final step of adding dressing was dependent on all of the previous steps being done, so we had to coordinate with each other for that step. By working together in parallel, it only took us two minutes to make the salad, which is faster than the three minutes it took Barron to do it alone. Adding a second cook in the kitchen doesn't necessarily mean we'll make the salad twice as fast, because having extra cooks in the kitchen adds complexity. We have to spend extra effort communicating with each other to coordinate our actions. - And there might be times when one of us has to wait for the other cook to finish a certain step before we can continue on. Those coordination challenges are part of what makes writing parallel programs harder than simple sequential programs. But that extra work can be worth the effort, because when done right, parallel execution increases the overall throughput of a program, enabling us to break down large tasks to accomplish them faster, or to accomplish more tasks in a given amount of time. Some computing problems are so large or complex that it's not practical, or even possible, to solve them with a single computer. Web search engines that process millions of transactions every second are only possible thanks to parallel computing. - In many industries, the time saved using parallel computing also leads to saving money. The advantage of being able to solve a problem faster often outweighs the cost of investing in parallel computing hardware.
- Parallel computing architectures
- Shared vs. distributed memory
- Thread vs. process
- Execution scheduling
- The thread lifecycle in C++
- Mutual exclusion
- Locking in recursive and shared mutexes
- Acquiring a lock on a mutex with a try lock
- Resolving deadlock and livelock conditions