Sequential vs. parallel computing

From the course: Parallel and Concurrent Programming with Java 1

- Let's start by looking at what parallel computing means and why it's useful, and why it's worth the extra effort to write parallel code. A computer program is just a list of instructions that tells a computer what to do, like the steps in a recipe that tell me what to do when I'm cooking. Like a computer, I simply follow those instructions to execute the program. So to execute the program, or recipe, to make a salad, I'll start by chopping some lettuce and putting it on a plate. (chopping) Then I'll slice up a cucumber and add it. (slicing) Next I'll slice and add a few chunks of tomato. (slicing) I'll try not to cry while I slice the onion. (slicing) And finally, I add the dressing. Done.

As a single cook working alone in the kitchen, I'm a single processor executing this program in a sequential manner. The program is broken down into a sequence of discrete instructions that I execute one after another, and I can only execute one instruction at any given moment. There's no overlap between them. This type of serial, or sequential, programming is how software has traditionally been written, and it's how new programmers are usually taught to code because it's easy to understand, but it has its limitations. The time it takes a sequential program to run is limited by the speed of the processor and how fast it can execute that series of instructions. I'll slice and chop ingredients as fast as I can, but there's a limit to how quickly I can complete all of those tasks by myself. Each step takes some amount of time, and in total it takes me about three minutes to execute this program and make a salad. That's my personal speed record, and I can't make a salad any faster than that without help.

- That's my cue. Two cooks in the kitchen represent a system with multiple processors. Now we can break down the salad recipe and execute some of those steps in parallel. - While I chop the lettuce. - I'll slice the cucumber. - And when I'm done chopping lettuce, I'll slice the tomatoes. - And I'll chop the onion. - And finally, I'll add some dressing. - Hold on. - Now it's ready. - Finally, the dressing.

- Working together, we've broken the recipe into independent parts that can be executed simultaneously by different processors. While I was slicing cucumbers and onions, Barron was chopping lettuce and tomatoes. The final step of adding the dressing depended on all of the previous steps being done, so we had to coordinate with each other for that step. By working together in parallel, it only took us two minutes to make the salad, which is faster than the three minutes it took Barron to do it alone. Adding a second cook to the kitchen doesn't necessarily mean we'll make the salad twice as fast, because having extra cooks in the kitchen adds complexity. We have to spend extra effort communicating with each other to coordinate our actions.

- And there might be times when one of us has to wait for the other cook to finish a certain step before we can continue on. Those coordination challenges are part of what makes writing parallel programs harder than writing simple sequential programs, but that extra work can be worth the effort, because when done right, parallel execution increases the overall throughput of a program, enabling us to break down large tasks and accomplish them faster, or to accomplish more tasks in a given amount of time. Some computing problems are so large or complex that it's not practical, or even possible, to solve them with a single computer. Web search engines that process millions of transactions every second are only possible thanks to parallel computing.

- In many industries, the time saved using parallel computing also leads to saving money. The advantage of being able to solve a problem faster often outweighs the cost of investing in parallel computing hardware.
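To make the analogy concrete in Java, here is a minimal sketch (not part of the course transcript) of the two-cook version: each cook runs on its own thread, and the main thread joins on both before the dependent final step of adding the dressing. The class name, ingredient names, and timings are all invented for illustration.

    public class SaladDemo {

        // Simulate one preparation step that takes some time.
        static void prepare(String step, long millis) {
            System.out.println(Thread.currentThread().getName() + ": " + step);
            try {
                Thread.sleep(millis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }

        public static void main(String[] args) throws InterruptedException {
            // Each cook is a separate thread, like a separate processor.
            Thread cookA = new Thread(() -> {
                prepare("chopping lettuce", 500);
                prepare("slicing tomatoes", 500);
            }, "Cook A");
            Thread cookB = new Thread(() -> {
                prepare("slicing cucumber", 500);
                prepare("chopping onion", 500);
            }, "Cook B");

            long start = System.currentTimeMillis();
            cookA.start();
            cookB.start();

            // The dressing depends on every other step, so this is the
            // coordination point: wait for both cooks to finish first.
            cookA.join();
            cookB.join();
            prepare("adding dressing", 100);

            System.out.println("Salad done in about " +
                    (System.currentTimeMillis() - start) + " ms");
        }
    }

Run on a single thread, the four preparation steps would take roughly 2,000 ms plus the dressing; with two threads they overlap, so the total drops to roughly half, mirroring the two-minute versus three-minute salads above. The join calls are the code equivalent of the cooks waiting on each other before the final, dependent step.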
