
PNG Optimizer 2 Multithreading



Well, technically I plan on doing a parallel program using processes, but whatever...






OK, so now is as good a time as any to resurrect a long-dead project. In this first instalment I'm just going to start with some basic concepts about how we could do our multithreading... OK, I'm actually going to use processes and not threads, but I misnamed this tutorial because I'm pretty sure most people really mean parallel programming when they say multithreading. On a basic level the two are pretty much the same thing, but they have some important differences. The most important one for me is that using multiple processes is way faster in my language of choice, Python (threads there are limited by the Global Interpreter Lock), and I also find multiple processes simpler to work with thanks to the fork() function, which we'll talk about later.
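We'll cover fork() properly later, but as a quick taste, here's a minimal sketch of what a fork() call looks like in Python. None of this is the optimizer yet; it's purely illustrative, and note that os.fork() only exists on Unix-like systems.

    import os

    pid = os.fork()  # duplicates the current process
    if pid == 0:
        # Child process: fork() returned 0 here.
        print("child: running in my own process, pid", os.getpid())
        os._exit(0)  # leave the child without running parent cleanup
    else:
        # Parent process: fork() returned the child's pid.
        os.waitpid(pid, 0)  # block until the child finishes
        print("parent: child", pid, "is done")

The key point is that after fork() there are two copies of the program running at the same time, and the operating system can schedule each one on a different core.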

So what are we going to talk about now? Well, we're going to talk about how to make a program parallel, i.e. execute more than one thing at the same time using different processors. Just about everyone has at least a dual-core machine these days, and if you were using my PNG optimizer you'd notice that it takes quite a while on large PNGs; it would sure be nice if we could do two at the same time. So we're going to jump right into the concept of parallelism.

When we use a computer we give it tasks and it executes them the way we tell it to. This is similar to planning a task in the real world, with some important distinctions that I won't get into at this point. Just like in the real world, you have to plan out a task if you want to make optimum use of your resources. Take, for example, the picture I've put with this article. Imagine you have 100 people to build that building; you'd probably go a lot quicker than you would with only one. You could start with them all digging the foundation with shovels, then have them all carry materials to the site, then set them to work on different parts of the large structure. Now think about the statues in front of the building. It wouldn't be nearly as easy to have 100 people working productively on those, at least not once the materials were in place for a single artist or two to go to work on each statue.

It's the same sort of situation inside a computer program meant to do more than one thing at a time: you need to figure out a way for each processor (or core) to be productive, otherwise you're wasting time. Also consider digging a building's foundation with 100 people and one shovel. Those 100 people might spend all day arguing over who gets the shovel and not actually get any work done. In the same way, there are scenarios where moving to a parallel architecture can slow your program down a great deal. If you wrote an application for a cluster of computers connected over a LAN and wanted to distribute a relatively simple task, you might spend more time sending the problem set to the other computers than it would take to just do all the calculations on one machine alone.

That leads us to the actual problem we're trying to solve. We have a bunch of image files spread out over a storage device, and we want to go through them one by one and make them smaller. There are two basic approaches I can think of:

  1. Find a way to use multiple processors (from now on I'm just going to say processors, with cores implied) on a per-image basis, to speed up the time each image takes to shrink.
  2. Have more than one processor running the same programs we previously ran, but on different images.

The first approach would work if it's even possible, but it would require a tonne of knowledge about PNGs, and it would most likely end up in a situation similar to the one-shovel analogy, with processes constantly waiting for one another.

The second seems more reasonable (and a lot easier to do). We just need to spawn multiple instances of the programs we're already using, so we can process more than one image at a time. It's expected that there are a lot of images (otherwise, why would you bother with the script?), so if we can divide them up between multiple processors we should make efficient use of them. Which brings us to the next question: how should we divide them up?
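To make approach 2 concrete, here's a minimal sketch using Python's standard multiprocessing module rather than raw fork() (which we'll get to later). The pngcrush command and the list of files are just placeholders standing in for whatever the optimizer actually runs.

    import subprocess
    from multiprocessing import Pool

    def optimize(path):
        # Stand-in for the real work: run an external optimizer
        # on a single image. pngcrush is just an example tool.
        subprocess.call(["pngcrush", "-ow", path])
        return path

    if __name__ == "__main__":
        images = ["a.png", "b.png", "c.png", "d.png"]  # placeholder list
        # One worker process per core we want to keep busy.
        pool = Pool(processes=2)
        for done in pool.imap_unordered(optimize, images):
            print("finished", done)
        pool.close()
        pool.join()

A Pool like this hands images out to the workers for you, but that's only one way to split the work up.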

We’ll talk about that in the next post.
