Mar 06 2008

I found myself in a lull between storms a few days ago, so I decided to do some housecleaning in our makefile before the next code rush hit.  And therein lies a story…

We use one makefile to build all versions of PicLens for all browsers on all platforms.  It’s loosely patterned after the Google Gears project makefile – all source files are listed in one place, and make recursively invokes itself with different params to work its way through the various phases of the build process – checking dependencies, compiling source to obj, building the install bundles, etc.  We use GNU make to run the build as it is available on all the platforms we need – and then some!
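The overall shape is something like this – a minimal sketch, not our actual makefile; the phase target names are illustrative:

    # Illustrative sketch only. Each recipe line is a recursive make
    # invocation that runs one phase of the build to completion.
    all:
            $(MAKE) depend    # phase 1: check dependencies
            $(MAKE) compile   # phase 2: compile source to obj
            $(MAKE) bundle    # phase 3: build the install bundles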

As the number of PicLens source files has grown rapidly over the past few months, our build times have been getting longer and longer, burning away a lot of our development productivity.  It was time to quit whining about it and just fix it!

One data point that bugged me about our build was CPU utilization – or lack thereof.  We all have multicore CPUs, but during the C++ compile phase the average CPU load was rarely more than one core’s worth (50% on dual core, 25% on quad).  This may be partly because the gcc and MSVC compilers are not aggressively multithreaded, and partly a reflection of the fact that C++ compilation is largely disk I/O bound rather than CPU bound.

If you’ve got a lot of independent work items to do and spare CPU cores sitting idle, then you need to work on more than one item at a time.  One tantalizing prospect for improving build times was the GNU make -j option. By default, make issues one operation at a time (invoke the compiler to compile this source file) and waits for it to finish before issuing the next.  With -j, you can tell make to issue multiple operations simultaneously as parallel processes. The idea is, if one instance of your compiler is only making use of one CPU core, then running two instances of the compiler on different source files should make use of two CPU cores, and finish compiling the two files in about the time it would take to compile just one.
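In makefile terms the change is small: pass -j to the recursive invocation for the compile phase.  A sketch, extending the phase layout above (the job count of 2 is illustrative):

    # Sketch: same layout as before, but the compile phase may now
    # run up to 2 jobs in parallel.
    all:
            $(MAKE) depend
            $(MAKE) -j 2 compile
            $(MAKE) bundle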

We added -j to our makefile for the source compilation phase, and our first build test on an OS X Mac showed a nice 40% reduction in build time (dual core)! However, when we ran the build test on a Windows PC, we got a snippy little remark from make:

make: Do not specify -j or --jobs if sh.exe is not available.
make: Resetting make for single job mode.

Well!  How rude!

Consulting the GNU make docs, I found section 5.3.1 Choosing the Shell, conveniently located right next to section 5.4 Parallel Execution. Unfortunately, the docs state right up front:

On MS-DOS, the `-j' option has no effect, since that system doesn’t support multi-processing.

Well that’s a bummer.  And a little confusing, since Windows has been doing multi-processing for a very long time. I chose to ignore the docs as possibly outdated (MS-DOS!  How quaint!) and pressed on.

“Choosing the Shell” talks a lot about make wanting a Unix-style shell, like sh.exe.  It also describes how special and delicate the shell selection process is for “MS-DOS and Windows,” including checks for the COMSPEC environment variable.

We have a basket of Unix utils built for Win32 in our toolbox, and one of them is named zsh.exe.  It looks like a Bash shell, it smells like a Bash shell, but it doesn’t fork processes like a Bash shell.  Configuring make to use zsh as its shell sent it crashing to the floor. Hmm.  Not much better with the msys shell, and don’t get me started about cygwin.

I scoured the Internet looking for clues.  Lots of articles about GNU make and -j, but nothing about how to make it work in Windows.  Then purely by chance, I stumbled onto a page that offered a solution to a different make problem:  set make’s SHELL environment variable to CMD.EXE, the Win32 shell, to resolve a “CreateProcess() failed” error.


Could it be that simple?

Yes! Telling make to use the Win32 shell magically enables make to fork processes and execute multiple compiler instances in parallel!
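If you’d rather bake the fix into the makefile than set an environment variable, something like this works – the OS=Windows_NT check is one common way to detect Windows, and is my embellishment, not part of the original fix:

    # Sketch: force make to use the native Win32 shell so that -j can
    # spawn parallel child processes. Windows sets OS=Windows_NT for us.
    ifeq ($(OS),Windows_NT)
    SHELL = cmd.exe
    endif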

Now my question is this: Why is this setting needed at all?  cmd.exe is the default Win32 shell, has been for decades, and it’s the shell from which make is invoked.  Why doesn’t make figure this out on its own?  Having to set the SHELL variable in make seems so ridiculously redundant and deliberately obtuse.  “Hey Mr Make, I’d like you to drive my bus.  Yes, this bus.  The bus you rode in on.  The bus you’re still standing in.  Yes, that bus.”


I’ll take the 40% faster build times and leave the drama, thanks.

Concurrency Notes

Set the number of concurrent jobs equal to or less than the number of execution cores on your system.  Trying to bake 5 cakes in 4 pans at the same time is just silly.

How many cores do you have?  In Win32, the count is reported in the NUMBER_OF_PROCESSORS environment variable.  make -j $(NUMBER_OF_PROCESSORS) will do the trick (from within the makefile spawning a recursive make).
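Here’s a minimal sketch of that pattern – JOBS and the compile target are made-up names:

    # Sketch: derive the job count from the environment. A command-line
    # override ("make NUMBER_OF_PROCESSORS=1") takes precedence over the
    # environment variable, which is handy for debugging (see below).
    JOBS := $(NUMBER_OF_PROCESSORS)

    all:
            $(MAKE) -j $(JOBS) compile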

Just as with adding multithreading to existing source code, don’t expect your existing makefile logic to work on the first try after adding -j concurrency to your build.  Sequential make can hide implicit dependencies that will break when everything happens at once.

For example: if you use precompiled headers, you know that you should build the precompiled header target before you build any source files that use it.  For a sequential make, it’s enough to just put your precompiled header build instruction in the makefile in front of your source code build instruction.  For parallel make, though, that will simply start the precompiled header compile in one CPU core and spin up a source compile in another core, and they’ll soon collide. 

For parallel make, you need to be more explicit about dependencies and flows than sequential make will let you get by with – you need to tell make that the precompiled header output is a prerequisite of all source files, for example. Make is smart enough to know that it can’t generate A and B concurrently if B depends upon the output of A.
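With MSVC-style precompiled headers, the rules might look roughly like this – file names and flags are illustrative, not our actual build:

    # Sketch: every object depends on the precompiled header, so make
    # won't start any source compile until the .pch exists.
    OBJS = foo.obj bar.obj baz.obj

    stdafx.pch: stdafx.cpp stdafx.h
            cl /c /Ycstdafx.h /Fpstdafx.pch stdafx.cpp

    $(OBJS): stdafx.pch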

Besides the .PCH, we discovered we had been lax in specifying the dependencies for some of our .RES resource files.  We had an IDL producing a .TLB, and a .RC including the .TLB to produce a .RES.  This worked fine for ages as a sequential process, but ran into trouble when parallel make tried to start compiling the .RC before MIDL was finished generating the .TLB from the .IDL.  (We have now exhausted our TLA quota.)  Fixed by listing the .TLB as a prerequisite of the .RES, which it should have been all along anyway.
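The repaired rules looked roughly like this (names are illustrative):

    # Sketch: rc can't start until midl has produced the type library.
    app.tlb: app.idl
            midl /tlb app.tlb app.idl

    app.res: app.rc app.tlb
            rc /fo app.res app.rc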

Be careful about processes that write to shared files.  If A appends a message to log.txt and B appends a message to log.txt, A and B are likely to collide when executed in parallel unless they are careful to use appropriate file locking.
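One simple way to sidestep the collision is to give each parallel job its own log and merge them afterward.  A sketch, with Unix-style redirection (under cmd.exe, substitute type for cat):

    # Sketch: each compile writes to a private log; the merge step only
    # runs after both objects (and therefore both logs) are complete.
    foo.obj: foo.cpp
            $(CC) -c foo.cpp -o foo.obj > foo.log 2>&1

    bar.obj: bar.cpp
            $(CC) -c bar.cpp -o bar.obj > bar.log 2>&1

    log.txt: foo.obj bar.obj
            cat foo.log bar.log > log.txt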

An example of that is compiler error output.  If you’ve got a syntax error in a header file used by A and B, then the compile processes for A and B will both fail and both report the syntax error to stderr.  More often than not, you’ll see the text output of compile process A interlaced letter by letter with B.  When each process is reporting the same error, it looks quite comical – your compiler has acquired a stutter, or perhaps is shivering with cold!

To get coherent error messages, temporarily set your jobs count to one.  Shortcut:  “make NUMBER_OF_PROCESSORS=1”.  Command-line arguments take precedence over environment variables.

I’ve heard nebulous stories that MSVC++ doesn’t work with parallel make – that outputting debug symbols to a shared .PDB file will result in a corrupted .PDB.  I’ve looked for evidence of such corruption, but after a month of building a project with 10 MB of PDB symbol info I have yet to find any. Consider that myth busted.

  9 Responses to “Parallel Make in Win32”

  1. Out of curiosity, what do you have against Cygwin?

    I, for one, couldn’t bear Windows without it, at this point… Having nice shell job control, combined with a nice lightweight terminal in rxvt, and all the Unix utilities working with pretty much all POSIX semantics makes Windows much more usable for everyday & programming usage.

  2. I wouldn’t be surprised if the strange behavior of parallel make on Windows is due to the feature being developed by a rabidly anti-Microsoft programmer somewhere who has no Windows machine on which to test.

    And I second the vote for Cygwin. I just tried running `make -j` in a Cygwin bash shell — no complaints. I made sure to `unset SHELL` first.

  3. The only thing I have against CygWin is that it’s Unix. Or tries to look like Unix. CygWin catches the brunt of my frustrations when trying to bootstrap some library or utility that sounds like it might be useful if the damned thing would just build and run as advertised. Prolly not CygWin’s fault, but it’s implicated in the mess of having to install all that crap just to build some little chunk of code that was naively (or deliberately) built without cross-platform source constructs. When I’ve figured out what I need, I uninstall CygWin and resume my regularly scheduled programming. ;>

  4. @bkarwin: Hi Bill!

    Agree with your first point. I dodged writing that to avoid sounding rabidly anti-Unix. Unix is fine. It’s some of the irrational users I have issues with. ;>

    On your second point: ya, ok. For me personally, I’m more productive with the Windows shell.

    See you Friday at Daev’s!

  5. I guess if everyone on your team moves over to Mac OS then there will be no need to fiddle around with DOS make. My 8-core MacPro screams at compiling — especially since “xcodebuild” has multi-core support built in.


  6. Hi Corbin!

    We’ve already got more than our fair share of Macs running around here, thanks. If we were any more hip and trendy we’d be a fashion show. :P

  7. That’s pretty useful Danny.

    I have one question though. The OS used at my work is Linux (Fedora Core 5 to be exact) and I’m trying to get the *.idl files to compile while using the -j flag. When I run make -j, the idl compiler compiles the *.idl files just fine, but right after that make starts running g++, and if certain generated files haven’t been created yet, I get errors about missing files. I’m using autotools to compute the dependencies for my *.cpp and *.h files (so that part is fine), but make doesn’t seem to recognize that it needs to compile the *.idl files before it proceeds with the C++ files, and I’m not sure how to make it realize this.

    If you can, could you please post your Makefile?

  8. I’m no longer at Cooliris, so I don’t have the makefiles at hand to answer your specific question.

    There are techniques you can use within make to ensure completion of one set of operations before the next set begins. I’ve written up a blog post with a little more detail here:

  9. […] reader responded to my March article on Parallel Make in Win32 with a question: I’m trying to get the *.idl files to compile while using the -j flag. I’m […]
