Parallel programming techniques and frameworks


A quick round-up of the latest developments in parallel programming techniques and frameworks.

First, though: why parallel programming? Isn’t that what threads (Thread, pthreads, etc.) are for? Well, yes and no. The goal of many of the frameworks below is to allow the developer to “forget” about threads to an extent, and concentrate on the problem at hand, letting the parallel libraries (written by cleverer people than you or me) do all the complicated mutexing and locking.

Core CPU clock speeds have stagnated over the last few years, but processors overall have become much quicker, as manufacturers scale out (adding CPU cores) rather than up (increasing MHz). Problems in application logic which may have been masked five or ten years ago, when a system had only a single CPU, may now come to light as two, four or even more cores jostle for the same RAM, I/O lines and caches.

Adding more CPU cores does not automatically mean substantially better performance, however. As this article [free registration] on the Task Parallel Library at DevX puts it:

An eight-core system may potentially have eight times as much CPU power, but if you don’t pitch in and help you’re likely to see only a small improvement in your application’s performance.

So, how can we take advantage of all these shiny new CPU cores to squeeze extra performance from our software without the complexity of Inter-Process Communication, thread scheduling, marshalling, and all that awkward stuff?

Although not released yet, the .Net Framework 4.0 will include the Parallel Framework Extensions (PFX) – a set of libraries with code and data structures which handle many of the more complex operations relating to multi-threaded programming. Jeff Barnes has a good video introduction to ParallelFX on Channel9, discussing the difference between data and task parallelism, the Task Parallel Library (TPL) and its use of delegates for parallel loop processing, and some cool demos for data processing. Some of the demo examples are perhaps a bit skewed, in that they are heavily CPU-bound; in the real world, you’d probably see a greater overhead due to resource locking/contention, but it’s still worth seeing.
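TPL’s parallel loops are a .Net API, but since the post mentions the JVM as well, the same data-parallel idea can be sketched in Java using parallel streams (the workload here is illustrative, not from the video):

```java
import java.util.stream.LongStream;

public class ParallelSum {
    // Data parallelism: the same operation applied to every element of a
    // range, with the runtime splitting the range across worker threads.
    static long sumOfSquares(long n) {
        return LongStream.range(0, n)
                         .parallel()      // fork-join splits the range for us
                         .map(i -> i * i) // pure per-element work, no shared state
                         .sum();          // framework combines partial results
    }

    public static void main(String[] args) {
        System.out.println(sumOfSquares(1_000_000));
    }
}
```

As with the TPL demos, this only pays off when the per-element work is genuinely CPU-bound; for trivial loop bodies the splitting and combining overhead can outweigh the gain.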

For those working at a lower level than a .Net (or Java) virtual machine, Intel have released Intel Parallel Studio for C++ on Windows. Not only is this a set of libraries and performance tools, but Intel provide a specialist compiler to get the most out of those multi-cores! There is a YouTube clip from GDC09 Cologne with Edmund Preiss from Intel explaining the toolset. The complete toolset helps developers to identify bottlenecks and hotspots (Parallel Amplifier), generate efficient OpenMP-compliant parallel code with Parallel Composer (there is also an STL-like mode available), and verify the execution with Parallel Advisor and Parallel Inspector. Cool stuff.
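An OpenMP parallel-for of the kind Parallel Composer generates works by dividing an index range statically across a team of threads; sticking with Java for consistency, a rough sketch of that chunking pattern (class and method names are mine, not Intel’s) looks like this:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.LongAdder;

public class ChunkedLoop {
    // Split [0, n) into one contiguous chunk per thread, roughly what an
    // OpenMP "parallel for" with a static schedule does under the hood.
    static long countEvens(long n, int threads) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        LongAdder total = new LongAdder();
        long chunk = (n + threads - 1) / threads;
        for (int t = 0; t < threads; t++) {
            final long lo = t * chunk;
            final long hi = Math.min(n, lo + chunk);
            pool.execute(() -> {
                long local = 0;                       // thread-local partial result
                for (long i = lo; i < hi; i++) {
                    if (i % 2 == 0) local++;
                }
                total.add(local);                     // one shared update per chunk
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return total.sum();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(countEvens(10_000_000L, 4));
    }
}
```

The point of the thread-local accumulator is exactly the sort of contention issue a tool like Parallel Amplifier would highlight: updating a single shared counter on every iteration would serialise the threads.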
