Recently, I’ve come across some C# code deadlocking quite reproducibly while executing some tasks using the Parallel.For method. The seemingly innocuous code led to an “obscure situation” exactly as described in this blog post by Stephen Toub:
Does Parallel.For use one Task per iteration?
…iterations are handed out in indivisible chunks, and only one thread is involved in the processing of a particular chunk. This has implications for interdependencies between iterations. If iteration i blocks waiting for iteration i + 1 to be completed, and iterations i and i + 1 are both allocated to the same chunk, the loop will likely deadlock. The thread processing those iterations will block processing iteration i, but as that thread is also responsible for processing iteration i + 1, iteration i + 1 will never get processed and iteration i will never unblock.
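To make that failure mode concrete, here is a minimal sketch (not the original code; the iteration count and the event-per-iteration setup are invented purely for illustration) of an interdependent-iteration pattern that is expected to hang under the default chunking:

using System.Threading;
using System.Threading.Tasks;

class DeadlockSketch
{
    static void Main()
    {
        const int Count = 1000; // arbitrary size for the sketch
        var completed = new ManualResetEventSlim[Count];
        for (int i = 0; i < Count; i++)
        {
            completed[i] = new ManualResetEventSlim(false);
        }

        // Iteration i blocks until iteration i + 1 signals completion.
        // With the default partitioner, i and i + 1 often land in the same
        // chunk, so the thread waiting here is the very thread that would
        // have to run i + 1: the deadlock from the quote above.
        Parallel.For(0, Count, i =>
        {
            if (i + 1 < Count)
            {
                completed[i + 1].Wait();
            }
            completed[i].Set();
        });
    }
}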
The problem in this case was exactly as described above: there were blocking wait dependencies between the tasks in a producer/consumer pattern. The solution (once the problem was clear) was relatively simple: don’t rely on the default partitioning of Parallel.For; provide your own to avoid the potential deadlock.
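As a sketch of that fix (Partitioner.Create and Parallel.ForEach are real .NET APIs; Count and DoWork are placeholders standing in for the actual workload), a range partitioner with a range size of 1 guarantees that no two iterations are ever handed out in the same chunk:

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class CustomPartitioningSketch
{
    static void Main()
    {
        const int Count = 1000; // placeholder for the real iteration count

        // Partitioner.Create(fromInclusive, toExclusive, rangeSize) produces
        // ranges of the requested size; with rangeSize: 1 every chunk covers
        // exactly one iteration, so two interdependent iterations can never
        // be handed to the same thread as part of a single chunk.
        var singleIterationRanges = Partitioner.Create(0, Count, rangeSize: 1);

        Parallel.ForEach(singleIterationRanges, range =>
        {
            // Each range is a Tuple<int, int> covering [Item1, Item2).
            for (int i = range.Item1; i < range.Item2; i++)
            {
                DoWork(i); // placeholder for the real iteration body
            }
        });
    }

    static void DoWork(int i)
    {
        Console.WriteLine($"Processed iteration {i}");
    }
}

Single-iteration chunks give up some of the scheduling efficiency of the default partitioner, and blocking waits can still tie up thread-pool threads, so this only removes the particular chunk-induced deadlock described above rather than making blocking dependencies safe in general.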
A good framework or library can provide you with a good-enough solution in a large-enough percentage of possible use cases (probably making some compromises to achieve that goal). Don’t expect a pre-fabricated solution to solve all your problems out of the box; there is no silver bullet.
Here’s some interesting reading about the Parallel.For implementation in .NET and trade-offs between simplicity, overheads, and load balancing:
Patterns of Parallel Programming (Understanding and Applying Parallel Patterns with the .NET Framework and Visual C#)