Recently, I came across some C# code deadlocking quite reproducibly while executing some tasks using the
Parallel.For method. The seemingly innocuous code led to an “obscure situation” exactly as described in this blog post by Stephen Toub:
Does Parallel.For use one Task per iteration?
…iterations are handed out in indivisible chunks, and only one thread is involved in the processing of a particular chunk. This has implications for interdependencies between iterations. If iteration i blocks waiting for iteration i + 1 to be completed, and iterations i and i + 1 are both allocated to the same chunk, the loop will likely deadlock. The thread processing those iterations will block processing iteration i, but as that thread is also responsible for processing iteration i + 1, iteration i + 1 will never get processed and iteration i will never unblock.
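To make the chunking visible, here is a small sketch (the `ChunkDemo` class and its names are mine, not from the original code) that records which managed thread handles each index of a `Parallel.For` loop. On a multi-core machine, the indices typically cluster into a few per-thread groups rather than being handed out one at a time:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

static class ChunkDemo
{
    // Record which managed thread handled each iteration index.
    public static ConcurrentDictionary<int, ConcurrentBag<int>> RunLoop(int n)
    {
        var byThread = new ConcurrentDictionary<int, ConcurrentBag<int>>();
        Parallel.For(0, n, i =>
        {
            byThread.GetOrAdd(Thread.CurrentThread.ManagedThreadId,
                              _ => new ConcurrentBag<int>()).Add(i);
        });
        return byThread;
    }

    static void Main()
    {
        // With the default partitioner, each worker thread typically
        // receives contiguous runs (chunks) of indices.
        foreach (var kv in RunLoop(100))
            Console.WriteLine($"thread {kv.Key} handled {kv.Value.Count} iterations");
    }
}
```

Every index is processed exactly once, but by far fewer threads than there are iterations, which is exactly why two mutually dependent iterations can end up serialized on one thread.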
The problem in this case was exactly as described above: there were blocking wait dependencies between the tasks in a producer/consumer pattern. The solution (once the problem was clear) was relatively simple: don’t rely on the default partitioning of Parallel.For; provide your own to avoid the potential deadlock.
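Here is a minimal sketch of that fix, under the simplifying assumption that the dependency is pairwise: each even iteration blocks until its odd neighbor signals (the `PairedLoop` class and the `signals` array are illustrative names, not the original code). Passing a range size of 1 to Partitioner.Create guarantees no two iterations ever share a chunk:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

static class PairedLoop
{
    public static void Run(int n)  // n must be even
    {
        var signals = new ManualResetEventSlim[n];
        for (int i = 0; i < n; i++) signals[i] = new ManualResetEventSlim(false);

        // Ranges of size 1: dependent iterations can never be queued
        // behind each other inside one thread's chunk.
        Parallel.ForEach(Partitioner.Create(0, n, 1), range =>
        {
            for (int i = range.Item1; i < range.Item2; i++)
            {
                if (i % 2 == 0)
                    signals[i + 1].Wait();  // consumer: wait for its producer
                else
                    signals[i].Set();       // producer: signal its consumer
                // With the default partitioner, iterations i and i + 1 would
                // often land in the same chunk, and this wait could deadlock.
            }
        });

        Console.WriteLine("completed without deadlock");
    }

    static void Main() => Run(8);
}
```

The trade-off is real: single-iteration chunks give up the low per-iteration overhead that default chunking buys, so this is only worth doing when iterations actually depend on each other.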
A good framework or library can provide you with a good-enough solution in a large-enough percentage of possible use cases (probably making some compromises to achieve that goal). Don’t expect a pre-fabricated solution to solve all your problems out of the box; there is no silver bullet.
Here’s some interesting reading about the Parallel.For implementation in .NET and the trade-offs between simplicity, overhead, and load balancing:
Patterns of Parallel Programming (Understanding and Applying Parallel Patterns with the .NET Framework and Visual C#)