Parallel processing is hard enough on quad-core chips, but the desktop supercomputers of the future will have thousands or even millions of cores, rendering most task schedulers obsolete. Data-flow techniques, however, promise to keep parallel processing on track by eliminating the bottlenecks of traditional scheduling. — R. Colin Johnson
Figure caption: MPI, OpenMP and OpenCL...treat each thread as an independent machine that runs for an arbitrary length of time...Swarm's data-flow execution model (right) uses uniform-sized codelets with known control and data dependencies. SOURCE: ETI
Here is what GoParallel says about data-flow: For massively parallel processors...applications today can use a message passing interface (MPI) for internode communications and shared memory for coordinating tasks on a single node, [but] all these techniques become less effective as more cores are added to a system...One promising solution: data-flow management techniques...performing dynamic scheduling that maps tasks...to processor resources in real time.
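The core idea behind data-flow scheduling can be sketched in a few lines: instead of threads that run for arbitrary lengths of time, work is broken into small codelets whose data dependencies are known up front, and a codelet fires only once every input it needs has been produced. The sketch below is a hypothetical illustration of that firing rule, not ETI's SWARM runtime or its API; the `Codelet` class and `run` function are invented names for this example.

```python
# Minimal sketch of data-flow (codelet) scheduling -- hypothetical, not the SWARM API.
# A codelet becomes runnable only when all of its data dependencies are satisfied,
# so no unit of work ever blocks waiting on another.
from collections import deque

class Codelet:
    def __init__(self, name, fn, deps=()):
        self.name = name
        self.fn = fn
        self.pending = len(deps)      # count of unmet dependencies
        self.successors = []          # codelets consuming our output
        for d in deps:
            d.successors.append(self)

def run(codelets):
    """Fire codelets in dependency order (dynamic, data-driven scheduling)."""
    ready = deque(c for c in codelets if c.pending == 0)
    order = []
    while ready:
        c = ready.popleft()
        c.fn()                        # execute the uniform-sized unit of work
        order.append(c.name)
        for s in c.successors:        # signal consumers; enqueue when satisfied
            s.pending -= 1
            if s.pending == 0:
                ready.append(s)
    return order

# Example dependency graph: load -> (scale, offset) -> combine
a = Codelet("load",    lambda: None)
b = Codelet("scale",   lambda: None, deps=[a])
c = Codelet("offset",  lambda: None, deps=[a])
d = Codelet("combine", lambda: None, deps=[b, c])
order = run([a, b, c, d])
print(order)  # → ['load', 'scale', 'offset', 'combine']
```

Because readiness is tracked per codelet rather than per thread, a real runtime can map each ready codelet to any free core at the moment it fires, which is the "dynamic scheduling that maps tasks to processor resources in real time" the article describes.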
Further Reading