As a final follow-up to this topic (from my perspective, anyhow):
TL;DR: The data flow model, and parallel processing in general, are currently a poor fit for roguelike games.
Roguelike 2D field-of-view calculations on modern hardware are so cheap that you can get away with just about anything unless you're targeting Android or iOS devices. I finally realized how silly it was to worry about when my benchmarks came in at under 250 *micro*seconds.
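For a sense of scale, here's roughly the kind of calculation I mean: a naive brute-force raycasting FOV over an 80x50 grid, timed with perf_counter. The map, radius, and FOV routine are illustrative stand-ins, not the implementation behind that number:

```python
import time
import random

WIDTH, HEIGHT, RADIUS = 80, 50, 10

# Illustrative map: True = wall, roughly 20% density, scattered at random.
random.seed(1)
walls = [[random.random() < 0.2 for _ in range(WIDTH)] for _ in range(HEIGHT)]

def line(x0, y0, x1, y1):
    """Bresenham's line from (x0, y0) to (x1, y1), inclusive."""
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx, sy = (1 if x0 < x1 else -1), (1 if y0 < y1 else -1)
    err = dx + dy
    while True:
        yield x0, y0
        if x0 == x1 and y0 == y1:
            return
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy

def brute_force_fov(ox, oy, radius):
    """Cast a ray from the origin to every cell on the bounding square's edge."""
    visible = {(ox, oy)}
    targets = []
    for x in range(ox - radius, ox + radius + 1):
        targets.append((x, oy - radius))
        targets.append((x, oy + radius))
    for y in range(oy - radius + 1, oy + radius):
        targets.append((ox - radius, y))
        targets.append((ox + radius, y))
    for tx, ty in targets:
        for x, y in line(ox, oy, tx, ty):
            if not (0 <= x < WIDTH and 0 <= y < HEIGHT):
                break
            visible.add((x, y))
            if walls[y][x]:  # walls are visible but block everything behind them
                break
    return visible

start = time.perf_counter()
fov = brute_force_fov(WIDTH // 2, HEIGHT // 2, RADIUS)
elapsed = time.perf_counter() - start
print(f"{len(fov)} visible cells in {elapsed * 1e6:.0f} microseconds")
```

Even this deliberately dumb approach, in interpreted Python, is fast enough per turn that optimizing it is rarely the right place to spend effort.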
Similar results are seen with any decent pathfinding implementation.
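For comparison, a plain heapq-based A* over the same sort of grid, again just a generic sketch to show the scale of the work rather than the implementation I benchmarked:

```python
import heapq
import time
import random

WIDTH, HEIGHT = 80, 50
random.seed(2)
# Illustrative map again: True = wall, ~20% density.
walls = [[random.random() < 0.2 for _ in range(WIDTH)] for _ in range(HEIGHT)]
walls[0][0] = walls[HEIGHT - 1][WIDTH - 1] = False  # keep the endpoints open

def astar(start, goal):
    """Plain 4-way A* with a Manhattan heuristic."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start)]
    best = {start: 0}
    came_from = {start: None}
    while frontier:
        _, cost, cur = heapq.heappop(frontier)
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        if cost > best[cur]:
            continue  # stale queue entry, already found a cheaper route
        x, y = cur
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < WIDTH and 0 <= ny < HEIGHT and not walls[ny][nx]:
                ncost = cost + 1
                if ncost < best.get((nx, ny), float("inf")):
                    best[(nx, ny)] = ncost
                    came_from[(nx, ny)] = cur
                    heapq.heappush(frontier, (ncost + h((nx, ny)), ncost, (nx, ny)))
    return None  # no path exists

start = time.perf_counter()
path = astar((0, 0), (WIDTH - 1, HEIGHT - 1))
elapsed = time.perf_counter() - start
print(f"path length {len(path) if path else 'n/a'} in {elapsed * 1e3:.2f} ms")
```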
The larger question for me was data flow programming's applicability to roguelikes, and indeed the whole subject of parallel processing where roguelikes are concerned. For fine-grained parallelism, unless you're planning to run your roguelike on CUDA cores, you will see a *massive* slowdown. A rule-of-thumb breakpoint seems to be around 100K instructions: below that, the cost of synchronization and context switching outweighs the calculations themselves. Above that you start to see some improvement, but until you get significantly higher it's not, in my opinion, worth the hassle and the added bugs.
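Here's a toy illustration of that overhead, assuming nothing about any particular engine: the same pile of tiny summing tasks run serially and then through a process pool. The chunk size and worker count are arbitrary; the point is only that when each task is this small, pickling, IPC, and scheduling dominate the actual work:

```python
import time
from concurrent.futures import ProcessPoolExecutor

def tiny_task(chunk):
    """A deliberately small unit of work, well under the ~100K-instruction mark."""
    return sum(chunk)

def run(chunks, pool=None):
    start = time.perf_counter()
    if pool is None:
        results = [tiny_task(c) for c in chunks]
    else:
        results = list(pool.map(tiny_task, chunks))
    return sum(results), time.perf_counter() - start

if __name__ == "__main__":
    # 10,000 tasks of 100 ints each: lots of tasks, each one trivially cheap.
    chunks = [list(range(100))] * 10_000
    total, serial_time = run(chunks)
    with ProcessPoolExecutor(max_workers=4) as pool:
        total_p, parallel_time = run(chunks, pool)
    assert total == total_p
    print(f"serial:   {serial_time * 1e3:.1f} ms")
    print(f"parallel: {parallel_time * 1e3:.1f} ms")
```

On any machine I'd expect the parallel run to lose badly here; only once the per-task work grows far past this scale does the pool start to pay for itself.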
If a larger game runs into performance issues, it may be worth the effort to separate the model and the view into their own threads/processes, unless there's a lot of crosstalk between them. From what I'm seeing in tests, the only other scenario where it may make sense is a client-server model with each mob acting as a pseudo-client. This is basically the Actor model applied at the process level rather than as concurrent threads. Of course, concurrency is not parallelism, so you may still find uses for concurrency even where parallelism gains nothing or even adds cost.
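A minimal sketch of that mob-as-actor idea, using threads and queues instead of real processes to keep it short; the names here (MobActor, take_turn) are made up for illustration. Note this buys isolation of state rather than speed: under CPython's GIL it's concurrency, not parallelism.

```python
import queue
import threading

class MobActor:
    """Minimal actor: one thread per mob, driven entirely by its inbox."""

    def __init__(self, name, results):
        self.name = name
        self.inbox = queue.Queue()
        self.results = results
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def _run(self):
        while True:
            msg = self.inbox.get()
            if msg is None:          # poison pill: shut down
                return
            if msg == "take_turn":
                # Placeholder AI: a real mob would path, attack, flee, etc.
                self.results.put((self.name, "wanders aimlessly"))

def game_turn(mobs, results):
    """One game turn: broadcast the turn message, then gather each mob's action."""
    for mob in mobs:
        mob.inbox.put("take_turn")
    return [results.get() for _ in mobs]

if __name__ == "__main__":
    results = queue.Queue()
    mobs = [MobActor(f"orc_{i}", results) for i in range(3)]
    for name, action in game_turn(mobs, results):
        print(f"{name}: {action}")
    for mob in mobs:
        mob.inbox.put(None)  # shut the actors down
```

Swapping the threads for processes (and the queues for sockets or pipes) is what turns this into the pseudo-client setup described above.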
PS: For some reason I am now tempted to write a massively parallel roguelike using CUDA that runs on your graphics card and displays in a telnet terminal.