It's not just a matter of dynamism, it's also about rewriting less of the same code. When you write everything out procedurally, you often end up reimplementing the same things because they aren't as easy to interchange... which leads to lots of copy&paste code and bloat. Streams let you abstract away some of your recurring patterns.
If I were to use "map(add1, filter(is_even, fibs()))" what's really going on is this...
map says "pull a value from filter and do add1(i)"
filter says "pull a value from fibs() and return it if is_even(i)"
fibs simply spits out values when asked
This repeats until "map" can't pull another value. In this case the stream is infinite, but if I said:
take(20, map(add1, filter(is_even, fibs())))
Then the "take" is what starts the "pulling", and it will stop after pulling 20 values through. It doesn't generate all of the Fibonacci numbers up front; it only pulls them out when needed as it goes. If you would be more satisfied, you could write an even_fibs generator instead of doing the filter so you can use your optimization... But the rest is exactly the same thing that would happen in an iteration, just more modular.
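To make the "pulling" concrete, here's a minimal Python sketch of that pipeline. The names (fibs, is_even, add1, take) come from the discussion above; implementing take with itertools.islice is just one way to do it, and I'm pulling 5 values instead of 20 to keep the example small.

```python
from itertools import islice

def fibs():
    # Infinite Fibonacci generator: produces a value only when pulled.
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

def is_even(n):
    return n % 2 == 0

def add1(n):
    return n + 1

def take(n, iterable):
    # Pulls at most n values through the whole pipeline, then stops.
    return list(islice(iterable, n))

# Nothing runs until take() starts pulling; each stage pulls lazily
# from the stage below it, one value at a time.
result = take(5, map(add1, filter(is_even, fibs())))
# result == [1, 3, 9, 35, 145]  (even fibs 0, 2, 8, 34, 144, plus one)
```

Even though fibs() is infinite, only as many Fibonacci numbers are computed as the take actually needs.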
This is under the assumption that you can't just write an "every_third_fib" generator. You can use those optimizations when needed, but piece it all together as a stream of generators.
Not necessarily...
In this case it's just streams, so it's not as interesting; it's simply another way of abstractly handling the iteration. But the map/filter/reduce concept is VERY powerful when it comes to reducing complexity, and especially for parallelizing the execution.
When I do "map(f1, filter(f2, items))" there's nothing holding this back from doing the filter test on each item in parallel, and likewise with the map (after the filtered items have been collected). Scalability then starts to come naturally. This is a powerful edge over constantly thinking in terms of iteration (where you're explicitly expressing the problem in a sequential nature): I don't care what order they happen in... Do it all at once, then collect them... Do them backwards... Whatever...
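A rough sketch of that idea in Python, using a thread pool (for CPU-bound work you'd swap in a process pool): because map and filter don't promise any evaluation order, each stage can fire off its function on all the items at once and just collect the results. The helper name par_map_filter is made up for this example.

```python
from concurrent.futures import ThreadPoolExecutor

def is_even(n):
    return n % 2 == 0

def add1(n):
    return n + 1

def par_map_filter(f, pred, items, workers=4):
    # Stage 1: run the filter test on every item in parallel.
    # Stage 2: run the map function on the survivors in parallel.
    # The caller never expresses any sequential loop.
    items = list(items)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        keep = list(pool.map(pred, items))           # filter tests, all at once
        kept = [x for x, k in zip(items, keep) if k]
        return list(pool.map(f, kept))               # map, all at once

result = par_map_filter(add1, is_even, range(10))
# result == [1, 3, 5, 7, 9]
```

pool.map collects results back in input order, so the answer matches the sequential map(f1, filter(f2, items)) even though the work itself is interleaved.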
Otherwise you end up having to code those parts into the loops, re-checking the conditions on each pass. That's not as easy to write modularly, where you can interchange pieces and reuse everything. It's not just about runtime dynamism; it's about reusable, modular, scalable ways of dealing with large sets of data.