Hi
Originally Posted by cris7
Not sure if your example or mine really make a difference, I'm still pretty new at this.
One difference is that the pipeline is slightly slower: it forks extra processes (ls, tee, and awk), and forking is expensive. Depending on how you code your scripts, this can make a real difference on large datasets.
For one file
Code:
matthew-laptop:/home/matthew:3 % ls *.dbf
test.dbf
matthew-laptop:/home/matthew:3
Code:
matthew-laptop:/home/matthew:3 % time ( for f in *.dbf; do printf "~~~%s\n" "$f"; done > filename )
( for f in *.dbf; do; printf "~~~%s\n" "$f"; done > filename; ) 0.00s user 0.00s system 77% cpu 0.002 total
matthew-laptop:/home/matthew:3 %
Code:
time (ls *.dbf | tee runfile | awk '$0="~~~"$0' > filename)
( ls -F --color=auto *.dbf | tee runfile | awk '$0="~~~"$0' > filename; ) 0.00s user 0.01s system 94% cpu 0.007 total
matthew-laptop:/home/matthew:3 %
However, in this case the difference is so small (at least for one file) that we can ignore it. Obviously one would want to average the time over several runs to get more accurate results.
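For what it's worth, here is a rough sketch of how one might average the timings instead of eyeballing a single run. This is an assumption on my part, not from the original post: it uses GNU date's %N (nanoseconds) to take wall-clock timestamps, and the run count and file names are just illustrative.

```shell
#!/bin/sh
# Rough benchmark sketch: run each variant many times and report the
# average wall time per run. Assumes GNU date with %N support.
cd "$(mktemp -d)"            # work in a scratch directory
: > test.dbf                 # create a sample .dbf file to match the post

runs=100

# Variant 1: shell builtin loop (no extra processes forked per file)
start=$(date +%s%N)
i=0
while [ "$i" -lt "$runs" ]; do
    for f in *.dbf; do printf '~~~%s\n' "$f"; done > filename
    i=$((i + 1))
done
end=$(date +%s%N)
printf 'loop:     %d ns/run\n' $(( (end - start) / runs ))

# Variant 2: pipeline (forks ls, tee, and awk on every run)
start=$(date +%s%N)
i=0
while [ "$i" -lt "$runs" ]; do
    ls *.dbf | tee runfile | awk '$0="~~~"$0' > filename
    i=$((i + 1))
done
end=$(date +%s%N)
printf 'pipeline: %d ns/run\n' $(( (end - start) / runs ))
```

On most systems the pipeline's per-run average should come out noticeably higher than the loop's, since the fork/exec cost dominates at this scale.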
Kind regards