Originally Posted by
ssam
I am not sure that ls is any worse to parse than any other command, though globbing in the if statement could be faster.
ls doesn't necessarily show a true representation of the filenames, and that corrupts any later operation on its output, while directly used globs don't mangle the data at all. Globs are also native to the shell, while piping ls through other tools to extract a subset of the data means spawning extra processes, and that has a cost.
Code:
$ time ( for f in *; do :; done; echo $f )
real 0m0.001s
user 0m0.010s
sys 0m0.000s
$ time ( ls | tail -n1 )
real 0m0.006s
user 0m0.030s
sys 0m0.000s
The numbers may vary a bit, but the difference is visible. In this case it may be negligible, but spawning a short-lived process to perform a trivial task can add up to serious time when done repeatedly.
In general: to access a set of files, use native globs as often as possible. If that is not possible for whatever reason, look for the next best method that is still kosher and avoids filename problems completely (most likely find ... -print0).
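For example, a minimal sketch of the find ... -print0 approach (directory and file names here are made up for the demo; it's bash, since it relies on read -d '' and process substitution):

```shell
#!/bin/bash
# Sketch: iterate over files safely with NUL-delimited find output.
# NUL is the only byte that cannot appear in a filename, so -print0
# survives spaces, tabs and even embedded newlines.
tmpdir=$(mktemp -d) && cd "$tmpdir"
touch 'plain' 'with space' 'with
newline'                               # three awkward names for the demo

count=0
while IFS= read -r -d '' f; do
    count=$((count + 1))               # do the real work with "$f" here
done < <(find . -maxdepth 1 -type f -print0)

echo "$count"                          # prints 3
```

Note the IFS= and -r on read: without them, leading whitespace and backslashes in names would still get mangled.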
While it might look like overkill, it will lead you to a deeper understanding of how the shell works.
No, parsing ls output is broken beyond repair when there are file names containing \n, because two files named file1 and file2 produce exactly the same output as a single file named file1\nfile2. With globs the problem doesn't exist at all.
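You can see the ambiguity for yourself (the file name below is made up for the demo): one file whose name contains a newline counts as two lines in ls output, while the glob sees it correctly as one name.

```shell
#!/bin/bash
# Sketch: one file with a newline in its name looks like two files to ls.
tmpdir=$(mktemp -d) && cd "$tmpdir"
touch "$(printf 'file1\nfile2')"       # a SINGLE file, name contains \n

ls_lines=$(ls | wc -l | tr -d ' ')     # ls prints it as two "names"
glob_count=0
for f in *; do glob_count=$((glob_count + 1)); done

echo "$ls_lines $glob_count"           # prints "2 1"
```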
When you use simple
Code:
for f in *; do something with "$f"; done
you can be sure that f will be filled with the proper names, even ones chock-full of newlines, spaces, tabs and whatnot. After all, it's the shell doing the work, not you manually manipulating string representations of the names.
The only thing left to do to be safe in this case is to put $f in double quotes, to prevent troublesome characters like *, space or \n from messing things up.
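A quick illustration of what the quotes buy you (the filename and the count_args helper are made up for the demo): unquoted, $f undergoes word splitting; quoted, it stays one intact argument.

```shell
#!/bin/bash
# Sketch: word splitting of an unquoted variable vs. a quoted one.
tmpdir=$(mktemp -d) && cd "$tmpdir"
touch 'a file with spaces'

count_args() { echo $#; }         # reports how many arguments it received

for f in *; do
    unquoted=$(count_args $f)     # splits on whitespace: 4 arguments
    quoted=$(count_args "$f")     # one argument, name intact
done
echo "$unquoted $quoted"          # prints "4 1"
```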
OP, from that | tail -n1 I assume you want to access only the 'latest' file in alphabetical order? If so, the code should work. What should happen with the rest of the matching files?
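If the goal really is just the alphabetically last name, the glob loop from earlier already gives it to you without parsing ls, since globs expand in sorted order (file names below are made up for the demo):

```shell
#!/bin/bash
# Sketch: pick the alphabetically last file without touching ls.
tmpdir=$(mktemp -d) && cd "$tmpdir"
touch alpha beta gamma

last=
for f in *; do
    last=$f            # each iteration overwrites; the final value wins
done
echo "$last"           # prints "gamma"
```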