Dislimit
January 18th, 2007, 06:50 AM
Hi, all:
It really surprised me that Java code runs much faster on Windows than on Linux
(at least for a not-so-large program).
Recently I tested the speed of part of my learning package, whose task is to
parse a 10MB (about 210,000 lines) text file to extract some figures, discretize them,
and write the result to a new file.
With the same model of i386 PC (P4 2.8GHz, 1GB DDR), the same version of the Sun JRE (1.5.0_06),
the same program (class files compiled by 1.5.0_10), and the same input data
file, both run from the command line with default settings (no additional options
passed to java), here are the recorded average execution times
(the program was restarted for every run, not kept running):
OS             Main    Whole
Ubuntu 5.10    6s      8s
Windows XP     1.6s    2-3s
Ubuntu (C++)   ?       3.5s
The 'Main' execution time is recorded inside the Java program to measure just the most
costly part (scanning line by line and parsing numbers), so it excludes
startup overhead. The 'Whole' time is reported by the 'time' utility.
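In case it matters, this is roughly how the 'Main' time is recorded inside the program: take a timestamp around only the parsing loop, so JVM startup and class loading are not counted. The class and method names below are my own stand-ins, not the actual code from the package.

```java
// Sketch: measure only the costly parsing phase, excluding JVM startup.
public class MainTimer {

    // Stand-in for the costly part: walk the text line by line
    // and parse every whitespace-separated number, summing them
    // so the work is not optimized away.
    static double parseAll(String text) {
        double sum = 0;
        for (String line : text.split("\n")) {
            for (String tok : line.trim().split("\\s+")) {
                if (tok.length() > 0) sum += Double.parseDouble(tok);
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        String data = "1.0 2.0\n3.5\n";           // tiny stand-in for the 10MB file
        long t0 = System.currentTimeMillis();     // start of 'Main' timing
        double sum = parseAll(data);
        long elapsed = System.currentTimeMillis() - t0;
        System.out.println("sum=" + sum + ", main time: " + elapsed + "ms");
    }
}
```

Measured this way, the number reflects only the scan-and-parse work, which is why it comes out smaller than what 'time' reports for the whole process.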
How can the difference be so large? I tried other Linux JREs, such as IBM's and
JRockit, with the same code; the results were no better, if not worse.
Is it because of different default GC strategies in the
Linux and Windows versions, or something else?
The last line of the table is one more thing to mention: I wrote an equivalent C++ version using the standard input stream,
which takes about 4s (whole) to run, slower than my Java version (when run on WinXP), where I use my own Scanner class
(Sun's Scanner is too slow). Do C++ computation programs also run faster on Windows? I don't know.
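For anyone wondering why a hand-written Scanner can beat java.util.Scanner: Sun's Scanner tokenizes through the regex machinery, while a custom one can just walk each line character by character. This is only a sketch of the general idea (the class and method names are mine, not from my actual package):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a hand-rolled tokenizer: find whitespace-separated numbers
// by scanning characters directly, with no regex involved.
public class FastScan {

    // Parse every whitespace-separated double in one line.
    static double[] parseDoubles(String line) {
        List<Double> out = new ArrayList<Double>();
        int i = 0, n = line.length();
        while (i < n) {
            // Skip any run of whitespace.
            while (i < n && Character.isWhitespace(line.charAt(i))) i++;
            // Mark the start of a token and advance to its end.
            int start = i;
            while (i < n && !Character.isWhitespace(line.charAt(i))) i++;
            if (i > start) out.add(Double.valueOf(line.substring(start, i)));
        }
        double[] res = new double[out.size()];
        for (int j = 0; j < res.length; j++) res[j] = out.get(j);
        return res;
    }
}
```

Combined with a BufferedReader for line input, something along these lines avoids per-token regex matching, which is where most of java.util.Scanner's overhead seems to go.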
Anyway, Ubuntu has been my main OS for a year.