Sekerob
October 25th, 2011, 05:51 PM
I didn't know how to phrase the searches to find an answer, so here goes:
I had a LiveCD with persistence running from USB to find out whether 11.10 had better performance. It did not, and I installed it over 11.04 anyway after the upgrade caused more issues than it resolved. I originally installed the 10.04 LTS last year; it took a piece of the Windows NTFS drive, which was then sliced into a 72GB ext3 system partition and a 3GB swap partition. The move to 11.04 turned the filesystem into ext4.

Now the question: when Ubuntu is installed into space carved out of an NTFS drive, does one get full native ext3/ext4 performance, or is there some emulation layer in between?

The issue: if I run a science project (Clean Energy Project, phase 2, through BOINC distributed computing) under W7-64 on a quad core, I see a good 99% efficiency after an 8-12 hour run, but only 92-93% when booted into Linux. By efficiency I mean the fraction of time the CPU is actually computing on the work rather than being distracted by disk IO and the like (models up to 2GB). Both the W7 and Linux runs happen with no user input for the whole duration; under Linux I even unload the GUI, and per top there are only a few minutes a day spent on other tasks. When I'm actually using the system, W7 still manages a good 90-95% computing efficiency, but Linux doesn't get even close to 80%, hence my wondering.

I have played with swappiness and with fstab options such as noatime and relatime, but it makes barely a difference. I even tried the zramswap-enabler script to stop the models spilling over into on-disk swap, i.e. to run them purely in RAM from start to finish, except where the program does its mandatory progress saves to disk. Watching the System Monitor graph, swap is hardly used, often topping out at maybe 30-50MB.
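For reference, the swappiness and fstab tuning I tried looks roughly like this (the UUID and the value 10 are just placeholders, not necessarily what's right for this workload):

```shell
# Current swappiness (Ubuntu's default is 60; lower values discourage swapping)
cat /proc/sys/vm/swappiness

# Temporary change, needs root (hypothetical value):
#   sysctl vm.swappiness=10
# To make it stick across reboots, add "vm.swappiness=10" to /etc/sysctl.conf.

# The kind of fstab line I used (UUID is a placeholder):
#   UUID=xxxx-xxxx  /  ext4  noatime,errors=remount-ro  0  1

# Verify the mount options actually took effect after a remount:
mount | grep ' on / '
```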
What am I overlooking that keeps me from getting the same throughput as under Windows?
TIA
--//--
P.S. Before I forget: the system has 4GB RAM (3GB usable) and a Q6600 CPU. The CEP2 program needs access to some 6700 static reference files, i.e. they are only read; its output goes to 5 files that together reach at most about 30MB.
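For anyone who wants to check along: the "efficiency" figure I'm going by can be read straight from /proc/stat (Linux-specific sketch; the aggregate cpu line lists jiffies as user nice system idle iowait ...):

```shell
# Field 6 (iowait) is time the CPU sat idle waiting on disk IO.
awk '/^cpu /{printf "iowait: %s jiffies, idle: %s jiffies\n", $6, $5}' /proc/stat

# Rough efficiency as I defined it: busy time vs busy time plus IO wait
awk '/^cpu /{busy=$2+$3+$4; printf "efficiency: %.1f%%\n", 100*busy/(busy+$6)}' /proc/stat
```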