After a power failure that got the best of the UPS, my Hardy server (running on a Dell Precision 690 FWIW) would no longer mount its root filesystem rw on boot. It wouldn't mount any other filesystems at all, either.

The curious part is:

Code:
/dev/sda1 on / type ext3 (rw)
...
/dev/sda4 on /var/lib/vmware/machines type jfs (rw,relatime,uid=1000,gid=1000)
/dev/sda4 on /var/lib/vmware/machines type jfs (rw,relatime,uid=1000,gid=1000)
/dev/sda4 on /var/lib/vmware/machines type jfs (rw)
Note how everything *looks* mounted, yet / is reported as (rw) but is actually read-only, and /var/lib/vmware/machines is not actually mounted at all, despite the three entries suggesting otherwise. df -h shows the following:

Code:
/dev/sda1             9.9G  6.6G  2.8G  71% /
...
/dev/sda4             9.9G  6.6G  2.8G  71% /var/lib/vmware/machines
/dev/sda4             9.9G  6.6G  2.8G  71% /var/lib/vmware/machines
/dev/sda4             9.9G  6.6G  2.8G  71% /var/lib/vmware/machines
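One theory worth checking (my own guess, not confirmed): mount(8) and df report /etc/mtab, which is maintained by userland and can go stale or collect duplicate entries if / happens to be read-only at the moment a mount is attempted; /proc/mounts is the kernel's own authoritative list. Comparing the two should show whether these entries are real:

```shell
# mount(8)/df read /etc/mtab (userland-maintained); /proc/mounts is the
# kernel's authoritative list.  A stale mtab would explain both the bogus
# (rw) on / and the triple /dev/sda4 entries.
cat /proc/mounts
# Any line present in mtab but missing from the kernel view is stale:
diff /etc/mtab /proc/mounts && echo "mtab matches kernel view" || echo "mtab is stale"
```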
Manually remounting / read-write works, as does manually mounting any other filesystem. However, this is just a temporary workaround that I can't use in production; I'd like to find the root cause and fix it properly.
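For completeness, the manual workaround boils down to the following sketch (the awk test reads the kernel's view of /, so a stale mtab can't fool it):

```shell
# Check the kernel's view of / rather than trusting mount(8)/mtab:
if awk '$2 == "/" { n = split($4, o, ","); for (i = 1; i <= n; i++) if (o[i] == "ro") exit 1 }' /proc/mounts; then
    echo "/ is already read-write"
else
    # What I currently have to run by hand after every boot:
    mount -o remount,rw /
    mount -a    # pick up everything else listed in fstab
fi
```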

Here's what I've tried that hasn't helped:
1) e2fsck -f; came up clean
2) e2fsck -f -D (to force some changes; -D optimizes directories); came up clean, with changes made
3) tune2fs -e continue (set the error behaviour), and changed the fstab entries accordingly
4) changed the fstab entries to device names rather than UUIDs
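Two more things I plan to look at, in case anyone has the same symptoms (a sketch; the log paths are the stock Hardy ones and may differ on other setups):

```shell
# A fs_passno of 0 in the sixth fstab field means fsck never checks that
# filesystem at boot; for / in particular, a skipped or failed check can
# leave it mounted read-only:
awk '!/^#/ && NF >= 6 { printf "%-30s passno=%s\n", $2, $6 }' /etc/fstab
# Kernel complaints from the failed boot, if any:
grep -iE 'ext3|jfs|read-only' /var/log/dmesg /var/log/messages 2>/dev/null | tail -n 20
```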

I'm at a loss as to what else to do. In 12 years I've never seen a filesystem tainted like this. I'm open to any suggestions for things to try, or for other info I can post.

Thanks,
Peter