I am copying about 12 TB of data from one server to another using rsync over ssh. At best I can get 9.25 MiB/s over the link, according to bmon on both sides.
I launch the rsync under nice -15, which is what got me up to 9.25 MiB/s from about 8.5 before.
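For reference, the command is roughly the following (the paths, host, and exact flag set are placeholders for my actual invocation; rsync uses ssh as its transport by default when given a remote host):

    # roughly what I'm running now, at reduced priority
    nice -n 15 rsync -av /srv/data/ user@destserver:/srv/data/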

I'm pushing this over a bonded pair of 1 Gb NICs on both servers, and other applications can push data at much higher rates over the same link -- 40 to 60 MiB/s, depending on where the file sits on the source. I suspect that most of the time I'm hitting an upper limit on the source server's ability to read from a software RAID5 array of 5900 RPM SATA disks on an eSATA link. The system load on both sides is negligible.
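One way I could sanity-check that theory is a straight sequential-read test against the array, bypassing the page cache (/dev/md0 here is a placeholder for the actual RAID device):

    # read 8 GiB straight off the RAID5 device with O_DIRECT; dd reports the rate at the end
    dd if=/dev/md0 of=/dev/null bs=1M count=8192 iflag=direct

If that tops out around 9-10 MiB/s, the disks are the bottleneck and the network setup isn't the problem.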

So, at this point, I have been copying data for about 3 days and I'm just over 25% of the way through.
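A back-of-the-envelope estimate of what's left (assuming roughly 9 TiB remaining at a steady 9.25 MiB/s) works out to almost another two weeks:

    # remaining time in days: 9 TiB at 9.25 MiB/s, converted from seconds
    awk 'BEGIN { print (9 * 1024 * 1024 / 9.25) / 86400, "days" }'   # ~11.8 days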

Would I gain throughput by breaking the round-robin bond, going to a direct-connect crossover between two NICs, and using jumbo frames? Or is rsync over ssh just enough of a drag on the transfer that it wouldn't help?
I'm almost at the point of just trying NFS over a crossover with jumbo frames -- I have done that before on other hardware (completely different everything, though) and been able to get close to saturating a 1 Gb NIC.
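If I did go that route, the setup I have in mind looks something like this on both ends (the interface name, addresses, and export path are placeholders, and both NICs and their drivers would have to support an MTU of 9000):

    # direct-connect interface with jumbo frames
    ip link set dev eth2 mtu 9000
    ip addr add 192.168.100.2/24 dev eth2

    # mount the source export over the crossover link
    mount -t nfs -o rsize=32768,wsize=32768 192.168.100.1:/export/data /mnt/data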

What I'm doing works, but the wait time is killing me... Any suggestions for making this go faster?