Hello! I figured I would follow this up with what came of my solution search and what we currently have in production. We are averaging 1.2M uniques/month and utilize the majority of a 6Mb pipe with a burst option.
While I was looking for the best of both worlds, it seems the only easy/free solutions are either live/failover or replication servers, whereas I wanted a fully redundant live master/master configuration throughout our entire server system. Reality sets in when you have to start making sacrifices for flexibility or ease of use (I know some hard-core *nix monkeys are going to yell at me for that one!).
Old server specs (1 unit):
- 1 x HP DL380 G5 : Dual 4-core Xeon, 8GB RAM, 6 x Cheetah 74GB SAS1 in RAID5, CentOS 5
First hardware specs (all on a gigabit network):
- 2 x SUPERMICRO SYS-5015A-EHF-D525 : Dual-core Atom based 1U servers with 4GB DDR3 memory and a 16GB SLC SSD. Low-power, all flash-based load balancers. The mobos feature Intel 82574L LANs!
- 2 x SUPERMICRO AS-2022G-URF : Dual Opteron 6100s (8-core), 16GB DDR3 ECC REG memory, 8 x Seagate Cheetah 15K.7 ST3300657SS 300GB HDDs hooked to an LSI MegaRAID 9261-8i 6Gb/s SAS2 controller w/ BBU in RAID10 (lots of DB reads/writes), dual 80+ PSUs
- Tripp Lite SMART3000RM2U UPS
- Netgear gigabit switch, all shielded CAT6 (overkill, but ya do it right the first time, right!?)
Load balancers are configured but not in production or tested with SSL yet, so I will not talk about those at this time. I will update this thread as time and projects permit.
The solution we settled upon was the latest Proxmox Virtual Environment. Installation was simple enough. We split our RAID array (1.2TB) into two visible drives: logical drive 0,0 was set to 100GB for the OS, subsystems and logs, and 0,1 was set to the remaining free space (1.1TB) for the VM storage, set up with DRBD for replication.
We installed Proxmox from the 'bare-metal' ISO using a Master/Slave node configuration on drive 0,0 and then followed the Proxmox DRBD manual to configure and set up the DRBD drive share on 0,1. DRBD sync is done over a dedicated server-to-server gigabit connection, bypassing the main network. Needless to say, the initial 1.1TB block sync took for eeeeeeveeeeeerrrr...
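For anyone following the same path, the DRBD resource definition for the 0,1 volume looks roughly like this; hostnames, disk paths, the shared secret and the crossover-link IPs below are placeholders, not our actual values, and the Proxmox DRBD manual covers the specifics:

```
resource r0 {
    protocol C;
    net {
        cram-hmac-alg sha1;
        shared-secret "some-secret";
        allow-two-primaries;      # lets either Proxmox node activate the volume
    }
    on node1 {
        device /dev/drbd0;
        disk /dev/sda2;           # the 1.1TB logical drive (0,1)
        address 10.0.0.1:7788;    # dedicated server-to-server gigabit link
        meta-disk internal;
    }
    on node2 {
        device /dev/drbd0;
        disk /dev/sda2;
        address 10.0.0.2:7788;
        meta-disk internal;
    }
}
```

Protocol C is the fully synchronous mode, which is what you want when the volume has to be identical on both nodes at all times.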
Once this was done, we were up and going with our Proxmox Master/Slave setup. Using the DRBD subsystem we are able to utilize the single 1.1TB 'network-RAIDed' drive for VMs. This means that if either server goes down, we simply restart that image on the other node. As Steve Jobs would say, "It's magical!"
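The failover itself is only a couple of commands on the surviving node; a rough sketch (the VM ID 101 and resource name r0 are hypothetical, and the exact steps vary by Proxmox version):

```
# On the surviving node: take over the DRBD volume, then start the VM there.
drbdadm primary r0   # promote this node's copy of the replicated volume
qm start 101         # boot the VM image from the shared DRBD storage
```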
With this setup, I have basically unlimited options for multiple load balanced VMs including collaborative environment tool servers, Postfix servers, basic LAMP servers and the list goes on...
The first thing we did was make two DB servers (one on each node) with Master/Slave replication between them. We then load balanced those servers by writing our PHP DB wrappers to read from the local database of that node but only write to the Master node. This was the preferred method as most of our traffic is table reads rather than writes (about 10:1). The result, as you can imagine, has been great: the DB master's load average stays <1.5 and the slave's <1.0, whereas our old server installation would often freeze up CPU threads since the single DB was local to that single LAMP installation.
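Our wrappers are PHP, but the routing logic is simple enough to sketch in a few lines of Python (the class name and DSN strings here are made up for illustration): reads go to the node-local replica, everything else goes to the master.

```python
# Sketch of the read/write split: route read-only statements to the local
# replica, all writes to the master. DSN strings are placeholders.
READ_VERBS = ("select", "show", "describe", "explain")

class SplitRouter:
    def __init__(self, master_dsn, local_dsn):
        self.master_dsn = master_dsn
        self.local_dsn = local_dsn

    def route(self, sql):
        """Return the DSN this statement should be sent to."""
        verb = sql.lstrip().split(None, 1)[0].lower()
        return self.local_dsn if verb in READ_VERBS else self.master_dsn

router = SplitRouter("mysql://db-master", "mysql://db-local")
print(router.route("SELECT * FROM users"))       # -> mysql://db-local
print(router.route("UPDATE users SET hits = 1")) # -> mysql://db-master
```

With a ~10:1 read:write ratio, this keeps roughly 90% of the query traffic off the replication master entirely.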
While we are developing our new source API for our sites and designing it with load balancing in mind, we have temporarily set up a mirror VM of our old server, with the exception of installing Ubuntu Server x64 10.04 from scratch instead of CentOS 5 *shudder*... That single VM has been allocated 2 CPUs with 8 cores each and 8GB RAM... While this is a mirror of our sites from our old server, the load average on this VM alone stays around 0.8-1.5 with spikes up to 3.5 at our busy peaks. This is compared to the old server, which averaged 4.5-5.6 at all times. The database separation has obviously made a huge difference.
With the transition to our new servers, our Proxmox hosts are averaging a load of <1.8. We are going to be doing a large email campaign soon, and we have multiple email servers set up for that. We will see how emailing newsletters etc. to our millions of users affects bandwidth and server load. Our previous single-server setup would basically crawl to a halt during these times; we would only be able to get out around 300-400k emails a day during off-peak hours. I suspect our new monsters will have no problem munching through them, and I will post some numbers once those get going.
I intend to update this with my load balancing endeavors as soon as I can. For now, back to work!
ORIGINAL POST
------------------------------------------------------
Hello all!
I am about to embark on building a small cluster with high-availability load balancers and servers that will serve millions of users a month (as in, we already have this much traffic and are expecting more as we grow this year) and will be supporting SSL transactions. I would like to know if any of you have done this and have any suggestions or experiences I could learn from before hitting those roadblocks?
Our current plan of attack, from what we've gathered to be the best option from various users/sites on the inter-tubes, is two identical boxes running LVS + HAProxy. These would then hand off to two identical servers running LAMP stacks with Linux-HA Heartbeat and Pacemaker.
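To make that concrete, the HAProxy half of those boxes would look something like this minimal sketch (backend name, server names and IPs are invented for illustration; SSL termination is a separate question we still need to answer):

```
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend www
    bind *:80
    default_backend lamp

backend lamp
    balance roundrobin
    option httpchk GET /
    server web1 192.168.1.11:80 check
    server web2 192.168.1.12:80 check
```

The `check` keyword plus `option httpchk` is what gives you the failover behavior: a LAMP box that stops answering HTTP gets pulled out of rotation automatically.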
There have been other various suggestions, but this seems like the most tightly developed grouping of packages (from what I am gathering around the web).
I haven't been able to find a complete guide to all of this; just parts and pieces. Does anybody know of a full guide that one can find to make sure they are not nuking their projects? I plan to write a full guide/journal when I have completed my journey to help others and use as a back-reference.
Any insight or experience is very helpful and appreciated. Thanks!