I have a home Docker bitwarden_rs project, and it's really neat to play with. Like many web applications, it stores its data in a backend database. I'm not sophisticated enough at this point to run a K8s cluster, but I am capable of running a Docker swarm where the application layer runs on two nodes in two separate VMs, located on different machines but all within the same LAN. In terms of high availability and failover protection, the only part of my design I haven't been able to "replicate" successfully is probably the most important part of the project: the database backend. Currently I have a PostgreSQL Docker instance with bind-mounted storage running on the master node. If the database somehow becomes corrupted or destroyed, the data is a total loss.
Although the project also supports a MariaDB or SQLite backend, I've been doing some reading on database replication, and it seems PostgreSQL in particular was designed with this feature set in mind. I ran across an article discussing simplified examples of configuring replicated PostgreSQL databases with asynchronous/synchronous replication: https://luppeng.wordpress.com/2019/0...ity-in-docker/. Before I start down yet another rabbit hole, I was wondering whether any of the IT pros here have experience with database replication and failover in general. I'm aware there are many commercial products that provide software to accomplish this; however, I'm looking for something more appropriate for a home lab. Some people I've talked to admit that working with databases is hard and have defaulted to something like an Amazon-managed database so they don't have to worry about managing backups, etc. Others have suggested I'm making it too hard and should take the simplified approach of dumping the data at set intervals (nightly would be plenty for my small project) and keeping "x" number of backups in case a restore is necessary (a rough sketch of that is below). Just wondering what kinds of approaches have been tried and work well.
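For reference, here's roughly what that simple dump-and-rotate approach would look like as a script. This is only a sketch: the container name (bitwarden_db), database/user name (bitwarden), backup directory, and retention count are placeholders I made up for illustration, not values from my actual setup.

```python
#!/usr/bin/env python3
"""Nightly pg_dump backup with simple retention.

Sketch of the "dump on a schedule, keep the last N copies" approach.
All names and paths below are placeholders, not the real setup.
"""

import gzip
import subprocess
from datetime import datetime
from pathlib import Path

BACKUP_DIR = Path("/srv/backups/bitwarden")  # assumed backup location
CONTAINER = "bitwarden_db"                   # assumed postgres container name
DB_NAME = "bitwarden"                        # assumed database name
DB_USER = "bitwarden"                        # assumed database user
KEEP = 14                                    # number of dumps to retain


def take_backup() -> Path:
    """Run pg_dump inside the postgres container and save a gzipped, timestamped dump."""
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dump_file = BACKUP_DIR / f"{DB_NAME}-{stamp}.sql.gz"

    # pg_dump writes plain SQL to stdout; compress it before writing to disk.
    dump = subprocess.run(
        ["docker", "exec", CONTAINER, "pg_dump", "-U", DB_USER, DB_NAME],
        check=True,
        capture_output=True,
    )
    dump_file.write_bytes(gzip.compress(dump.stdout))
    return dump_file


def prune_old_backups() -> None:
    """Delete all but the newest KEEP dumps (timestamped names sort chronologically)."""
    dumps = sorted(BACKUP_DIR.glob(f"{DB_NAME}-*.sql.gz"))
    for old in dumps[:-KEEP]:
        old.unlink()


if __name__ == "__main__":
    print(f"Wrote {take_backup()}")
    prune_old_backups()
```

A cron entry or systemd timer on the master node would run it nightly, and a restore is just the dump decompressed and piped back into psql. If that ever feels insufficient, streaming replication like the article describes would be the next step up.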