Filesystem Data Loss Lessons Learned
Today I learned a big lesson, and in a way, I got off easy. I was SSHed into one of my development boxes on Amazon EC2, where I'm currently building a few client sites, all staged on one box. I use EBS-backed EC2 instances, which means I could easily set up nice, regularly scheduled snapshots, like I do for production machines, enabling point-in-time recovery of the entire filesystem in case something stupid happens. Instead, I have foolishly been relying on the fact that I regularly push any changes I'm making to my GitHub repos. Big mistake.
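For what it's worth, the scheduled snapshot I should have had is basically a one-liner around the AWS CLI. Here's a minimal sketch, assuming the `aws` CLI is installed and configured; the volume ID is whatever you pass in, and with DRY_RUN=1 (the default here) it only prints the command so you can review it first:

```shell
# take_snapshot VOLUME_ID -- snapshot one EBS volume via the AWS CLI.
# A sketch: with DRY_RUN=1 (the default here) the command is printed
# instead of executed, so it can be sanity-checked before scheduling.
take_snapshot() {
  local volume_id="$1"
  local cmd="aws ec2 create-snapshot --volume-id $volume_id --description nightly-$(date +%F)"
  if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "$cmd"
  else
    $cmd
  fi
}
```

Dropped into cron on a nightly schedule (with DRY_RUN=0), that gives a point-in-time restore point that doesn't depend on me remembering to do anything.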
Turns out, I had some work from the past week or so that wasn't checked in or pushed up. I started a new client project today, and during the process of setting it up, deleted an entire git repository with `rm -rf`. OUCH. OK, so I panicked. I couldn't unmount the EBS volume because it was mounted as root, so I quickly logged off the box, shut it down, detached the volume, and attached it to another EC2 instance. Then I installed extundelete and ran it on the volume, but it was too late. Code I wrote over a week ago is gone forever. I am fortunate that, so far, it appears only minor tweaks I made to the site were affected, but nevertheless, I'm going to lose a lot of time going through my notes and making sure every t is crossed and i dotted.
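For anyone in the same spot, the recovery path above sketches out roughly like this. All the instance/volume IDs and device names below are placeholders, not my real setup, and the `run` helper prints each command (DRY_RUN=1, the default here) so the sequence can be reviewed before anything is executed for real:

```shell
# Sketch of an EBS root-volume recovery attempt. IDs are placeholders.
# run CMD... -- print the command under DRY_RUN=1 (default), else run it.
run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then echo "$*"; else "$@"; fi
}

recover() {
  # 1. Stop the instance -- a root volume can't be detached while it runs.
  run aws ec2 stop-instances --instance-ids i-PLACEHOLDER
  # 2. Detach the root EBS volume from the stopped instance.
  run aws ec2 detach-volume --volume-id vol-PLACEHOLDER
  # 3. Attach it to a rescue instance as a secondary (non-root) device.
  run aws ec2 attach-volume --volume-id vol-PLACEHOLDER \
      --instance-id i-RESCUE --device /dev/sdf
  # 4. On the rescue box, do NOT mount the volume; run extundelete
  #    against the raw device so deleted inodes aren't overwritten.
  run extundelete /dev/xvdf --restore-all
}
```

The key detail is step 4: every write to the filesystem (including mounting it) risks clobbering the very blocks you're trying to recover, which is why getting the volume off the original box quickly matters.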
So this isn't so much a WordPress snippet as it is a tip: have multiple levels of backups, and don't let them depend on your own discipline. I'm spending all day tomorrow making sure that's the case for my development environment as well as my production systems. Sheesh.
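In the meantime, here's a small safety net against the exact mistake I made: check a repo for uncommitted or unpushed work before you delete anything. This is just a sketch of the idea (the repo path is whatever you pass in):

```shell
# dirty_repo REPO_DIR -- print a warning and succeed (exit 0) if the
# repo has uncommitted changes or commits not pushed to any remote;
# otherwise exit 1. A pre-delete sanity check, not a substitute for backups.
dirty_repo() {
  local repo="$1"
  # Untracked, modified, or staged files show up in porcelain output.
  if [ -n "$(git -C "$repo" status --porcelain)" ]; then
    echo "WARNING: uncommitted work in $repo"
    return 0
  fi
  # Commits on any local branch that exist on no remote-tracking branch.
  if [ -n "$(git -C "$repo" log --branches --not --remotes --oneline)" ]; then
    echo "WARNING: unpushed commits in $repo"
    return 0
  fi
  return 1
}
```

Wire that into whatever cleanup script you use, and `rm -rf` at least has to get past a warning first.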