I suspect few reading this will have much of a clue what I'm about to talk about, and fewer still may care, but it's a lot of fun for me. It's a story about learning, tinkering, and forever evolving. It all really started about 25 years ago when I built a couple of web pages back in the early days of the Internet, and it has culminated in me more or less having a small farm of servers running in my 676 sq ft condo. Along the way I've learned a LOT of things, and the beauty (or brutality) of it all is that I have done it all on my own. Self taught. I've fallen more than a few times, but with failure comes an opportunity to learn.
Around 1997, when the Internet was only a shell of what it is today and we all "dialed up," annoying our parents for hours on end, I built several different webpages at places like Geocities and Angelfire. I had a guestbook. I had a chat room. It was wild stuff. Being able to do this was considered sorcery at the time, and I absolutely loved it. I read every tutorial on HTML & JavaScript to make text move, and fun things like that. But I wanted a guestbook and a chatroom, which meant I needed server-side scripting. What you see on webpages is "client-side" scripting, meaning the code runs on your computer after being sent there from the webserver. "Server-side" scripting is the stuff that happens on the server. It's the databases, it's the analytics, it's where things are stored. Only, server-side scripting wasn't something that came with those free website places like Geocities, so I needed to find another place to put my stuff – and I did. You're going to start seeing a pattern here.
This was all well and good for a while, until I wanted a place to host my DJ mixes online – so off I went and learned how to use domain names and DNS servers, and actually paid for "hosting". Those free places are super limited, but if you buy your own "hosting", you can do almost anything. These companies take powerful computers and put hundreds (maybe even thousands) of "clients" on them. Each client feels like they're on their own server, but they're really just on a "slice". The thing is, that stuff cost about $150/year for a basic plan. Soon after, a lot of my friends wanted to host their stuff with me, and I came up with an idea. What if I built a site and charged people $10/year to host some DJ mixes there, with an interface so they each had their own little "slice"? So I created deejay-mixes.com (long defunct), and at one point I was actually making about $50 a year, while also getting my own hosting paid for on top of it. If only I'd stuck with that idea and built it up more, it may have been the mixcloud.com of today. Who's to say – the idea was easily 15 years before Mixcloud came to be.
Time went on, I began writing a lot more of my own code, and I was building some pretty cool shit. Then I decided I wanted a "storage server" at home. So, I built one. It cost around $2,000 at the time, had a TON of redundant storage in it, and ran Ubuntu Linux 18.04 LTS. I toyed with things like FreeNAS and some others, but settled on a custom-built solution. It worked for about two years until something went wrong and I lost all my data. That's a whole other story, but it spurred a massive leap in what I was doing with my digital life.
I had lost all my data. All my pictures, movies, music, and some highly valuable documents. I've managed to get a lot of it back by piecing it together from other places, but a LOT is still missing (although not 100% gone – I still have the failed server, I just haven't been able to justify spending $3,000 on recovery of stuff I've lived without for a decade). On that note, I decided to build a better machine. I wasn't going to be fooled twice.
I invested in a proper NAS (basically a file server). It was a small Synology device, and it was a breeze to set up. I'd stepped up my game in a big way: I now had better data storage, and this thing (if it failed) would be easy to recover. Once I had it ticking the way I wanted, I began to see what else I could do.
I discovered Plex, which everyone should have (imo). I built that library out, and then got into things called SickRage & CouchPotato – programs that automatically downloaded movies and TV shows from various places on the Internet once they aired. Holy shit, this thing was amazing. I had my shows within hours of them airing, and I didn't have to pay for cable! (This was before you had all those free sites to watch things with 1,000 ads popping up every second.) Time went on, and I began to outgrow the system I'd built.
I went out and got a bigger NAS device, keeping my old one as a second backup source. Now I had a (near) bulletproof system and would never lose data again. I was SO happy, and my data was too! I continued to see what could be done with these systems and really began pushing even the higher-end gear to its limits. I needed to make a change… again.
The what? OK. Docker is kind of like a program that runs on your computer and lets you run a whole bunch of other programs inside of it. Only now, instead of your computer, it's a little server that sits under your couch or in a closet. It doesn't have a monitor. All it does is run some programs for you. You can run all sorts of things with wildly different needs. I was able to do this on my storage server, but it's not meant for that type of thing – so I let it be what it's good at. Storing stuff.
I built myself an Ubuntu server running Docker, with a Portainer frontend, and a whole ton of containers. A container is, loosely, a self-contained "program" with everything it needs packaged inside it. I had all kinds of things going: a media server to stream my movies and TV, programs to automatically grab movies/TV/music, and a couple of others. Things were once again calm, and I was in a new place to learn.
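For a flavour of what a setup like this looks like, a Docker host is usually driven by a compose file. Here's a minimal, illustrative sketch – the service layout and volume names are examples I made up, though the Portainer image name is the real one from its documentation:

```yaml
# Illustrative compose file -- not my actual stack.
services:
  portainer:                      # the web UI for managing everything else
    image: portainer/portainer-ce
    restart: always
    ports:
      - "9443:9443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer_data:/data

volumes:
  portainer_data:
```

Each additional container (media server, downloaders, and so on) is just another entry under `services`, and one `docker compose up -d` brings the whole set back.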
Now, at this point in my life I began to realize I didn't like "the cloud" all that much. I didn't like the idea of someone else having my data and having the ability to just "suspend" my account – or, worse yet, simply go out of business unannounced, and overnight the whole thing is gone, along with millions of other people's stuff. Not happening to me.
So, I began doing something called "self-hosting". This means you bring everything in and under your own control. It has a lot of benefits, but it also has its drawbacks. For one, there's no "tech support". You're it. There's no help, and nobody to call if you don't know what to do or if you break something. You have to learn a lot of code, you need to know how things are put together, and you need to maintain it. You need to apply security patches, you need to ensure malicious actors can't get in, and you need to stay up to date. This is no small job, and you need to do it constantly. But it's wildly cheaper, it offers an insane amount of flexibility, and it's just fucking cool.
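To give a sense of the upkeep: on an Ubuntu box, the routine chores boil down to something like this. A hedged sketch – it assumes a Docker Compose setup, and your package manager and paths will vary:

```shell
# Pull OS security patches (Ubuntu/Debian)
sudo apt update && sudo apt upgrade -y

# Pull newer container images and restart anything that changed
# (assumes your services are defined in a compose file in this directory)
docker compose pull
docker compose up -d

# Clean out superseded images so the disk doesn't slowly fill up
docker image prune -f
```

None of it is hard on its own; the job is remembering to do it, every week, forever.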
I built things like recipe books, task-sorting applications, AI-driven photobooks of my pictures, and a whole host of other things. At last count, there were 41 containers running at my house alone, with many more on a variety of other servers I have going out there.
This was all well and good, but I was back to an age old problem. Backups.
I was scared shitless that one of these things would die and I'd be in a pinch. The beautiful thing was that my "main data" was stored on redundant, backed-up storage, so rebuilding was possible – but these servers and their configurations were not backed up. A rebuild of one of them would take a couple of days, and I'd also lose all my configurations & settings, which would take weeks, if not months, to rebuild to where they were – most of them customized movie & TV settings that kept my library proper. I couldn't stomach that idea, so I got to work.
After testing a lot of different ideas, I settled on backing up my configurations and "unique" files, syncing them from these servers back to dedicated places on my redundant storage arrays. This way, if anything were to happen, I'd just rebuild the machine in an evening, sync the files back overnight, and it should more or less be back to where it was by morning. I did this across the modest network of Docker servers I had out there. I felt good, but the theory hadn't REALLY been put to the test in a real-life failure. I still felt nervous, as I didn't _really_ know I was protected without putting in a TON of recovery work. I had to learn, again.
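The sync itself doesn't need anything fancy. Here's a toy sketch of the idea – the paths are stand-ins I made up, and I'm using tar where a real setup might use rsync or Syncthing:

```shell
#!/bin/sh
# Toy version of "sync container configs back to redundant storage".
# SRC stands in for the Docker host's config directory, DEST for the NAS.
SRC="/tmp/demo/appdata"
DEST="/tmp/demo/nas-backup"

# Create a fake config file so this sketch is self-contained
mkdir -p "$SRC/mediaserver"
echo "port=32400" > "$SRC/mediaserver/settings.conf"

# Archive the configs with a date stamp so each night's copy is kept
mkdir -p "$DEST"
tar -czf "$DEST/appdata-$(date +%F).tar.gz" -C "$SRC" .

# On a rebuild, restoring is just the reverse:
#   tar -xzf "$DEST/appdata-YYYY-MM-DD.tar.gz" -C "$SRC"
ls "$DEST"
```

Run nightly from cron, that one archive is the difference between a weekend of reconfiguring and an overnight restore.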
When you think “there must be a better way”, there almost certainly is – the problem is, you just don’t know what you don’t know. That’s been the downfall of more than a couple people in history, myself certainly not excluded. This time I really researched the best solution for me. I could have gone with far more extensive ideas, and maybe one day I’ll be there, but this was the solution that made the most sense and didn’t blow up the budget – in fact, it cost me nearly nothing.
I had an old server lying around that was a powerhouse: an Intel i7-8700, 32GB of RAM, and a 2TB NVMe drive. It's a 6-core (12-thread), rock-solid processor capable of more than I need it for. Now, what was this thing going to do? Well, I'd turn it into a "virtual server". This means that while it's one physical computer, inside of it I'm running a bunch of "virtual" computers. Each one takes a little slice of the main computer's pooled resources, and inside each of those "virtual machines" I can run whatever I want.
Remember how I had that Docker server? Think of me putting that Docker server inside of another server. It’s like another layer around it – actually – that’s exactly what it is. And, this allows me a lot of flexibility.
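Creating one of those virtual machines in Proxmox is only a few commands on the host. A sketch – the VM ID, name, storage pool, and sizes here are placeholders, not my real values:

```shell
# Carve out a VM with a slice of the host's pooled resources:
# 8GB RAM, 4 cores, a 64GB disk on the "local-lvm" storage pool,
# and a NIC bridged onto the home network.
qm create 110 --name docker-host \
  --memory 8192 --cores 4 \
  --net0 virtio,bridge=vmbr0 \
  --scsi0 local-lvm:64 \
  --ostype l26

qm start 110
```

Most people do the same thing through the web UI; the CLI just makes it obvious that a "virtual computer" is really a handful of resource allocations.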
I’m using a “hypervisor” called “Proxmox”, which is a free & open source program – something I strongly believe in.
This means that while a company technically "owns" it, they don't own you. Someone owns everything at the end of the day, but with software it's kind of different. When something is free & open source, it (generally) means that anyone can contribute to the project. Everyone has the source code, and they can go off and build their own version of it, run it, and do what they want with it within the license terms. It's rarely lucrative to do that, though, because the real benefit is in the people of the world putting their collective minds together to make the source project the best it can be.
And because of this, these things tend to be WILDLY secure – holes and problems get found and fixed quickly. Companies that build software for profit are less inclined to go hunting for problems they don't know exist; they'd rather build new features to sell more of it. There are still a lot of benefits to going with a corporate solution (heck, I work for a corporate software company!), and I still suggest that in a lot of places. But open source & free software should absolutely be used when the time is right. You need to determine that for yourself – it's a lot like asking, "Do I want to pay Google to do this and have them take care of everything for me, or do I want to learn how to do it on my own, with no support, but have it be a lot cooler, more customizable, and… well… free?"
Proxmox went just great. I quickly virtualized the server that ran my phone system. That took about 45 minutes, which included the 10 minutes I waited for the licensing to transfer. Next I did a test run moving my Home Automation server over. It took about the same 45 minutes and went perfectly – though I've yet to "throw the switch" over to the new one. When I do, it'll take about 10 minutes. I'd just killed two whole other computers that no longer have to run in my home! Amazing.
Now, the big lift. I needed to move my Docker server into this thing. I could have tried to do an image and more or less move the whole thing in one fell swoop, but I thought this would be a good chance to clean up a couple of loose ends and rebuild using the backup system I had in place. Would it work? I was about to find out.
I spun up a new virtual machine, installed Docker, and got the basics going. That took no time, and I began to consider my migration plan of attack. I decided to just go for it based on the documentation I'd written and my original restoration plan, just to see how well I'd done. After all, no backup plan is worthwhile if it can't be restored. And if this didn't work, I had other options.
I spun up new copies of a couple of the containers, and they came up with stock settings, as expected. Simple stuff, but I wanted MY settings on there. I'd already set up a network of file syncing across my servers, so all I really needed to do was add this new server to the mix and it would sync things over. And it worked. The files began transferring within minutes – mind you, it took about an hour to pull them all across.
I stuffed the "synced" files in a different place, with plans to move things over bit by bit. This process was going to take a bit longer, but that was by design, so I could work gradually and methodically instead of doing one big restore.
As I began restoring the settings for the 15-ish containers I'd moved over the night before, it was instant. I was able to shut down the old ones, and my migration is about 1/3 complete with a very clear and easy path forward in sight. I was going to finish the rest tonight, but I decided to write this instead.
Another 1/3 gets moved tomorrow. As for the remaining 1/3, I'm still deciding whether to keep those containers at all – I don't use them all that much, and I think there may be better ways to do what they were doing, if I need them at all. This is such a low priority for me right now that I'll come back to it one day in the future.
It's downright a joke how simple my backup system is now. If anything dies, the longest I'd have to wait would be an hour or two before it was fixed. Worst case, if hardware dies, it'd only be down until I could replace that hardware – and virtually nothing would be lost.
Before, I was backing up files and doing manual restores. Now I'm backing up entire machines in one fell swoop. An entire machine within my virtual environment backs up in about 30 seconds. That backup is stored on my redundant storage, which then also backs up to another redundant storage array, and, just for super-safe keeping, backs up again to a server somewhere in Germany. There are no fewer than 3 layers of data storage here, 2 of which are off-site in completely different areas of the world. And it's all automated, happening every night between 2am and 7am in an intentionally staggered manner.
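The staggering is just scheduling – cron, or Proxmox's built-in backup scheduler. Something in the spirit of this crontab, where the times, VM ID, paths, and hostname are all made up for illustration:

```
# Nightly, staggered so each stage finishes before the next begins
0 2 * * * vzdump 110 --storage nas-backup --mode snapshot            # VM -> NAS
0 4 * * * rsync -a /mnt/nas/backups/ /mnt/array2/backups/            # NAS -> 2nd array
0 6 * * * rsync -a /mnt/nas/backups/ backup@de.example.net:backups/  # off-site copy
```

Restoring is the mirror image: point `qmrestore` at a dump file and the whole VM comes back.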
Should anything go wrong, I just pull a backup file, load it, and I'm running just as I was within probably 15 minutes. That's insane. Did I screw something up? Oh well – the worst I'll lose is 24 hours' worth of work, and I can rebuild in minutes. My entire house could burn down, along with all my servers, and all I'd need to do is buy new computers, load a basic operating system, and I'd be back up within a couple of hours of getting those computers home – with no data loss. Wild, isn't it?
That's really yet to be seen. I know there will be a next, but I'll need to judge my needs, what's realistic, and what makes the most sense. The idea right now is to consolidate and get things running as smoothly as humanly possible – less "let's do a lot of things" and more "let's do a couple things really damn well".
Maybe one day when I have a house and can stick a noisy server in a basement out of audible range, I’ll rack up some stuff, but that’s not reality right now. Plus, I’ll need solar to not go broke on electricity charges. I have a couple plans for better consolidation in the coming months. Once that’s done, I’ll likely turn my attention back to my home automation system. Now that I have that system on much more robust hardware, I can begin to explore more.
I have some presence sensors which are super rad. They can tell you not just that someone is there, but WHERE they are in a room, and they'll detect up to 5 different people at a time. While they won't tell you WHO is there, they'll detect a body – even a body that barely moves. This means that when I'm sitting on the couch or at the computer, the system knows I'm there, instead of seeing motion once and then deciding the room is empty as soon as I sit fairly still. The opportunities this opens up are insane. When I move from the computer to the kitchen at night, it can turn on soft lighting so I can get a drink without being in the dark, and it can follow me from room to room, turning things on and off as I come and go, without extended wait periods. Frankly, if I had a set of stairs, I'd put soft lighting under them so they'd light up and turn off as they followed me up at night, lighting the way. My mind has a million ideas for this.
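If the home automation platform is something like Home Assistant (an assumption on my part – the post doesn't name it, and the entity IDs below are invented), that "soft light when I wander into the kitchen at night" behaviour is a small automation:

```yaml
# Hypothetical Home Assistant automation -- entity IDs are illustrative
automation:
  - alias: "Soft kitchen light on presence at night"
    trigger:
      - platform: state
        entity_id: binary_sensor.kitchen_presence
        to: "on"
    condition:
      - condition: sun
        after: sunset
    action:
      - service: light.turn_on
        target:
          entity_id: light.kitchen_strip
        data:
          brightness_pct: 15
```

A matching automation triggered on `to: "off"` turns the light back out, and because the sensor tracks a still body rather than motion, the light doesn't die on you mid-drink.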
Some people say “if it’s not broken, don’t fix it”. That’s nice and all, but that’s not how you grow, evolve, or get better. And, if I went with that attitude, I would still be sitting with a little file server, with no real backup plan, and would have never learned the things I did in the past 10 years.
I would also argue that I would not be where I am professionally if I hadn't taken on these projects. The amount I've learned is astronomical. It's helped me in a lot of places in life. I see problems differently. I have better insight into how things are put together, which helps me find solutions more efficiently. I'm able to talk at a different level about complex matters, with more confidence. And, well… I got to build some really cool shit along the way – which is really what I've been passionate about my entire life.
© Paul Hattlmann. All rights reserved.