For me, getting hacked was always something that happened to other people, not to me. I’m fairly computer savvy and, I thought, not very interesting to hackers. Of course I saw hundreds of daily attacks on the ssh port of my server, even though externally ssh isn’t even listening on port 22, but I have fail2ban running and had locked down my root account to RSA key authentication only. I never saw any hacking attempt on my personal account on the server. I guess trying random usernames is one step too far for hackers, and I’m too unimportant to get the personal attention it would take to guess my user account names.
So all was fine… or so I thought. Until I happened to check my virtual machines and noticed that one particular VM, my Connections demo environment, had an extremely high CPU load. On closer inspection I saw that the culprit was a cron task of my db2inst1 user. At that moment it still didn’t click: I assumed something was wrong with my db2 database. As it was late and I was at my girlfriend’s, I went to bed with the idea of fixing it the next day.
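Spotting something like this doesn’t require fancy tooling; a quick look at which process, and which user, is burning CPU is usually enough. A minimal sketch (the db2inst1 name is just this incident’s example):

```shell
# List the top CPU consumers along with the user that owns them:
ps -eo pid,user,%cpu,comm --sort=-%cpu | head -n 5
# If an unexpected user shows up, inspect that user's crontab (needs root):
# crontab -u db2inst1 -l
```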
The next day I found a letter from my internet provider, delivered the previous day, telling me they had received complaints about hacking attempts coming from my IP address. Still I didn’t see the link between this letter and the high CPU usage I had noticed the night before. Only when someone pointed out that it wasn’t db2 itself, but the “cron” process running under the db2 user that was the culprit, did the quarter finally drop (as we would say in Dutch). I quickly logged in to the machine and found five lines in my crontab which I hadn’t put there:
0 0 */3 * * /home/db2inst1/.bashtemp/a/upd>/dev/null 2>&1
@reboot /home/db2inst1/.bashtemp/a/upd>/dev/null 2>&1
5 8 * * 0 /home/db2inst1/.bashtemp/b/sync>/dev/null 2>&1
@reboot /home/db2inst1/.bashtemp/b/sync>/dev/null 2>&1
0 0 */3 * * /tmp/.X19-unix/.rsync/c/aptitude>/dev/null 2>&1
That also explains why a reboot the night before hadn’t helped: on reboot, the process would immediately be triggered again. Being my curious self, I quickly killed the process and then went on to investigate what the scripts actually did. The script would try to stop any nginx webserver it could find, plus a couple of other processes like ecryptfs, rsync, sync, perl, pool and xmr. Next it would run a script whose payload was base64 encoded and piped into perl. The same script would also add the hacker’s public RSA key to the ~/.ssh/authorized_keys file, so the hacker would keep access even after a password change. The base64 encoded string decoded to another encoded string, which perl would then decode and evaluate. By swapping the eval command for a print command I could get to the actual script.
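The eval-to-print trick generalises: decode the blob yourself instead of letting perl run it. A harmless reconstruction of the pattern (the payload below is a stand-in I made up, not the attacker’s actual code):

```shell
# Stand-in two-stage payload: a base64 blob that a dropper would pipe into perl.
inner='print "stage two\n";'
blob=$(printf '%s' "$inner" | base64)

# The dropper effectively ran:  echo "$blob" | base64 -d | perl
# To inspect instead of execute, just decode it and read the result:
printf '%s' "$blob" | base64 -d; echo
# If the decoded stage evals yet another encoded string, piping it through
#   sed 's/\beval\b/print/'
# before perl prints the final script instead of running it.
```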
I’m not at all fluent in Perl, so it’s not easy for me to completely understand what the script does. It starts with the IP address of the command-and-control server: 188.8.131.52. It also seems to define an IRC channel in which commands can be posted that are then executed on my server. There’s a function to scan a set of common ports (21, 22, 23, 25, 53, 80, 110, 143, 6665) and a function to do a full port scan of computers on the internet, posting the results in the IRC channel. It’s probably this function that triggered the hacking-attempt warning from my provider.
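To illustrate what such a probe boils down to, here is a sketch using bash’s /dev/tcp pseudo-device rather than the attacker’s actual perl code; only point it at hosts you own.

```shell
# Try a TCP connect to each port with a 1-second timeout; report the open ones.
probe() {  # usage: probe HOST PORT...
  local host=$1; shift
  local port
  for port in "$@"; do
    if timeout 1 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
      echo "$host:$port open"
    fi
  done
}
# The script's list of "general" ports:
probe 127.0.0.1 21 22 23 25 53 80 110 143 6665
```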
How did it happen?
So how was the hacker able to gain access to my server? In general: my own stupidity. Specifically, I had created a demo server for Connections which I planned to also use for courses, and which for that reason had very easy-to-guess passwords (variations on ‘password’). As the backend wasn’t accessible from the internet anyway, this wasn’t a big problem. Until I needed to check some things while at work and opened the ssh port on my firewall (not mapped to port 22, but that only slowed them down; it didn’t stop them). My root user had a strong password, so that wasn’t the problem. A db2 installation, however, comes with default accounts: db2inst1, db2fenc1 and dasusr1. All of them had the same very weak password. Apparently hacking programs don’t just try the standard root account, but also other standard accounts like these db2 ones (all of them had failed login attempts). As this was just a demo machine, I hadn’t bothered to disable them.
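You can check for this kind of brute-forcing yourself by counting failed logins per account in the auth log. A hedged sketch: the log lines below are fabricated stand-ins, and depending on your distribution the real log is /var/log/auth.log or /var/log/secure.

```shell
# Fabricated sample of what sshd logs on failed password attempts:
cat > /tmp/auth_sample.log <<'EOF'
Jan  6 16:30:01 host sshd[123]: Failed password for db2inst1 from 203.0.113.7 port 4711 ssh2
Jan  6 16:30:05 host sshd[124]: Failed password for db2fenc1 from 203.0.113.7 port 4712 ssh2
Jan  6 16:30:09 host sshd[125]: Failed password for db2inst1 from 203.0.113.7 port 4713 ssh2
EOF
# Count failures per username (field 9 is the account name; note that attempts
# on nonexistent accounts log "invalid user" instead, which shifts the fields):
grep 'Failed password' /tmp/auth_sample.log | awk '{print $9}' | sort | uniq -c | sort -rn
```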
What was the damage?
My server got hacked on 6 January 2020 at 16:32. I discovered it on the morning of 8 January, blocked access to the server and removed the malicious scripts. So far it seems I got lucky. Apart from the hacking attempts my provider noticed and the fright of discovering I was hacked, they don’t seem to have done any damage to the VM or any other part of my environment. They could have destroyed my db2 database, but they didn’t. Being inside my network, they could have sent spam through my mail server, but they didn’t. My luck, I guess, was that my root account at least was properly protected and the db2inst1 account had no elevated rights on the server, so they never had the chance to do real damage. Just to be sure, I will ditch the VM and recreate the Connections environment. That will obviously cost time, so that’s the damage, I guess. For now, I removed the malicious code, protected all accounts, disabled password login for ssh and enabled RSA key login for my root user. Provided I didn’t miss any of the malicious code, that should do it for now.
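For reference, disabling password login for ssh comes down to a few directives in /etc/ssh/sshd_config. A minimal sketch, to be adapted to your own setup (and followed by a reload of the sshd service):

```
# /etc/ssh/sshd_config — key-only logins; root may log in with a key only
PasswordAuthentication no
ChallengeResponseAuthentication no
PubkeyAuthentication yes
PermitRootLogin prohibit-password
```

Make sure your key actually works in a second session before closing the one you used to edit this, or you can lock yourself out.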
This experience taught me a few valuable lessons:
- Even when you don’t expect to ever connect a new VM to the internet, build it as if you will. Protect your accounts: use strong passwords and enable RSA key authentication for ssh
- Hackers try the standard accounts of all kinds of programs to log in to a server, not just the root user
- Hackers check the standard ports first, but they have bots scanning all available ports, so mapping your ssh server to a different port is not enough to keep them away from your server
- Regularly check the CPU usage of your servers. High CPU usage can be an indicator that you’ve been hacked
- Kudos to my provider (Ziggo). I got hacked late in the afternoon on the 6th (going by the file dates of the hacker’s files). I received a letter on the 7th which was dispatched on the 6th, so the very same day I was hacked!
Where I gave kudos above to my provider for acting so swiftly, I got a nasty surprise from them the week after. Apparently my provider’s procedure is to send you a letter and then, if you haven’t explicitly told them you solved the problem, send a second letter a week later and block your internet access on the same day that letter goes out (so a day before you actually receive it). They don’t inform you of this procedure in any way. Nowhere do they tell you that you *have* to contact them to confirm you fixed the problem, and they don’t check it themselves either. As a result I was without internet for a full day. They did admit their mistake and gave me some extra TV channels and an HBO subscription for three months to make up for it, but I wouldn’t count on them not doing the same thing again. So be warned: if something like this happens to you and you solve it, let your provider know.