As you know, CentOS 5 has been EOL (End-Of-Life) since March 31st 2017, which leads to the following very interesting problem:

# yum update
Loaded plugins: fastestmirror, security
Loading mirror speeds from cached hostfile
YumRepo Error: All mirror URLs are not using ftp, http[s] or file.
 Eg. Invalid release/
YumRepo Error: All mirror URLs are not using ftp, http[s] or file.
 Eg. Invalid release/
removing mirrorlist with no valid mirrors: /var/cache/yum/extras/mirrorlist.txt
Error: Cannot find a valid baseurl for repo: extras

The problem, in short, is that the CentOS 5 mirror lists are already gone, and when we try to fetch the content directly we get the following refusal:

# curl ''
Invalid release

In general, the most sensible idea is to reinstall the box with a normal distribution that supports a working distribution upgrade. Unfortunately that is not the case here, and it is not an option on the table at all. So we have to play it a bit dirty – we switch to the Vault mirror. In a moment of perfect clarity and common sense I am fully aware that I will not receive any updates, but that is not the point of the exercise – we simply want a working yum with which I can install a package I need. To that end we comment out all the mirrorlist variables and add a baseurl in /etc/yum.repos.d/CentOS-Base.repo. In the end we get a yum repo of the following form:

[base]
name=CentOS-$releasever - Base
baseurl=http://vault.centos.org/5.11/os/$basearch/

#released updates
[updates]
name=CentOS-$releasever - Updates
baseurl=http://vault.centos.org/5.11/updates/$basearch/

#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras
baseurl=http://vault.centos.org/5.11/extras/$basearch/

Finally we run a yum clean all && yum update. If everything finishes without an error, we have successfully completed the scheme and can safely install the obsolete packages.
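The repo edit can also be scripted. Here is a minimal sketch, demonstrated against a throwaway copy of the file – on a real box you would point the sed at /etc/yum.repos.d/CentOS-Base.repo, and the 5.11 Vault path assumes you are on the last 5.x release:

```shell
# Demonstrated on a sample stanza; the mirrorlist URL below is just a stand-in
# for whatever the stock file contains.
repo=$(mktemp)
cat > "$repo" <<'EOF'
[base]
name=CentOS-$releasever - Base
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
gpgcheck=1
EOF

# comment out every mirrorlist line and add a Vault baseurl under [base]
sed -i \
    -e 's/^mirrorlist=/#mirrorlist=/' \
    -e '/^\[base\]/a baseurl=http://vault.centos.org/5.11/os/$basearch/' \
    "$repo"

cat "$repo"
```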

Mdadm is my favorite friend, but there is something that annoys me terribly – the periodic checks and resyncs of the RAID array's health. If, for example, there is data on bad sectors, this in turn crushes the machine with IO. After some digging I found the culprits – cron jobs that usually start around 1 o'clock every Sunday. The idea is clear – making sure the array is in perfect condition and there is no drama with the data. That is fine, but weekly is too often for my taste, so I reconfigured it to run on the first day of each month.

For RedHat-based derivatives the path to the cron job is /etc/cron.d/raid-check; for Debian-based distros it is /etc/cron.d/mdadm. The cron jobs in turn call the bash scripts /usr/sbin/raid-check for CentOS and friends and /usr/share/mdadm/checkarray for Debian and derivatives. Parameters for the scripts are taken from /etc/sysconfig/raid-check or /etc/default/mdadm respectively, where the check can also be disabled entirely – not a very clever idea.
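For reference, the monthly schedule I ended up with looks roughly like this – a sketch of the CentOS /etc/cron.d/raid-check entry; the stock file may set extra environment variables, so adjust it rather than replace it blindly:

```shell
# /etc/cron.d/raid-check – run at 01:00 on the 1st of every month
# instead of the default weekly Sunday run
0 1 1 * * root /usr/sbin/raid-check
```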


Anyone who deals professionally with web hosting knows the threat posed by users infected with malware, web shells and so on. In the general case maldet is used – not a bad script. It is distinguished by 3 things:

  1. It's terribly slow.
  2. It's terribly slow, and if you put it in monitoring mode it will bring your server to its knees.
  3. It maintains its own database of md5/hex definitions for bad code.

It is this last feature that makes it useful, because among other things you can submit files that have not been detected so far, and they will enter the databases at a later stage. As I said in points 1 and 2, its speed is shockingly low – at low machine load, 70k files were scanned in about an hour and a half. For this reason I started helping my good friend ShadowX with Malmo – an alternative to maldet written in Python with a little more flexibility. Unfortunately, for lack of time (mainly, but not only) we have not completed the project, and it is not very usable at the moment – there are a lot of bugs that need to be cleaned up. In the past few days I had problems with clients infected with CryptoPHP who had huge public_html directories – 60k+ inodes per user. Since over 200k files in total had to be scanned, which would take roughly 5+ hours, I decided to tune maldet's configuration to reduce the number of files to be scanned to a reasonable count and time. As I dug through the conf, I noticed the following lines:

# Attempt to detect the presence of ClamAV clamscan binary
# and use as default scanner engine; up to four times faster
# scan performance and superior hex analysis. This option
# only uses ClamAV as the scanner engine, LMD signatures
# are still the basis for detecting threats.
# [ 0 = disabled, 1 = enabled; enabled by default ]
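Right below that comment block in conf.maldet sits the switch itself; if memory serves, the variable looks like this (check your own conf before relying on it):

```shell
scan_clamscan="1"
```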

Interesting… Apparently there is an option to use ClamAV – not famous for its speed either, but why not give it a try. I quickly installed it through cPanel's package management scripts:

/scripts/update_local_rpm_versions --edit target_settings.clamav installed

/scripts/check_cpanel_rpms --fix --targets=clamav

I ran maldet on a small folder – no difference in speed or behavior; it used its perl scanner instead of clamav. After a short dig through the maldet source, I found the following lines:

clamscan=`which clamscan 2> /dev/null`
if [ -f "$clamscan" ] && [ "$clamav_scan" == "1" ]; then
    eout "{scan} found ClamAV clamscan binary, using as scanner engine..." 1
    for hit in `$clamscan -d $inspath/sigs/rfxn.ndb -d $inspath/sigs/rfxn.hdb $clamav_db -r --infected --no-summary -f $find_results 2> /dev/null | tr -d ':' | sed 's/.UNOFFICIAL//' | awk '{print$2":"$1}'`; do

Hmmm. I ran a which clamscan and to my great surprise found that clamav is not in the PATH at all – silly cPanel keeps it only in /usr/local/cpanel/3rdparty/bin/, from where it uses its own binaries. A quick ln solved the problem:

ln -s /usr/local/cpanel/3rdparty/bin/clamscan /usr/bin/clamscan
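The symlink works because maldet's engine pick is nothing more than a PATH lookup. A small self-contained sketch of that logic, with a stub standing in for the real clamscan binary:

```shell
# Stub clamscan in a temporary directory, standing in for
# /usr/local/cpanel/3rdparty/bin/clamscan
bindir=$(mktemp -d)
printf '#!/bin/sh\necho stub\n' > "$bindir/clamscan"
chmod +x "$bindir/clamscan"

# The same check maldet does: does `clamscan` resolve via $PATH?
pick_engine() {
    if ( PATH="$1"; command -v clamscan >/dev/null 2>&1 ); then
        echo "clamav"
    else
        echo "internal perl scanner"
    fi
}

pick_engine "/nonexistent"   # no clamscan in PATH -> internal perl scanner
pick_engine "$bindir"        # after the ln -s     -> clamav
```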

On re-scanning, maldet now reports:

{scan} found ClamAV clamscan binary, using as scanner engine...

With ClamAV as its engine, maldet finishes its scan 3-4-5 times faster than before. The test showed it: it chewed through 70k inodes in about 25 minutes, roughly three and a half times faster than before.

Before I start with another random rant against the OS in question, I should say that I administer it every day and know it first-hand quite well. Today I took the time to test the new, magical, unheard-of and unseen feature for a (pseudo) distribution upgrade 😀 . The first thing that amazed me was that RedHat, in their infinite wisdom, have decided to stop supporting the 32-bit x86 architecture 🙄 . I am fully aware that it is 2014 and server processors limited to 32-bit instructions have been gone for a long time. But what about the users of small VPSes – x64 simply eats more RAM, and however you look at it, if you have a thin virtual machine with 512MB-1GB RAM, you fight for every megabyte of it and you will not waste 20-30% of it just to use the bigger instruction set.

I was annoyed, because I had x86 CentOS installed and had to pull an x64 image. I immediately saw a difference in the ISOs – ~100MB more for the minimal 6.5. I cursed once more. I reinstalled the virtual machine and decided to see how well RedHat had done their job – I put /var and /usr on separate LVM partitions 😈 . After the installation I updated all the packages and installed apache, php, mysql and bind – I was curious whether the services would survive the upgrade. Like a good student, I opened the CentOS upgrade guide and diligently followed it step by step. When I got to the point where the real upgrade starts, the wretched tool cut me off with the message that I have a critical problem 🙄 . Reviewing the detailed output – well, well: /usr cannot be on a separate partition 😆 I knew Jim Whitehurst and company would not disappoint me. Apart from the "critical" problem there were a lot of messages about unsigned packages, config files that do not match and so on. WTF – I had not even used third-party repositories, everything was pulled from their own mirrors, I had made no custom settings, just a plain yum install. Everything was already clear, so I unceremoniously forced the upgrade.
I dutifully rebooted once more when the script finally prompted me, and everything died because of the /usr partition. I was too lazy to try to fix it – everything was for scientific purposes anyway, and there is no way I would upgrade a production server at the moment. I reinstalled my virtual machine, and this time crammed everything into one root partition. I had also learned my lesson: no updates, no additional services – straight to the upgrade after installation. In the final step, the annoying dialogue reappeared to tell me that I have quite serious problems – invalid packages, configs and so on – but that it could continue. I knew from the start that they had not done things right. I rebooted again and waited – oh what a miracle, the upgrade finished successfully. And everything worked, or at least the system booted; I never tried to install additional packages, but the halt command hung – hah, there still had to be a bug somewhere 💡 .

After all this fuss, I decided to install a clean CentOS 7 to see whether it would growl about a /boot partition in LVM – 6.5 does not allow such audacity. I booted my ISO and was, to put it mildly, shocked by the installer – everything was extremely inconveniently "tidied up", completely illogical, all in the name of looking pretty. After some struggle I reached the cherished goal, and just as I was about to install – bang: I have to take my /boot out of the LVM 👿 This is extremely unserious and annoying – if for some reason you forget to watch the size of the 200MB boot partition and old kernels pile up, you know what happens.

In general, I expected nothing, and I am still disappointed with CentOS.

However much I curse RHEL and the CentOS mess, some things in them are thought out quite competently. For example, adding a large number of additional IPs is quite a pleasant task. Normally, if I had to add many addresses, I would write a small bash script doing the operation in a loop, because by hand it is no fun at all. In CentOS/RHEL, people have come up with something quite nice – a range file. In short, we create a file /etc/sysconfig/network-scripts/ifcfg-eth0-range0. Here we replace eth0 with the name of the network adapter if it is not eth0. Then we add the following content


For example:

IPADDR_START=192.168.0.240
IPADDR_END=192.168.0.250
NETMASK=255.255.255.0
CLONENUM_START=0

where the variables are

  • IPADDR_START – the starting IP address
  • IPADDR_END – the final IP address
  • NETMASK – the network mask
  • CLONENUM_START – the number from which the aliases start counting – eth0:0 in our case
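To make the mapping concrete, here is a small sketch – not the actual Red Hat ifup script, just an illustration with made-up addresses – of how those variables expand into aliases:

```shell
# Example values; each address between IPADDR_START and IPADDR_END becomes an
# eth0:N alias, with N counting up from CLONENUM_START.
IPADDR_START=192.168.0.240
IPADDR_END=192.168.0.243
CLONENUM_START=0

base=${IPADDR_START%.*}     # 192.168.0
first=${IPADDR_START##*.}   # 240
last=${IPADDR_END##*.}      # 243

clone=$CLONENUM_START
for octet in $(seq "$first" "$last"); do
    echo "eth0:$clone -> $base.$octet"
    clone=$((clone + 1))
done
```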