Netenberg's Fantastico did not live up to my expectations at all, so I decided to drop it completely. Despite the correspondence we had a long time ago and the guidelines I gave them for bringing their product up to a level competitive with Softaculous and Installatron, it came to the point where their plugin had to be uninstalled from my cPanel servers. Since there are no instructions on how to remove this misunderstanding, I opened a support ticket, and they gave me the following instructions:

rm -rf /var/netenberg/fantastico_de_luxe/
rm -rf /usr/local/cpanel/whostmgr/docroot/cgi/fantastico/
rm -rf /usr/local/cpanel/3rdparty/fantastico*
rm -rf /usr/local/cpanel/base/frontend/*/fantastico
rm -f /usr/local/cpanel/base/frontend/x/cells/fantastico.html
rm -f /usr/local/cpanel/whostmgr/docroot/cgi/addon_fantastico.cgi

I executed the commands to clean up their files, and then I realized something important: the guys didn't mention at all how to unregister their plugin from the control panel 🙄 😆. A nasty trick, but it happens, so I had to dig further. There was also a pile of their files left behind, apparently in the hope that I would return as a customer, since, as their support put it, they have the most installation scripts (I don't think that is the most important thing, frankly) :mrgreen: . So let's continue with the complete gutting:

The first command below is the most important step and must be run before deleting everything else; otherwise you will end up without the plugin but with its icon still in the control panel, because it remains registered.

/usr/local/cpanel/bin/unregister_cpanelplugin /var/netenberg/fantastico_f3/fantastico_f3
rm -rf /usr/local/cpanel/3rdparty/fantastico_f3
rm -rf /usr/local/cpanel/base/frontend/*/fantastico_f3
rm -rf /usr/local/cpanel/bin/fantastico_f3.cpanelplugin
rm -rf /usr/local/cpanel/whostmgr/addonfeatures/fantastico_f3
rm -rf /usr/local/cpanel/whostmgr/addonsfeatures/fantastico_f3
rm -rf /usr/local/cpanel/whostmgr/docroot/addon_plugins/fantastico_f3.jpg
rm -rf /usr/local/cpanel/whostmgr/docroot/cgi/addon_fantastico_f3.php
rm -rf /usr/local/cpanel/whostmgr/docroot/cgi/fantastico_f3
rm -rf /var/cpanel/apps/fantastico_f3_cpanel.conf
rm -rf /var/cpanel/apps/fantastico_f3_whm.conf
rm -rf /var/netenberg/fantastico_f3
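
Trust but verify: after running the commands above, a quick sweep for leftovers can't hurt. This is my own addition, not part of the support instructions; an empty result means the cleanup is complete.

```shell
# List anything still mentioning fantastico under the relevant trees.
for dir in /usr/local/cpanel /var/cpanel /var/netenberg; do
    if [ -d "$dir" ]; then
        find "$dir" -name '*fantastico*' 2>/dev/null
    fi
done
```

The same pattern catches both the old fantastico_de_luxe remnants and the newer fantastico_f3 ones.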

In case you missed the unregister_cpanelplugin step and have already deleted the files, you will have to pull the sources down again and repeat it:

mkdir --parents /var/netenberg/fantastico_f3
cd /var/netenberg/fantastico_f3 && curl -O
cd /var/netenberg/fantastico_f3 && tar --bzip2 --extract --file sources.tar.bz2
/usr/local/cpanel/bin/unregister_cpanelplugin  fantastico_f3
rm -rf /var/netenberg/

Voilà, Fantastico is now out of the game and you can sleep peacefully. The reasons I replaced it with a competing product are three, and they are very simple:

  • They do not have an API I can talk to when I want to build extensions or any other magic
  • They do not have hooks for key actions, so I cannot attach my own functionality there
  • Their support is rather slow and not very adequate in its answers

P.S. The paper_lantern and x3 themes had a leftover icon, which goes away with:

rm  /usr/local/cpanel/base/frontend/paper_lantern/dynamicui/dynamicui_fantastico_f3.conf
rm /usr/local/cpanel/base/frontend/x3/dynamicui/dynamicui_fantastico_f3.conf

Some programmers will simply never learn to write standards-compliant code. I noticed a lot of error_log files in which a huge number of silly warnings and notices about PHP standards violations had accumulated. It is generally difficult to explain to a customer that the code they deployed is sloppy and needs to be fixed; in my experience, users stop caring about error logs the moment their code works. The radical approach is to disable error_log files entirely and let whoever wants them turn them back on, but that would inconvenience many users. That is why I go for approach number two: admin superpowers, i.e. one line of bash. Find files named error_log larger than 5MB (I keep my threshold higher, although 1MB is more than enough) and delete them weekly. The effect is achieved trivially with find:

find /home/ -name error_log -size +5M -type f -delete
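
Before pointing -delete at /home, it is worth a rehearsal. Here is a sketch on a throwaway directory (the paths and sizes are made up for the demo): one oversized error_log survives the filter, one small one does not.

```shell
# Rehearse the cleanup on a temp dir: one 6MB error_log, one 1KB one.
demo=$(mktemp -d)
mkdir -p "$demo/site1" "$demo/site2"
dd if=/dev/zero of="$demo/site1/error_log" bs=1M count=6 2>/dev/null
dd if=/dev/zero of="$demo/site2/error_log" bs=1K count=1 2>/dev/null

# Preview the victims with -print first, then delete for real with -delete.
find "$demo" -name error_log -size +5M -type f -print
find "$demo" -name error_log -size +5M -type f -delete
```

Swapping -delete for -print on the real command is also a handy way to audit what the weekly job will remove.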

All that remains is to drop this into cron to run once a week and we have a durable solution. In my case, 1 AM every Sunday seems fine.

0 1 * * 0 find /home/ -name error_log -size +5M -type f -delete >/dev/null 2>&1

Anyone who deals with web hosting professionally knows the threat posed by users infected with malware, web shells, etc. The usual tool for the job is maldet, a script that is not bad at all. It is distinguished by three things:

  1. It is terribly slow.
  2. It is terribly slow, and if you put it in monitoring mode it will bog down your server.
  3. It maintains its own database of md5/hex signatures for malicious code.

It is this last feature that makes it useful, because, among other things, you can submit files that have not been detected so far and they will enter the database at a later stage. As I said in points 1 and 2, its speed is shockingly low: at low machine load, 70k files were scanned in about an hour and a half. For this reason I started helping my good friend ShadowX with Malmo, an alternative to maldet written in Python with a little more flexibility. Unfortunately, for lack of time (mainly, but not only), we have not completed the project, and at the moment it is not very usable; there are a lot of bugs that need to be cleaned out. In the past few days I had problems with clients infected with CryptoPHP who had huge public_html directories, 60k+ inodes per user. Since over 200k files in total had to be scanned, which would take roughly 5+ hours, I decided to tune maldet's configuration and reduce the number of files to be scanned to a reasonable figure. As I opened the conf, I noticed the following lines:

# Attempt to detect the presence of ClamAV clamscan binary
# and use as default scanner engine; up to four times faster
# scan performance and superior hex analysis. This option
# only uses ClamAV as the scanner engine, LMD signatures
# are still the basis for detecting threats.
# [ 0 = disabled, 1 = enabled; enabled by default ]

Interesting… Apparently maldet can use ClamAV as its scanner engine. ClamAV is not famous for its speed either, but why not give it a try. I quickly installed it:

/scripts/update_local_rpm_versions --edit target_settings.clamav installed

/scripts/check_cpanel_rpms --fix --targets=clamav

I ran maldet on a small folder and saw no difference in speed or behavior: it used its own Perl scanner instead of ClamAV. After a short dig through the maldet source, I found the following lines:

 clamscan=`which clamscan 2> /dev/null`
 if [ -f "$clamscan" ] && [ "$clamav_scan" == "1" ]; then
        eout "{scan} found ClamAV clamscan binary, using as scanner engine..." 1
    for hit in `$clamscan -d $inspath/sigs/rfxn.ndb -d $inspath/sigs/rfxn.hdb $clamav_db -r --infected --no-summary -f $find_results 2> /dev/null | tr -d ':' | sed 's/.UNOFFICIAL//' | awk '{print$2":"$1}'`; do

Hmmm. I ran which clamscan myself and to my great surprise found that ClamAV is not in the PATH at all; cPanel insists on keeping its binaries only in /usr/local/cpanel/3rdparty/bin/ and running them from there. A quick ln solved the problem:

ln -s /usr/local/cpanel/3rdparty/bin/clamscan /usr/bin/clamscan

On re-scanning, maldet now reports:

{scan} found ClamAV clamscan binary, using as scanner engine...

With ClamAV as the engine, maldet finishes its scan 3-5 times faster than before. In my test it chewed through 70k inodes in about 25 minutes, roughly three and a half times faster than before.

By default, a Munin installed from cPanel lacks a few nice configs that we have to add by hand. For me, one of them is monitoring the temperature of the disks.

In general, the configuration is trivial

1. We need to determine the type of our disks. It can be one of the following: ata, scsi, sat[,auto][,N][+TYPE], usbcypress[,X], usbjmicron[,x][,N], usbsunplus, marvell, areca,N, 3ware,N, hpt,L/M/N, megaraid,N, cciss,N, auto, test. The easiest way to find out is to cat /proc/ide or /proc/scsi. In my case:

# cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA      Model: WDC WD1003FBYZ-0 Rev: 01.0
  Type:   Direct-Access                    ANSI  SCSI revision: 05
Host: scsi1 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA      Model: WDC WD1003FBYX-0 Rev: 01.0
  Type:   Direct-Access                    ANSI  SCSI revision: 05
Host: scsi4 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA      Model: TOSHIBA DT01ACA1 Rev: MS2O
  Type:   Direct-Access                    ANSI  SCSI revision: 05



As you can see, I have three disks of type ATA.
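
If you want to double-check the count, the Direct-Access lines can simply be counted. Below, the listing from above is wrapped in a here-doc so the pipeline can be tried anywhere; on the real server the equivalent is grep -c 'Direct-Access' /proc/scsi/scsi.

```shell
# Count attached direct-access devices from /proc/scsi/scsi-style output.
scsi_listing() {
cat <<'EOF'
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA      Model: WDC WD1003FBYZ-0 Rev: 01.0
  Type:   Direct-Access                    ANSI  SCSI revision: 05
Host: scsi1 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA      Model: WDC WD1003FBYX-0 Rev: 01.0
  Type:   Direct-Access                    ANSI  SCSI revision: 05
Host: scsi4 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA      Model: TOSHIBA DT01ACA1 Rev: MS2O
  Type:   Direct-Access                    ANSI  SCSI revision: 05
EOF
}
scsi_listing | grep -c 'Direct-Access'   # -> 3
```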

2. To start monitoring the temperature, we have to describe our disks for the munin node. In the file /etc/munin/plugin-conf.d/hddtemp_smartctl you add entries of the following type:

# cat /etc/munin/plugin-conf.d/hddtemp_smartctl
[hddtemp_smartctl]
user root
env.drives sda sdb sdc
env.args_sda -d ata
env.args_sdb -d ata
env.args_sdc -d ata


We can test our future config as follows:

# env drives="sda sdb sdc" args_sda="-d ata" args_sdb="-d ata" args_sdc="-d ata"  /etc/munin/plugins/hddtemp_smartctl
sda.value 32
sdb.value 33
sdc.value 33


If you get values back, everything is OK. If you get an error, check that everything is described correctly. Then restart your munin node and wait 10-15 minutes for some data to accumulate and the graph to start drawing. You can check /var/log/munin/munin-node.log for errors and easy troubleshooting.

If you want to receive an email at a critical disk temperature, you must add critical thresholds to the node's definition:

    use_node_name yes
    hddtemp_smartctl.sda.critical 55
    hddtemp_smartctl.sdb.critical 55
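
For context, these thresholds live inside the host section of /etc/munin/munin.conf on the munin master. A minimal sketch (the hostname and address are placeholders; the 55 °C critical value is from above, the warning level is my own suggestion):

```
[server.example.com]
    address 127.0.0.1
    use_node_name yes
    hddtemp_smartctl.sda.warning 50
    hddtemp_smartctl.sda.critical 55
    hddtemp_smartctl.sdb.warning 50
    hddtemp_smartctl.sdb.critical 55
```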

Today I decided to run some tests on a clean cPanel installation, for which I needed several users. Since I did not want to burden the production servers with batch archiving and file transfers, I used the archives from the previous evening. I moved all the archives to /home and found that cPanel does not offer restoring more than one account at a time, either through the GUI or the CLI. Since the GUI gave me no way to do them in bulk, I decided to outwit it with the restorepkg CLI script. Its use is extremely simple:

/scripts/restorepkg username.tar.gz

and the action is repeated for each user separately. When I tried to use * instead of the username, the script flatly refused, so it has to be approached a little more elegantly:

archives=$(ls /home/ | grep tar.gz)

for archive in $archives; do
    /scripts/restorepkg --force $archive
done


Now a quick explanation: we build a list of all the archives and store it in the variable archives, then walk through the list item by item, starting the restore for each archive separately. Goodness knows why the guys from cPanel have not shipped such a solution for multiple files themselves.
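
A slightly more robust variant (my own sketch, not the exact commands from that day) globs instead of parsing ls, which survives odd filenames; restorepkg is echoed rather than executed here so the loop can be dry-run first:

```shell
# Dry-run a bulk restore of every cPanel archive in /home.
# Remove the echo once the printed commands look right.
for archive in /home/*.tar.gz; do
    [ -e "$archive" ] || continue   # the glob matched nothing
    echo /scripts/restorepkg --force "$archive"
done
```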