DN42 is a wonderful project that lets you develop your skills without touching a production BGP environment and without needing expensive devices to build a lab for simulations with GNS3. At the same time it is not a purely laboratory environment with no real-world problems. I have been participating in the project with one node for about a year. One of the problems in the project maps 1:1 to the real world: people announcing prefixes they should not be announcing. Because I'm lazy and don't feel like typing filters by hand, I solved the problem with a simple bash script that generates a prefix-list named dn42 and fills it with the valid prefixes.

vtysh -c 'conf t' -c "no ip prefix-list dn42"; # drop the old prefix list

while read pl; do
    vtysh -c 'conf t' -c "$pl"; # insert the prefix list row by row
done < <(curl -s https://ca.dn42.us/reg/filter.txt | grep -e '^[0-9]' | awk '{ print "ip prefix-list dn42 seq " $1 " " $2 " " $3 " ge " $4 " le " $5}' | sed "s_/\([0-9]\+\) ge \1_/\1_g;s_/\([0-9]\+\) le \1_/\1_g")
vtysh -c 'wr' # write the new prefix list

The list of valid prefixes comes from https://ca.dn42.us/reg/filter.txt, straight from the registry, plus a few small modifications on my part so that a prefix list can be generated from it. The commands are executed in vtysh.
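To illustrate what the awk/sed pipeline actually emits, here is a sketch fed with two hypothetical rows in the filter.txt format (seq, action, prefix, min-len, max-len) instead of the live download:

```shell
# Sketch: the same grep/awk/sed pipeline as in the script above,
# run over two made-up sample rows rather than the downloaded file.
out=$(printf '%s\n' \
        '10 permit 172.20.0.0/14 21 29' \
        '20 permit 10.1.0.0/24 24 24' \
      | grep -e '^[0-9]' \
      | awk '{ print "ip prefix-list dn42 seq " $1 " " $2 " " $3 " ge " $4 " le " $5 }' \
      | sed 's_/\([0-9]\+\) ge \1_/\1_g;s_/\([0-9]\+\) le \1_/\1_g')
echo "$out"
```

The first row keeps its ge/le bounds; in the second, the sed collapses `ge 24 le 24` against the `/24` prefix length, so the entry matches only that exact prefix.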

For the 4th consecutive year, the conference on free software and hardware TuxCon will take place. For me personally it is the strongest conference held in Plovdiv, since it is not targeted only at developers: the target group is much bigger and the audience is very colorful. If memory serves, I don't think I have missed an edition so far. This year's edition is more special to me, since I have a presentation. I'm going to talk about dnsdist and whether it is useful for your infrastructure. I chose the topic myself. I felt the need to show it to the world, as it is relatively young, and so far I've barely found anything in it that I don't like. I don't remember the last time something new impressed me so much while at the same time working extremely well.

As you know, CentOS 5 reached EOL (End-Of-Life) on March 31, 2017, which leads to the following very interesting problem:

# yum update
Loaded plugins: fastestmirror, security
Loading mirror speeds from cached hostfile
YumRepo Error: All mirror URLs are not using ftp, http[s] or file.
 Eg. Invalid release/
YumRepo Error: All mirror URLs are not using ftp, http[s] or file.
 Eg. Invalid release/
removing mirrorlist with no valid mirrors: /var/cache/yum/extras/mirrorlist.txt
Error: Cannot find a valid baseurl for repo: extras


The problem is that the mirror-list service has already kicked CentOS 5 out, and an attempt to fetch the content directly is met with a refusal:

# curl 'http://mirrorlist.centos.org/?release=5&arch=i386&repo=os'
Invalid release


In general, the most prudent idea is to reinstall the box with a normal distribution that supports a working distribution upgrade. Unfortunately that is not my case, and that option is not on the table. So I had to play a little trick: start using the Vault mirror. I am fully aware that I will not receive any updates, but that is not the aim of the exercise; I just want a working yum so I can install the packages I need. For this purpose, comment out all mirrorlist variables and add baseurl entries in /etc/yum.repos.d/CentOS-Base.repo. In the end we get a yum repo of the following type:

[base]
name=CentOS-$releasever - Base
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
baseurl=http://vault.centos.org/5.11/os/$basearch/

#released updates
[updates]
name=CentOS-$releasever - Updates
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates
baseurl=http://vault.centos.org/5.11/updates/$basearch/

#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras
baseurl=http://vault.centos.org/5.11/extras/$basearch/

Finally, run yum clean all && yum update. If everything finished without an error, we have successfully completed the scheme and can safely install outdated packages.
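The manual edit can also be scripted. Below is a sketch that comments out mirrorlist and repoints baseurl at the Vault, assuming 5.11 (the final 5.x point release) is what you want. It is demonstrated on a temporary copy with one sample stanza; on the real box you would run the sed against /etc/yum.repos.d/CentOS-Base.repo:

```shell
# Work on a throwaway copy of a [base] stanza; the heredoc is quoted so
# $releasever and $basearch stay literal, as they appear in the repo file.
repo=$(mktemp)
cat > "$repo" <<'EOF'
[base]
name=CentOS-$releasever - Base
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
#baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/
EOF

# Comment out mirrorlist and uncomment/repoint baseurl at the Vault
sed -i \
    -e 's/^mirrorlist=/#mirrorlist=/' \
    -e 's|^#baseurl=http://mirror.centos.org/centos/$releasever|baseurl=http://vault.centos.org/5.11|' \
    "$repo"

cat "$repo"
```

The same two sed expressions applied to the real file handle the [updates] and [extras] stanzas as well, since the mirror.centos.org prefix is identical in all of them.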



The idea is identical to the one in the post VACUUM Firefox databases and REINDEX. Some time ago Mozilla and Debian dropped the rebranded versions of the products. During the migration from Icedove to Thunderbird it occurred to me that I hadn't defragmented the databases so far, and a serious amount of mail, email accounts and servers, users and passwords has passed through the mail client. The script is identical to the one from my previous post, with only a slight modification for where to look for the files 🙂

Linux version

for db in $(find ~/.thunderbird/$(grep Path ~/.thunderbird/profiles.ini | cut -d'=' -f2) -maxdepth 1 -name "*.sqlite" -type f); do
    echo "VACUUM ${db}"
    sqlite3 "${db}" VACUUM
    sqlite3 "${db}" REINDEX
done

Mac os version

for db in $(find ~/Library/Thunderbird/$(grep Path ~/Library/Thunderbird/profiles.ini | cut -d'=' -f2) -maxdepth 1 -name "*.sqlite" -type f); do
    echo "VACUUM && REINDEX ${db}"
    sqlite3 "${db}" VACUUM
    sqlite3 "${db}" REINDEX
done

Unlike Firefox, the Thunderbird profile folder is named in a much saner way (without a space), so it is not necessary to change the delimiter.

Ever since Google started favoring https sites, there has been mass installation of SSL everywhere possible. Overall, besides more load on the servers, we also get a degradation in speed. The good thing is that the HTTP/2 standard has been integrated in all major browsers and servers for more than a year and a half, and its support is sufficiently stable. Unfortunately there are no stable Debian packages that support HTTP/2 in the main http servers. The versions we need for working HTTP/2 are as follows:

The mix works great for me, and which one to use depends on whether you run Apache or Nginx. I still haven't played with enabling http2 on Apache on Debian 8, since I've never needed to, but it is in the backports repo, so it shouldn't be a big problem. With nginx I have already done it several times. Overall, the steps are few and relatively simple:

  1. Add the official nginx repo – Debian ships version 1.6.x. 🙄
  2. Install openssl from backports – currently 1.0.2k, which gives us the ALPN support needed for everything to work and be fast
  3. Install devscripts – this is the moment to mention that we will build our own package, because the official one is compiled against openssl 1.0.1t, which has no ALPN support; browsers don't respond well to that, and http2 works only if it is forced
  4. Increment the package version so that our hand-built package is not replaced the moment a new official version appears when we sync the sources

Let's start step by step

Add nginx repo

deb http://nginx.org/packages/debian/ codename nginx
deb-src http://nginx.org/packages/debian/ codename nginx
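Here `codename` must be replaced with the release codename, `jessie` for Debian 8. A sketch of generating the sources list (written to a temporary file here; on the real system the target would be /etc/apt/sources.list.d/nginx.list), with the repo signing key step noted as a comment:

```shell
codename=jessie   # Debian 8; replace with your release codename
list=$(mktemp)    # on the real system: /etc/apt/sources.list.d/nginx.list

cat > "$list" <<EOF
deb http://nginx.org/packages/debian/ ${codename} nginx
deb-src http://nginx.org/packages/debian/ ${codename} nginx
EOF

cat "$list"
# The repo is signed, so apt also needs the nginx signing key:
#   wget -qO - http://nginx.org/keys/nginx_signing.key | apt-key add -
```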

Add the openssl 1.0.2k dev library; otherwise we would build against 1.0.1t again, which misses the target:

echo 'deb http://ftp.debian.org/debian jessie-backports main' | tee /etc/apt/sources.list.d/backports.list

apt update && apt install libssl-dev -t jessie-backports


Now we add the tools and libraries needed for compiling nginx:

apt install devscripts

apt build-dep nginx

mkdir nginx-build

cd nginx-build

apt-get source nginx

If everything worked correctly, you should have a structure like:

~/nginx-build # ll
total 1004
drwxr-xr-x 10 root root   4096 Feb 21 18:37 nginx-1.10.3
-rw-r--r--  1 root root 103508 Jan 31 17:59 nginx_1.10.3-1~jessie.debian.tar.xz
-rw-r--r--  1 root root   1495 Jan 31 17:59 nginx_1.10.3-1~jessie.dsc
-rw-r--r--  1 root root 911509 Jan 31 17:59 nginx_1.10.3.orig.tar.gz

Enter the directory into which the nginx source code was unpacked, in my case nginx-1.10.3, and run the command that increments the version. I personally prefer to add 1 to the build number:

debchange --newversion 1.10.3-1

After you add a changelog entry, you can proceed to the actual compilation:

debuild -us -uc -i -I -b -j6

A little clarification on the command's options:

-us -uc tell the script not to “sign” the .dsc and .changes files. -i and -I make the script ignore version-control files. -b generates a binary-only package. -j, as with make, sets how many processes compile in parallel 🙂


Once the above process has completed, we should install our new packages. If you already have nginx installed, it is better to uninstall it first:

apt remove nginx nginx-*

It is also not a bad idea to make a backup of the nginx folder under /etc. In principle, when updating from 1.6.5 to 1.10.3 I didn't have any drama, but you never know. The new packages are in the parent directory and must be installed with a command like:

dpkg -i ../*.deb

If everything went smoothly, you just have to launch the nginx process and configure http2, which is not the purpose of this article.