Today I'll write a light read about PHP output caching for HTML. Here we're talking about caching the output of our code, not about caching scripts at the opcode level with eAccelerator, which I wrote about earlier. So what are we talking about? Let's quickly recall how PHP works: we make a request to the web server, it accepts the parameters we submit and hands them to the PHP script, which is compiled and spits out the result as HTML. That's the picture in fairly general terms. The idea here is to skip queries and skip large (or not so large) blocks of code by serving the already generated output directly. The benefits are obvious – reduced execution time, less workload and resource consumption. It's not the discovery of hot water, nor anything that complicated. There are many classes for this purpose, such as PEAR's Cache_Lite, which has great functionality, but I'm thinking of writing my own in the future with a much lighter structure, tailored to my caching requirements. For now we'll look at the most rudimentary variant, using the Output Control Functions. So let's cache something –
ob_start(); // start cache: all output after this point is buffered
echo 'Some dynamic output';
echo 'Some other dynamic output ...';
$output = ob_get_contents(); // assign the buffered output to a variable
ob_end_clean(); // clear the buffer and stop caching
The code above is trivial, but let's explain what happened. First we declare from which point in the code the caching starts. Then we generate the output of the code in the standard way. The generated output is then assigned to a variable that will be available later – whether you persist it in a file or in the session is your decision. Finally, we clear the buffer and stop caching. Quite a trivial operation; if the output is normally generated by huge blocks of code, we can save a lot of CPU time by caching it for a while or for a session. It's all a matter of what you want – a public cache or a per-user one.
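To make the file-based variant concrete, here is a minimal sketch built on the same output-control functions; the cache path and lifetime (`/tmp/cache_file.html`, 60 seconds) are my own arbitrary choices, not anything from a particular library:

```php
<?php
// Hypothetical cache location and lifetime - adjust to taste.
$cacheFile = '/tmp/cache_file.html';
$ttl       = 60; // seconds

if (is_file($cacheFile) && (time() - filemtime($cacheFile)) < $ttl) {
    // Fresh cache: serve the stored output and skip the heavy work.
    readfile($cacheFile);
} else {
    ob_start();                      // start buffering all output
    echo 'Some dynamic output';      // pretend this is expensive to generate
    echo 'Some other dynamic output ...';
    $output = ob_get_contents();     // grab the buffered output
    ob_end_clean();                  // discard the buffer, stop caching
    file_put_contents($cacheFile, $output); // persist it for the next request
    echo $output;                    // and still show it to this visitor
}
```

The same pattern works with `$_SESSION` instead of a file if you want a per-user cache.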
The next article may be the tip of the iceberg, but as I've always said, I'm a better admin than a coder. Yesterday, while scribbling an AJAX script, I had to send some data and use a hash to validate it, because the script doesn't share the same $_SESSION array and things get slightly insecure. So I do the following trick: I take all the values submitted via POST or GET, make an md5 hash of the concatenated parameters, and then compare it. Overall not a bad scheme, I think. Hashing algorithms for this purpose are MD5, SHA and others. So far things are clear – we'll use MD5 to hash the parameters (as I actually do). Say we have 3 parameters passed through GET: i = 1, n = 2, m = 3, so the string to hash is 123, which gives us the following MD5 hash: 202cb962ac59075b964b07152d234b70. So far nothing remarkable – this hash will fall within seconds to any attack. Here comes the salt and pepper of my simple idea. Let's say I take the first and last symbol of the string and swap their places; this way we get the hash 002cb962ac59075b964b07152d234b72, and if someone hasn't read our code to see what kind of silliness is being done, things get rough when they try to crack it. The real hash is different, so even if it is sniffed, it is fairly useless to the attacker. But why stop there – we can divide the hash into several blocks: an MD5 hash is 32 characters long, so if we divide it into 4 blocks of 8 symbols and shuffle their places, it becomes even more unpleasant. By far the nicest effect is that visually it still looks like a standard md5 hash, and the evil hacker can try to break it as long as he wants. I'm no good at cryptography and can't claim this is anything fundamental, but I like how simple it is as an idea and implementation, and in practice it holds up much better than a plain MD5, which a decent video card can brute-force in no time.
Here is sample code for the first idea – swapping the first and last symbols – an elementary piece of code of 3 lines 🙂
$str = '202cb962ac59075b964b07152d234b70'; // md5('123')
$first = substr($str, 0, 1);  // first symbol
$last = substr($str, -1);     // last symbol
$rest = substr($str, 1, 30);  // the 30 symbols in between
$hash = $last . $rest . $first; // swap first and last
echo "The real hash is : $str <br> inverted hash is : $hash";
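The second idea – splitting the 32-character hash into 4 blocks of 8 symbols and moving them around – can be sketched like this. The particular permutation (simply reversing the block order) is my own arbitrary choice for illustration; any fixed shuffle works, as long as both sides of the comparison apply the same one:

```php
<?php
// Split a 32-character md5 hash into 4 blocks of 8 characters
// and reverse their order. Reversing is just an example permutation.
function shuffle_hash(string $hash): string
{
    $blocks = str_split($hash, 8);          // 4 blocks of 8 characters
    return implode('', array_reverse($blocks));
}

$hash = md5('123'); // 202cb962ac59075b964b07152d234b70
$obfuscated = shuffle_hash($hash);

echo "original : $hash\n";
echo "shuffled : $obfuscated\n"; // 2d234b70964b0715ac59075b202cb962
```

A nice side effect of reversing the block order is that it is its own inverse: applying `shuffle_hash()` a second time restores the original hash, so the same function serves both sides.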
Running an actively developed project without version control nowadays is complete madness. There are many options – Bazaar, Mercurial, Git, Subversion. If you expect me to explain here which version control system is better and why, I won't. We use git. There are many reasons – it's easy to set up, it's very flexible, and it was written by Linus Torvalds to manage the Linux kernel sources; the last two alone are at least 2 reasons 😉 . Today I had to create a new repository because a new project had started. I actually created a few repositories a long time ago when we needed them, and I had forgotten the fine points of it. I created the repository and committed a few files as the initial commit, and everything went fine. The repository setup itself was standard:
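The exact commands aren't shown in the original post; a minimal sketch of such a "standard" setup, assuming the repository lives in a directory called `project` and using placeholder identity values, would be:

```shell
# Create the repository and make the initial commit
# ("project" and the user identity are placeholders).
mkdir project && cd project
git init
git config user.email "you@example.com"
git config user.name "Your Name"
echo "hello" > README          # some first file to store
git add README
git commit -m "initial commit"
```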
So far, nothing wrong. Then I decided to test storing content from a remote machine, and when I tried to push, it exploded with the ugly message:
Pushing to git://gitHost/project
remote: error: refusing to update checked out branch: refs/heads/master
remote: error: By default, updating the current branch in a non-bare repository
remote: error: is denied, because it will make the index and work tree inconsistent
remote: error: with what you pushed, and will require 'git reset --hard' to match
remote: error: the work tree to HEAD.
remote: error: You can set 'receive.denyCurrentBranch' configuration variable to
remote: error: 'ignore' or 'warn' in the remote repository to allow pushing into
remote: error: its current branch; however, this is not recommended unless you
remote: error: arranged to update its work tree to match what you pushed in some
remote: error: other way.
remote: error: To squelch this message and still keep the default behaviour, set
remote: error: 'receive.denyCurrentBranch' configuration variable to 'refuse'.
! [remote rejected] master -> master (branch is currently checked out)
error: failed to push some refs to 'git://gitHost/project'
So obviously I'm trying to push to the currently checked-out branch of the project, and the software politely cuts me off. I have no intention of creating an additional branch, because the people involved in the project know what they're doing, among a number of other reasons. This is also the moment to note that I set the repository up rather incompetently, but that is another topic. The solution to the problem is trivial – in the .git/config of your project you need to add the following directive:
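The post doesn't show the directive itself, but the error message above names it: setting `receive.denyCurrentBranch` should do the job. In .git/config that looks like:

```
[receive]
    denyCurrentBranch = ignore
```

(`warn` is also accepted by git if you prefer to keep the warning; as the message notes, this is fine only if you arrange to update the work tree yourself.)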
Today I had to demonstrate a simulation through Cisco Packet Tracer on a machine it wasn't installed on. The stupid thing is that the Cisco simulator is built for x86 machines, and in my case the machine was x64. When trying to install it, it dies with the ugly message:
Attempting to install package now
dpkg: error processing PacketTracer-5.3_3.i386.deb (--install):
package architecture (i386) does not match system (amd64)
Errors were encountered while processing:
It's obvious that the Debian package refuses to install because it's built for a different architecture. From here the solution is clear: dpkg plus a forced installation to bypass the architecture error. The .bin file of the installer is really just a self-extracting archive that unpacks into a /tmp/selfextract.XXXXX folder, where XXXXX is an arbitrary string. This directory contains the .deb file of the Packet Tracer. The installation is done with the command
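The command itself is cut off in the post; based on the description (dpkg with a forced installation to bypass the architecture check), it would be something along these lines:

```shell
# Force dpkg to ignore the i386-vs-amd64 architecture mismatch.
# Run from the /tmp/selfextract.XXXXX directory holding the .deb.
sudo dpkg -i --force-architecture PacketTracer-5.3_3.i386.deb
```

Note that forcing the architecture only bypasses the check; the program still needs the 32-bit libraries it links against to actually run.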