Using PHP logging

Every PHP developer learns early in their career to use var_dump(), print_r(), and similar methods of debugging code. These are easy to use and invaluable when writing anything but the simplest applications. However, sometimes dumping a variable to the output is not a good option.

For example, at the moment I’m working on an API for a Symfony application, and I’m using curl from the terminal to interact with it. The problem is that when Symfony is in development mode, an error produces pages of pretty HTML meant for the browser, which is inconvenient in the terminal. Piping the output to ‘more’ and paging through the HTML to find the error message is a slow and tedious process. Another example is working on an AJAX application that communicates with the server in JSON or XML: dumping debug data into the server responses would break the JSON or XML.

In these cases, it is best to log errors and debug data to a log file. In this example I’ll be using the default log file set in php.ini. You could also send the data to a file of your own choosing, for example:
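A minimal sketch of what I mean (the debug.log filename is my choice; any writable path works):

```php
<?php
// Illustrative example: log errors to a file next to this script.
// The filename "debug.log" is an assumption, not a fixed convention.
ini_set('log_errors', '1');
ini_set('error_log', __DIR__ . '/debug.log');
```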


This would create the specified file right next to the PHP file you are debugging, which is convenient when all you have to work with is an SSH terminal on a remote server.

So, we must first make sure that error logging is enabled and that PHP can write to the specified log file. Two directives are important here: log_errors must be set to On, and error_log must point to a file. The error_reporting level should also be set appropriately (for example, E_ALL during development). Please note that you must edit the right php.ini file; phpinfo() reveals which php.ini files are loaded, so check that first if in doubt. After you reload the server configuration, check whether PHP can actually log errors to the file: load a PHP file that is intentionally faulty and see if the error was logged. One of the common issues that causes errors not to be logged is permissions. Your web server must be able to write to the PHP log file, and so must you if you are running CLI PHP scripts. On development servers you can set 666 permissions on the log file, but on production servers you should be more careful.
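For reference, the relevant php.ini lines might look like this (the log path is just an example; phpinfo() will tell you which php.ini file is actually loaded):

```ini
; Example php.ini settings -- the error_log path is illustrative
log_errors = On
error_log = /var/log/php_errors.log
error_reporting = E_ALL
```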

Now, this will catch all the errors that would otherwise be hard to see, but you can also log arbitrary data to this log file, and that is where the real power lies. The first step is to dump the variable you are debugging into a string:

$dump = var_export($a_variable, true);

This step is necessary when dumping arrays or objects, since error_log() expects a string. The second step is to write the dump to the log file:
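Continuing from the var_export() call above, the dump can be written with error_log() (the sample array is mine, for illustration):

```php
<?php
// Sample data standing in for whatever you are debugging.
$a_variable = ['id' => 42, 'name' => 'test'];

// Render the structure as a readable string...
$dump = var_export($a_variable, true);

// ...and append it to the log file configured in php.ini.
error_log($dump);
```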


That’s all there is to it.

The log file is just a text file, any text editor can open it and many text editors will automatically reload it once it changes, so you can see it in (almost) real time. However, I use the terminal whenever I can, so I just use the tail command in streaming mode (tail -f).


When mv command fails

I’m a lazy blogger and it’s been a long pause, but here I go again…

I’ve been using Linux for quite some time. In fact, I’ve banished Windows from all my machines and I’m running only Linux now. So, I considered myself fairly familiar with the command line. However, surprises are still possible.

Today I was uploading a large number of files to a web server. Because of the way the server is set up, the application directories are writable only by sudo users, and I didn’t find a way to make Filezilla use the sudo command. SCP wasn’t an option either. So I had to upload everything to my home directory on that machine, then log in via SSH and move the files. Easy, right? It’s just:

sudo mv ~/files/* /var/www/app/files/

Well, not quite. This command returned an error:

bash: /bin/mv: Argument list too long

I checked what I’d typed: mv expects two arguments, a source and a destination, and that’s exactly what I gave it. How could this error be reported when I hadn’t made a mistake?

A short online search gave me a hint: mv can fail when it has to move too many files. More precisely, the shell expands the wildcard into one argument per file before mv ever runs, and when the expanded list exceeds the kernel’s ARG_MAX limit, the command cannot be executed at all. I checked the folder where I’d uploaded the files, and there were several thousand. By the way, you can count the files in a directory with this command: “ls -l | wc -l”. Subtract one from the number you get, because ls -l prints a leading “total” line in addition to one line per entry.
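To see the counting in action, here is a small sketch (the directory and file names are made up for the demonstration); using ls -1 avoids the “total” header entirely:

```shell
# Create a throwaway directory with three files (names are illustrative).
mkdir -p /tmp/count_demo
touch /tmp/count_demo/a.txt /tmp/count_demo/b.txt /tmp/count_demo/c.txt

# ls -1 prints one line per entry and no "total" header,
# so wc -l gives the file count directly.
ls -1 /tmp/count_demo | wc -l
```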

So, what do you do when the mv command has failed and nothing else is available? The dumb way of solving this would be to feed mv one small subset of the files at a time, like this:

sudo mv ~/files/0* /var/www/app/files/
sudo mv ~/files/1* /var/www/app/files/
sudo mv ~/files/2* /var/www/app/files/

In my case, the file names used the hex character set (0-9, a-f), so there were only 16 possible first characters. With that few commands this approach would have been possible, but it’s still tedious. In my opinion, the best way is to use rsync. Another good way is a one-liner shell script; that approach is also useful for other actions, such as changing permissions.
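Another option is to let find batch the arguments for you (a sketch assuming GNU mv for the -t option; the paths mirror the example above). Because “-exec … +” groups as many files per mv invocation as the argument limit allows, ARG_MAX is never exceeded:

```shell
# Move every regular file from the upload directory in batches.
# "-exec ... +" hands mv as many paths per invocation as will fit,
# so the shell never builds one oversized argument list.
sudo find ~/files -maxdepth 1 -type f -exec mv -t /var/www/app/files/ {} +
```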

Here’s an example of a one-liner shell script:


for file in source/*; do cp "$file" dest/; done
