Critical security bug in PHPMailer

This one is pretty bad. Attackers could execute code remotely on your server thanks to a flaw in the PHPMailer library. PHPMailer is used in a lot of places, including WordPress. I’ve disabled it preventively, as the most recent version of WP still ships the vulnerable PHPMailer. The downside is that my server will return a 500 error when someone posts a comment and I won’t be notified by email, but the comment is still posted and everything else works as expected.


Using PHP logging

Every PHP developer learns early in their career to use var_dump(), print_r() and similar ways of debugging code. These are easy to use and invaluable when writing anything but the simplest applications. However, sometimes dumping a variable to the output is not a good option.

For example, at the moment I’m working on an API for a Symfony application. I’m using curl from the terminal to interact with this API. The problem is that when Symfony is in development mode it prints a lot of pretty HTML to the browser when there’s an error, which is a bit inconvenient when you are in the terminal. Piping the output to ‘more’ and paging through the HTML in order to find the error message is a slow and tedious process. Another example is working on an AJAX application that communicates with the server in JSON or XML; dumping debug data into the server responses would break the JSON or XML.

In these cases, it is best to log errors and our debug data to a log file. In this example I’ll be using the default log file set in php.ini. You could also write data directly to a file of your choosing, for example:

file_put_contents(__DIR__.'/log-file', $debug_info);

This creates the specified file right next to the PHP file you are debugging, which is convenient when all you have to work with is an SSH session on a remote server.
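
Keep in mind that file_put_contents() overwrites the file on every call; if you want the file to behave like a running log, you can pass the FILE_APPEND flag (a small variation on the example above):

// Append to the log file instead of overwriting it on every call
file_put_contents(__DIR__.'/log-file', $debug_info.PHP_EOL, FILE_APPEND);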

So, we must first make sure that error logging is enabled and that PHP can write to the specified log file. Two, or rather three, PHP directives are important here: log_errors must be set to On, error_log must point to a file, and error_reporting must be set to a level that includes the errors you care about (E_ALL during development). Please note that you must edit the right php.ini file; phpinfo() reveals which php.ini files are loaded, so check that first if in doubt. After you reload the server configuration, check whether PHP can actually log errors to the file: load a PHP script that is intentionally faulty and see if the error was logged. One common issue that prevents errors from being logged is permissions. Your web server must be able to write to the PHP log file, and so must your user if you are running CLI PHP scripts. For development servers you can set 666 permissions on the log file, but for production servers you should be more careful.
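
If you are not sure which values are actually in effect, you can check (and, for a single script, temporarily override) them from PHP itself. This is just a convenience sketch; php.ini is the proper place for the permanent settings, and the log file path below is only an example:

// Check the effective settings (convenience sketch; php.ini is the permanent place for these)
var_dump(ini_get('log_errors'));    // should be "1" when logging is on
var_dump(ini_get('error_log'));     // should be the path to your log file

// Temporary overrides for the current script only; the path is just an example
ini_set('log_errors', '1');
ini_set('error_log', __DIR__.'/php-errors.log');
error_reporting(E_ALL);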

Now, this will catch all the errors that would otherwise be hard to see, but you can also log arbitrary data to this log file, and this is where the real power of this approach comes in. The first step is to dump the variable you are debugging into a string that you are going to log:

$dump = var_export($a_variable, true);

Passing true as the second argument makes var_export() return the dump as a string instead of printing it, which is what makes this work for arrays and objects. The second step is to log this string to the log file:

error_log($dump, 0);    // message type 0 (the default) sends the message to the log configured in error_log

That’s all there is to it.
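
To save some typing, you could wrap both steps in a small helper function; log_debug() is just an example name, not something built into PHP:

// Hypothetical helper: dump any variable to the PHP error log
function log_debug($variable, $label = '')
{
    $dump = var_export($variable, true);
    error_log(($label !== '' ? $label.': ' : '').$dump);
}

// Usage: log_debug($a_variable, 'my variable');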

The log file is just a text file: any text editor can open it, and many editors will automatically reload it when it changes, so you can watch it in (almost) real time. However, I use the terminal whenever I can, so I just use the tail command in follow mode (tail -f).


When mv command fails

I’m a lazy blogger and it’s been a long pause, but here I go again…

I’ve been with Linux for quite some time. In fact, I’ve banished Windows from all my machines and I’m running only Linux now. So I considered myself fairly familiar with the command line; however, surprises are still possible.

Today I was uploading a large number of files to a web server. Because of the way the server is set up, the application directories are writable only by sudo users, and I didn’t find a way to make FileZilla use the sudo command. SCP wasn’t an option either. So I had to upload everything to my home directory on that machine, then log in via SSH and move the files. Easy, right? It’s just:

sudo mv ~/files/* /var/www/app/files/

Well, not quite. This command returned an error:

bash: /bin/mv: Argument list too long

I checked what I’d typed: mv expects two arguments, an input file/folder and an output file/folder. How could this error be reported, then, when I hadn’t made a mistake?

A short online search gave me a hint: the mv command can fail when it has to move too many files, because the shell expands the wildcard into one argument per file and the resulting argument list exceeds the system limit. I checked the folder where I’d uploaded them, and there were several thousand files. By the way, you can count the files in a directory with “ls -l | wc -l”; subtract one from the number you get and that’s how many entries are in the current directory (wc -l counts lines, and ls -l prints a “total” line at the top).

So, what to do now? How do you move the files when mv has failed and there is nothing else available? The dumb way of solving this would be to feed mv one small subset of files at a time, like this:

sudo mv ~/files/0* /var/www/app/files/
sudo mv ~/files/1* /var/www/app/files/
sudo mv ~/files/2* /var/www/app/files/
...

In my case the file names used hex characters (0-9, a-f), so there were 16 possible values for the first character of a file name. There weren’t that many files, so this approach would have been possible, but it’s still tedious. In my opinion, the best way is to use rsync. Another good way is to use a one-liner shell script; this approach is also useful for other actions, such as changing permissions.

Here’s an example of a one-liner shell script:

for file in source/*; do cp "$file" dest/; done


Detecting AJAX calls in PHP

AJAX, or asynchronous page loading, is a great way of improving the user experience on your site. No matter how fast the network between your server and your users’ devices is, there is always some lag in page loading, and every tenth of a second matters. With AJAX you load only the relevant content, so your site can approach the responsiveness of a desktop application. It can also significantly reduce your traffic and server load.

However, not everyone can benefit from AJAX, and if you follow the progressive enhancement strategy (as you should) you will want to be able to tell the difference between requests made through AJAX and the ‘regular’ ones. The easiest way to do that is this:

if( isset($_SERVER['HTTP_X_REQUESTED_WITH']) AND strtolower($_SERVER['HTTP_X_REQUESTED_WITH']) == 'xmlhttprequest'){
    //AJAX call
}
else{
    //non-AJAX (regular) call
}

It is that simple. Strictly speaking it is not necessary to use isset() first, but you will get a PHP notice if $_SERVER['HTTP_X_REQUESTED_WITH'] is not set.
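
If you need this check in more than one place, you could wrap it in a small helper function; is_ajax() is just an example name:

// Hypothetical helper: returns true if the current request looks like an AJAX call
function is_ajax()
{
    return isset($_SERVER['HTTP_X_REQUESTED_WITH'])
        && strtolower($_SERVER['HTTP_X_REQUESTED_WITH']) === 'xmlhttprequest';
}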

Note that whatever detection method you use, hackers will always be able to spoof the requests, so don’t assume that calls made through AJAX are any safer than the ‘regular’ ones.


Why people use XAMPP and Vagrant

If you are new to the world of web development and don’t know much about Linux, your best choice for setting up a development server is XAMPP. On the other hand, many more experienced developers use Vagrant (and Puppet or other configuration managers). Both of these solutions have one thing in common: they are easy to use, and that is what you want if your only interest is writing code.

My choice is a bit different. I like tinkering with Linux, so I use a virtual machine with Debian. I install and set up everything from scratch. Not only do I find this fun, but I also get a better understanding of everything involved in running a web server.

My approach naturally involves more work. In any given week I spend more time pounding commands, editing config files and reading manuals/scratching my head than users of “pre-cooked” solutions. Most of the time this is not too bad, but last week I decided to increase the size of one of the virtual disks and that took me a bit more time than I had expected.

So, here is the hard and slow way of increasing the size of virtual hard drive under VMware:

VMware and other virtualization systems can create dynamic virtual hard drives, which start off small and grow as you fill them with files. This is a good option for saving space on your real hard drive, but these things fragment like crazy, so I always pre-allocate virtual drive space. Because HDD space is a precious commodity for me, I created relatively small virtual drives for my server, which worked well for almost a year, but then I needed to expand them. VMware has an option to “grow” virtual disks, which I tried once and ended up corrupting my files. This feature only grows the disk itself; you have to resize the partition manually, and that is not always safe.

I chose the safer route. I created a larger virtual disk and planned to copy the files from the old, small one to it. I decided to use Parted Magic for this because of the nice collection of tools it has, which in retrospect wasn’t a good idea. I formatted the new hard drive, mounted it in RW (read/write) mode and copied all my files. I had to edit /etc/fstab in Parted Magic to be able to do this.

Then I shut down the VM, removed the old small disk from the list and placed the new one in the same SCSI slot. I fired up Debian, but it wouldn’t start. After a minute or two of reading the docs and scratching my head, I found the problem. I had forgotten that in /etc/fstab, mount points are set by UUID (which is unique to each partition) and not by the device name (such as /dev/sdb5). I went back to Parted Magic and fixed this; again I first had to edit the fstab on PM so that the partition would be mounted in write mode.

This time I was able to start my Debian server, but some things were still wrong. My MySQL server was down. After some checking I determined that the problem was with permissions: after I had copied the files with Parted Magic, everything in /home and /var was owned by root. Fortunately this is easily fixed; the (sudo) chown -R command gives ownership of the selected directory and all of its contents to the given user. For your www directory you also have to change the group ownership, assigning it to the group “www-data”, and make sure that the group has read access everywhere (chmod 755 or chmod g+r) and write access (775 / g+w) where necessary. /var/lib/mysql and /var/run/mysqld (or wherever you keep your MySQL files) have to be accessible to the “mysql” user. Cron jobs won’t run until you have the right write permissions on /var/spool/cron.

Isn’t this fun?! You don’t get to do all this with the other, user-friendly solutions, especially XAMPP/WAMP/MAMP, and you just can’t appreciate the beauty of Linux until you go through this kind of experience.


IP logging plugin for WordPress

I’ve developed a small WordPress plugin for logging IP address, referrer and user agent strings to the database. The plugin also displays the total number of visits in the WP dashboard.

This kind of plugin is useful on hosting plans which don’t grant access to server logs. A while ago one of my blogs, hosted on a free hosting account, got slammed with bot traffic. The free hosting plan had only 5 GB of monthly traffic and my blog was getting more than 2 GB per day. I couldn’t see who or what was making the requests and had no way of doing anything about it.

This plugin saved my skin. It enabled me to track and block (with .htaccess) about a dozen IP addresses, and everything was fine afterwards.

The plugin is very simple and still a bit crude. You’d have to dig through the database with phpMyAdmin or in some other way to get to the data. I have plans for upgrades, but very little time.
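
To give an idea of how little is involved, a stripped-down sketch of the general approach could look something like this. This is not the plugin’s actual code; the hook and the table name are only an illustration:

// Hypothetical sketch, not the actual plugin code: store basic request data on every page load
add_action('wp', function () {
    global $wpdb;
    $wpdb->insert($wpdb->prefix . 'visit_log', array(    // example table name
        'ip'         => isset($_SERVER['REMOTE_ADDR']) ? $_SERVER['REMOTE_ADDR'] : '',
        'referrer'   => isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '',
        'user_agent' => isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '',
        'visited_at' => current_time('mysql'),
    ));
});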

You can get it from GitHub.


Flush footer to the bottom (not fixed) in Bootstrap

If you create a page with a small amount of content, your footer is going to be placed just below that content, leaving empty space under it. This makes your page look really ugly, especially if you have a dark footer. The footer should always be at the bottom of the viewport, even if a page doesn’t have much content.

Bootstrap, despite its awesomeness, doesn’t have a ready-made solution for this issue, but the fix is not hard to implement. All that is necessary is to add some CSS. This will make your footer stick to the bottom if there is not enough content; if there is, the footer will behave normally.

html {
    position: relative;
    min-height: 100%;
}

body {
    /* Margin bottom by footer height */
    margin-bottom: 180px;
}

footer {
    position: absolute;
    bottom: 0;
    width: 100%;
    /* Set the fixed height of the footer here */
    height: 180px;
    background-color: #222;
    border-color: #080808;
}

I found this solution at getbootstrap.com.


Linux quick tips – setting time on Virtual Machine

There are several good reasons why I abandoned solutions such as XAMPP a long time ago (more on that in some other post). I now use Debian and Ubuntu virtual machines as my development servers. I use VMware Workstation as my platform and no, I don’t use Vagrant. This setup works great, but it has one particularly annoying drawback.

Each time you put your (host) computer to sleep and turn it back on, the clock on your Linux server will stay at the time it was when you powered down. It simply “unfreezes” when you turn your computer back on. VMware is supposed to have a solution for this: VMware Tools works great on Windows virtual machines, but for some reason it does not work for Debian and Ubuntu.

Having to run a command such as:

sudo date -s "2015-09-30 21:04"

each time you power on your computer is simply unthinkable and so is this:

sudo ntpdate pool.ntp.org 

although there is a bit less typing. Once you’ve installed ntpdate you can put the previous command in a script (yes, together with “sudo”) and run that each time, but even that is too tedious. Most of us developers are too lazy for that.

There is one solution that is not too hard. When the Bash shell starts, that is, when you log on to your Linux server, several scripts are executed, among them ~/.bash_profile, ~/.bash_login and ~/.profile. The last one is not executed if one of the first two is present in your home folder. Any one of these three is a good place for the “sudo ntpdate pool.ntp.org” command.

There are of course more advanced, fully automated solutions, such as installing an NTP daemon on the Linux server or querying the Windows host machine via Samba, but for me the described solution works. The only downside is having to type the password twice when logging on (once for the logon and once for sudo), but I can live with that (for now).


Using PHP’s built-in server from other computers

In PHP 5.4 a built-in web server was introduced. This cool feature lets you serve PHP, HTML and other files without having a web server installed. It is simple and straightforward to use; just type:

php -S localhost:8000

8000 here is the port number; you can choose any port that is available on your system. After running this command, you will be able to access the server only from your local machine; it won’t be accessible over the network.

This feature is intended for demonstration and development; it is not meant to be used in production. Most people use it only on their own machines, that is, on the machines they use for development, which is probably why most books and tutorials only mention the command given above.

However, you might need to access this server from other computers on the network, and that is possible as well. You just need to replace ‘localhost’ with the IP address of your network interface, or use ‘0.0.0.0’ to listen on all available interfaces. So, the command is:

php -S 0.0.0.0:8000

For more information see the manual.


Codeigniter – better way to load views

CodeIgniter is a great framework, but sometimes it feels like you have to do more work than necessary. For example, loading views at first seems easy and straightforward.

$this->load->view('view_file', $data);

However, in a real-world project you would rarely use only one view file; each controller would probably have to load views in multiple methods, and there would be several controllers. It’s obvious we need something better.

The solution is easy: we define a method for loading views and put it in a base class that our controllers extend. In CodeIgniter this is done by creating a MY_Controller.php file and placing it in the application/core/ directory. Our controllers can then extend MY_Controller instead of CI_Controller. All of these names are case sensitive. So, this is what the method looks like:

protected function loadViews($template, $data = null){
    if(!isset($data)){
        $data = $this->_viewData;
    }
    if(!isset($data['urls'])){
        $data = array_merge($this->_viewData, $data);
    }
    $this->load->view('common/header', $data);
    $this->load->view('common/navbar', $data);
    $this->load->view($template, $data);
    $this->load->view('common/footer', $data);
}

The loadViews() method doesn’t need to be public, so making it protected is the best choice. Note that I am not following the CodeIgniter naming convention here: method names are normally written with underscores, but I chose camel case deliberately to make my code distinct. You can make your own choice here.

Whenever we need to load views in our controllers we would simply type:

$this->loadViews('view_file', $data);

Now you might be wondering about the $_viewData property. Here’s the whole MY_Controller file that I use in my projects:

class MY_Controller extends CI_Controller {

    protected $_viewData = array();    // holds the data that is passed to view files, read from the config file

    public function __construct(){
        parent::__construct();
        $this->config->load('my_config', TRUE);    // my_config is my custom configuration file
        $this->setViewData();
    }

    private function setViewData(){
        $this->_viewData = $this->config->item('my_config');
    }

    /**
     *    Loads views
     *
     *    @param string $template Name of the template file
     *    @param array $data Data array that is being passed to the views
     */
    protected function loadViews($template, $data = null){
        if(!isset($data)){
            // if no data is passed, use only $_viewData
            $data = $this->_viewData;
        }
        if(!isset($data['urls'])){
            // some data is passed, but it is not the whole $_viewData array from the config file
            // $data['urls'] is something I always have in my configuration files
            $data = array_merge($this->_viewData, $data);
        }
        $this->load->view('common/header', $data);
        $this->load->view('common/navbar', $data);
        $this->load->view($template, $data);
        $this->load->view('common/footer', $data);
    }
}

I use a configuration file to hold basic data such as navigation links, links to files, etc. I load this into MY_Controller and pass that array to the views, so whenever I load views I have the same basic set of data, plus the data needed for the current view. I also have the choice of calling loadViews() in my controllers without passing any data, passing only the data I need for one specific view, or passing all of the (view) data. Controllers can, of course, override the loadViews() method if necessary.
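
For illustration, a controller built on top of this base class could look something like this (the class name and view names are just examples):

// Hypothetical example controller extending the base class above
class Pages extends MY_Controller {

    public function about(){
        // the passed data is merged with $_viewData inside loadViews()
        $this->loadViews('pages/about', array('title' => 'About us'));
    }
}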

Codeigniter is very flexible and people are using it in many different ways. I’d just like to know how many other developers have come up with the same approach as I did.
