Quick tips: move multiple files/folders with mv command

One of the first things new Linux users learn is the mv command for moving or renaming files. It's simple enough, and many people continue using it in this simple way, but like almost everything in Linux, it's more powerful than it seems. Here is today's quick tip: how to move multiple files or folders to another folder.

Let's say that we have multiple files with different names and we want to move them all to one directory. A simple approach would be to run mv multiple times, but we don't want that. mv has a -t switch that specifies the target directory first, after which we can list multiple sources.

mv -t target_dir source_file_a source_file_b c_file

This will move all three files into “target_dir”.
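The -t form also combines nicely with find when the list of files comes from a search instead of a glob. A minimal sketch, assuming we want to collect log files (the *.log pattern and target_dir are just placeholders):

# Move every .log file found under the current directory into target_dir
find . -name '*.log' -exec mv -t target_dir {} +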


Multiple versions of PHP on Debian/Ubuntu

For some reason, when you search online for how to set up multiple versions of PHP on your Debian or Ubuntu server, you will find many articles stating that you have to compile PHP manually. This is not true, as I will demonstrate. Compiling is not necessary; it is complicated for less experienced users and requires you to install many additional packages on your server.

Having multiple versions is actually quite easy. You just need to install two prerequisites:

apt-get install apt-transport-https ca-certificates

Get the GPG key:

wget -O /etc/apt/trusted.gpg.d/php.gpg https://packages.sury.org/php/apt.gpg

Add custom repository:

echo 'deb https://packages.sury.org/php/ jessie main' >> /etc/apt/sources.list.d/php.list

Update apt cache:

apt-get update

And now you are able to install different versions of PHP alongside each other. For example, you could install PHP 7.1 and PHP extensions for that version like this:

apt-get install php7.1 php7.1-fpm php7.1-mysql

PHP 7.1 will be installed to /usr/bin/php7.1, and a symlink will be made in /etc/alternatives that enables you to call this version of PHP from the command line with just the ‘php’ command. Other versions you install from sury.org will be set up in a similar manner. One thing is worth mentioning here: when setting up cron jobs that execute PHP scripts via the CLI, you should use the absolute path (e.g. /usr/bin/php7.0), otherwise the version behind the ‘php’ symlink may change when you install more versions of PHP or upgrade your server, and that could potentially cause problems with some PHP applications.
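As a hedged sketch of both points, the alternatives system should let you pin the default CLI version explicitly, and a crontab entry with an absolute path looks like this (the script path and schedule are placeholders):

# Pin the 'php' command to a specific version
sudo update-alternatives --set php /usr/bin/php7.1

# In crontab, call the interpreter by absolute path
0 3 * * * /usr/bin/php7.1 /path/to/script.php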

Configuring your web server to use this version of PHP is not that hard and there are multiple ways to do this. For example, this is how I configured Apache for an application that requires PHP 7.1:

<FilesMatch \.php$>
  SetHandler "proxy:fcgi://127.0.0.1:9071"
</FilesMatch>

PHP-FPM is configured to listen on the loopback interface on port 9071 in /etc/php/7.1/fpm/pool.d/www.conf, by commenting out the default socket and adding a new entry, like this:

;listen = /run/php/php7.1-fpm.sock
listen = 127.0.0.1:9071
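For the proxy:fcgi handler to work, Apache's proxy modules must be enabled, and PHP-FPM needs a restart after the pool change. A minimal sketch, assuming a systemd-based Debian:

sudo a2enmod proxy proxy_fcgi
sudo systemctl restart php7.1-fpm apache2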

That’s all there is to it.


What to do with a very old computer

Recently I obtained an old PC that's pretty much useless for anything: it has 1.5 GB of RAM, a single-core CPU and a 13 GB hard drive. Even 10 years ago this was pretty weak. But this old machine can still be useful. By adding another network card, I made it a router for my home network that is also a DNS server, a file backup server and hopefully something more. Here's how I've set it up:

First off, I installed Debian without a GUI, since it was not needed, along with some basic software (I have to have vim everywhere). I try to use Ansible for every step of the setup, and I keep a separate folder with playbooks for each server. That way I also have documentation of what was installed.

During the installation, I chose a static IP so that I would know where to connect over SSH. I have only one functioning monitor at the moment, which I had to use during the installation, but I wanted to connect it back to my main PC as soon as possible, so everything other than the installation was done over SSH. I configured the network in /etc/network/interfaces like this:

# The primary network interface
allow-hotplug eth0
iface eth0 inet static
address 192.168.2.1
netmask 255.255.255.0
network 192.168.2.0
broadcast 192.168.2.255
dns-nameservers 127.0.0.1

# The secondary network interface
allow-hotplug eth1
iface eth1 inet static
address 192.168.1.3
netmask 255.255.255.0
network 192.168.1.0
broadcast 192.168.1.255
gateway 192.168.1.1

eth0 is connected to my private network; notice that the gateway is not specified here. Also note that I've set the DNS to localhost (more on that later). eth1 is plugged into my ADSL modem.

I wanted DHCP on the private network, although I’m using static addresses for most of the devices. I installed ISC DHCP server:

apt install isc-dhcp-server

set INTERFACES="eth0" in /etc/default/isc-dhcp-server and configured /etc/dhcp/dhcpd.conf like this:

option domain-name "milos.lab";
option domain-name-servers 192.168.2.1;
...
subnet 192.168.2.0 netmask 255.255.255.0 {
range 192.168.2.150 192.168.2.199;
option routers 192.168.2.1;
}

Finally, I had to restart the service (/etc/init.d/isc-dhcp-server restart), and that was it. If you want to check whether it's up and running, you can see if it's listening on port 67 (netstat -tulnp).

I also wanted to have a local DNS server. I use it for ad blocking (I have a list of some 2500 ad-serving domains) and local name resolution; query caching is another (if small) benefit. I chose dnsmasq: bind9 would have been overkill here, and having only one config file is a much better option. I'll write another blog post on how I maintain a database of ad servers and how I generate dnsmasq.conf files automatically.
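Until then, here is a hedged sketch of the idea: dnsmasq blocks a domain by answering for it locally, and the same directive can pin a local name to a LAN address (the example domains and the 192.168.2.10 host are placeholders):

sudo apt install dnsmasq
# Send an ad domain (and all its subdomains) to a dead end
echo 'address=/doubleclick.net/0.0.0.0' | sudo tee -a /etc/dnsmasq.conf
# Resolve a local name to a LAN address
echo 'address=/nas.milos.lab/192.168.2.10' | sudo tee -a /etc/dnsmasq.conf
sudo systemctl restart dnsmasq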

Now, the main part of the setup: routing. I found a very nice script online that was almost perfect for my case.

#!/bin/sh

PATH=/usr/sbin:/sbin:/bin:/usr/bin

#
# delete all existing rules.
#
iptables -F
iptables -t nat -F
iptables -t mangle -F
iptables -X

# Always accept loopback traffic
iptables -A INPUT -i lo -j ACCEPT

# Allow established connections, and those not coming from the outside
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -m state --state NEW -i eth0 -j ACCEPT
iptables -A FORWARD -i eth1 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT

# Reject all other incoming traffic from the outside
iptables -A INPUT -i eth1 -j REJECT

# Allow outgoing connections from the LAN side.
iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT

# Masquerade.
iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE

# Don't forward from the outside to the inside.
iptables -A FORWARD -i eth1 -o eth1 -j REJECT

# Enable routing.
echo 1 > /proc/sys/net/ipv4/ip_forward

My only modification was to add the rule that rejects all other incoming traffic on eth1. This machine is behind NAT, but (call me crazy if you will) I never put too much trust in embedded devices (backdoors, bugs, no security updates…) and this feels cleaner.

This script must be executable and it goes into /etc/network/if-up.d, where it will be executed every time an interface comes up, including at each boot.
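A minimal sketch of installing it, assuming the script was saved as “firewall” (run-parts skips file names containing dots, so don't give it a .sh extension):

sudo cp firewall /etc/network/if-up.d/firewall
sudo chmod +x /etc/network/if-up.d/firewall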

This was actually the first time I had set up routing and a DHCP server on a Debian box, and I have to admit that I expected some problems after I rebooted the machine, but to my surprise and delight everything worked. The only problem I have is that it's heating up the small space where I've put it.

This is one of the key benefits of Linux. With this old machine I would not be able to use recent versions of Windows; I'd be forced to use an old version, XP or Server 2003, which are no longer supported. Instead, I have an OS that uses only a fraction of the available resources, is more secure than Windows will ever be, and is all free.


Redirecting output of cron jobs

Cron jobs are very useful (indispensable, I would say), but a common issue people have is getting mails about errors produced by the scripts that cron runs. Also, the output is sent to syslog, and this can be a serious problem when you have a script that executes often and produces long error messages.

The simplest solution to this problem is to redirect the output of a cron job to /dev/null, which is a sort of black hole that discards anything written to it. To do this, set your cron jobs up like this:

* * * * * /home/user/myscript.sh > /dev/null 2>&1

This will redirect both STDOUT (1) and STDERR (2) to /dev/null.

Sometimes however you want to redirect output to a log file of your choice. In that case, the code would look like this:

* * * * * /home/user/myscript.sh >> /var/log/custom/my.log 2>&1

Note that this time we are using “>>” instead of “>” for redirection. The important difference between the two redirection operators is that “>” overwrites the file each time, while “>>” appends to it.
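There is also a middle ground worth sketching: discard the normal output but leave STDERR alone, so cron still mails you when something actually fails:

# Only STDOUT goes to /dev/null; errors are still mailed by cron
* * * * * /home/user/myscript.sh > /dev/null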


Shell scripting: working with files that have spaces in their names

Shell scripts are very useful for manipulating files, among other things. However, a problem arises when some of the files in your target directories have spaces in their names. The error you will get in such cases is:

./test.sh: line 8: [: too many arguments

This sounds somewhat cryptic, but it's perfectly logical: bash uses spaces as separators, so when an unquoted variable holding such a file name is expanded, each word in the name is seen as another argument. The workaround is to change the value of the internal IFS variable (which stands for Internal Field Separator). Here is an example that works:

#!/bin/bash
SAVEIFS=$IFS
IFS=$(echo -en "\n\b")
for file in /var/www/html/*
do
  # do something here
  # ...
done
IFS=$SAVEIFS
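For what it's worth, plain quoting handles most of these cases without touching IFS. A minimal sketch of the same loop, with echo standing in for real work:

#!/bin/bash
# Double quotes keep a name with spaces as a single argument
for file in /var/www/html/*
do
  if [ -f "$file" ]; then
    echo "Processing: $file"
  fi
done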


When mv command fails

I’m a lazy blogger and it’s been a long pause, but here I go again…

I've been with Linux for quite some time. In fact, I've banished Windows from all my machines and I'm running only Linux now. So, I considered myself fairly familiar with the command line; however, surprises are still possible.

Today I was uploading a large amount of files to a web server. Because of the way the server is set up, the application directories are writable only by sudo users, and I didn't find a way of making FileZilla use the sudo command. SCP wasn't an option either. So I had to upload everything to my home directory on that machine, then log in via SSH and move the files. Easy, right? It's just:

sudo mv ~/files/* /var/www/app/files/

Well, not quite. This command returned an error:

bash: /bin/mv: Argument list too long

I checked what I'd typed; mv expects two arguments, an input file/folder and an output file/folder. How could this error be reported, then? I hadn't made an error.

A short online search gave me a hint: mv can fail when it has to move too many files. The shell expands ~/files/* into one argument per file before mv even runs, and with enough files the combined length of those arguments exceeds the kernel's limit on a command's argument list. I checked the folder where I'd uploaded the files, and there were several thousand. By the way, you can count the files in a directory with “ls -1 | wc -l” (one name per line); if you use “ls -l | wc -l” instead, subtract one for the “total” line that ls -l prints at the top.

So, what to do now, how to move files when the mv command has failed and there is nothing else available? The dumb way of solving this would be to feed mv one small subset of files at a time, like this:

sudo mv ~/files/0* /var/www/app/files/
sudo mv ~/files/1* /var/www/app/files/
sudo mv ~/files/2* /var/www/app/files/
...

In my case the file names used hex characters (0-9, a-f), so there were 16 possibilities for the first character of a file name. There weren't that many files and this approach would have been possible, but it's still tedious. In my opinion, the best way is to use rsync. Another good way is a one-liner shell loop, an approach that is also useful for other actions, such as changing permissions.
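Here is a hedged sketch of both, assuming the same paths as above. The --remove-source-files option makes rsync delete each source file after copying, i.e. behave like a move, and the find variant batches the arguments itself, so it never hits the limit:

sudo rsync -a --remove-source-files ~/files/ /var/www/app/files/

sudo find ~/files -maxdepth 1 -type f -exec mv -t /var/www/app/files/ {} +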

Here's an example of a one-liner shell script:

for file in source/*; do cp "$file" dest/; done


Why people use XAMPP and Vagrant

If you are new to the world of web development and don't know much about Linux, your best choice for setting up a development server is XAMPP. On the other hand, a great deal of more experienced developers use Vagrant (and Puppet or other configuration managers). Both of these solutions have one thing in common: they are easy to use, and this is what you want if your only interest is writing code.

My choice is a bit different. I like tinkering with Linux, so I use a virtual machine with Debian and install and set up everything from scratch. Not only do I find this fun, but I get a better understanding of everything involved in running a web server.

My approach naturally involves more work. In any given week I spend more time pounding commands, editing config files and reading manuals/scratching my head than users of “pre-cooked” solutions. Most of the time this is not too bad, but last week I decided to increase the size of one of the virtual disks and that took me a bit more time than I had expected.

So, here is the hard and slow way of increasing the size of virtual hard drive under VMware:

VMware and other virtualization systems have the ability to create dynamic virtual hard drives, which start off small and grow as you fill them with files. This is a good option for saving space on your real hard drive, but these things fragment like crazy, so I always pre-allocate virtual drive space. Because HDD space is a precious commodity for me, I created relatively small virtual drives for my server, which worked well for almost a year, but then I needed to expand them. VMware has an option to “grow” the virtual disks, which I tried once and ended up corrupting my files. This feature only grows the disk itself; you have to resize the partition manually, and this is not always safe.

I chose the safer route. I created a larger virtual disk and planned to copy the files from the old, small one to it. I decided to use Parted Magic for this because of the nice collection of tools it has, which in retrospect wasn’t a good idea. I formatted the new hard drive, mounted it in RW (read/write) mode and copied all my files. I had to edit /etc/fstab in Parted Magic to be able to do this.

Then I shut down the VM, removed the old small disk from the list and placed the new one in the same SCSI slot. I fired up Debian, but it wouldn't start. After a minute or two of reading the docs/scratching my head, I found the problem: I had forgotten that in /etc/fstab, mount points are set by UUID (which is unique to each partition) and not by device name (such as /dev/sdb5, for instance). I went back to PM and fixed this problem; again I had to first edit the fstab on PM so that the partition would be mounted in write mode.

This time I was able to start my Debian server, but some things were still wrong. My MySQL server was down. After some checking, I determined that the problem was in permissions: after I had copied the files with Parted Magic, everything in /home and /var was owned by root. Fortunately, this is easily fixed; chown -R (with sudo) gives ownership of the selected directory and all of its contents to the given user. For your www directory you also have to change the group ownership, assigning it to the group “www-data”, and make sure that the group has read access everywhere (chmod 755 or chmod g+r) and write access (775 / g+w) where it is necessary. /var/lib/mysql and /var/run/mysqld (or wherever you keep your mysql files) have to be accessible to the “mysql” user, and cron jobs won't run until the right users have write permissions on /var/spool/cron.
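Put together, the cleanup looked roughly like this. A sketch assuming a user named “user” and default Debian paths:

# Give the regular user back their home directory
sudo chown -R user:user /home/user
# Web files belong to www-data; X adds execute only on directories
sudo chown -R www-data:www-data /var/www
sudo chmod -R g+rX /var/www
# MySQL data and runtime directories
sudo chown -R mysql:mysql /var/lib/mysql /var/run/mysqld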

Isn't this fun?! You don't get to do all this with other, user-friendly solutions, especially with XAMPP/WAMP/MAMP, and you just can't appreciate the beauty of Linux until you go through this kind of experience.


Linux quick tips – setting time on Virtual Machine

There are several good reasons why I abandoned solutions such as XAMPP a long time ago (more on that in some other post). I now use Debian and Ubuntu virtual machines as my development servers. I use VMware Workstation as my platform, and no, I don't use Vagrant. This setup works great, but it has one particularly annoying drawback.

Each time you put your (host) computer to sleep and wake it up again, the clock on your Linux server will stay on the time it had when you powered down; it simply “unfreezes” when you turn your computer back on. VMware is supposed to have a solution for this: VMware Tools works great on Windows virtual machines, but for some reason that solution does not work for Debian and Ubuntu.

Having to run a command such as:

sudo date -s "2015-09-30 21:04"

each time you power on your computer is simply unthinkable and so is this:

sudo ntpdate pool.ntp.org 

although there is a bit less typing. Once you've installed ntpdate, you can put the previous command in a script (yes, together with “sudo”) and run that each time, but even that is too tedious. Most of us developers are too lazy for that.

There is one solution that is not too hard. When a Bash login shell is started, that is, when you log on to your Linux server, bash looks for ~/.bash_profile, ~/.bash_login and ~/.profile, in that order, and executes the first one it finds. Whichever of the three your system uses is a good place for the “sudo ntpdate pool.ntp.org” command.
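For example, appended to ~/.profile (assuming that is the file your login shell actually reads):

# At the end of ~/.profile: sync the clock on every login
sudo ntpdate pool.ntp.org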

There are of course more advanced, fully automated solutions, such as installing an NTP daemon on the Linux server or querying the Windows host machine via SAMBA, but for me the described solution works. The only downside is having to type the password twice when logging on (once for the logon and once for sudo), but I can live with that (for now).


Linux quick tips – grep with color

Grep is a tremendously useful utility for searching for text strings in files. It is one of those tools that, once you learn how to use it, you wonder how you ever managed without it. However, grep can do more than just display the desired output: it can display it with colors. This may sound trivial, but depending on what you search for, you can end up with a lot of content crammed into your terminal window with no line breaks or indentation to help you read the text.

So, to make the output more readable, simply add the --color option.
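For example, highlighting every match of “error” in a log file:

grep --color=auto 'error' /var/log/syslog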

If you like this option so much that you start using it every time, and like most people you don't feel any pleasure in pounding your keyboard unnecessarily, you can add this alias to your ~/.bashrc (that is, the .bashrc in your home directory):

alias grep='grep --color=auto'

Some Linux distributions (like Debian) already have this line, but it is commented out. After you edit your .bashrc file, open a new shell or run “source ~/.bashrc” to activate the change.
