Back to Windows

My first PC had Windows XP on it, and for a long time I used mostly Windows. I played with Linux in dual-boot setups and on virtual machines, and over time I was using Linux more and more. Eventually I stopped using Windows altogether, and about two years ago I wiped the last remains of Windows 7 from my laptop. I knew I would completely abandon Windows when Windows 8 came out. Now I only have an XP virtual machine, which I don’t even use.

I’ve been hearing a lot of smart people say some nice things about Windows 10, and now I got the chance to test it out. No, I wasn’t interested in the latest version of Windows at all; I just needed to run an application that’s not available for Linux (Debloater for Android) and that wouldn’t run properly in an emulator.

So, I borrowed a laptop and got to work. The glorious Windows 10 spent some 15 minutes installing drivers for my phone. Then Windows required a reboot, because the drivers were not installed properly. That was pretty annoying, but OK, I rebooted it. And then Windows spent 35 minutes installing updates. Incredible!

For comparison, on Linux I can access the phone instantly when I connect it over USB. Rebooting is rarely required when upgrading, and reboots are quick. The only time I spent a significantly long time upgrading Linux was when upgrading Debian 8 to Debian 9, which is to be expected when upgrading the whole OS. How can Windows be so bad!? And you have to pay for it!!

I hope I won’t have to touch Windows again anytime soon!

Read More

Quick tips: move multiple files/folders with mv command

One of the first things that new users of Linux learn is using the mv command to move or rename files. It’s simple enough. Many people then continue using it in this simple way, but like almost everything in Linux, it’s more powerful than it seems. Here is today’s quick tip: how to move multiple files or folders to another folder.

Let’s say that we have multiple files with different names and we want to move them to a directory. A simple approach would be to run mv multiple times, but we don’t want that. mv has a -t switch for specifying the target, after which we can list multiple sources.

mv -t target_dir source_file_a source_file_b c_file

This will move those three files to “target_dir”.
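The same trick works for directories and mixes fine with shell globbing. A quick sketch, using throwaway names made up for the demo:

```shell
# work in a scratch directory so nothing real is touched
cd "$(mktemp -d)"
mkdir -p target_dir dir_a dir_b
touch note1.txt note2.txt

# -t accepts directories as sources too, and globs expand as usual
mv -t target_dir dir_a dir_b *.txt

ls target_dir
```

After this, target_dir contains dir_a, dir_b and both .txt files, all moved in a single command.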

Read More

Multiple versions of PHP on Debian/Ubuntu

For some reason, when you search online for how to set up multiple versions of PHP on your Debian or Ubuntu server, you will find many articles stating that you have to compile PHP manually. This is not true, as I will demonstrate. Compiling is not necessary; it’s complicated for less experienced users and requires installing many additional packages on your server.

Having multiple versions is actually quite easy. You just need to install two prerequisite packages:

apt-get install apt-transport-https ca-certificates

Get the GPG key

wget -O /etc/apt/trusted.gpg.d/php.gpg https://packages.sury.org/php/apt.gpg

Add custom repository:

echo 'deb https://packages.sury.org/php/ jessie main' >> /etc/apt/sources.list.d/php.list

Update apt cache:

apt-get update

And now you are able to install different versions of PHP alongside each other. For example, you could install PHP 7.1 and the extensions for this version like this:

apt-get install php7.1 php7.1-fpm php7.1-mysql

PHP 7.1 will be installed to /usr/bin/php7.1, and a symlink will be created via /etc/alternatives that lets you call this version of PHP from the command line with just the ‘php’ command. Other versions you install from sury.org will be set up in a similar manner. One thing is worth mentioning here – when setting up cron jobs that execute PHP scripts via the CLI, you should use the absolute path (e.g. /usr/bin/php7.0). Otherwise, the version behind ‘php’ may change when you install more versions of PHP or upgrade your server, and that could potentially cause problems with some PHP applications.
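For instance, a crontab entry pinned to a specific binary might look something like this (the script path here is made up for illustration):

```
# run nightly maintenance with a fixed PHP version,
# immune to the 'php' alternative being repointed later
30 2 * * * /usr/bin/php7.0 /var/www/app/maintenance.php
```

If you do want to see or change which binary ‘php’ currently points to, the Debian alternatives system (update-alternatives) is the place to look.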

Configuring your web server to use this version of PHP is not that hard and there are multiple ways to do this. For example, this is how I configured Apache for an application that requires PHP 7.1:

<FilesMatch \.php$>
  SetHandler "proxy:fcgi://127.0.0.1:9071"
</FilesMatch>

PHP-FPM is configured to listen on the loopback interface on port 9071 in /etc/php/7.1/fpm/pool.d/www.conf, by commenting out the default socket and adding a new entry, like this:

;listen = /run/php/php7.1-fpm.sock
listen = 127.0.0.1:9071

That’s all there is to it.

Read More

Removing passwords from SSH keys and converting .ppk to .pem

SSH keys are a great thing. They improve security (provided that password logins are disabled) and they save you the drudgery of having to enter a password each time you connect to your server. With a little tweaking of the ~/.ssh/config file, you can connect to your server just by typing “ssh” followed by a space and a few letters of your server’s hostname, followed by the Tab key. That’s only a few keystrokes and it’s really fast. Furthermore, if you want to run any sort of automated scripts (SSH, SCP, Ansible…), you pretty much have to have a password-less key.
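For reference, such a ~/.ssh/config entry looks something like this (the host alias, hostname and key path are made up for the example):

```
Host vps
    HostName vps.example.com
    User milos
    IdentityFile ~/.ssh/vps-key
```

With this in place, “ssh vps” (or “ssh v” plus Tab) is all it takes to connect.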

The first thing that irks me is when I get a password-protected private key from a client. Most of the time it’s generated in cPanel (ugh!), where keys must have a password. This sounds like a good idea at first, but it’s really just an annoyance. cPanel generates longish random passwords for SSH keys, which you cannot remember, so you have to write them down, either in a password manager or in plaintext (a bad idea). If someone has compromised your PC or intercepted your email, they are going to get your SSH key anyway, so this doesn’t offer any real protection. On the other hand, you have to enter the password each time you log in. I keep SSH keys on encrypted storage which is protected by a strong password and an external key, so that’s reasonably secure.

Fortunately, it’s easy to remove this password, it’s just one simple command:

ssh-keygen -p -P 'old-pass' -N '' -f <key_filename>

Another annoying thing is getting a .ppk key. .ppk keys are used by PuTTY. This little program is great for connecting to your SSH server when you are condemned to use Windows. Compared to any terminal emulator on any Linux distro, PuTTY is ugly and awkward. Fortunately, a .ppk key can be converted to a .pem key with one simple command (provided that you have PuTTY tools installed):

puttygen key.ppk -O private-openssh -o key.pem

Read More

Simple automated backup solution

There are many tools today that can be used to back up your data. Most of them come with shiny, eye-candy GUIs, and with a few clicks you can synchronize your data to Dropbox, Google Drive or wherever you want. So, why not use one of them and end this blog post right here? First of all, these solutions are boring; then there is the problem of giving your data to third parties (call me crazy, but I’m never going to upload private SSH keys to Google); and finally, I wanted daily snapshots. So, I wrote a small shell script that does the job.

#!/bin/bash

## Automated backup script
##
## Uploads backed up archived to the server, runs daily
##
## Author Milos Milutinovic
## 2017-02-07

#put all backup dirs here
bkp_start_dir='/home/milos/tmp/bkp'
bkp_sec_dir='/home/milos/secure/tmp/bkp'

#daily archive name, in YYYY-MM-DD format
today=$(date +"%Y-%m-%d")

cd "$bkp_start_dir"

#create archives in this dir
tar -cjf ssh.tar.bz2 /home/milos/.ssh

#encrypt SSH archive
rm -f ssh.tar.bz2.gpg #remove yesterday's copy (-f: don't fail on the first run)
gpg --passphrase-file /home/milos/secure/keys/gpg.key --simple-sk-checksum --batch -c ssh.tar.bz2
rm ssh.tar.bz2 #remove the plaintext archive

# secure
cd "$bkp_sec_dir"
rm -f sec.tar.bz2.gpg #remove old
tar -cjf sec.tar.bz2 /home/milos/secure
gpg --passphrase-file /home/milos/secure/keys/gpg2.key --simple-sk-checksum --batch -c sec.tar.bz2
rm sec.tar.bz2

cd "$bkp_start_dir"

#create one daily archive
tar -cjf "$today.tar.bz2" ssh.tar.bz2.gpg /home/milos/scripts /home/milos/Documents/AC /home/milos/secure/tmp/bkp/sec.tar.bz2.gpg /home/milos/Documents/db1/code

#scp to the server
scp -p -i /home/milos/secure/keys/bkpuser "$today.tar.bz2" bkpuser@miloske.tk:/path/to/folder/

Let me explain it. The first interesting bit is the line that generates the archive name with date; the name will be in the format YYYY-MM-DD. Then I archive my ~/.ssh folder and encrypt it with gpg, using symmetric encryption with a passphrase file stored in a secure location. I have to remove the encrypted archive from the previous day, and after encrypting, I remove the plaintext one.

I then do a similar thing with another location I want to back up securely, and finally, with the last tar command, I create an archive that contains all of the data. You might say that for creating those encrypted archives I didn’t have to use the bzip2 option (I could create plain .tar archives instead), as they would be packed into the final archive anyway, but think again. Those archives are encrypted; if I created uncompressed tar archives and only then encrypted them, the final pass couldn’t compress them, because encrypted data looks random, and random data is not compressible.

Another approach would be to create a folder each day, put several archives in it, then upload the folder to the server. This would be a bit more efficient, as it would avoid running bzip2 compression on archives that are already compressed (and encrypted), but the difference is negligible, and having plain files instead of folders makes it a lot easier to get rid of old backups on my server. On the server, I just have this kind of thing in a file in /etc/cron.daily:

find /var/www/miloske.tk/bkp/ -mtime +15 | xargs -r rm

This deletes any files older than 15 days in this location.

In the end, I scp the data to my server. I’m uploading only one file, so rsync is not necessary. I do use rsync on my home backup server to pull data from the online server, but there I’m synchronizing several folders, so I need rsync. This script is set up as a cron job on my work machine, so I always have backups of important files.
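The cron entry itself is nothing special; something along these lines (the schedule and script location are assumptions for illustration):

```
# run the daily backup script at 01:30, discarding routine output
30 1 * * * /home/milos/scripts/backup.sh > /dev/null 2>&1
```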

Read More

What to do with a very old computer

Recently I obtained an old PC that’s pretty much useless for anything. It has 1.5 GB of RAM, a single-core CPU and a 13 GB hard drive. Even 10 years ago this was pretty weak. But this old machine can still be useful. By adding another network card, I made it a router for my home network that also acts as a DNS server, a file backup server and hopefully something more. Here’s how I’ve set it up:

First off, I installed Debian without a GUI, since one was not needed, along with some basic software (I have to have vim everywhere). I try to use Ansible for every step of the setup, and I keep separate folders with playbooks for each server. That way I also have documentation of what was installed.

During the installation, I chose a static IP so that I would know where to connect over SSH. I have only one functioning monitor now, which I had to use during the installation, but I wanted to connect it back to my main PC as soon as possible, so everything after the installation was done over SSH. I configured the network (in /etc/network/interfaces) in the following way:

# The primary network interface
allow-hotplug eth0
iface eth0 inet static
address 192.168.2.1
netmask 255.255.255.0
network 192.168.2.0
broadcast 192.168.2.255
dns-nameservers 127.0.0.1

# The secondary network interface
allow-hotplug eth1
iface eth1 inet static
address 192.168.1.3
netmask 255.255.255.0
network 192.168.1.0
broadcast 192.168.1.255
gateway 192.168.1.1

eth0 is connected to my private network; notice that the gateway is not specified here. Also note that I’ve set the DNS server to localhost (more on that later). eth1 is plugged into my ADSL modem.

I wanted DHCP on the private network, although I’m using static addresses for most of the devices. I installed ISC DHCP server:

apt install isc-dhcp-server

I set INTERFACES="eth0" in /etc/default/isc-dhcp-server and configured /etc/dhcp/dhcpd.conf like this:

option domain-name "milos.lab";
option domain-name-servers 192.168.2.1;
...
subnet 192.168.2.0 netmask 255.255.255.0 {
    range 192.168.2.150 192.168.2.199;
    option routers 192.168.2.1;
}

Finally, I had to restart the service (/etc/init.d/isc-dhcp-server restart), and that was it. If you want to check whether it’s up and running, you can see if it’s listening on port 67 (netstat -tulnp).
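Speaking of static addresses, fixed leases can also be handed out by the DHCP server itself; a dhcpd.conf sketch (the hostname and MAC address are made up):

```
# always give this machine the same address
host nas {
    hardware ethernet 00:11:22:33:44:55;
    fixed-address 192.168.2.10;
}
```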

I also wanted to have a local DNS server. I use it for ad blocking (I have a list of some 2500 domains) and local name resolution. Query caching is another (yet small) benefit. I chose dnsmasq; bind9 would have been overkill here, and having only one config file is a much better option. I’ll write another blog post on how I maintain a database of ad servers and how I generate dnsmasq.conf files automatically.
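Until then, a minimal sketch of the kind of dnsmasq.conf entries involved (the blocked domain is a placeholder):

```
# local domain and a modest query cache
domain=milos.lab
cache-size=1000

# send a known ad server to a dead address
# (one such address= line per blocked domain)
address=/ads.example.com/0.0.0.0
```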

Now, the main part of the setup: routing. I found a very nice script here that was almost perfect for my case.

#!/bin/sh

PATH=/usr/sbin:/sbin:/bin:/usr/bin

#
# delete all existing rules.
#
iptables -F
iptables -t nat -F
iptables -t mangle -F
iptables -X

# Always accept loopback traffic
iptables -A INPUT -i lo -j ACCEPT

# Allow established connections, and those not coming from the outside
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -m state --state NEW -i eth0 -j ACCEPT
iptables -A FORWARD -i eth1 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT

#drop all incoming
iptables -A INPUT -i eth1 -j REJECT

# Allow outgoing connections from the LAN side.
iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT

# Masquerade.
iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE

# Don't forward from the outside to the inside.
iptables -A FORWARD -i eth1 -o eth1 -j REJECT

# Enable routing.
echo 1 > /proc/sys/net/ipv4/ip_forward

My only modification was to add the rule that rejects all incoming traffic on eth1 (under “#drop all incoming”). This machine is behind NAT, but (call me crazy if you will) I never put too much trust in embedded devices (backdoors, bugs, no security updates…) and this feels cleaner.

This script must be executable and it goes into /etc/network/if-up.d. It will be executed at each boot.
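Since the script runs on every boot, the echo into /proc is enough here; for completeness, the more conventional way to make forwarding permanent is the equivalent sysctl setting:

```
# /etc/sysctl.conf (applied at boot, or immediately with 'sysctl -p')
net.ipv4.ip_forward = 1
```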

This was actually the first time I had set up routing and a DHCP server on a Debian box, and I have to admit I expected some problems after I rebooted the machine, but to my surprise and delight, everything worked. The only problem I have is that it’s heating up the small space where I’ve put it.

This is one of the key benefits of Linux. On this old machine, I would not be able to run recent versions of Windows; I’d be forced to use an old version, XP or Server 2003, which are no longer supported. Instead, I have an OS that uses only a fraction of the available resources, is more secure than Windows will ever be, and it’s all free.

Read More

Using custom private SSH key for git

When you are working with git, whether with your private or your company’s git server or with GitHub, it is much nicer to be able to push/pull/clone without having to enter the password every time. Furthermore, SSH keys are safer. However, the default is to keep the private key in the ~/.ssh/ folder, which is not encrypted (unless your /home folder is encrypted). The SSH client has the -i option, which allows you to specify the location of your private key, but this won’t work with git.

Fortunately, there is a way. All you need to do is create one config file (called ‘config’) in ~/.ssh/. Here is how it should look:

Host miloske.tk
    HostName miloske.tk
    IdentityFile /home/milos/secure/my-key
    User git
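As an aside, newer versions of git (2.10 and later) can also store the key per repository via the core.sshCommand setting, without touching ~/.ssh/config at all. A quick sketch, reusing the key path from the example above (the repository here is a throwaway created just for the demo):

```shell
# create a throwaway repository for the demo
repo=$(mktemp -d)
git init -q "$repo"

# make git invoke ssh with an explicit private key, for this repo only
git -C "$repo" config core.sshCommand "ssh -i /home/milos/secure/my-key"

# confirm the setting took effect
git -C "$repo" config core.sshCommand
```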

Read More

Using LetsEncrypt for automated certificate management

Starting in 2017, Google Chrome began marking every site that asks for credentials over an unencrypted HTTP connection as not secure. I’m sure many webmasters see this as an unnecessary hassle, especially those who don’t care about security. However, using HTTPS is really easy, and nowadays it’s free. With LetsEncrypt you can even use automated scripts for renewing your certificates, so here’s how to set it up.

First of all, you are going to need an ACME client. You can choose any client from this list. I’ve chosen kelunik/acme-client, written in PHP, and that is what I’m going to be using for this tutorial. The easiest way to install it on your production server is to download it to the /usr/local/bin directory and rename acme-client.phar to acme-client. Now you will be able to call it globally on your server as the “acme-client” command. To download it, choose the latest .phar from here.

The next step is to write a YAML config file in /etc/client-yml. Here’s an example:

# storage for certs
storage: /etc/secret

server: letsencrypt

email: milos.milutinovic@live.com

certificates:
    # milos.pw
    - bits: 4096
      paths:
          /var/www/milos.pw/public_html:
              - milos.pw
    - user: www-data
      paths:
          /var/www/milos.pw/public_html/: milos.pw
Once that is done, you can get your certificates for the first time by running “acme-client auto”. If there are no errors, the certificates will appear in the specified folder.

The next step depends on your web server. In this example I will set up Apache. You will have to edit each vhost config file (this is an example, not a whole file, and parts are censored).

#redirect to https
<VirtualHost *:80>
    ServerName milos.pw
    ServerAlias www.milos.pw
    Redirect permanent / https://milos.pw/
</VirtualHost>

<VirtualHost *:443>
    ServerName milos.pw
    ServerAlias www.milos.pw

    ServerAdmin ########
    DocumentRoot /var/www/milos.pw/public_html

    SSLEngine On
    SSLCertificateFile /etc/##########/cert.pem
    SSLCertificateKeyFile /etc/#######/key.pem
    SSLCertificateChainFile /etc/#####/chain.pem
</VirtualHost>

# vim: syntax=apache ts=4 sw=4 sts=4 sr noet

After you’re done editing, test the configuration

sudo apachectl configtest

and reload Apache

sudo service apache2 reload

Now go to your site. It should redirect http to https, and there should be no security errors. If everything went smoothly, congratulations: your visitors can now connect securely to your site and no one can sniff their traffic. If there were errors, leave a comment below and I’ll try to help.

The final step is creating a cron job that will renew the certificates automatically. LetsEncrypt certificates are valid for 3 months, so it would be very tedious to do this by hand, not to mention that you could easily forget. Acme-client comes with very nice instructions for doing this.
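I won’t repeat those instructions here, but the general shape is a daily job that runs the client and reloads the web server afterwards (the schedule and reload step are an assumption; check the client’s own docs for the exact invocation):

```
# attempt renewal daily; then reload Apache to pick up any new certs
@daily /usr/local/bin/acme-client auto && service apache2 reload
```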

Read More

Redirecting output of cron jobs

Cron jobs are very useful (indispensable, I would say), but a common issue people have with them is getting emails about errors produced by the scripts that cron runs. The output is also sent to syslog, and this can become a serious problem when you have a script that runs often and produces long error messages.

The simplest solution to this problem is to redirect the output of a cron job to /dev/null, which is a sort of black hole that discards anything written to it. To do this, set your cron job up like this:

* * * * * /home/user/myscript.sh > /dev/null 2>&1

This will redirect both STDOUT (1) and STDERR (2) to /dev/null.

Sometimes, however, you want to redirect output to a log file of your choice. In that case, the crontab entry would look like this:

* * * * * /home/user/myscript.sh >> /var/log/custom/my.log 2>&1

Note that this time we are using “>>” instead of “>” for redirection. The important difference between the two operators is that “>” will overwrite the file each time, while “>>” will append to it.
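The difference is easy to demonstrate in a shell, using a temporary file:

```shell
log=$(mktemp)
echo "first"  > "$log"    # '>' truncates the file before writing
echo "second" > "$log"    # so "first" is gone now
echo "third" >> "$log"    # '>>' appends instead
cat "$log"                # prints "second" then "third"
```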

Read More

Shell scripting: working with files that have spaces in their names

Shell scripts are very useful for manipulating files, among other things. However, a problem arises when some of the files in your target directories have spaces in their names. The error you will get in such cases is:

./test.sh: line 8: [: too many arguments

This sounds somewhat cryptic, but it’s perfectly logical: bash uses spaces as separators, so each word in a file name is seen as a separate argument. The workaround is to change the value of the internal IFS variable (which stands for Internal Field Separator). Here is an example that works:

 

#!/bin/bash
SAVEIFS=$IFS
IFS=$(echo -en "\n\b")    # split on newlines only, not on spaces
for file in /var/www/html/*
do
    #do something with "$file" here, e.g.:
    echo "$file"
done
IFS=$SAVEIFS
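An alternative that avoids touching IFS altogether is to let find emit NUL-delimited names and read them back one by one, which survives any character a filename can contain. A sketch (it builds its own demo directory, so the paths are made up):

```shell
#!/bin/bash
# build a demo directory containing a file with a space in its name
dir=$(mktemp -d)
touch "$dir/a b.txt" "$dir/c.txt"

# -print0 emits NUL-delimited names; read -d '' consumes them safely
find "$dir" -maxdepth 1 -type f -print0 |
while IFS= read -r -d '' file
do
    echo "processing: $file"
done
```

Quoting "$file" everywhere inside the loop remains essential either way.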

Read More