Setting up a CentOS 6 server to host a secure site

January 10, 2013 under Main

I recently had the task of setting up a CentOS 6 server which will be hosting a secure site. The site will be storing sensitive customer data including billing and credit card information, so security is critical.

The site will be Apache/PHP/MySQL, so a fairly standard LAMP stack. It will need to pass PCI compliance scans which involve a lot of port scanning, fingerprinting and attempts to break the web server by sending it unexpected requests.

While this list is in no way complete or authoritative, I thought I’d share a few of the steps I took to configure the server. The very first step was to install a clean CentOS 6 x86_64 and apply all the latest updates available by running ‘yum update’. I then installed mysql-server, php, php-mysql (and other php-* modules we need), httpd and mod_ssl. I enabled the servers I needed and disabled everything else, until I ended up with just:

# chkconfig --list|grep :on
crond              0:off    1:off    2:on    3:on    4:on    5:on    6:off
httpd              0:off    1:off    2:on    3:on    4:on    5:on    6:off
iptables           0:off    1:off    2:on    3:on    4:on    5:on    6:off
mysqld             0:off    1:off    2:on    3:on    4:on    5:on    6:off
network            0:off    1:off    2:on    3:on    4:on    5:on    6:off
rsyslog            0:off    1:off    2:on    3:on    4:on    5:on    6:off
sshd               0:off    1:off    2:on    3:on    4:on    5:on    6:off
udev-post          0:off    1:on    2:on    3:on    4:on    5:on    6:off
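
For reference, getting to this state looked roughly like the following (the exact set of services to disable varies between installs; postfix is just an example of one you may want off):

# Apply all available updates, then install the LAMP packages
yum -y update
yum -y install httpd mod_ssl mysql-server php php-mysql

# Disable and stop anything that isn't needed, one service at a time
chkconfig postfix off
service postfix stop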

When it comes to security, the fewer servers running the better: less code means fewer potential vulnerabilities, and fewer servers mean fewer potential exploitation points.

1. Firewall
CentOS ships with iptables which is a very capable and flexible firewall. There are lots of ways to configure iptables, but my personal favourite is to just open up the config file and write out the rules by hand. This way I can be completely confident that the firewall is doing what I intended it to do, nothing more and nothing less, and not what some configuration tool thinks will be a good configuration for me. The config file on CentOS is /etc/sysconfig/iptables and I ended up with the following rules:

*filter
:FORWARD ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state -m tcp --dport 22 --state NEW -s 10.2.3.0/24 -j ACCEPT
-A INPUT -p tcp -m state -m tcp --dport 443 --state NEW -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT

Enable the firewall using ‘chkconfig iptables on; service iptables start’
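
To confirm the rules loaded exactly as intended, list the active ruleset:

# Show the running rules with packet counters and rule numbers
iptables -L -n -v --line-numbers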

2. SSH
You may have noticed the only 2 ports I opened in the firewall are 443 for https (SSL web server) and 22 for SSH, but that the rule for port 22 has a source network restriction in place. This restriction limits access to the SSH server to connections from the local LAN. In this case that LAN is in a remote datacenter, and we can access the LAN using a VPN connection. This adds a layer of security to the already secure SSH server. If you don’t have a VPN available another good option is to restrict access to your static IP(s). If you don’t have a static IP, see Convenient And Secure Temporary Firewall Exceptions for another good solution.

The next step was to set up key-based authentication: generate a keypair on our workstation using ‘ssh-keygen -t rsa’, then copy the public key part of the keypair to ~/.ssh/authorized_keys on the server. The .ssh directory, if it doesn’t already exist, must be chmod 700 and authorized_keys chmod 600 – if you forget this step SSH will refuse to use the keys as the permissions are insecure!
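
The whole sequence looks something like this, using the (obscured) names from the alias configuration below:

# On the workstation: generate the keypair
ssh-keygen -t rsa -f ~/.ssh/mynewprivatekey

# Copy the public half to the server and set the required permissions
ssh mylogin@10.2.3.123 'mkdir -p ~/.ssh && chmod 700 ~/.ssh'
cat ~/.ssh/mynewprivatekey.pub | ssh mylogin@10.2.3.123 'cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys'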

On our workstation we set up an alias for the host by opening (creating if it doesn’t exist) the file ~/.ssh/config and inserting:

Host secureserver
HostName 10.2.3.123
IdentityFile ~/.ssh/mynewprivatekey
Port 22
User mylogin

(real values obscured)

We can now log in to the server simply by typing ‘ssh secureserver’. Once key-based authentication was working, I disabled password authentication entirely by editing /etc/ssh/sshd_config and setting PasswordAuthentication to ‘no’, and restarting sshd. SSH is now locked down to our local LAN IP range and to users in possession of the correct private key. This should be pretty secure, and the PCI scanner won’t even pick up on the existence of the SSH server as the firewall will block any connection attempts from the WAN.
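
For reference, the relevant line in /etc/ssh/sshd_config, followed by a restart to apply it:

# /etc/ssh/sshd_config
PasswordAuthentication no

# apply the change
service sshd restart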

In addition to using SSH for administering the server, we’ll also use the SFTP subsystem to upload files to our website. This means we don’t need to run a separate FTP server – one less server is only a good thing from a security perspective. We’ll also use the SSH server to tunnel connections to our MySQL server. MySQL Workbench (free from dev.mysql.com) supports this type of connection out of the box, as do some other MySQL clients.
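
If your MySQL client doesn’t support SSH tunnelling natively, you can open a tunnel by hand; a minimal sketch (local port 3307 is an arbitrary choice):

# Forward local port 3307 to MySQL on the server, via the SSH alias set up earlier
ssh -N -L 3307:127.0.0.1:3306 secureserver

# In another terminal, point your MySQL client at the tunnel
mysql -h 127.0.0.1 -P 3307 -u mylogin -p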

3. Apache and PHP
One of the things the PCI vulnerability scanners tend to do is try to fingerprint your Web server to see what software you have installed and what versions you are running. They then compare the version numbers against a database of known vulnerabilities. There are 2 problems with this approach: 1. it does not make any attempt to verify whether you are actually vulnerable and 2. the RHEL/CentOS philosophy of freezing software versions at release time then backporting security fixes means that the stock versions of things like Apache and PHP available from the CentOS yum repositories are never the latest ones available. This causes the vulnerability scanner to go wild saying you’re vulnerable to dozens of things that have actually been patched long ago. You can verify this if you are curious by looking at the changelogs which list the CVE numbers of security fixes which have been backported, eg. ‘rpm -q --changelog httpd’.

So let’s make their job a little harder by limiting what information our Web server discloses:
1. Edit /etc/php.ini and set “expose_php = Off”. This prevents PHP from adding a line to the HTTP response headers declaring its presence on the server.
2. Edit /etc/httpd/conf/httpd.conf and set:
– “ServerTokens Prod” – hide the version number from the HTTP response headers
– “ServerSignature Off” – hide the server name and version from server generated responses such as errors, directory listings, etc.
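
You can check the result from any machine using curl (substitute your own hostname):

curl -skI https://myhostname/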

The HTTP headers now look like this:

HTTP/1.1 200 OK
Date: Thu, 10 Jan 2013 09:58:33 GMT
Server: Apache
Last-Modified: Tue, 08 Jan 2013 21:58:40 GMT
ETag: "85af-348-4d2ce0c66c400"
Accept-Ranges: bytes
Content-Length: 840
Connection: close
Content-Type: text/html; charset=UTF-8

Good – the version number is no longer reported and the existence of PHP is also omitted.

Also in /etc/httpd/conf/httpd.conf, I commented out all the modules which we don’t need, such as ldap, webdav, usertrack, userdir, status, info, vhost_alias, autoindex, speling, proxy, cache and version. This step will depend on what features of Apache you intend to use. I also removed all the configuration stuff we don’t need such as (in our case) the entire virtual hosting and proxying sections.

The SSL server configuration lives in /etc/httpd/conf.d/ssl.conf on a default install and this is where I set up the SSL virtual host. Of particular note in the SSL server configuration is to disable all weak encryption and outdated SSL implementations. This is important for any secure site and also something the PCI scan will pick up on if you leave the insecure defaults. I ended up with the following virtual host definition:

<VirtualHost 1.2.3.4:443>
        DocumentRoot "/var/www/mysite/public_html"
        ServerName myhostname:443

        <Directory "/var/www/mysite/public_html">
                Options FollowSymLinks
                AllowOverride None
        </Directory>

        # Use separate log files for the SSL virtual host; note that LogLevel
        # is not inherited from httpd.conf.
        ErrorLog logs/ssl_error_log
        TransferLog logs/ssl_access_log
        LogLevel warn

        #   SSL config
        SSLEngine on
        SSLProtocol -ALL +SSLv3 +TLSv1
        SSLCipherSuite ALL:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:-MEDIUM
        SSLCertificateFile ssl/mysslcert.crt
        SSLCACertificateFile ssl/mycertprovider.ca
        SSLCertificateKeyFile ssl/myprivate.key

        CustomLog logs/portal_request_log "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"
</VirtualHost>
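
To verify the protocol restrictions from the outside, openssl’s s_client utility is handy. A quick sketch (the first handshake should be refused outright, assuming your local openssl build still allows attempting SSLv2):

# This should fail to negotiate if SSLv2 is disabled on the server
openssl s_client -connect myhostname:443 -ssl2

# Inspect the protocol and cipher negotiated on a normal connection
openssl s_client -connect myhostname:443 < /dev/null 2>/dev/null | grep -E 'Protocol|Cipher'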

4. MySQL
MySQL by default includes an anonymous user with access to the ‘test’ database, and a root user without a password! (granted, access is limited to the local server, but still not very secure). So the first thing I did after installing mysql-server was log in as root and secure the server:

mysql> USE mysql
mysql> DELETE FROM user WHERE user='';
mysql> DELETE FROM db;
mysql> UPDATE user SET Password=PASSWORD('mysupersecretpassword');
mysql> FLUSH PRIVILEGES;

Then proceed to set up any databases and appropriate access control lists as required.
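
For example, a dedicated application account holding only the privileges the site needs might look like this (database, user and password are hypothetical):

mysql -u root -p -e "CREATE DATABASE mysitedb;
GRANT SELECT, INSERT, UPDATE, DELETE ON mysitedb.* TO 'myapp'@'localhost' IDENTIFIED BY 'anotherstrongpassword';
FLUSH PRIVILEGES;"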

I also copied /usr/share/doc/mysql-server-5.1.66/my-large.cnf to /etc/my.cnf and used it as the starting point for my MySQL server configuration; it proved a reasonable basis for our requirements on this server.

Summary
This is not an exhaustive list of steps we took to secure this server and will probably not apply in full to you as every deployment is unique. I hope however that some of the information in this post is useful, and as always I welcome any comments, suggestions and feedback.


Scheduled MySQL server backups with cron and mysqldump

December 20, 2012 under Main

This is a task I perform fairly regularly, but not regularly enough to remember exactly which permissions are needed, so I invariably end up having to look up the MySQL reference manual.

The task is to set up a scheduled backup of a MySQL server using mysqldump and cron. The backup should contain not only a copy of all databases on the server but also the state of the binary logs at the time, ie. using the --master-data option. I find binary logging + regular full snapshots of the MySQL server to be a great and simple backup strategy that allows not only for restoring the entire server to a known good state (from the snapshot backup) but also restoring to any point in time using the binary logs.

For example, let’s say you run a daily backup at midnight. At 13:26 a friendly Web developer (definitely not you) made a typo in an SQL query and wiped out a whole column full of very important data.

With snapshots + binary log, you can:

1. Restore the snapshot from midnight
2. Use the mysqlbinlog command to extract all the data-affecting queries that were run on the server between the time the backup ran and the time the erroneous query was made. You can do this by starting your search from the log position recorded at the top of your backup file, and ending with the position just before the bad query. Use mysqlbinlog and grep to find the latter position. Pipe all this extracted data into another sql file (see the sketch after this list).
3. Import this sql file into your MySQL server.
4. You now have your server restored to the exact state it was in immediately prior to the nefarious query.
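
To make step 2 concrete, the extraction might look roughly like this (log file name and positions are hypothetical; find your real ones as described above):

# The starting coordinates are recorded near the top of the backup file
zcat /backups/current/MySQL/backup.sql.gz | head -30 | grep 'CHANGE MASTER TO'

# Extract everything from the recorded position up to just before the bad query
mysqlbinlog --start-position=107 --stop-position=8123456 mysql-bin.000042 > recovery.sql

# Replay the extracted statements
mysql -u root -p < recovery.sql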

Sounds good? Here’s how to set up this backup scheme:

1. Set up a MySQL user with permission to run the backups, nothing more:

mysql> GRANT SELECT, LOCK TABLES, SHOW VIEW, RELOAD, SUPER, REPLICATION CLIENT ON *.* TO 'backup'@'localhost';

mysql> FLUSH PRIVILEGES;

2. Set up a cron job: [root@myserver ~]# crontab -e
# MySQL backup
0 0 * * * mysqldump -u backup -A --master-data -v | gzip > /backups/current/MySQL/backup.sql.gz

Change the path to taste. Or if you prefer to store 7 days' worth of backups, you could do something like:

0 0 * * * mysqldump -u backup -A --master-data -v | gzip > /backups/current/MySQL/backup-`date +\%a`.sql.gz

This will name your backups with the day of the week in the filename, eg. backup-Mon.sql.gz. See ‘man date’ for formatting options.

Press Esc, then type :wq to save your crontab and exit.

An explanation of the options to mysqldump:

-u: The MySQL user name to use when connecting to the server.
--master-data: Use this option to dump a master replication server to produce a dump file that can be used to set up another server as a slave of the master. It causes the dump output to include a CHANGE MASTER TO statement that indicates the binary log coordinates (file name and position) of the dumped server. These are the master server coordinates from which the slave should start replicating after you load the dump file into the slave.
-A: Dump all tables in all databases.
-v: Verbose mode. Print more information about what the program does.

MySQL dump files are plain text which compress really well, so I feed the data into gzip before writing to disk.

It is worth noting that in order for mysqldump to take a consistent snapshot using --master-data, --lock-all-tables is automatically turned on: “Lock all tables across all databases. This is achieved by acquiring a global read lock for the duration of the whole dump.”

If you have a large amount of data, the backup may take a considerable amount of time, during which any INSERT/UPDATE/DELETE operations on your databases will be blocked. The queries will be shown in the process list as “Waiting for release of readlock” and will execute when your backup has completed, if the connection has not yet timed out.

To avoid this problem, I typically recommend that my customers run a separate MySQL server specifically as a disaster recovery and/or backup system. This can be on a separate server, a small VM, or even on the same server as the primary but bound to a different port. Set the backup server to replicate from the master using MySQL’s built-in replication system, then schedule your backups to run on the backup server instead of the master. This way you avoid any disruption to your live system and still get clean snapshots. While your backup is running, the replication slave process will have to wait for the table locks, but as soon as the backup is finished replication will automatically resume and the slave catches up with the master again.
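
Setting up replication is beyond the scope of this post, but the key step once the slave has an initial copy of the data is pointing it at the master and starting the slave thread (all values hypothetical):

mysql -u root -p -e "CHANGE MASTER TO MASTER_HOST='10.2.3.123',
MASTER_USER='repl', MASTER_PASSWORD='replpassword',
MASTER_LOG_FILE='mysql-bin.000042', MASTER_LOG_POS=107;
START SLAVE;"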


Script to convert ECSL CSV files exported from MYOB on Mac to HMRC compatible CSV

December 4, 2012 under Main

Every 3 months we have to send Her Majesty’s Revenue and Customs (HMRC) in the UK a list of customers in the EU to which we have sold our services; this list is called the ECSL or European Community Sales List. Unfortunately, due to choices made long ago which are non-trivial to change, we use a program called MYOB on a Mac for our bookkeeping. Unlike some of its competitors, MYOB can’t upload the ECSL data directly to HMRC, so we have to enter the data into forms manually.

This is a very tedious job, and since I don’t like tedious things, I set out to find a better solution. HMRC offer a facility whereby you can automate the data entry by importing a CSV file formatted a specific way. MYOB can export the ECSL to CSV, but the format is not compatible with the HMRC system.

The solution I came up with was to write a small Perl script to convert from the MYOB format to the HMRC format. Posted here, in case anyone else finds it useful!

#!/usr/bin/perl -w

# Input file (MYOB export) should be passed as an argument to this script
my $infile = $ARGV[0];
chomp($infile);

my $outfile = $infile . '-hmrc.csv';

# Ask for year and month for this submission
print "Year: ";
my $year = <STDIN>;
chomp($year);
print "Month: ";
my $month = <STDIN>;
chomp($month);

# open output file
open(HMRCFILE, '>', $outfile) or die "Could not open output file: $!";

# Write header
my $vatregno = "your_number_here";
my $subsidiary = "000";
my $name = "Christopher Wik";
print HMRCFILE "HMRC_VAT_ESL_BULK_SUBMISSION_FILE\n";
print HMRCFILE "$vatregno,$subsidiary,$year,$month,GBP,$name,0\n";

# Set record separator to \r (Mac) - MYOB saves in this format
$/ = "\r";

# Convert data from MYOB to HMRC format
open(MYOBFILE, $infile) or die "Could not open input file: $!";
while(<MYOBFILE>) {
  next if !/,\w\w\d+/;
  my ($cust,$vat,$amount) = split(/,/,$_,3);
  my $country = substr($vat, 0, 2);
  my $vatno = substr($vat, 2);
  $amount =~ /(\d+)/;
  my $amount_num = $1;
  # A minus sign anywhere in the amount marks it as negative
  $amount_num *= -1 if $amount =~ /-/;
  print HMRCFILE "$country,$vatno,$amount_num,3\n";
}
close(MYOBFILE);

# close HMRC output file
close(HMRCFILE);
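
Running it is then simply (script file name assumed):

# Prompts for Year and Month, then writes ECSL-export.csv-hmrc.csv
perl myob2hmrc.pl ECSL-export.csv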

Multi-file, multi-line find/replace with Perl

November 27, 2012 under Main

A customer recently contacted me for assistance. Their PC had contracted a virus which had installed a keylogger, which in turn had been used to steal their FTP password. The attacker logged into their FTP account and infected almost every file in the site with a couple of lines of JavaScript which attempted to install a trojan on the PC of anyone visiting their site.

Unfortunately they didn’t have a clean copy of the site files and since the infection had happened some time ago, our own backups only had infected copies of their files. This left no option but to try to clean up their site by removing the infection.

Thankfully the attacker had thoughtfully surrounded every line of inserted code with HTML comments. This made the cleanup easier as we could attempt a find/replace across all files replacing everything between the start and end comments with an empty string, thus removing the code from the site. An example:

<!--7e5a0c-->
<script type="text/javascript" language="javascript" >nefarious code here</script>
<!--/7e5a0c-->

My first thought was to use a GUI editor like BBEdit on Mac which has a great find/replace function, but the site had many thousands of files totaling 2.6GB. Downloading, cleaning and re-uploading would take hours.

Instead I turned to my trusty friend Perl in collaboration with find and xargs. The solution ended up to be very simple, just one line on the Linux terminal:

find /path/to/webroot -type f -print0 | xargs -0 perl -0777 -i -pe 'BEGIN{undef $/;} s/7e5a0c.*?7e5a0c//smg'

Breaking this down, we get:

find: find everything of type file in the given directory and print this list, using the null character as a separator instead of newline so xargs doesn’t choke on files with spaces in their names.

xargs: feed the results of the find command as an argument to the following command. -0 says expect null as separator instead of newline.

perl: -0777 sets slurp mode, ie. read the input file in one go. -i says replace in-place (don’t write out a new file), -p says iterate over given files in a sed-like manner, and -e says execute the perl code given as the following argument. The code has 2 parts: 1. undef the record separator, which defaults to newline, so we can match multiple lines. 2. A find/replace regular expression (s = substitute). The ? after .* makes the match non-greedy, so each infection block in a file is removed separately. Modifiers (after the regex): m = treat string as multi-line, s = treat string as a single line, g = global matching. See perlre for more details.
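
Since -i edits files in place with no undo, it’s worth scoping things out first; a quick way to count affected files (and to verify afterwards that none remain) is:

grep -rl '7e5a0c' /path/to/webroot | wc -l

You can also pass -i.bak instead of -i to make Perl keep a backup copy of each file it modifies.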

The whole thing took just a few seconds to complete, searching 11028 files and removing all instances of the infection. Success!
