Convenient And Secure Temporary Firewall Exceptions

April 11, 2011 under Main

I have a client for which I provide a managed server. Part of that service includes managing security and access controls. In this case the only public-facing service is the Web server, so we only have ports 80 and 443 (HTTP and HTTPS) open to the world.

Web site updates are pushed to the server via a secure SSH connection, with the SSH service locked down to the webmaster’s public IP address, which is static. This works well, except for the occasional instance where the webmaster needs to update the site while out of the office, for example from a conference or meeting.
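
In iptables terms, the ruleset described above looks roughly like this (a sketch only; the office address is a placeholder from the documentation range, and the real ruleset will differ in detail):

# Illustrative sketch of the ruleset described above
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -j ACCEPT                   # HTTP, open to all
iptables -A INPUT -p tcp --dport 443 -j ACCEPT                  # HTTPS, open to all
iptables -A INPUT -p tcp --dport 22 -s 198.51.100.10 -j ACCEPT  # SSH, office IP only
iptables -A INPUT -j DROP                                       # everything else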

One approach to this problem is to provide a VPN service for the client to use while off-site. This has worked well in other cases, but it has its drawbacks. We’ve found that many public Wi-Fi hotspots do not pass GRE packets, which are required for a PPTP VPN connection.

The solution I came up with this morning was to write a small CGI script, which I placed on the server, accessible via SSL and protected with password authentication. The concept is quite simple: it reads the source IP of the client connection and adds an exception to the iptables firewall allowing access from that IP to the SSH service.
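
To make the concept concrete before the step-by-step below, here is a minimal sketch of what such a script might look like. This is illustrative only, not the exact script referenced in Step 2, and the chain, rule number and port values are placeholders:

#!/bin/bash
# Minimal sketch of the concept -- illustrative only, not the exact
# script referenced in Step 2. Variable values are placeholders.
CHAIN=INPUT      # firewall chain to modify
RULENUM=1        # position at which to insert the new rule
PORT=22          # service to open up (SSH)

echo "Content-type: text/plain"
echo ""

# The Web server sets REMOTE_ADDR to the client's source IP.
IP=$REMOTE_ADDR
if [ -z "$IP" ]; then
    echo "Could not determine client IP."
    exit 0
fi

# Insert an ACCEPT rule for this IP ahead of the existing rules.
if sudo /sbin/iptables -I $CHAIN $RULENUM -p tcp --dport $PORT -s "$IP" -j ACCEPT
then
    echo "SSH access enabled for $IP until midnight."
else
    echo "Failed to add firewall rule for $IP."
fi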

Step 1: configure sudoers to allow the user that runs the CGI script to execute the iptables command:

$ sudo visudo
# On CentOS 5, the default sudoers file contains this line:
#   Defaults    requiretty
# If your sudoers file has an entry like this, either comment it out,
# or (safer) override it for just your Web server userid:
Defaults:myusername !requiretty
# Otherwise your CGI won't be able to use sudo, since CGI scripts run without a tty.

# Then, find the section listing users and the commands they can run.
# It should start after this line:
# ## Allow root to run any commands anywhere
# Add an entry for your Web server user. On CentOS 5 the default
# user would be 'apache', but I suggest using suexec to configure a
# virtual host that runs under a different userid specifically for
# running this CGI.
myusername  ALL=(ALL)       NOPASSWD: /sbin/iptables
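
A quick way to test the entry is to switch to that user and try the command non-interactively (sudo’s -n flag fails immediately instead of prompting if something is wrong; ‘myusername’ matches the sudoers entry above):

$ sudo su -s /bin/sh myusername
$ sudo -n /sbin/iptables -L INPUT -n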

Step 2: grab a copy of my CGI script and install it somewhere appropriate on your Web server. You may want to edit the chain, rule number and port variables to suit your ruleset. Don’t forget to password-protect it, for example using this directive in a .htaccess file or in your virtual host definition:

<Files "ip.cgi">
        AuthType        Basic
        AuthName        "ip.cgi"
        AuthUserFile    /var/www/html/.htpasswd
        Require         valid-user
</Files>
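
If the password file referenced above doesn’t exist yet, it can be created with htpasswd (the username here is just an example):

$ htpasswd -c /var/www/html/.htpasswd webmaster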

Step 3: hit the URL for your script, enter your login/password, and run ‘sudo /sbin/service iptables status’ to verify that the rule has been added. To delete a test rule, you can use ‘sudo iptables -D <chain> <rule number>’.
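
The rule number to pass to ‘iptables -D’ can be found by listing the chain with line numbers:

$ sudo /sbin/iptables -L INPUT -n --line-numbers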

Step 4: I only want the exception to last until midnight, so I added an entry to root’s crontab to reload the saved ruleset at midnight:

$ sudo crontab -e
# Reload iptables - flush any temporary IP restrictions
0 0 * * * /sbin/service iptables restart

And that’s all there is to it: a relatively simple solution for allowing temporary access to a server port with no administrator intervention required. The client can simply bookmark the URL for the script and run it any time access is required from outside the office.


Uptime Monitoring

April 8, 2011 under The 100% Uptime Challenge

At this point, I’m reasonably confident that I have a reliable and fault-tolerant server setup powering this blog. I’ve got 3 servers in 3 countries which are each capable of keeping the blog running, even if the other 2 should fall off the face of the ‘net.

Each server has its own copy of the PHP files and MySQL database, allowing it to operate independently. MySQL replication is smart enough to resume automatically after a connection failure, and I know how to verify my data integrity to confirm no conflicts have arisen.
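
As a quick sanity check (a sketch, assuming standard MySQL master-slave replication), the replication threads and lag can be inspected on each server with:

$ mysql -e "SHOW SLAVE STATUS\G" | grep -E 'Slave_(IO|SQL)_Running|Seconds_Behind_Master'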

I’ve tested a number of operating systems and browsers to see how they behave in theoretical outage scenarios.

With all this in place, it’s time to start the challenge and see if I can reach 100% uptime!

I tested out 3 monitoring services: Site24x7, SiteUptime, and Pingdom. While they are all straightforward to set up and seem at first glance to do their jobs perfectly, there is a problem: none of them behaves exactly like a browser. To be more specific, none of them tried the alternate IPs when I tested shutting down a server. They then reported www.cwik.ch was down, even though all the browsers I tested connected instantly to one of the available servers.
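
For comparison, here is a rough command-line approximation of the failover behaviour the browsers exhibit. This is a sketch only; the timeout is arbitrary and it assumes dig returns only A records for the host:

#!/bin/sh
# Try each A record in turn until one answers, like a browser would.
HOST=www.cwik.ch
for ip in $(dig +short $HOST A); do
    if curl -s -o /dev/null --connect-timeout 5 -H "Host: $HOST" http://$ip/; then
        echo "$ip answered"
        exit 0
    fi
done
echo "no server answered"
exit 1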

I have set up all 3 monitoring services anyway. Chances are, even if there is an outage on one of the servers, at least one of the monitors will still connect. Until I find a better solution, I will consider the test a failure only if all 3 monitors detect an outage.

The monitoring intervals I’ve set up are:
Pingdom: 1 minute
SiteUptime: 2 minutes
Site24x7: 3 minutes

All 3 services offer a “badge”, which I’ve placed in the left-hand sidebar.

P.S. I contacted Pingdom support to enquire about the above-mentioned problem. I got an answer back indicating they are aware of the limitation and are keeping it in mind for a potential future fix, but at this point there is no way to have Pingdom try additional IPs should the first one fail to respond. Their support person pointed me to the Round-robin DNS article on Wikipedia and made the point that there is no agreed standard for how a client should handle this kind of situation, and he is quite right. That doesn’t change the fact that my testing indicates all major browsers and OSes fully support this kind of failover, so the lesson is caveat emptor: the result you desire is not guaranteed.
