Diet Planner

The Diet Planner is a simple JavaScript application that calculates the grams of fat, protein, and carbohydrates needed to gain or lose weight, given various body metrics.


Brute Force SSH Attack Protection

This HOWTO describes a couple of things that you can do to secure the SSH server on a Linux machine (Ubuntu, RedHat, SuSE...).

This is useful because there are script kiddies all around trying to break into computers, and I imagine that botnet writers will take more interest in Linux as its market share increases.

The pattern I have seen is many, many requests from the same IP address trying to guess user names and passwords. Most of the requests are trying to guess the root password.

There are a couple of things we can do to slow these attackers down. The most obvious is to configure sshd to only allow logins from a few select users, and to disallow remote login by the root user. We can also use IPTables to only allow a limited number of new connections per minute. And finally, we can move the SSH server to a different port on the machine. I don't know whether this last step actually gives the attackers any pause, however; they may just be trying all of the open ports.

There are more complex solutions to the problem; port knocking and log parsing come to mind. But I've opted for the simplest solution that doesn't impact usability in my case.

The use of IPTables to limit repeated connections is based on work by Kevin van Zonneveld. You can see his approach on his blog.

What is IPTables?

IPTables is the user-configurable front end to the packet filtering code in the Linux kernel's network stack. It lets you define rules that filter packets as they are received (the INPUT chain), before they are forwarded (FORWARD), or before they are transmitted (OUTPUT).
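For example, a single rule that drops all incoming telnet traffic as it is received would look something like this (just an illustration, not part of the configuration we will build below):

iptables -A INPUT -p tcp --dport 23 -j DROP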

Our configuration will drop incoming packets that match a specific set of rules.

SSHD configuration

The file /etc/ssh/sshd_config is used to configure the SSH server on your Linux machine. The changes I made to mine were to change "Port 22" to "Port xxxx" and to add "AllowUsers yyyy zzzz wwww", where xxxx is the new port you want SSH to listen on and yyyy, zzzz and wwww are the users that you want to have remote access. I also made sure that the line "PermitRootLogin no" existed and was not commented out.
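For reference, the relevant lines of sshd_config end up looking something like this (with xxxx and the user names replaced by your own values):

Port xxxx
AllowUsers yyyy zzzz wwww
PermitRootLogin no

You'll need to restart sshd for these changes to take effect.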

SSH configuration

If you have changed the port that sshd listens on, then you will probably want to configure the ssh client on any machine from which you would like to access your server. In your home directory on each of these machines you should find "~/.ssh/". In that directory you can create a configuration file, simply called "config". Put the following in that file; again, xxxx is the new port that your SSH server is listening on.

Host your.server.name
Port xxxx

Network scripts

In Ubuntu there are directories that contain scripts to run when a network interface comes up or goes down. These are convenient places to put the IPTables commands needed to drop attackers' packets. The directory for scripts run when an interface comes up is /etc/network/if-up.d, and the directory for scripts run when an interface goes down is /etc/network/if-down.d. We will create one file in the if-up.d directory and a symlink to it in the if-down.d directory. We do this to consolidate the logic in a single location; the script can use the MODE environment variable to determine whether the interface is coming up or going down.

In /etc/network/if-up.d/ssh-protection put the following.

#!/bin/bash

SSH_IFACE="eth1"
SSH_PORT=xxxx   # This should be the port you've moved your ssh server to, or 22 if you haven't moved it.
SSH_PERIOD=60
SSH_COUNT=8

#
# Only add the rules to the interface that SSH is actually listening on.
#
if [ "$IFACE" != "$SSH_IFACE" ]; then
    exit 0
fi

case "$MODE" in
    start)
        IPTABLES_ACTION="-A"
        ;;

    stop)
        IPTABLES_ACTION="-D"
        ;;
esac

# Record each new connection attempt to the SSH port in the "SSH" recent list.
/sbin/iptables $IPTABLES_ACTION INPUT \
               -i $IFACE \
               -p tcp \
               --dport $SSH_PORT \
               -m state \
               --state NEW \
               -m recent \
               --set \
               --name SSH

# Drop the packet if this source address has made SSH_COUNT or more new
# connection attempts within the last SSH_PERIOD seconds.
/sbin/iptables $IPTABLES_ACTION INPUT \
               -i $IFACE \
               -p tcp \
               --dport $SSH_PORT \
               -m state \
               --state NEW \
               -m recent \
               --update \
               --seconds $SSH_PERIOD \
               --hitcount $SSH_COUNT \
               --rttl \
               --name SSH \
               -j DROP

This file needs to be executable by root. You can use the following command line to make it so.

chmod u+x /etc/network/if-up.d/ssh-protection

And in /etc/network/if-down.d, create a symlink to the ssh-protection file in if-up.d using the following command (the command assumes you're in the if-down.d directory).

ln -s ../if-up.d/ssh-protection

Port Forwarding

This HOWTO describes the steps required to set up your RedHat (well, any Linux distro) firewall to forward the port used by gtk-gnutella to a machine on your internal network. This is useful because it allows your gtk-gnutella client to behave in a non-firewalled mode, and thus more of the gnutella network is available to you. In particular, other machines behind firewalls that can handle push requests will become available to you.

What is Port Forwarding?

Port forwarding is a feature of the IPTables system. It allows one computer to forward connections made to it so that another computer can actually process the request. If you want a very simple metaphor, you can think of it as mail forwarding: each computer has a number of addresses called ports, and IPTables allows (among other things) connections to these ports to be redirected to another computer. The most common use of port forwarding that I am aware of is allowing servers to run on machines that would normally be hidden behind a firewall.
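As a purely illustrative example (the interface name and addresses here are placeholders, not taken from my setup), forwarding incoming web traffic from the firewall to an internal machine would look something like this:

iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 192.168.0.10:80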

Firewall script

I am going to assume that you are using the default firewall script that comes with RedHat or whatever distribution you are running. My system is currently running RedHat 8.0 (heh, not anymore), and I am using a firewall script called rc.firewall-2.4. You should be able to find it in your /etc/rc.d directory. If you don't find it, it may be that I had to install it and just don't remember. :) You can search for rc.firewall-2.4 on Google and find many copies.

Setup

My goal was to make gtk-gnutella work in a non-firewalled mode from within my firewalled LAN. To do this, people suggested a line of the form:

$IPTABLES -t nat -I PREROUTING -p tcp -i $EXTIF --dport 6346 -j DNAT --to 192.168.0.2:6346

Here $IPTABLES is the iptables executable, $EXTIF is the external Ethernet interface (I use two Ethernet cards in my firewall), port 6346 is the gtk-gnutella port, and 192.168.0.2 is the machine on my internal network on which I wished to run gtk-gnutella.

With the rc.firewall-2.4 script this doesn't quite work. The reason is that by default any connection that would open a new session from the outside world is dropped. This is done with the line:

$IPTABLES -P FORWARD DROP

The solution is to add a rule to the FORWARD chain so that connections from the outside world to port 6346 are accepted instead of dropped. The PREROUTING rule above rewrites the destination of those connections, and with this rule in place they are allowed through the FORWARD chain and reach the internal machine as desired. The line to accomplish this is:

$IPTABLES -A FORWARD -i $EXTIF -o $INTIF -p tcp --dport 6346 -j ACCEPT

I placed this right before the line:

$IPTABLES -A FORWARD -j LOG

I did this because anything that reaches that line just gets logged and, as far as I can tell, nothing after it will rescue the connection from the chain's DROP policy, so our ACCEPT rule has to come before it.
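Putting the pieces together, the relevant part of my rc.firewall-2.4 ends up looking roughly like this (only the lines discussed above are shown; the rest of the stock script is unchanged):

# Let forwarded gnutella traffic through before it reaches the logging
# rule and the chain's default DROP policy.
$IPTABLES -A FORWARD -i $EXTIF -o $INTIF -p tcp --dport 6346 -j ACCEPT
$IPTABLES -A FORWARD -j LOG

# Redirect incoming gnutella connections to the internal machine.
$IPTABLES -t nat -I PREROUTING -p tcp -i $EXTIF --dport 6346 -j DNAT --to 192.168.0.2:6346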


Virtual Hosting and SSL

All right, now that we have some blogs up, we will want to enable SSL/TLS security, also known as https. The reason you want this may be obvious to some, but to spell it out: if you don’t use https to connect to your administrative pages in Movable Type while you’re sipping on your latte, then everyone else on that wireless network can see you type your password in, plain as day. And by “can”, I mean there is nothing preventing them from watching your traffic. Most likely no one is, but you never know.

There are a couple of complications that we will need to work through, though. Firstly, SSL and name-based virtual hosting are mostly incompatible. And secondly, unless you get a Certificate Authority, such as Verisign, to sign your SSL certificate, you and your security-conscious visitors will be presented with an ugly message from the browser saying that your site is trying to identify itself with an invalid security certificate.

So, why are SSL and name-based virtual hosting mostly incompatible? Well, it turns out that name-based virtual hosting works by having the browser send your web server the name of the server it’s trying to connect to. Your web server uses that information to look through its list of virtually hosted domains until it finds a match, and then sends the browser the page it asked for. But if the browser has connected over https, the communication channel needs to be encrypted first, which means the server needs to send the browser the certificate (and public key) of the domain the browser is trying to connect to. The trouble is that the server doesn’t yet know which domain that is, because that information is part of what will eventually be encrypted and sent by the browser. Apache will just serve the first certificate it finds in this case, so all but the first domain in your list of virtually hosted domains will cause the browser to issue an additional warning that the certificate is for the wrong domain.

This isn’t a big problem for large hosting companies or businesses; they can just assign a separate IP address to each domain and then use that extra information to configure the web server, allowing it to pass the correct domain-specific certificate back to the browser. For us little guys, that’s not really a suitable approach. You can do something similar by having your web server listen on a bunch of different ports, one for each domain’s SSL connection, but then your URLs will have to have the port number in them as well. Not a really classy solution. There is a proper solution to this problem in the works, called “Server Name Indication” or SNI, which is part of the TLS protocol. Unfortunately it’s still not readily available (see this page).

So, what’s the solution? It’s pretty simple actually: just use one certificate for all of your domains. There is an extension that allows multiple domain names to be associated with a single certificate. When a browser receives such a certificate, it looks at all of the domain names, and if any of them match it is satisfied. There seem to be some security issues with this feature (see this page), but since we are mainly interested in using it to authenticate ourselves with our own server, it’s not a big deal, I think.

And that brings us to our second issue. We can pay Verisign to generate a certificate for us, or we can sign it ourselves. If you pay Verisign (or another Certificate Authority), then anyone who browses your web site over a secure https connection will feel right at home, secure even. If you sign the certificate yourself, then visitors will be presented with a nastygram from their browser. Since I am mainly interested in being able to securely access my servers from insecure networks, I’m happy to sign the certificates myself.

First, you need to enable SSL in your Apache configuration. On Ubuntu the Apache2 configuration files are located in /etc/apache2. You’ll need to make sure that your ports.conf file contains something like the following:

Listen 80

<IfModule mod_ssl.c>
    Listen 443
</IfModule>

This causes Apache to listen on port 443 for connections as well as port 80. Port 443 is the https port. And your conf.d/namevirtualhosts file should look something like:

NameVirtualHost *:80
NameVirtualHost *:443

This tells Apache to look up virtual hosts by name for traffic coming in on either port. Next you need to make sure that your conf.d/ssl_certificate file looks something like:

SSLCertificateFile    /etc/apache2/ssl/serverwide.crt
SSLCertificateKeyFile /etc/apache2/ssl/serverwide.key

You can see that I’ve called my certificate and key files serverwide to make it obvious that they are used by all domains served by this server. It is very important that these files have their permissions set so that only root can read them. You’ll also need to make symlinks from the ssl.conf and ssl.load files in your mods-available directory to your mods-enabled directory.
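On Ubuntu, a2enmod will create those symlinks for you, and a chown/chmod pair will restrict the certificate and key to root; something like the following, using the file names from above:

a2enmod ssl

chown root:root /etc/apache2/ssl/serverwide.key /etc/apache2/ssl/serverwide.crt
chmod 600 /etc/apache2/ssl/serverwide.key /etc/apache2/ssl/serverwide.crt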

Now we’re ready to generate and sign our key and certificate. Once you have generated the certificate, you can inspect its contents with this command:

openssl x509 -in serverwide.crt -noout -text

To generate a key, use the following command. You will be asked for a pass phrase. It is important that you remember this pass phrase, or your key will be lost to you forever, or at least until computers are powerful enough to brute-force it.

openssl genrsa -des3 -rand /dev/urandom -out serverwide.key 1024

Once you have a key you can create and sign a certificate with the following command. You will be asked for the pass phrase you entered above. This is because the key is protected by that pass phrase and can’t be used without it. This certificate will be valid for one year.

openssl req -config server.config -new -key serverwide.key -out serverwide.crt -x509 -days 365

Most of the options we need to pass to OpenSSL to create and sign the certificate can be given in a configuration file. The command line above assumes the configuration file is called server.config. Below is an example server.config file; the main lines of interest are in the alt_names section, which is where you list all of the virtually hosted domains on your server. The browser will look for a match with any of those domains when the server passes it the certificate we have just generated. I also found that subjectAltName had to be in both the v3_req and v3_ca sections.

[ req ]
default_bits       = 1024
default_md         = sha1
distinguished_name = req_distinguished_name
prompt             = no
string_mask        = nombstr
req_extensions     = v3_req
x509_extensions    = v3_ca

[ req_distinguished_name ]
countryName            = <country code>
stateOrProvinceName    = <state>
localityName           = <city>
organizationName       = <whatever, could be your name>
organizationalUnitName = <again, whatever>
commonName             = www.domain1.com
emailAddress           = webmaster@domain1.com

[ v3_req ]
basicConstraints       = CA:FALSE
keyUsage               = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName         = @alt_names

[ v3_ca ]
subjectAltName         = @alt_names

[ alt_names ]
DNS.1 = www.domain1.com
DNS.2 = www.domain2.com
DNS.3 = www.domain3.com

The key that we have generated has a pass phrase associated with it (the one you entered when you generated the key). This pass phrase will need to be entered every time you restart your Apache server. There are ways of removing this extra security, but if you want to do that I’ll let you look it up elsewhere. For me, with a server running on a UPS, I reboot or restart Apache a couple of times a year at most, and the added security is well worth it. If your server gets compromised, and it will eventually, then your private key will be copied. And that is, as they say, a bad thing.

And finally, in your site configuration file, probably in /etc/apache2/sites-available, you will need to add the following:

<VirtualHost *:443>
    ServerName   www.domain1.com
    ServerAlias  domain1.com *.domain1.com
    DocumentRoot /var/www/domain1

    SSLEngine    On
</VirtualHost>

This is the virtual host configuration for your secured site. You’ll want to copy into it any additional directives from your normal *:80 configuration section.


It was a lot harder than I anticipated to get Movable Type to run from a single global install on all of my virtually hosted domains. So in the spirit of sharing, here’s how I did it.

First, install Movable Type into a directory at the root of your web server’s directory structure. I put mine in /var/www/shared/. The resulting directory structure contained:

/var/www/shared/cgi-bin/mt
This directory contains pretty much everything in the Movable Type tarball.
/var/www/shared/mt-static
This directory is the mt-static directory from the Movable Type tarball. I moved it out to the top level of the shared directory because I have Apache configured to not serve documents from the cgi-bin directory.
/var/www/shared/conf
This directory will contain the configuration files for all of the sites you will be virtually hosting.

Next, configure Apache for each of your virtually hosted domains. I assume you already have virtual hosting up and running. The configuration file for socialhacker.com looks like this:

<VirtualHost *:80>
    ServerName   www.socialhacker.com
    ServerAlias  socialhacker.com *.socialhacker.com

    DocumentRoot /var/www/socialhacker

    AddHandler   cgi-script .cgi
    SetEnv       MT_HOME     /var/www/shared/cgi-bin/mt
    SetEnv       MT_CONFIG   /var/www/shared/conf/socialhacker.cgi
    ScriptAlias  /cgi-bin/   /var/www/shared/cgi-bin/
    Alias        /mt-static/ /var/www/shared/mt-static/
</VirtualHost>

The important bits here are how you set up the aliases and the environment variables. All of this is covered in this article.
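If you also want the administrative pages served over https, as described in the previous section, the matching *:443 virtual host is essentially the same configuration with SSLEngine turned on. Here’s a sketch, assuming the serverwide certificate setup described above:

<VirtualHost *:443>
    ServerName   www.socialhacker.com
    ServerAlias  socialhacker.com *.socialhacker.com

    DocumentRoot /var/www/socialhacker

    SSLEngine    On

    AddHandler   cgi-script .cgi
    SetEnv       MT_HOME     /var/www/shared/cgi-bin/mt
    SetEnv       MT_CONFIG   /var/www/shared/conf/socialhacker.cgi
    ScriptAlias  /cgi-bin/   /var/www/shared/cgi-bin/
    Alias        /mt-static/ /var/www/shared/mt-static/
</VirtualHost>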

Finally, you need to create a configuration file in the shared/conf directory for your site. I did this by hand, which you’ll need to do as well, since Movable Type won’t set most of these options in the configuration file it generates.

CGIPath             /cgi-bin/mt/
StaticWebPath       /mt-static/
StaticFilePath      /var/www/shared/mt-static/
PluginPath          /var/www/shared/cgi-bin/mt/plugins/
TemplatePath        /var/www/shared/cgi-bin/mt/tmpl/
WeblogTemplatesPath /var/www/shared/cgi-bin/mt/default_templates/

ObjectDriver        DBI::mysql
Database            <your database name>
DBUser              <your database user>
DBPassword          <your database password>
DBHost              localhost

MailTransfer        smtp
SMTPServer          smtp.<your provider>.com (probably)

And there’s the real magic. You have to specify paths to the static content as well as to both template directories and the plugin directory. If you forget the first template path (TemplatePath) you won’t get very far, as the administrative pages won’t load. If you forget the second one (WeblogTemplatesPath) you’ll get all the way to making a post and then find that all of the files in your blog directory are zero length. And if you forget PluginPath you’ll be missing all of your plugins.

Hopefully you’ve found this useful in setting up your own Movable Type blogs.
