Sunday, June 19, 2016

How To Set Up Basic HTTP Authentication With Nginx on CentOS 7

Introduction

Nginx is one of the leading web servers in active use. It and its commercial edition, Nginx Plus, are developed by Nginx, Inc.

In this tutorial, you'll learn how to restrict access to an Nginx-powered website using the HTTP basic authentication method on CentOS 7. HTTP basic authentication is a simple username and (hashed) password authentication method.
Prerequisites

To complete this tutorial, you'll need the following:

    One CentOS 7 server with a sudo non-root user.

    Nginx installed and configured on your server.

Step 1 — Installing HTTPD Tools

You'll need the htpasswd command to configure the password that will restrict access to the target website. This command is part of the httpd-tools package, so the first step is to install that package.

    sudo yum install -y httpd-tools

Step 2 — Setting Up HTTP Basic Authentication Credentials

In this step, you'll create a password for the user running the website.

That password and the associated username will be stored in a file that you specify. The password will be hashed, and the name of the file can be anything you like. Here, we use the file /etc/nginx/.htpasswd and the username nginx.

To create the password, run the following command. You'll be asked to specify and confirm a password.

    sudo htpasswd -c /etc/nginx/.htpasswd nginx
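
The -c flag creates the file (and overwrites it if it already exists), so use it only for the first user. To add further users to the same file later, you would run the same command without -c; the username extrauser below is only a hypothetical example:

    sudo htpasswd /etc/nginx/.htpasswd extrauser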

You can check the contents of the newly-created file to see the username and hashed password.

    cat /etc/nginx/.htpasswd

Example /etc/nginx/.htpasswd

nginx:$apr1$ilgq7ZEO$OarDX15gjKAxuxzv0JTrO/

Step 3 — Updating the Nginx Configuration

Now that you've created the HTTP basic authentication credential, the next step is to update the Nginx configuration for the target website to use it.

HTTP basic authentication is made possible by the auth_basic and auth_basic_user_file directives. The value of auth_basic is any string, and will be displayed at the authentication prompt; the value of auth_basic_user_file is the path to the password file that was created in Step 2.

Both directives should be in the configuration file of the target website, which is normally located in the /etc/nginx/ directory. Open that file using nano or your favorite text editor.

    sudo nano /etc/nginx/nginx.conf

Under the server section, add both directives:
/etc/nginx/nginx.conf

. . .
server {
    listen       80 default_server;
    listen       [::]:80 default_server;
    server_name  _;
    root         /usr/share/nginx/html;

    auth_basic "Private Property";
    auth_basic_user_file /etc/nginx/.htpasswd;
. . .

Save and close the file.
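
If you'd rather protect only part of the site than the whole server, the same two directives also work inside a location block. A minimal sketch, assuming a hypothetical /admin/ path:

    location /admin/ {
        auth_basic "Private Property";
        auth_basic_user_file /etc/nginx/.htpasswd;
    }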
Step 4 — Testing the Setup

To apply the changes, first reload Nginx.

    sudo systemctl reload nginx
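
It's also worth knowing that Nginx can check the configuration syntax for you before a reload, which helps catch typos:

    sudo nginx -t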

Now try accessing the website you just secured by going to http://your_server_ip/ in your favorite browser. You should be presented with an authentication window (which says "Private Property", the string we set for auth_basic), and you will not be able to access the website until you enter the correct credentials. If you enter the username and password you set, you'll see the default Nginx home page.
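
You can run the same check from the command line with curl; passing credentials via its -u flag should return the page, while omitting them returns a 401 Unauthorized error:

    curl -u nginx http://your_server_ip/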
Conclusion

You've just completed basic access restriction for an Nginx website. More information about this technique and other means of access restriction are available in Nginx's documentation.

How To Set Up Basic HTTP Authentication With Nginx on Ubuntu 14.04

Introduction

Nginx is one of the leading web servers in active use. It and its commercial edition, Nginx Plus, are developed by Nginx, Inc.

In this tutorial, you'll learn how to restrict access to an Nginx-powered website using the HTTP basic authentication method on Ubuntu 14.04. HTTP basic authentication is a simple username and (hashed) password authentication method.
Prerequisites

To complete this tutorial, you'll need the following:

    One Ubuntu 14.04 Droplet with a sudo non-root user, which you can set up by following this initial server setup tutorial.

    Nginx installed and configured on your server, which you can do by following this Nginx article.

Step 1 — Installing Apache Tools

You'll need the htpasswd command to configure the password that will restrict access to the target website. This command is part of the apache2-utils package, so the first step is to install that package.

    sudo apt-get install apache2-utils

Step 2 — Setting Up HTTP Basic Authentication Credentials

In this step, you'll create a password for the user running the website.

That password and the associated username will be stored in a file that you specify. The password will be hashed, and the name of the file can be anything you like. Here, we use the file /etc/nginx/.htpasswd and the username nginx.

To create the password, run the following command. You'll need to authenticate, then specify and confirm a password.

    sudo htpasswd -c /etc/nginx/.htpasswd nginx
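
Recent versions of htpasswd also include a -v flag that verifies a password against the stored hash instead of updating it, which can be handy when troubleshooting login failures:

    sudo htpasswd -v /etc/nginx/.htpasswd nginx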

You can check the contents of the newly-created file to see the username and hashed password.

    cat /etc/nginx/.htpasswd

Example /etc/nginx/.htpasswd

nginx:$apr1$ilgq7ZEO$OarDX15gjKAxuxzv0JTrO/

Step 3 — Updating the Nginx Configuration

Now that you've created the HTTP basic authentication credential, the next step is to update the Nginx configuration for the target website to use it.

HTTP basic authentication is made possible by the auth_basic and auth_basic_user_file directives. The value of auth_basic is any string, and will be displayed at the authentication prompt; the value of auth_basic_user_file is the path to the password file that was created in Step 2.

Both directives should be in the configuration file of the target website, which is normally located in the /etc/nginx/sites-available directory. Open that file using nano or your favorite text editor.

    sudo nano /etc/nginx/sites-available/default

Under the location section, add both directives:
/etc/nginx/sites-available/default

. . .
server_name localhost;

location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
        # Uncomment to enable naxsi on this location
        # include /etc/nginx/naxsi.rules
        auth_basic "Private Property";
        auth_basic_user_file /etc/nginx/.htpasswd;
}
. . .

Save and close the file.
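
One related directive worth knowing: authentication can be switched off again for a nested path by setting auth_basic to off in a more specific location block. A sketch, assuming a hypothetical /public/ subdirectory:

    location /public/ {
        auth_basic off;
    }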
Step 4 — Testing the Setup

To apply the changes, first reload Nginx.

    sudo service nginx reload

Now try accessing the website you just secured by going to http://your_server_ip/ in your favorite browser. You should be presented with an authentication window (which says "Private Property", the string we set for auth_basic), and you will not be able to access the website until you enter the correct credentials. If you enter the username and password you set, you'll see the default Nginx home page.
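
You can also confirm the challenge from the command line; an unauthenticated request should return a 401 response whose WWW-Authenticate header carries the "Private Property" realm string:

    curl -I http://your_server_ip/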
Conclusion

You've just completed basic access restriction for an Nginx website. More information about this technique and other means of access restriction are available in Nginx's documentation.

How To Create a Self-Signed SSL Certificate for Apache in Ubuntu 16.04

Introduction

TLS, or transport layer security, and its predecessor SSL, which stands for secure sockets layer, are web protocols used to wrap normal traffic in a protected, encrypted wrapper.

Using this technology, servers can send traffic safely between the server and clients without the possibility of the messages being intercepted by outside parties. The certificate system also assists users in verifying the identity of the sites that they are connecting with.

In this guide, we will show you how to set up a self-signed SSL certificate for use with an Apache web server on an Ubuntu 16.04 server.

Note: A self-signed certificate will encrypt communication between your server and any clients. However, because it is not signed by any of the trusted certificate authorities included with web browsers, users cannot use the certificate to validate the identity of your server automatically.

A self-signed certificate may be appropriate if you do not have a domain name associated with your server and for instances where the encrypted web interface is not user-facing. If you do have a domain name, in many cases it is better to use a CA-signed certificate. You can find out how to set up a free trusted certificate with the Let's Encrypt project here.
Prerequisites

Before you begin, you should have a non-root user configured with sudo privileges. You can learn how to set up such a user account by following our initial server setup for Ubuntu 16.04.

You will also need to have the Apache web server installed. If you would like to install an entire LAMP (Linux, Apache, MySQL, PHP) stack on your server, you can follow our guide on setting up LAMP on Ubuntu 16.04. If you just want the Apache web server, skip the steps pertaining to PHP and MySQL in the guide.

When you have completed the prerequisites, continue below.
Step 1: Create the SSL Certificate

TLS/SSL works by using a combination of a public certificate and a private key. The SSL key is kept secret on the server and is used to encrypt content sent to clients. The SSL certificate is publicly shared with anyone requesting the content, and clients can use it to verify content signed by the associated SSL key.

We can create a self-signed key and certificate pair with OpenSSL in a single command:

    sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/ssl/private/apache-selfsigned.key -out /etc/ssl/certs/apache-selfsigned.crt

You will be asked a series of questions. Before we go over that, let's take a look at what is happening in the command we are issuing:

    openssl: This is the basic command line tool for creating and managing OpenSSL certificates, keys, and other files.
req: This subcommand specifies that we want to use X.509 certificate signing request (CSR) management. X.509 is a public key infrastructure standard that SSL and TLS adhere to for key and certificate management. We want to create a new X.509 cert, so we are using this subcommand.
    -x509: This further modifies the previous subcommand by telling the utility that we want to make a self-signed certificate instead of generating a certificate signing request, as would normally happen.
    -nodes: This tells OpenSSL to skip the option to secure our certificate with a passphrase. We need Apache to be able to read the file, without user intervention, when the server starts up. A passphrase would prevent this from happening because we would have to enter it after every restart.
    -days 365: This option sets the length of time that the certificate will be considered valid. We set it for one year here.
    -newkey rsa:2048: This specifies that we want to generate a new certificate and a new key at the same time. We did not create the key that is required to sign the certificate in a previous step, so we need to create it along with the certificate. The rsa:2048 portion tells it to make an RSA key that is 2048 bits long.
    -keyout: This line tells OpenSSL where to place the generated private key file that we are creating.
    -out: This tells OpenSSL where to place the certificate that we are creating.

As we stated above, these options will create both a key file and a certificate. We will be asked a few questions about our server in order to embed the information correctly in the certificate.

Fill out the prompts appropriately. The most important line is the one that requests the Common Name (e.g. server FQDN or YOUR name). You need to enter the domain name associated with your server or, more likely, your server's public IP address.

The entirety of the prompts will look something like this:

Output
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:New York
Locality Name (eg, city) []:New York City
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Bouncy Castles, Inc.
Organizational Unit Name (eg, section) []:Ministry of Water Slides
Common Name (e.g. server FQDN or YOUR name) []:server_IP_address
Email Address []:admin@your_domain.com

Both of the files you created will be placed in the appropriate subdirectories of the /etc/ssl directory.
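
If you'd like to confirm what was embedded in the certificate, you can print its subject, validity dates, and other details with openssl's x509 subcommand:

    sudo openssl x509 -in /etc/ssl/certs/apache-selfsigned.crt -noout -text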

While we are using OpenSSL, we should also create a strong Diffie-Hellman group, which is used in negotiating Perfect Forward Secrecy with clients.

We can do this by typing:

    sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048

This may take a few minutes, but when it's done you will have a strong DH group at /etc/ssl/certs/dhparam.pem that we can use in our configuration.
Step 2: Configure Apache to Use SSL

We have created our key and certificate files under the /etc/ssl directory. Now we just need to modify our Apache configuration to take advantage of these.

We will make a few adjustments to our configuration:

    We will create a configuration snippet to specify strong default SSL settings.
    We will modify the included SSL Apache Virtual Host file to point to our generated SSL certificates.
    (Recommended) We will modify the unencrypted Virtual Host file to automatically redirect requests to the encrypted Virtual Host.

When we are finished, we should have a secure SSL configuration.
Create an Apache Configuration Snippet with Strong Encryption Settings

First, we will create an Apache configuration snippet to define some SSL settings. This will set Apache up with a strong SSL cipher suite and enable some advanced features that will help keep our server secure. The parameters we will set can be used by any Virtual Hosts enabling SSL.

Create a new snippet in the /etc/apache2/conf-available directory. We will name the file ssl-params.conf to make its purpose clear:

    sudo nano /etc/apache2/conf-available/ssl-params.conf

To set up Apache SSL securely, we will be using the recommendations by Remy van Elst on the Cipherli.st site. This site is designed to provide easy-to-consume encryption settings for popular software. You can read more about his decisions regarding the Apache choices here.

The suggested settings on the site linked to above offer strong security. Sometimes, this comes at the cost of greater client compatibility. If you need to support older clients, there is an alternative list that can be accessed by clicking the link on the page labelled "Yes, give me a ciphersuite that works with legacy / old software." That list can be substituted for the items copied below.

The choice of which config you use will depend largely on what you need to support. They both will provide great security.

For our purposes, we can copy the provided settings in their entirety. We will also go ahead and set the SSLOpenSSLConfCmd DHParameters setting to point to the Diffie-Hellman file we generated earlier:
/etc/apache2/conf-available/ssl-params.conf

# from https://cipherli.st/
# and https://raymii.org/s/tutorials/Strong_SSL_Security_On_Apache2.html

SSLCipherSuite EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH
SSLProtocol All -SSLv2 -SSLv3
SSLHonorCipherOrder On
Header always set Strict-Transport-Security "max-age=63072000; includeSubdomains; preload"
Header always set X-Frame-Options DENY
Header always set X-Content-Type-Options nosniff
# Requires Apache >= 2.4
SSLCompression off
SSLSessionTickets Off
SSLUseStapling on
SSLStaplingCache "shmcb:logs/stapling-cache(150000)"

SSLOpenSSLConfCmd DHParameters "/etc/ssl/certs/dhparam.pem"

Save and close the file when you are finished.
Modify the Default Apache SSL Virtual Host File

Next, let's modify /etc/apache2/sites-available/default-ssl.conf, the default Apache SSL Virtual Host file. If you are using a different server block file, substitute its name in the commands below.

Before we go any further, let's back up the original SSL Virtual Host file:

    sudo cp /etc/apache2/sites-available/default-ssl.conf /etc/apache2/sites-available/default-ssl.conf.bak

Now, open the SSL Virtual Host file to make adjustments:

    sudo nano /etc/apache2/sites-available/default-ssl.conf

Inside, with most of the comments removed, the Virtual Host file should look something like this by default:
/etc/apache2/sites-available/default-ssl.conf

<IfModule mod_ssl.c>
        <VirtualHost _default_:443>
                ServerAdmin webmaster@localhost

                DocumentRoot /var/www/html

                ErrorLog ${APACHE_LOG_DIR}/error.log
                CustomLog ${APACHE_LOG_DIR}/access.log combined

                SSLEngine on

                SSLCertificateFile      /etc/ssl/certs/ssl-cert-snakeoil.pem
                SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key

                <FilesMatch "\.(cgi|shtml|phtml|php)$">
                                SSLOptions +StdEnvVars
                </FilesMatch>
                <Directory /usr/lib/cgi-bin>
                                SSLOptions +StdEnvVars
                </Directory>

                # BrowserMatch "MSIE [2-6]" \
                #               nokeepalive ssl-unclean-shutdown \
                #               downgrade-1.0 force-response-1.0

        </VirtualHost>
</IfModule>

We will be making some minor adjustments to the file. We will set the normal things we'd want to adjust in a Virtual Host file (ServerAdmin email address, ServerName, etc.), adjust the SSL directives to point to our certificate and key files, and uncomment one section that provides compatibility for older browsers.

After making these changes, your server block should look similar to this:
/etc/apache2/sites-available/default-ssl.conf

<IfModule mod_ssl.c>
        <VirtualHost _default_:443>
                ServerAdmin your_email@example.com
                ServerName server_domain_or_IP

                DocumentRoot /var/www/html

                ErrorLog ${APACHE_LOG_DIR}/error.log
                CustomLog ${APACHE_LOG_DIR}/access.log combined

                SSLEngine on

                SSLCertificateFile      /etc/ssl/certs/apache-selfsigned.crt
                SSLCertificateKeyFile /etc/ssl/private/apache-selfsigned.key

                <FilesMatch "\.(cgi|shtml|phtml|php)$">
                                SSLOptions +StdEnvVars
                </FilesMatch>
                <Directory /usr/lib/cgi-bin>
                                SSLOptions +StdEnvVars
                </Directory>

                BrowserMatch "MSIE [2-6]" \
                               nokeepalive ssl-unclean-shutdown \
                               downgrade-1.0 force-response-1.0

        </VirtualHost>
</IfModule>

Save and close the file when you are finished.
(Recommended) Modify the Unencrypted Virtual Host File to Redirect to HTTPS

As it stands now, the server will provide both unencrypted HTTP and encrypted HTTPS traffic. For better security, it is recommended in most cases to redirect HTTP to HTTPS automatically. If you do not want or need this functionality, you can safely skip this section.

To adjust the unencrypted Virtual Host file to redirect all traffic to be SSL encrypted, we can open the /etc/apache2/sites-available/000-default.conf file:

    sudo nano /etc/apache2/sites-available/000-default.conf

Inside, within the VirtualHost configuration blocks, we just need to add a Redirect directive, pointing all traffic to the SSL version of the site:
/etc/apache2/sites-available/000-default.conf

<VirtualHost *:80>
        . . .

        Redirect "/" "https://your_domain_or_IP"

        . . .
</VirtualHost>

Save and close the file when you are finished.
Step 3: Adjust the Firewall

If you have the ufw firewall enabled, as recommended by the prerequisite guides, you might need to adjust the settings to allow for SSL traffic. Luckily, Apache registers a few profiles with ufw upon installation.

We can see the available profiles by typing:

    sudo ufw app list

You should see a list like this:

Output
Available applications:
  Apache
  Apache Full
  Apache Secure
  OpenSSH

You can see the current setting by typing:

    sudo ufw status

If you allowed only regular HTTP traffic earlier, your output might look like this:

Output
Status: active

To                         Action      From
--                         ------      ----
OpenSSH                    ALLOW       Anywhere
Apache                     ALLOW       Anywhere
OpenSSH (v6)               ALLOW       Anywhere (v6)
Apache (v6)                ALLOW       Anywhere (v6)

To additionally let in HTTPS traffic, we can allow the "Apache Full" profile and then delete the redundant "Apache" profile allowance:

    sudo ufw allow 'Apache Full'
    sudo ufw delete allow 'Apache'

Your status should look like this now:

    sudo ufw status

Output
Status: active

To                         Action      From
--                         ------      ----
OpenSSH                    ALLOW       Anywhere
Apache Full                ALLOW       Anywhere
OpenSSH (v6)               ALLOW       Anywhere (v6)
Apache Full (v6)           ALLOW       Anywhere (v6)

Step 4: Enable the Changes in Apache

Now that we've made our changes and adjusted our firewall, we can enable the SSL and headers modules in Apache, enable our SSL-ready Virtual Host, and restart Apache.

We can enable mod_ssl, the Apache SSL module, and mod_headers, needed by some of the settings in our SSL snippet, with the a2enmod command:

    sudo a2enmod ssl
    sudo a2enmod headers

Next, we can enable our SSL Virtual Host with the a2ensite command:

    sudo a2ensite default-ssl

We will also need to enable our ssl-params.conf file, to read in the values we set:

    sudo a2enconf ssl-params

At this point, our site and the necessary modules are enabled. We should check to make sure that there are no syntax errors in our files. We can do this by typing:

    sudo apache2ctl configtest

If everything is successful, you will get a result that looks like this:

Output
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1. Set the 'ServerName' directive globally to suppress this message
Syntax OK

The first line is just a message telling you that the ServerName directive is not set globally. If you want to get rid of that message, you can set ServerName to your server's domain name or IP address in /etc/apache2/apache2.conf. This is optional as the message will do no harm.
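
If you do decide to set it, the directive is a single line anywhere in that file; the value below is a placeholder for your own domain or IP:
/etc/apache2/apache2.conf

ServerName server_domain_or_IP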

If your output has Syntax OK in it, your configuration file has no syntax errors. We can safely restart Apache to implement our changes:

    sudo systemctl restart apache2

Step 5: Test Encryption

Now, we're ready to test our SSL server.

Open your web browser and type https:// followed by your server's domain name or IP into the address bar:

https://server_domain_or_IP

Because the certificate we created isn't signed by one of your browser's trusted certificate authorities, you will likely see a scary looking warning like the one below:

Apache self-signed cert warning

This is expected and normal. We are only interested in the encryption aspect of our certificate, not the third-party validation of our host's authenticity. Click "ADVANCED" and then the link provided to proceed to your host anyway:

Apache self-signed override

You should be taken to your site. If you look in the browser address bar, you will see a lock with an "x" over it. In this case, this just means that the certificate cannot be validated. It is still encrypting your connection.

If you configured Apache to redirect HTTP to HTTPS, you can also check whether the redirect functions correctly:

http://server_domain_or_IP

If this results in the same icon, your redirect worked correctly.
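
You can also inspect the redirect from the command line; the response should show a 302 status (becoming 301 after the next step) with a Location header pointing at the https:// version of your site:

    curl -I http://server_domain_or_IP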
Step 6: Change to a Permanent Redirect

If your redirect worked correctly and you are sure you want to allow only encrypted traffic, you should modify the unencrypted Apache Virtual Host again to make the redirect permanent.

Open your server block configuration file again:

    sudo nano /etc/apache2/sites-available/000-default.conf

Find the Redirect line we added earlier. Add permanent to that line, which changes the redirect from a 302 temporary redirect to a 301 permanent redirect:
/etc/apache2/sites-available/000-default.conf

<VirtualHost *:80>
        . . .

        Redirect permanent "/" "https://your_domain_or_IP"

        . . .
</VirtualHost>

Save and close the file.

Check your configuration for syntax errors:

    sudo apache2ctl configtest

When you're ready, restart Apache to make the redirect permanent:

    sudo systemctl restart apache2

Conclusion

You have configured your Apache server to use strong encryption for client connections. This will allow you to serve requests securely, and will prevent outside parties from reading your traffic.

Friday, June 17, 2016

How to Install HAProxy Load Balancer on CentOS

Installing HAProxy 1.6

As a fast-developing open source application, the HAProxy version available for installation in the CentOS default repositories might not be the latest release. To find out which version is offered through the official channels, enter the following command.
sudo yum info haproxy
HAProxy always has three active stable releases: the two latest versions in development, plus a third, older version that still receives critical updates. You can always check the newest stable version listed on the HAProxy website and then decide which version you wish to go with.
In this guide, we'll install the latest stable version, 1.6, which is not yet available in the standard repositories. Instead, you'll need to install it from source. Before doing so, check that you have the prerequisites to download and compile the program.
sudo yum install wget gcc pcre-static pcre-devel -y
Download the source code with the command below. You can check if there’s a newer version available at the HAProxy download page and then replace the download link in the wget command with the latest.
wget http://www.haproxy.org/download/1.6/src/haproxy-1.6.3.tar.gz -O ~/haproxy.tar.gz
Once the download is complete, extract the files using the following command.
tar xzvf ~/haproxy.tar.gz -C ~/
Change into the directory.
cd ~/haproxy-1.6.3
Then compile the program for your system.
make TARGET=linux2628
And finally install HAProxy itself.
sudo make install
To complete the install, use the following commands to copy the settings over.
sudo cp /usr/local/sbin/haproxy /usr/sbin/
sudo cp ~/haproxy-1.6.3/examples/haproxy.init /etc/init.d/haproxy
sudo chmod 755 /etc/init.d/haproxy
Create these directories and the statistics file for HAProxy to record in.
sudo mkdir -p /etc/haproxy
sudo mkdir -p /run/haproxy
sudo mkdir -p /var/lib/haproxy
sudo touch /var/lib/haproxy/stats
Then add a new user for HAProxy.
sudo useradd -r haproxy
After the installation you can double-check the installed version number with the following command.
sudo haproxy -v
HA-Proxy version 1.6.3 2015/12/25
Copyright 2000-2015 Willy Tarreau <willy@haproxy.org>
In this case the version is 1.6.3, as shown in the example output above.

Configuring the load balancer

Setting up HAProxy for load balancing is quite a straightforward process. Basically, all you need to do is tell HAProxy what kind of connections it should listen for and which servers it should relay the connections to. This is done by creating a configuration file, /etc/haproxy/haproxy.cfg, with the defining settings. You can read about the configuration options in the HAProxy documentation if you wish to find out more.
Open the .cfg file for editing, for example using vi, with the following command.
sudo vi /etc/haproxy/haproxy.cfg
Add the following sections to the file. Replace <server name> with whatever you want to call your servers on the statistics page, and <private IP> with the private IPs of the servers you wish to direct the web traffic to. You can check the private IPs at your UpCloud Control Panel on the Private network tab under the Network menu.
global
   log /dev/log local0
   log /dev/log local1 notice
   chroot /var/lib/haproxy
   stats socket /run/haproxy/admin.sock mode 660 level admin
   stats timeout 30s
   user haproxy
   group haproxy
   daemon

defaults
   log global
   mode http
   option httplog
   option dontlognull
   timeout connect 5000
   timeout client 50000
   timeout server 50000

frontend http_front
   bind *:80
   stats uri /haproxy?stats
   default_backend http_back

backend http_back
   balance roundrobin
   server <server name> <private IP>:80 check
   server <server name> <private IP>:80 check
This defines a layer 4 load balancer with a front-end named http_front listening on port 80, which directs traffic to the default back-end named http_back. The additional stats uri /haproxy?stats enables the statistics page at that specified address. Configuring the servers in the back-end section allows HAProxy to use these servers for load balancing, whenever available, according to the round-robin algorithm.
The balancing algorithms are used to decide which server at the back-end each connection is transferred to. Some of the useful options include the following:
  • Roundrobin: Each server is used in turns according to their weights. This is the smoothest and fairest algorithm when the servers’ processing time remains equally distributed. This algorithm is dynamic, which allows server weights to be adjusted on the fly.
  • Leastconn: The server with the lowest number of connections is chosen. Round-robin is performed between servers with the same load. Using this algorithm is recommended with long sessions, such as LDAP, SQL, TSE, etc, but it’s not very well suited for short sessions such as HTTP.
  • First: The first server with available connection slots receives the connection. The servers are chosen from the lowest numeric identifier to the highest, which defaults to the server’s position in the farm. Once a server reaches its maxconn value, the next server is used.
  • Source: The source IP address is hashed and divided by the total weight of the running servers to designate which server will receive the request. This way the same client IP address will always reach the same server while the servers stay the same.
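For example, switching the back-end above from round-robin to least-connections balancing only requires changing the balance directive:
backend http_back
   balance leastconn
   server <server name> <private IP>:80 check
   server <server name> <private IP>:80 check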
Another possibility is to configure the load balancer to work on layer 7. This can be useful when parts of your web application are located on different hosts. It can be accomplished by conditioning the connection transfer, for example, on the URL.
frontend http_front
   bind *:80
   stats uri /haproxy?stats
   acl url_blog path_beg /blog
   use_backend blog_back if url_blog
   default_backend http_back

backend http_back
   balance roundrobin
   server <server name> <private IP>:80 check
   server <server name> <private IP>:80 check

backend blog_back
   server <server name> <private IP>:80 check
The front-end declares an ACL rule named url_blog that applies to all connections whose path begins with /blog, and use_backend defines that connections matching the url_blog condition should be served by the back-end named blog_back.
On the back-end side, the configuration sets up two server groups: http_back, as before, and a new one called blog_back that specifically serves connections to domain.com/blog.
After making the configurations, save the file and restart HAProxy with the following command.
sudo systemctl restart haproxy
If you get any errors or warnings at start up, check the configuration for any mistypes and that you’ve created all the necessary files and folders, then try restarting again.

Testing the setup

With HAProxy configured and running, open your load balancer server's public IP in a web browser and check that you get connected to your back-end correctly. The stats uri parameter in the configuration enables the statistics page at the defined address.
http://<load balancer public IP>/haproxy?stats
When you load the statistics page and all of your servers are listed in green, your configuration was successful!
In case your load balancer does not reply, check that HTTP connections are not being blocked by the firewall. Since you most likely deployed a fresh install of CentOS 7 for this project, the host is rather restrictive by default. You can use the following commands to add the necessary rules and to reload the firewall.
sudo firewall-cmd --permanent --zone=public --add-service=http
sudo firewall-cmd --permanent --zone=public --add-port=8181/tcp
sudo firewall-cmd --reload
The statistics page contains some helpful information to keep track of your web hosts including up- and downtimes and session counts. If a server is listed in red, check that the server is powered on and that you can ping it from the load balancer machine.

Installing HAProxy on Ubuntu 14.04

HAProxy is a network software application that offers high availability, load balancing, and proxying for TCP and HTTP network applications. It is suited for high-traffic sites and powers many websites. This article will show you how to install and set up HAProxy on Ubuntu 14.04.
Although HAProxy has several prominent features, this article focuses on how to set up HAProxy to "proxy" your web application.

Installing HAProxy

Since Ubuntu 14.04 does not ship with HAProxy 1.5 (latest stable release at time of writing), we will have to use a PPA to be able to install it using apt-get:
add-apt-repository ppa:vbernat/haproxy-1.5
Next, update the system:
apt-get update
apt-get dist-upgrade
Now install HAProxy with the following command:
apt-get install haproxy
If everything is successful, then you have finished installing HAProxy and can proceed to the next step.

Configuring HAProxy

The HAProxy configuration file is split into two sections: "global" and "proxies". The first deals with process-wide configuration, while the second consists of the defaults, frontend, and backend sections.

Global Section

With your favorite text editor, open /etc/haproxy/haproxy.cfg and you will notice the predefined sections: "global" and "defaults". The first thing you may want to do is increase maxconn to a reasonable size, as this limits the number of concurrent connections that HAProxy allows. Too many connections may cause your web service to crash under the load of requests. You will need to adjust the size to see what works for you. In the global section, add or change maxconn to 3072.
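For illustration, the global section would then contain a line like this (the existing predefined settings are shown as an ellipsis):
global
    ...
    maxconn 3072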
In the default section, add the following lines under mode http:
option forwardfor
option http-server-close
This will add X-Forwarded-For headers to each request, reduce the latency between HAProxy and the backend, and preserve persistent client connections.

Proxies Section

Frontend and Backend
Commonly, the first step is to set up a frontend to handle incoming HTTP connections. Add the following:
frontend http-frontend
    bind public_ip:80
    reqadd X-Forwarded-Proto:\ http
    default_backend wwwbackend
Note: Be sure to replace public_ip with your domain or your public ip. Otherwise, this entire setup will not work.
After you have finished configuring the frontend, you can now add your backend by adding the following lines to the end of your configuration:
backend wwwbackend
    server 1-www private_ip_1:80 check
    server 2-www private_ip_2:80 check
    server 3-www private_ip_3:80 check
The backend configuration used here creates a server entry named X-www pointing at private_ip_X:80 (replace X with 1 – 3, and replace private_ip_X with your private or public IP). This will allow you to load balance between each server (assuming you have more than one). The check option makes the load balancer perform health checks on each server.
When you are done, save the configuration file, then restart HAProxy by running:
service haproxy restart
If everything is working, then you will be able to connect to http://public_ip/ (replacing it with your Vultr VPS IP) and view your website.

77 useful Linux commands and utilities

alias
The alias command creates a shortcut that substitutes a user-defined abbreviation for a longer command or command string.
How to use the alias command in Linux.
apt-get
Apt-get is a tool to automatically update a Debian machine and get and install Debian packages/programs.
How to manage software on Ubuntu Server with "aptitude" and "apt-get".
Understanding the Debian archives and apt-get.
Inside the Red Hat and Debian package management differences.

Aspell
GNU Aspell is a free and open source spell checker designed to replace Ispell. It can either be used as a library or as an independent spell checker.
How to use Aspell to check spelling.
AWK, Gawk
A programming-language tool used to manipulate text. The language of the AWK utility resembles the shell-programming language in many areas, although AWK's syntax is very much its own.
Learn how to use the AWK utility.

Gawk is the GNU Project's version of the AWK programming language.

bzip2
A portable, fast open source program used to compress and decompress files at high rates.
How to use bzip2 in Linux.
More on how to use the bzip2 compression program.

cat
A Unix/Linux command that can read, modify or concatenate text files, most commonly used for displaying contents of files.
See how to use cat to display contents of a file in Linux.
An article on what you can do with the cat command.
cd
The cd command changes the current directory in Linux, and can toggle between directories conveniently. It is similar to the CD and CHDIR commands in MS-DOS.
See more on how to use the cd command to change directories.
chmod
Chmod changes the access mode (permissions) of one or more files. Only the owner of a file or a privileged user may change the mode.
See examples of changing the permissions of files using chmod.
chown
Chown changes file or group ownership, and has options to change ownership of all objects within a directory tree, and view information on objects processed.
Learn how to change file ownership with chown.
cmp
The cmp utility compares two files of any type and writes the results to the standard output. By default, cmp is silent if the files are the same; if they differ, the byte and line number at which the first difference occurred is reported.
See IBM's examples for using cmp.
comm
Comm compares lines common to the sorted files file1 and file2. Output is in three columns, from left to right: lines unique to file1, lines unique to file2, and lines common to both files.
More on comparing lines with comm.
Read a brief tutorial on using comm.
cp
The cp command copies files and directories, and copies can be made simultaneously to another directory if the copy is under a different name.
Find out how to copy Linux files and directories with the cp command.
cpio
Cpio copies files into or out of a cpio or tar archive, which is a file that contains other files plus information about them, such as their file name, owner, timestamps, and access permissions. The archive can be another file on the disk, a magnetic tape, or a pipe. Cpio has three operating modes, and is a more efficient alternative to tar.
Learn how to use cpio when moving files in a Unix-to-Linux port.
See how to back up files with cpio.
CRON
CRON is a Linux system process that will execute a program at a preset time. To use CRON, a user must prepare a text file that describes the program to be executed and the times that CRON should execute them. Then, the crontab program is used to load the text file that describes the CRON jobs into CRON.
Using CRON to execute programs at specific times.
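For example, a crontab line that runs a hypothetical backup script every day at 2 a.m. looks like this (the five fields are minute, hour, day of month, month, and day of week):
0 2 * * * /home/user/backup.sh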

date
Date sets a system's date and time. Also a useful way to output/print current information when working in a script file.
A few more examples from IBM on setting date and time with date.
declare
Declare declares variables, gives them attributes, or modifies properties of variables.
Examples of declaring variables with declare.
df
Df displays the amount of disk space available on the file system containing each file name argument. With no file name, available space on all currently mounted file systems is shown.
More on using df to display the amount of disk space available.

echo
Echo allows a user to repeat, or "echo," a string variable to standard output.
More on using the Echo command with shell scripts.
enable
Enable will stop or start printers or classes.
Examples of how to enable LP printers.
env
Env runs a program in a modified environment, or displays the current environment and its variables.
Examples of changing environment variables using env.
eval
Eval evaluates several arguments and concatenates them into a single command, and then reports on that argument's status.
More on concatenating arguments with eval.
exec
Exec replaces the parent process by whatever command is typed. This command treats its arguments as the specification of one or more sub processes to execute.
More examples of replacing parent processes with exec.
exit
The exit command terminates a script, and can return a value to the parent script.
More on terminating scripts with exit.
expect
Expect talks to other interactive programs according to a script, and waits for a response, often from any string that matches a given pattern.
Using expect for responses.
export
Export marks shell variables and functions so that they are passed on to the environment of child processes, making their values available to subsequently executed programs.
Examples of exporting data from a database with export.

find
Find searches the directory tree to find particular groups of files that meet specified conditions, including -name and -type, -exec and -size, and -mtime and -user.
Efficiently locating files with find.
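As a quick sketch, this finds files under /var/log whose names end in .log and that were modified within the last seven days:
find /var/log -name "*.log" -mtime -7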
for, while
For and while are used to execute or loop items repeatedly as long as conditions are met.
More on looping items with the for command.
More on looping items with the while command.
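A minimal for loop that prints each file name in the current directory:
for f in *; do echo "$f"; done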
free
Free displays the total amount of free and used physical memory and swap space in the system, as well as the buffers and cache used by the kernel.
Learn how to use the free command to optimize a computer's memory.

gawk
See AWK.
grep
Grep searches file(s) for a given character string or pattern, and can replace the string with another one. One method of searching for files within Linux.
Examples of searching with grep.
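For instance, this searches recursively and case-insensitively for the string "hostname" under /etc:
grep -ri "hostname" /etc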
gzip
Gzip is the GNU project's open source program used for file compression, compressing web pages on the server end for decompression in the browser. Popular for streaming media compression, and can concatenate and compress several streams simultaneously.
Examples of using gzip for compressing files.

ifconfig
Ifconfig is used to configure the kernel-resident network interfaces. It is used at boot time to set up interfaces. After that, it is usually only needed when debugging or when system tuning is needed.
Examples of using ifconfig to configure a network.
Using ifconfig to detect Linux network configuration problems.
ifup
Ifup configures a network interface/enables a network connection.
More on the ifup command in configuring network interfaces.
ifdown
Ifdown shuts down a network interface/disables a network connection.
More on shutting down networks with ifdown.

less, more
The less command lets an admin scroll through configuration and error log files, displaying text files one screen at a time, with backward and forward movement available. It offers more mobility within files than more.
View several different file types with less.

Similar to less, more pages through text one screen at a time, but is more limited in moving in files.
See a few examples of displaying files with more.
locate, slocate
Locate reads one or more databases and writes file names matching patterns to output.
Finding files/directories efficiently with locate.

Like locate, slocate, or secure locate, provides a way to index and quickly search for files, but also securely stores file permissions and ownership so unauthorized users will be unable to view such files.
See an example of using slocate as a quick secure way to index files.
lft
Lft is similar to traceroute in determining connection routes, but gives a lot more information for debugging connections or finding where a box/system is. It displays route packets and file types.
More on displaying route packets with lft.
ln
The ln command creates new names for a file by hard linking, letting multiple users share one file.
Examples of hard linking files with ln.
A few more examples of using ln.
ls
The ls command lists files and directories within the current working directory, and admins can determine when configuration files were last edited.
The ls command is also discussed in this tip.
Examples of listing files and directories with ls.

man
Short for "manual," man allows a user to format and display the user manual built into Linux distributions, which documents commands and other aspects of the system.
The man command is also discussed in this tip.
See how to use the man command.
See examples of formatting man pages.
mc
A visual shell, text-based file manager for Unix systems.
An extensive guide to managing files with mc.
more
See less.

neat
Neat is a GNOME GUI admin tool which allows admins to specify information needed to set up a network card, among other features.
Setting up an NTL Cable Modem using neat.
Where neat falls in when building a network between Unix and Linux systems.
netconfig, netcfg
Netconfig configures a network, enables network products and displays a series of screens that ask for configuration information.
Configuring networks using Red Hat netcfg.
netstat
Netstat provides information and statistics about protocols in use and current TCP/IP network connections. A helpful forensic tool in figuring out which processes and programs are active on a computer and involved in networked communications.
More on checking network statuses with the netstat command.
nslookup
Nslookup allows a user to enter a host name and find the corresponding IP address. A reverse of the process to find the host name is also possible.
More from Microsoft on how to find IP addresses with nslookup.

od
Od is used to dump binary files in octal (or hex, binary) format to standard output.
Examples of dumping files with od.
More on od from IBM.

passwd
Passwd updates a user's authentication tokens (changes the current password).
Some IBM examples on changing passwords with passwd.
ping
Ping allows a user to verify that a particular IP address exists and can accept requests. Can be used to test connectivity and determine response time, and ensure that a host computer the user is trying to reach is actually operating.
Examples from IBM of using ping to verify IP addresses.
ps
Ps reports statuses of current processes in a system.
Some examples of using the ps command.
pwd
The pwd (print working directory) command displays the name of the current working directory. A basic Linux command.
Learn the differences between $PATH and pwd.
Using pwd to print the current working directory.

read
Read is used to read lines of text from standard input and assign values of each field in the input line to shell variables for further processing.
Examples from IBM on using read.
RPM
Red Hat Package Manager (RPM) is a command-line driven program capable of installing, uninstalling and managing software packages in Linux.
A white paper on using RPM.
The Differences of yum and RPM.
Examples of installing packages with RPM.
rsync
Rsync synchs data from one disk or file to another across a network connection. Similar to rcp, but has more options.
A tip on backing up data with rsync.
How to use rsync to back up a directory in Linux.

screen
The GNU screen utility is a terminal multiplexor in which a user can use a single terminal window to run multiple terminal applications or windows.
A tutorial on running multiple windows and other uses of screen.
A tip on the uses of screen.
sdiff
Sdiff finds differences between two files by producing a side-by-side listing indicating lines that are different. It then merges the files and outputs results to outfile.
Example of contrasting files with sdiff.
More examples from IBM on the sdiff command.
sed
Sed is a stream editor that is used to filter text in a pipeline, distinguishing it from other editors. Sed takes text input and performs operation(s) on it and outputs the modified text. Typically used for extracting part of a file using pattern matching or substituting multiple occurrences of a string within a file.
More on extracting and replacing parts of a file with sed.
Several more examples from IBM on using sed for filtering.
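A typical substitution, replacing every occurrence of "old" with "new" and writing the result to standard output:
sed 's/old/new/g' input.txt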
shutdown
Shutdown is a command that turns off the computer and can be combined with variables such as -h for halt after shutdown or -r for reboot after shutdown.
Shut down or halt a computer with shutdown.
slocate
See locate.
Snort
Snort is an open source network intrusion detection system and packet sniffer that monitors network traffic, looking at each packet to detect dangerous payloads or suspicious anomalies. Based on libpcap.
Stopping hackers with Snort.
More from Red Hat on using Snort.
sort
Used to sort lines of text alphabetically or numerically according to fields; supports multiple sort keys.
Examples of sorting through lines of text with the sort command.
More examples of sort with multiple sort keys.
sudo
Sudo allows a system admin to give certain users the ability to run some (or all) commands at the root level, and logs all commands and arguments.
A tutorial on giving permissions to users with the sudo command.
SSH
SSH is a command interface used for securely gaining access to a remote computer, and is used by network admins to control servers remotely.
A comprehensive tutorial on secure access to remote computers with SSH.

tar
The tar program provides the ability to create archives from a number of specified files, or extract files from such an archive.
Examples of creating archives with tar.
TOP
TOP displays the tasks on the system that take up the most memory, and can sort tasks by CPU usage, memory usage, and runtime.
Monitoring system processes with TOP.
tr
Used to translate or delete characters from a text stream, writing to standard output, but does not accept file names as arguments -- only inputs from standard input.
Examples from IBM of translating characters with tr.
traceroute
Traceroute determines and records a route through the Internet between two computers and is useful for troubleshooting network/router issues. If the domain does not work or is not available, an IP can be tracerouted.
A tutorial on using traceroute to determine network issues.

uname
Uname displays the name of the current operating system, and can print further information about the system.
Examples of viewing information on the current operating system with uname.
uniq
Uniq compares adjacent lines in a file, and removes/reports any duplicated lines.
Removing duplicate lines with the uniq command.
A tip from IBM on removing redundant lines with uniq.

vi
Vi is a text editor that allows a user to control the system by solely using the keyboard instead of a combination of mouse selections and keystrokes.
An entire guide to using vi to easily control a system with the keyboard.
vmstat
Vmstat is used to get a snapshot of everything in a system, reporting information on such items as processes, memory, paging, and cpu activity. A good method for admins in determining where issues/slowdown in a system may be occurring.
How to keep an eye on Linux performance with vmstat and others.
Examples of viewing system memory usage with vmstat.

wc
wc counts the number of words, lines and characters of text files, and produces a count for multiple files if several files are selected.
More from IBM on displaying word counts with wc.
wget
Wget is a network utility that retrieves files from the Web, supporting the HTTP, HTTPS, and FTP protocols. It works non-interactively, in the background, while a user is logged off. It can create local versions of remote websites, re-creating the directories of the original sites.
Examples of creating mirror images of sites with wget.

Easier Methods to Get the Perfect Tech Resume for Entrepreneurs

“A powerful resume should leap off the page saying, ‘Me! I’m the one you want to hire!’” advises software engineer Gayle Laakmann McDowell in her book The Google Resume: How to Prepare for a Career and Land a Job at Apple, Microsoft, Google, or Any Top Tech Company. She says that every line in these documents should have value and contribute to convincing the employer to hire you. That said, below are 15 tips from McDowell and others on creating the perfect tech resume.
1. Focus on accomplishments: Focus less on your job duties in your last job and more on what you actually accomplished, with an emphasis on tangible results (increased app sales revenues by 20 percent, developed software that reduced costs by 10 percent, etc.).

2. Quantify results: Avoid saying general things like “improved customer satisfaction,” “increased company profits,” or “reduced number of bugs.” Instead, provide quantifiable metrics that demonstrate how your work helped your company save money, reduce costs, improve customer service, etc.
3. Target your resume: Gone are the days of sending one generic resume to hundreds of companies. You should target each resume to the specific job listing and company.
4. Don’t get too technical: Technical terms, sales and marketing slang, and acronyms that are commonly used at one company may be like a foreign language to recruiters or hiring managers at other companies. Make your resume universally understood by using industry-recognized terminology and explaining anything that recruiters might find confusing.
5. Be concise: We’ve all heard the stats about hiring managers tossing resumes that have just one typo. Although tech companies tend to be more forgiving, that’s no reason to submit a grammatically incorrect, misspelled, and otherwise poorly presented resume.
6. Be clear, and structure your resume well: Try to think like a recruiter when creating your resume. Provide the information recruiters want so that they don’t throw your resume in the trash pile. For example, if you worked as a software engineer at a top company such as Microsoft or Intel, stress the company name rather than your job title, since that will impress the recruiter the most.
7. Ditch the “objective”:  Use an Objective in your resume only if you are straight out of college or want to bring attention to the fact that you want to transition to a new role (for example, moving from a position in software engineering to one in sales). An Objective can also be a drawback because your stated job interest (mobile software developer) might convince the recruiter that you’re not interested in other lucrative and rewarding positions (user interface engineer, Web developer, etc.) he or she needs to fill.

8. Don’t be vague in your “summary”:  If you use a Summary section, be sure that it’s filled with key accomplishments (backed up by hard numbers), not vague pronouncements about your detail-oriented personality, strong work ethic, etc. Some people rename this section “Summary and Key Accomplishments.”

9. Think accomplishments over duties: Work experience is a key component of your resume, but it should not feature a comprehensive list of all the jobs that you’ve held (especially if you’ve worked in the industry for years or had many jobs). List the most important positions that will show the hiring manager that you’re qualified for the new job. Provide the largest amount of detail for your current or most recent job (or the one that is most applicable to showing that you’re qualified for the new position). Be sure to list your accomplishments, rather than just job duties. Again, think about what the hiring manager wants to see to convince him or her to call you in for an interview.

10. Minimize your “education” as you gain experience: Professional experience matters more than education in the tech industry, but it’s important that the Education section effectively conveys your educational background. If you have a nontraditional degree that recruiters may not be familiar with, be sure to offer a one- or two-sentence description of the major. Recent graduates should list their GPA only if it’s at least 3.0 on a 4.0 scale (of course, omitting your GPA may raise a red flag with the recruiter). Recent graduates should also list any college activities or awards that they believe will help them land the job, but they shouldn’t list everything they did while in school. Finally, the rule of thumb is that the Education section shrinks as you gain experience. Eventually, it will simply list the bare essentials such as university name, location, dates attended, degree earned, etc.
11. Don’t forget the skills: Tech workers should be sure to include a Skills section on their resume. This section should list software expertise, programming languages, foreign languages, and other applicable skills, but it’s a good idea to skip basic skills (such as Microsoft Word) that many applicants have. The key is to list skills that will help you land the job.

12. Go big, and keep the little for later: When considering what to include on your resume, focus on the “big,” and save the “little” for the job interview. This means you should detail big, eye-catching accomplishments such as new products and technologies that you helped develop, major employers (such as Google or Amazon) that you worked for, major customers that you interacted with, and increases in sales, profits, or productivity that you contributed to. Be ready to provide the details regarding these accomplishments and background information during the actual interview.

13. Use keywords: At its employment web site, Microsoft advises applicants to detail on their resume how their experiences (leadership roles, work duties, school activities, etc.) helped them to grow as a person and as a professional. This is a good approach, since you always want to show that you are evolving as a person and eager to learn new skills. Also, use keywords that match those listed in the job announcement. For example, if you’re applying for a position in e-marketing and search engine optimization, then your resume should include these terms. This will help you get noticed by resume-scanning software and advance past the first screening stage.

14. Use your name: If you send your resume as an attachment, don’t name it “resume.doc” or “resume.pdf.” That’s the surest way for your resume to get lost among the thousands of other submissions. Instead, name the file starting with your last name, then your first name, then the date. And add the job identification number if one is available.

15. Use tools and follow the directions: Some companies such as Microsoft offer resume-building tools for job applicants at their web sites. These tools will help you determine what you should and should not include in your resume. Be sure to use these tools, if offered. And follow instructions to the letter. Google, for example, requires applicants to submit their resumes in PDF, Microsoft Word, or text formats. It also requires that all application materials for U.S. jobs be submitted in English.

Cross-Browser Compatibility and HTML & CSS Validation

Firefox and Seamonkey

It's possible for different versions of Firefox and Seamonkey to co-exist on the same machine.
If you did not already know, Mozilla Firefox and Seamonkey use the same Gecko rendering engine. As such, if you have one of these browsers, you probably don't need to install the other to test your site.
To make multiple versions of Firefox and Seamonkey co-exist, install them into separate directories and create a different profile for each copy you install. (A profile is simply a separate store of browser settings, so each version you install can keep its own without interfering with the others.)
To create a different profile for Firefox, simply start Firefox with the following command line:
"c:\Program Files (x86)\Mozilla Firefox\firefox" -ProfileManager
Once you've finished creating profiles, you will want to create shortcuts (Windows terminology) to run the different versions of the browser. This makes life easier for you: you can simply click the appropriate icon for the different versions, and it will load using the correct profile. To specify which profile the browser is to load, put the profile name after the "-P" option.
For example, if you have created a profile named "currentfirefox", your command for running the current version of Firefox with that profile may look like:
"C:\Program Files (x86)\Mozilla Firefox\firefox.exe" -P currentfirefox
Similarly, your command to run Firefox with a profile called "oldversion" may look like the following (adjust the path to wherever you installed that particular version):
"C:\Program Files (x86)\Mozilla Firefox\firefox.exe" -P oldversion
And so on.
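Incidentally, if you prefer to skip the Profile Manager dialog, Firefox also accepts a -CreateProfile option, which creates a new profile directly from the command line and exits without opening a browser window. A quick sketch (the profile name here is just an example):
"C:\Program Files (x86)\Mozilla Firefox\firefox.exe" -CreateProfile oldversion
You can then launch the browser with that profile using the -P option exactly as described above.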
I'm not sure that you really need to test with all the different implementations of the Gecko engine, though. I personally only test my sites with the latest version of Firefox, since my site designs tend to be simple.

Chrome, Vivaldi, Opera and Safari

Google's Chrome browser, the Vivaldi browser and the current version of Opera all use the same engine. In general, you can expect that the vast majority of people who use the Chrome browser will be using the latest version, since that browser automatically updates itself whether you want it to or not. As such, I tend not to bother to test my sites with earlier versions of Chrome.
You can get Chrome from Google's Chrome site and Vivaldi from Vivaldi.com. Since these browsers use the same engine, if a site works with one browser, it should probably work with the other.
In addition, the Safari web browser shares a lot of code with Chrome, Vivaldi and Opera, since all four ultimately derive their engines from yet another browser called Konqueror. The engines will diverge over time, since the engine for Safari is being developed separately from the other three. If you are feeling lazy, you can probably get away with testing under any one of the four for now, although if you really want to be thorough, you probably should install Safari in addition to one of the other three. All four browsers can coexist with each other on the same computer.

Internet Explorer

For most sites, IE users probably comprise the majority of visitors, despite the inroads made by the other web browsers. Now that Microsoft has made Internet Explorer automatically update to the latest version (via Windows Update), chances are that more and more of your visitors will be using the latest version.
Unfortunately, in spite of this, there are still a few users sitting on old versions of the browser. For example, IE 6 is still being used by some people running Windows XP. Although this number is dwindling rapidly, at the time I write this, there are still enough visitors using it on some websites that their webmasters feel obliged to continue supporting it. (The actual percentage varies from site to site, depending on the target audience of each site.)
My experience in coding thesitewizard.com and thefreecountry.com, both of which depend heavily upon Cascading Style Sheets ("CSS") for layout, is that IE 6 and 7 are very different animals from the other browsers or even the later incarnations of IE. Contrary to what you may expect, what works in IE 11, Vivaldi, Firefox and Safari will not necessarily work in IE 6 and 7. IE 6 has numerous bugs in its engine, causing sites that are correctly coded to break under that browser. In other words, if you want to support IE 6 and 7, you need to have those browsers installed somewhere so that you can test with them. You can't just assume that your site will look fine in those old browsers.
Unfortunately, you can't install more than one version of IE on the same copy of Windows. The bulk of IE's code does not get installed into its own subdirectory (or folder) but into Windows' system directory. Although unofficial solutions for installing different versions of IE into the same Windows installation have been circulating in the webmaster community for some time, the end result has various peculiarities, and the IE versions you get that way behave slightly differently from versions installed normally. As such, I don't really recommend those "solutions". Instead, if you feel you really need to test with old versions of IE, you should probably try one of the following methods.

Method 1: How to Run More than One Version of Internet Explorer on a Single Machine: Using a Virtual Machine

The official Microsoft-sanctioned method of testing with multiple versions of IE on one computer is to install a virtual machine.
Loosely speaking, virtual machine software allows you to run another copy of Windows within your existing version of Mac OS X, Windows, Linux, FreeBSD or whatever. The virtual machine software pretends to be a new computer, and Windows gets installed into a small space on your hard disk which the software uses to mimic an entire drive.
Microsoft provides pre-activated copies of Windows with various versions of IE in virtual machines free of charge to web developers who need to test their sites in Internet Explorer. The pre-activated Windows expires periodically, so you will need to download a fresh copy from time to time.
You will also need to install one of the supported virtual machine programs that can run those pre-activated Windows machines. For Windows users, this is Virtual PC, VirtualBox or VMWare Player, all of which are free and can be found on the Free PC Virtual Machines page. Mac OS X users can use VirtualBox (which is free), Parallels Desktop (a commercial program) or VMWare Fusion (also a commercial program). Linux users can use VirtualBox.
Once you've installed both the virtual machine software and the virtual machine from Microsoft, all you have to do is run the latter. This will give you a copy of the appropriate version of Windows with a matching version of IE, which you can use to surf to your website to test it.
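To give you a rough idea of what's involved: if the machine you downloaded comes as a VirtualBox appliance (an .ova file), importing and starting it from the command line looks something like the following (the file and machine names here are placeholders for whichever version you downloaded):
VBoxManage import "IE8-WinXP.ova"
VBoxManage list vms
VBoxManage startvm "IE8 - WinXP"
(The "list vms" step simply shows you the name under which the appliance was registered, which is what you pass to "startvm". You can, of course, do all of this through VirtualBox's graphical interface instead.)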
Note: Microsoft terminated its support of Windows XP in April 2014, so it's possible that they will eventually stop providing virtual machines containing XP and Internet Explorer 6. If that happens, it will no longer be possible for you to test IE 6 unless you have your own copy of Windows XP. I personally hope that by then, the number of IE 6 users will be so small that it's no longer even necessary for anyone to bother testing with that desperately obsolete version. You will still be able to test with IE 7 and above, though, at least until the versions of Windows that come with those versions stop being supported.

Method 2: How to Run Two or Three Versions of IE on One Machine By Dual or Multi-Booting

This method is not recommended unless you have special reasons (other than testing websites) for needing to dual-boot or multi-boot. It is more technically demanding, disruptive, time-consuming and uses more hard disk space.
For the technically inclined, another way to run two versions of IE on a single machine is to install multiple versions of Windows on that machine, each in its own partition. In plain English, this means that you need to divide your hard disk into (at least) two sections, called "partitions", then install different versions of Windows into different partitions. You may have to modify your Windows boot menu to support all of them, or use a third-party boot manager. (Sorry for the vagueness in this paragraph, but I don't envisage many people actually needing to use this method, and those who do probably already know how to do all this.)

How to Test Mac Browsers

Nowadays, you don't actually need a Mac to test Mac browsers, since the default Mac web browser, Safari, and alternative browsers like Firefox and Vivaldi have Windows equivalents.
Having said that, I'm not 100% sure if browsers display things exactly the same way in Windows as in Mac OS X, even if they are the same brand. That is, I'm not sure if (say) Safari for Windows displays things identically with Safari for Mac OS X. However, I think that for the most part, where my sites are concerned, the way they render things is sufficiently alike that I don't need to bother with specially getting a Mac just to test the sites.
Before you ask: although there are such things as free Mac emulators, which are software that run in Windows but pretend to be a Mac and can thus run Mac software, they are not particularly useful from a webmaster's point of view. The working Mac emulators tend to emulate old, obsolete Macs, not modern ones.
In any case, as I said earlier, you shouldn't need a Mac to develop a website that works on it. Just check that your website has valid code and test your website in the Windows versions of Safari, Firefox and Vivaldi, and you'll probably be fine. If, however, your site requires absolute precision in the positioning of its text, images and other elements, and you want to make sure it looks correct on a Mac, you will have no choice but to get a real Mac to test it on.

Testing Linux Browsers

One of the easiest ways to test your site under Linux is to run Linux from a CD or DVD. There are numerous Linux "live" CDs around; see the Free Linux LiveCD Distributions page for a list of them. These allow you to simply boot your machine from the DVD/CD directly into Linux without having to install anything onto your hard disk. Essentially, all you have to do is to download an ISO (which is just an image of the DVD or CD) of the Linux distribution, burn it to your CD or DVD, put it in your CD or DVD drive, and restart your computer. The computer boots from the media and runs Linux without installing anything on your hard disk. From the DVD (or CD), you can run many Linux applications, including the Linux version of Firefox and Konqueror.
If you are feeling lazy, and you have installed an emulator or a virtual machine, as mentioned above, you don't even need to burn the ISO to a CD. You can simply use the virtual machine to boot the ISO — your copy of Linux will then run in the virtual machine. Or, if you prefer, you can also directly install Linux into the virtual machine.
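For example, if the virtual machine software you installed is VirtualBox, creating a throwaway virtual machine that boots straight from a downloaded ISO can be done from the command line along these lines (the machine name, memory size and ISO file name are just placeholders; use whatever suits you):
VBoxManage createvm --name "LinuxTest" --ostype Linux_64 --register
VBoxManage modifyvm "LinuxTest" --memory 1024
VBoxManage storagectl "LinuxTest" --name "IDE" --add ide
VBoxManage storageattach "LinuxTest" --storagectl "IDE" --port 0 --device 0 --type dvddrive --medium linux.iso
VBoxManage startvm "LinuxTest"
The same thing can be done through VirtualBox's graphical interface if you find that easier.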
Yet another alternative is to install Linux on your hard disk, using one of the many free Linux distributions around. You can set it up so that it co-exists with Windows (ie, dual-boot). Make sure you have space for a new partition on your hard disk, install it and you're done.
The default browser that comes with many Linux distributions is Firefox (although not always). However, you will find that even though Firefox tries to render your page the same way on all platforms, the fonts available under Linux are different from those available on Windows. If you don't code your fonts in a cross-platform compatible way, your site may end up being rendered with an ugly font. For example, if your site only specifies "Arial", "Impact" or some other Windows-specific font, those fonts are not available by default on non-Windows systems, so your site will be rendered using either the default font or some other font that the browser thinks matches what you've specified.
If you don't want to bother to run Linux to test, be sure that you at least:
  1. Test your pages under Firefox for your platform.
  2. Specify alternative fonts for your web pages. For example, don't just select a font like "Arial" in your design. Specify alternatives as well, should Arial not be available, such as "Helvetica" and a final generic fallback like "sans-serif" (see the sketch just below). If you don't know how to do this, please see my article on choosing fonts for more information.
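Specifying such fallbacks in CSS is just a matter of listing several fonts, in order of preference, in the font-family property. A minimal sketch:
body {
    font-family: Arial, Helvetica, sans-serif;
}
The browser uses the first font in the list that is available on the visitor's computer, and falls back to its default sans-serif font if none of the named fonts can be found.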

Whether you design your web page using a visual web editor like Dreamweaver or KompoZer, or you code HTML directly with a simple text editor, the generally recommended practice is to validate it after you finish designing it.
This article discusses what validation means, points you to some of the free tools that you can use, and deals with its limitations and the problems that a new webmaster may face.
Note: if you are not sure what HTML and CSS mean, please read What are HTML, CSS, JavaScript, PHP and Perl? Do I Need to Learn Them to Create a Website? before continuing. Otherwise you'll be completely lost here since I assume you at least know what these terms mean.

What does Validating HTML or CSS Mean?

For those unfamiliar with the term, "validating" a page is just a jargon-filled way of referring to the use of a computer program to check that a web page is free of errors.
In particular, an HTML validator checks to make sure the HTML code on your web page complies with the standards set by the W3 Consortium, the organisation ("organization" in US English) that issues the HTML standards. There are various types of HTML validators: some only check for errors, while others also make suggestions about your code, telling you when it might lead to (say) unexpected results.
The W3 Consortium has its own online validator which you can use for free. It may be found at: http://validator.w3.org/
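If you want to see the validator in action before running it on your own pages, you can paste in a small test page. A minimal HTML5 document that should pass validation without errors looks something like this:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Test page</title>
</head>
<body>
<p>Hello, world.</p>
</body>
</html>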
A CSS validator checks your Cascading Style Sheet in the same manner. That is, it will check that your style sheet complies with the CSS standards set by the W3 Consortium. There are a few that will also tell you which CSS features are supported by which browsers (since not all browsers are equal in their CSS implementation).
Again, you can get free validation for your style sheets from the W3 Consortium: http://jigsaw.w3.org/css-validator/
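To give you an idea of the kind of mistake these validators catch, here is a small made-up style sheet with two errors that the CSS validator will flag:
p {
    colour: red;     /* error: "colour" is not a CSS property; the correct name is "color" */
    font-size: 12;   /* error: a length needs a unit, e.g. "12px" or "12pt" */
}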
There are numerous other validators around, both free and commercial, focusing on different aspects of your web page. You can find a list of free ones (including specialised validators like those that check your code for accessibility) on the Free HTML Validators, CSS Validators, Accessibility Validators page at http://www.thefreecountry.com/webmaster/htmlvalidators.shtml

Why Validate Your HTML and CSS Code?

There are a number of reasons why you should validate your page.

It Helps Cross-Browser, Cross-Platform and Future Compatibility

  1. Although you may be able to create a web page that appears to work on your favourite browser (whatever that may be), your page may contain HTML or CSS errors that do not show up with that browser due to an existing quirk or bug. Another person using a different browser that does not share that particular bug will end up viewing a page that does not show up correctly. It is also possible that later versions of your browser will fix that bug, and your page will be broken when people use its latest incarnation.
    Coding your pages so that they are correct and free of errors will result in pages that are more likely to work across browsers and platforms (ie, different systems). It is also a form of insurance against future versions of browsers, since all browsers aim towards compliance with the existing HTML and CSS standards.

Search Engine Visibility

  1. When there are errors in a web page, browsers typically try to compensate in different ways. Some may ignore the broken elements while others make assumptions about what the web designer was trying to achieve. The problem is that when search engines obtain your page and try to parse it for keywords, they will also have to make certain decisions about what to do with the errors. Like browsers, different search engines will probably make different decisions about those errors, resulting in certain parts of your web page (or perhaps even the entire page) not being indexed.
    The safest way to make sure the search engines see the page you want them to see is to present them with an error-free page. That way, there is no dispute about which part of your page comprises the content and which the formatting code.

Limitations: What Validation Does Not Do

Validating your web page does not ensure that it will appear the way you want it to. It merely ensures that your code is without HTML or CSS errors.
If you are wondering what the difference is, an analogy from ordinary human language will hopefully make it clear. Take the sentence "Chris a sandwich ate", which is grammatically incorrect when used in a non-poetic context. It can be fixed by simply reversing the order of the last two words so that the sentence reads "Chris ate a sandwich".
But what happens if you write a sentence that says "Chris ate a pie" when you meant that he ate a sandwich? Syntactically, the sentence is correct, since all the elements of the sentence, subject ("Chris"), verb ("ate") and object ("a pie") are in the right order. Semantically, however, the sentence describes a different thing from what you meant.
HTML and CSS validators are designed to catch the first type of error, exemplified by the grammatical error in my first sentence. So if you write HTML code with (say) tags in the wrong order, the HTML validator will spot it and tell you. However, it cannot catch errors of the second kind, where you get the spelling, order and all other technical aspects correct, but the code you used does not match the meaning you intended.
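To translate the analogy into HTML terms: a validator will catch the first snippet below, where the closing tags are in the wrong order, but not the second, which is valid code that simply fails to convey the intended meaning:
<!-- Invalid: the closing tags are in the wrong order; a validator will flag this -->
<p>Some <em>important text</p></em>

<!-- Valid, but wrong if a top-level heading was intended; no validator can tell -->
<p>Chapter 1: Introduction</p>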
Ensuring that your code does what you want it to do requires you to actually test it in a web browser. Depending on the complexity of your code, you may even want to test it with different browsers to make sure that your site looks the same in all of them, since it's possible that you are using features of HTML and CSS that are only implemented in some browsers but not others.

What to Do If You Don't Know HTML and CSS

If you have designed your site using a visual web editor, and are not familiar with HTML and CSS, you will face an additional problem.
While running the validator and getting it to validate your page itself will not be an issue (since the W3 Consortium's validator is not only free, it doesn't even have to be installed to be used), the problem comes when the validator checks your page and tells you that there are errors.
If you have no knowledge of HTML and CSS, you will probably have some difficulty figuring out what those errors mean, whether they are serious, and how to fix them.
Although there is no perfect solution to this, you are not completely without resources.
  1. If you are using an editor like Dreamweaver, Microsoft's Expression Web, KompoZer or BlueGriffon, you can usually assume that the code they produce on their own is valid. From my limited experience (mainly creating demo sites for the purpose of writing tutorials or reviews for thesitewizard.com), these four editors seem to create correct HTML and CSS code.
    This means that if you get errors when you validate your page, the problems must come from elsewhere. If you have inserted code that you obtained from a website (such as if you have added a Youtube video to your page), it's possible that the code is the source of the error message.
    Alternatively, if you have modified the code on the page manually, the error may have crept in there.
    Having said that, sometimes the error is benign. For example, if you have added XHTML code to a page that uses HTML, you may or may not get validation errors, since you are mixing two different HTML families that have slightly different conventions (see the example after this list). As far as I can tell, for the most part, this kind of error does not cause any problem for either browsers or search engines.
  2. Another way is to search the Internet for the solution. For example, you can copy and paste the error message given by the validator into a search engine, and see if there are any websites out there that talk about this particular error. This may not turn out to be as fantastic an idea as it first appears, since the solutions you find may be too general to help with your specific problem, unless the error message is the result of your pasting code from some popular source (like Youtube or something of that level of popularity).
  3. A third way is of course to ask someone, whether it's someone you know personally, or someone on the Internet. This solution also has its own issues, since you may get a solution that creates a bigger mess of your page than it had in the first place. It all boils down to their competence and willingness to spend enough time figuring out the problem.
  4. Finally, you can also ignore the problem. If you want to do this, you should test your web page in as many web browsers as you can to make sure the error message does not diagnose a problem that causes visible issues. If you find that your site seems to work fine in spite of the error, you may decide to just ignore it and hope for the best.
    Although this solution is not ideal, you may be forced to take it if you can't find an alternative. It's not ideal because the error may bite you later when you least expect it, for example, when there's a new version of some web browser that chokes on the bad code. It may also cause problems in a non-visible manner, such as in the way the search engines index your page.
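As an illustration of the benign XHTML-in-HTML situation mentioned in point 1 above: XHTML requires empty elements, such as the line break tag, to be written in self-closing form, while traditional HTML does not, so pasted code written in the other convention may draw a complaint from the validator even though browsers handle both forms without trouble:
<br>     <!-- the HTML convention -->
<br />   <!-- the XHTML convention; may trigger an error when validated as older HTML -->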

How Often Should I Validate?

Some people validate every time they make a modification to their pages on the grounds that careless mistakes can occur any time. Others validate only when they make a major design change.
I always validate the template for my pages when I make a major design change. I try to validate my pages each time I make modifications, although I must admit that I sometimes forget to do so (with the occasional disastrous consequence; Murphy's Law doesn't spare webmasters).
I find that having an offline validator helps to make sure that I remember to validate: having to go online just to validate my pages tends to make me put off validation till later, with the result that it occasionally gets overlooked. For those not familiar with the terminology, when I say "offline validator" I simply mean a validator that I can download and install on my own computer, so that I can run it on my pages without having to go to the W3 Consortium's website. You can find offline validators on the free validators page I mentioned earlier, that is, http://www.thefreecountry.com/webmaster/htmlvalidators.shtml
The HTML Tidy validator (listed on that page) is available for numerous platforms (including Linux, Mac, Windows, etc) and has proven helpful to many webmasters the world over.
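For example, once you have HTML Tidy installed, checking a page from the command line is as simple as something like the following (the file name is, of course, just an example):
tidy -e -q mypage.html
The -e option tells Tidy to report errors and warnings only, without writing out a cleaned-up version of the page, and -q suppresses some of its nonessential output.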

Final Words

It's a good idea to test your site with multiple versions of multiple browsers, particularly if you plan to do anything fancy with style sheets on your site. This doesn't mean that you have to support all browsers — for example, the pages on thesitewizard.com do not work with very old browsers. However, when you are able to test your pages this way, you can at least reduce the number of problems your pages have with the different browsers. The tips in this article allow you to test with multiple browsers even if you have only one machine. As I mentioned above, it's generally a good idea to validate your web page. It will point you to errors that may affect how your website is understood by web browsers and search engines. Even if you are not familiar with HTML and CSS, there are still some ways you can deal with the errors that you discover from validating your page.