Sunday, June 19, 2016

How To Set Up Basic HTTP Authentication With Nginx on CentOS 7

Introduction

Nginx is one of the leading web servers in active use. It and its commercial edition, Nginx Plus, are developed by Nginx, Inc.

In this tutorial, you'll learn how to restrict access to an Nginx-powered website using the HTTP basic authentication method on CentOS 7. HTTP basic authentication is a simple username and (hashed) password authentication method.
Prerequisites

To complete this tutorial, you'll need the following:

    One CentOS 7 server with a sudo non-root user, which you can set up by following an initial server setup tutorial.

    Nginx installed and configured on your server.

Step 1 — Installing HTTPD Tools

You'll need the htpasswd command to configure the password that will restrict access to the target website. This command is part of the httpd-tools package, so the first step is to install that package.

    sudo yum install -y httpd-tools

Step 2 — Setting Up HTTP Basic Authentication Credentials

In this step, you'll create a username and password that visitors must supply in order to access the website.

That password and the associated username will be stored in a file that you specify. The password will be hashed, and the name of the file can be anything you like. Here, we use the file /etc/nginx/.htpasswd and the username nginx.

To create the password, run the following command. You'll be prompted to specify and confirm a password.

    sudo htpasswd -c /etc/nginx/.htpasswd nginx

You can check the contents of the newly-created file to see the username and hashed password.

    cat /etc/nginx/.htpasswd

Example /etc/nginx/.htpasswd

nginx:$apr1$ilgq7ZEO$OarDX15gjKAxuxzv0JTrO/
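
If you need to grant access to additional users later, run htpasswd again without the -c flag, so the existing file is appended to rather than overwritten (the username another_user here is just an example):

    sudo htpasswd /etc/nginx/.htpasswd another_user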

Step 3 — Updating the Nginx Configuration

Now that you've created the HTTP basic authentication credential, the next step is to update the Nginx configuration for the target website to use it.

HTTP basic authentication is made possible by the auth_basic and auth_basic_user_file directives. The value of auth_basic is any string, and will be displayed at the authentication prompt; the value of auth_basic_user_file is the path to the password file that was created in Step 2.

Both directives should be in the configuration file of the target website, which is normally located in the /etc/nginx/ directory. Open that file using nano or your favorite text editor.

    sudo nano /etc/nginx/nginx.conf

Within the server block, add both directives:
/etc/nginx/nginx.conf

. . .
server {
    listen       80 default_server;
    listen       [::]:80 default_server;
    server_name  _;
    root         /usr/share/nginx/html;

    auth_basic "Private Property";
    auth_basic_user_file /etc/nginx/.htpasswd;
. . .

Save and close the file.
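
Note: To protect only part of a site rather than the whole server, the same two directives can instead be placed inside a location block. A minimal sketch, assuming a hypothetical /admin path:

location /admin {
    auth_basic "Private Property";
    auth_basic_user_file /etc/nginx/.htpasswd;
}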
Step 4 — Testing the Setup

To apply the changes, first reload Nginx.

    sudo systemctl reload nginx
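
If the reload fails, check the configuration file for syntax errors and try again:

    sudo nginx -t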

Now try accessing the website you just secured by going to http://your_server_ip/ in your favorite browser. You should be presented with an authentication window (which says "Private Property", the string we set for auth_basic), and you will not be able to access the website until you enter the correct credentials. If you enter the username and password you set, you'll see the default Nginx home page.
Conclusion

You've just completed basic access restriction for an Nginx website. More information about this technique and other means of access restriction are available in Nginx's documentation.

How To Set Up Basic HTTP Authentication With Nginx on Ubuntu 14.04

Introduction

Nginx is one of the leading web servers in active use. It and its commercial edition, Nginx Plus, are developed by Nginx, Inc.

In this tutorial, you'll learn how to restrict access to an Nginx-powered website using the HTTP basic authentication method on Ubuntu 14.04. HTTP basic authentication is a simple username and (hashed) password authentication method.
Prerequisites

To complete this tutorial, you'll need the following:

    One Ubuntu 14.04 Droplet with a sudo non-root user, which you can set up by following this initial server setup tutorial.

    Nginx installed and configured on your server, which you can do by following this Nginx article.

Step 1 — Installing Apache Tools

You'll need the htpasswd command to configure the password that will restrict access to the target website. This command is part of the apache2-utils package, so the first step is to install that package.

    sudo apt-get install apache2-utils

Step 2 — Setting Up HTTP Basic Authentication Credentials

In this step, you'll create a username and password that visitors must supply in order to access the website.

That password and the associated username will be stored in a file that you specify. The password will be hashed, and the name of the file can be anything you like. Here, we use the file /etc/nginx/.htpasswd and the username nginx.

To create the password, run the following command. You'll need to authenticate, then specify and confirm a password.

    sudo htpasswd -c /etc/nginx/.htpasswd nginx

You can check the contents of the newly-created file to see the username and hashed password.

    cat /etc/nginx/.htpasswd

Example /etc/nginx/.htpasswd

nginx:$apr1$ilgq7ZEO$OarDX15gjKAxuxzv0JTrO/

Step 3 — Updating the Nginx Configuration

Now that you've created the HTTP basic authentication credential, the next step is to update the Nginx configuration for the target website to use it.

HTTP basic authentication is made possible by the auth_basic and auth_basic_user_file directives. The value of auth_basic is any string, and will be displayed at the authentication prompt; the value of auth_basic_user_file is the path to the password file that was created in Step 2.

Both directives should be in the configuration file of the target website, which is normally located in the /etc/nginx/sites-available directory. Open that file using nano or your favorite text editor.

    sudo nano /etc/nginx/sites-available/default

Within the location block, add both directives:
/etc/nginx/sites-available/default

. . .
server_name localhost;

location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
        # Uncomment to enable naxsi on this location
        # include /etc/nginx/naxsi.rules
        auth_basic "Private Property";
        auth_basic_user_file /etc/nginx/.htpasswd;
}
. . .

Save and close the file.
Step 4 — Testing the Setup

To apply the changes, first reload Nginx.

    sudo service nginx reload

Now try accessing the website you just secured by going to http://your_server_ip/ in your favorite browser. You should be presented with an authentication window (which says "Private Property", the string we set for auth_basic), and you will not be able to access the website until you enter the correct credentials. If you enter the username and password you set, you'll see the default Nginx home page.
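
You can also test from the command line with curl; the -u flag supplies the username, and curl will prompt for the password:

    curl -u nginx http://your_server_ip/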
Conclusion

You've just completed basic access restriction for an Nginx website. More information about this technique and other means of access restriction are available in Nginx's documentation.

How To Create a Self-Signed SSL Certificate for Apache in Ubuntu 16.04

Introduction

TLS, or transport layer security, and its predecessor SSL, which stands for secure sockets layer, are web protocols used to wrap normal traffic in a protected, encrypted wrapper.

Using this technology, servers can send traffic safely between the server and clients without the possibility of the messages being intercepted by outside parties. The certificate system also assists users in verifying the identity of the sites that they are connecting with.

In this guide, we will show you how to set up a self-signed SSL certificate for use with an Apache web server on an Ubuntu 16.04 server.

Note: A self-signed certificate will encrypt communication between your server and any clients. However, because it is not signed by any of the trusted certificate authorities included with web browsers, users cannot use the certificate to validate the identity of your server automatically.

A self-signed certificate may be appropriate if you do not have a domain name associated with your server and for instances where the encrypted web interface is not user-facing. If you do have a domain name, in many cases it is better to use a CA-signed certificate. You can find out how to set up a free trusted certificate with the Let's Encrypt project here.
Prerequisites

Before you begin, you should have a non-root user configured with sudo privileges. You can learn how to set up such a user account by following our initial server setup for Ubuntu 16.04.

You will also need to have the Apache web server installed. If you would like to install an entire LAMP (Linux, Apache, MySQL, PHP) stack on your server, you can follow our guide on setting up LAMP on Ubuntu 16.04. If you just want the Apache web server, skip the steps pertaining to PHP and MySQL in the guide.

When you have completed the prerequisites, continue below.
Step 1: Create the SSL Certificate

TLS/SSL works by using a combination of a public certificate and a private key. The SSL key is kept secret on the server. It is used to encrypt content sent to clients. The SSL certificate is publicly shared with anyone requesting the content. It can be used to decrypt the content signed by the associated SSL key.

We can create a self-signed key and certificate pair with OpenSSL in a single command:

    sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/ssl/private/apache-selfsigned.key -out /etc/ssl/certs/apache-selfsigned.crt

You will be asked a series of questions. Before we go over that, let's take a look at what is happening in the command we are issuing:

    openssl: This is the basic command line tool for creating and managing OpenSSL certificates, keys, and other files.
    req: This subcommand specifies that we want to use X.509 certificate signing request (CSR) management. X.509 is a public key infrastructure standard that SSL and TLS adhere to for key and certificate management. We want to create a new X.509 cert, so we are using this subcommand.
    -x509: This further modifies the previous subcommand by telling the utility that we want to make a self-signed certificate instead of generating a certificate signing request, as would normally happen.
    -nodes: This tells OpenSSL to skip the option to secure our certificate with a passphrase. We need Apache to be able to read the file, without user intervention, when the server starts up. A passphrase would prevent this from happening because we would have to enter it after every restart.
    -days 365: This option sets the length of time that the certificate will be considered valid. We set it for one year here.
    -newkey rsa:2048: This specifies that we want to generate a new certificate and a new key at the same time. We did not create the key that is required to sign the certificate in a previous step, so we need to create it along with the certificate. The rsa:2048 portion tells it to make an RSA key that is 2048 bits long.
    -keyout: This line tells OpenSSL where to place the generated private key file that we are creating.
    -out: This tells OpenSSL where to place the certificate that we are creating.

As we stated above, these options will create both a key file and a certificate. We will be asked a few questions about our server in order to embed the information correctly in the certificate.

Fill out the prompts appropriately. The most important line is the one that requests the Common Name (e.g. server FQDN or YOUR name). You need to enter the domain name associated with your server or, more likely, your server's public IP address.

The entirety of the prompts will look something like this:

Output
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:New York
Locality Name (eg, city) []:New York City
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Bouncy Castles, Inc.
Organizational Unit Name (eg, section) []:Ministry of Water Slides
Common Name (e.g. server FQDN or YOUR name) []:server_IP_address
Email Address []:admin@your_domain.com

Both of the files you created will be placed in the appropriate subdirectories of the /etc/ssl directory.
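
If you want to verify the information embedded in the certificate, you can print its details with openssl:

    sudo openssl x509 -in /etc/ssl/certs/apache-selfsigned.crt -text -noout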

While we are using OpenSSL, we should also create a strong Diffie-Hellman group, which is used in negotiating Perfect Forward Secrecy with clients.

We can do this by typing:

    sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048

This may take a few minutes, but when it's done you will have a strong DH group at /etc/ssl/certs/dhparam.pem that we can use in our configuration.
Step 2: Configure Apache to Use SSL

We have created our key and certificate files under the /etc/ssl directory. Now we just need to modify our Apache configuration to take advantage of these.

We will make a few adjustments to our configuration:

    We will create a configuration snippet to specify strong default SSL settings.
    We will modify the included SSL Apache Virtual Host file to point to our generated SSL certificates.
    (Recommended) We will modify the unencrypted Virtual Host file to automatically redirect requests to the encrypted Virtual Host.

When we are finished, we should have a secure SSL configuration.
Create an Apache Configuration Snippet with Strong Encryption Settings

First, we will create an Apache configuration snippet to define some SSL settings. This will set Apache up with a strong SSL cipher suite and enable some advanced features that will help keep our server secure. The parameters we will set can be used by any Virtual Hosts enabling SSL.

Create a new snippet in the /etc/apache2/conf-available directory. We will name the file ssl-params.conf to make its purpose clear:

    sudo nano /etc/apache2/conf-available/ssl-params.conf

To set up Apache SSL securely, we will be using the recommendations by Remy van Elst on the Cipherli.st site. This site is designed to provide easy-to-consume encryption settings for popular software. You can read more about his decisions regarding the Apache choices here.

The suggested settings on the site linked to above offer strong security. Sometimes, this comes at the cost of greater client compatibility. If you need to support older clients, there is an alternative list that can be accessed by clicking the link on the page labelled "Yes, give me a ciphersuite that works with legacy / old software." That list can be substituted for the items copied below.

The choice of which config you use will depend largely on what you need to support. They both will provide great security.

For our purposes, we can copy the provided settings in their entirety. We will also go ahead and set the SSLOpenSSLConfCmd DHParameters setting to point to the Diffie-Hellman file we generated earlier:
/etc/apache2/conf-available/ssl-params.conf

# from https://cipherli.st/
# and https://raymii.org/s/tutorials/Strong_SSL_Security_On_Apache2.html

SSLCipherSuite EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH
SSLProtocol All -SSLv2 -SSLv3
SSLHonorCipherOrder On
Header always set Strict-Transport-Security "max-age=63072000; includeSubdomains; preload"
Header always set X-Frame-Options DENY
Header always set X-Content-Type-Options nosniff
# Requires Apache >= 2.4
SSLCompression off
SSLSessionTickets Off
SSLUseStapling on
SSLStaplingCache "shmcb:logs/stapling-cache(150000)"

SSLOpenSSLConfCmd DHParameters "/etc/ssl/certs/dhparam.pem"

Save and close the file when you are finished.
Modify the Default Apache SSL Virtual Host File

Next, let's modify /etc/apache2/sites-available/default-ssl.conf, the default Apache SSL Virtual Host file. If you are using a different server block file, substitute its name in the commands below.

Before we go any further, let's back up the original SSL Virtual Host file:

    sudo cp /etc/apache2/sites-available/default-ssl.conf /etc/apache2/sites-available/default-ssl.conf.bak

Now, open the SSL Virtual Host file to make adjustments:

    sudo nano /etc/apache2/sites-available/default-ssl.conf

Inside, with most of the comments removed, the Virtual Host file should look something like this by default:
/etc/apache2/sites-available/default-ssl.conf

<IfModule mod_ssl.c>
        <VirtualHost _default_:443>
                ServerAdmin webmaster@localhost

                DocumentRoot /var/www/html

                ErrorLog ${APACHE_LOG_DIR}/error.log
                CustomLog ${APACHE_LOG_DIR}/access.log combined

                SSLEngine on

                SSLCertificateFile      /etc/ssl/certs/ssl-cert-snakeoil.pem
                SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key

                <FilesMatch "\.(cgi|shtml|phtml|php)$">
                                SSLOptions +StdEnvVars
                </FilesMatch>
                <Directory /usr/lib/cgi-bin>
                                SSLOptions +StdEnvVars
                </Directory>

                # BrowserMatch "MSIE [2-6]" \
                #               nokeepalive ssl-unclean-shutdown \
                #               downgrade-1.0 force-response-1.0

        </VirtualHost>
</IfModule>

We will be making some minor adjustments to the file. We will set the normal things we'd want to adjust in a Virtual Host file (ServerAdmin email address, ServerName, etc.), adjust the SSL directives to point to our certificate and key files, and uncomment one section that provides compatibility for older browsers.

After making these changes, your server block should look similar to this:
/etc/apache2/sites-available/default-ssl.conf

<IfModule mod_ssl.c>
        <VirtualHost _default_:443>
                ServerAdmin your_email@example.com
                ServerName server_domain_or_IP

                DocumentRoot /var/www/html

                ErrorLog ${APACHE_LOG_DIR}/error.log
                CustomLog ${APACHE_LOG_DIR}/access.log combined

                SSLEngine on

                SSLCertificateFile      /etc/ssl/certs/apache-selfsigned.crt
                SSLCertificateKeyFile /etc/ssl/private/apache-selfsigned.key

                <FilesMatch "\.(cgi|shtml|phtml|php)$">
                                SSLOptions +StdEnvVars
                </FilesMatch>
                <Directory /usr/lib/cgi-bin>
                                SSLOptions +StdEnvVars
                </Directory>

                BrowserMatch "MSIE [2-6]" \
                               nokeepalive ssl-unclean-shutdown \
                               downgrade-1.0 force-response-1.0

        </VirtualHost>
</IfModule>

Save and close the file when you are finished.
(Recommended) Modify the Unencrypted Virtual Host File to Redirect to HTTPS

As it stands now, the server will provide both unencrypted HTTP and encrypted HTTPS traffic. For better security, it is recommended in most cases to redirect HTTP to HTTPS automatically. If you do not want or need this functionality, you can safely skip this section.

To adjust the unencrypted Virtual Host file to redirect all traffic to be SSL encrypted, we can open the /etc/apache2/sites-available/000-default.conf file:

    sudo nano /etc/apache2/sites-available/000-default.conf

Inside, within the VirtualHost configuration blocks, we just need to add a Redirect directive, pointing all traffic to the SSL version of the site:
/etc/apache2/sites-available/000-default.conf

<VirtualHost *:80>
        . . .

        Redirect "/" "https://your_domain_or_IP"

        . . .
</VirtualHost>

Save and close the file when you are finished.
Step 3: Adjust the Firewall

If you have the ufw firewall enabled, as recommended by the prerequisite guides, you might need to adjust the settings to allow for SSL traffic. Luckily, Apache registers a few profiles with ufw upon installation.

We can see the available profiles by typing:

    sudo ufw app list

You should see a list like this:

Output
Available applications:
  Apache
  Apache Full
  Apache Secure
  OpenSSH

You can see the current setting by typing:

    sudo ufw status

If you allowed only regular HTTP traffic earlier, your output might look like this:

Output
Status: active

To                         Action      From
--                         ------      ----
OpenSSH                    ALLOW       Anywhere
Apache                     ALLOW       Anywhere
OpenSSH (v6)               ALLOW       Anywhere (v6)
Apache (v6)                ALLOW       Anywhere (v6)

To additionally let in HTTPS traffic, we can allow the "Apache Full" profile and then delete the redundant "Apache" profile allowance:

    sudo ufw allow 'Apache Full'
    sudo ufw delete allow 'Apache'

Your status should look like this now:

    sudo ufw status

Output
Status: active

To                         Action      From
--                         ------      ----
OpenSSH                    ALLOW       Anywhere
Apache Full                ALLOW       Anywhere
OpenSSH (v6)               ALLOW       Anywhere (v6)
Apache Full (v6)           ALLOW       Anywhere (v6)

Step 4: Enable the Changes in Apache

Now that we've made our changes and adjusted our firewall, we can enable the SSL and headers modules in Apache, enable our SSL-ready Virtual Host, and restart Apache.

We can enable mod_ssl, the Apache SSL module, and mod_headers, needed by some of the settings in our SSL snippet, with the a2enmod command:

    sudo a2enmod ssl
    sudo a2enmod headers

Next, we can enable our SSL Virtual Host with the a2ensite command:

    sudo a2ensite default-ssl

We will also need to enable our ssl-params.conf file, to read in the values we set:

    sudo a2enconf ssl-params

At this point, our site and the necessary modules are enabled. We should check to make sure that there are no syntax errors in our files. We can do this by typing:

    sudo apache2ctl configtest

If everything is successful, you will get a result that looks like this:

Output
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1. Set the 'ServerName' directive globally to suppress this message
Syntax OK

The first line is just a message telling you that the ServerName directive is not set globally. If you want to get rid of that message, you can set ServerName to your server's domain name or IP address in /etc/apache2/apache2.conf. This is optional as the message will do no harm.
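
For example, you could append a global ServerName directive to the end of that file (replace the placeholder with your actual domain or IP):

    echo "ServerName server_domain_or_IP" | sudo tee -a /etc/apache2/apache2.conf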

If your output has Syntax OK in it, your configuration file has no syntax errors. We can safely restart Apache to implement our changes:

    sudo systemctl restart apache2

Step 5: Test Encryption

Now, we're ready to test our SSL server.

Open your web browser and type https:// followed by your server's domain name or IP into the address bar:

https://server_domain_or_IP

Because the certificate we created isn't signed by one of your browser's trusted certificate authorities, you will likely see a scary looking warning like the one below:

Apache self-signed cert warning

This is expected and normal. We are only interested in the encryption aspect of our certificate, not the third-party validation of our host's authenticity. Click "ADVANCED" and then the link provided to proceed to your host anyway:

Apache self-signed override

You should be taken to your site. If you look in the browser address bar, you will see a lock with an "x" over it. In this case, this just means that the certificate cannot be validated. It is still encrypting your connection.

If you configured Apache to redirect HTTP to HTTPS, you can also check whether the redirect functions correctly:

http://server_domain_or_IP

If this results in the same icon, this means that your redirect worked correctly.
Step 6: Change to a Permanent Redirect

If your redirect worked correctly and you are sure you want to allow only encrypted traffic, you should modify the unencrypted Apache Virtual Host again to make the redirect permanent.

Open your server block configuration file again:

    sudo nano /etc/apache2/sites-available/000-default.conf

Find the Redirect line we added earlier. Add permanent to that line, which changes the redirect from a 302 temporary redirect to a 301 permanent redirect:
/etc/apache2/sites-available/000-default.conf

<VirtualHost *:80>
        . . .

        Redirect permanent "/" "https://your_domain_or_IP"

        . . .
</VirtualHost>

Save and close the file.

Check your configuration for syntax errors:

    sudo apache2ctl configtest

When you're ready, restart Apache to make the redirect permanent:

    sudo systemctl restart apache2

Conclusion

You have configured your Apache server to use strong encryption for client connections. This will allow you to serve requests securely, and will prevent outside parties from reading your traffic.

Friday, June 17, 2016

How to Install HAProxy Load Balancer on CentOS

Installing HAProxy 1.6

As a fast-developing open source application, the version of HAProxy available for installation in the CentOS default repositories might not be the latest release. To find out what version number is being offered through the official channels, enter the following command.
sudo yum info haproxy
HAProxy always has three active stable release branches: the two latest versions under development plus a third, older version that still receives critical updates. You can always check the currently newest stable version listed on the HAProxy website and then decide which version you wish to go with.
In this guide we'll be installing the latest stable 1.6 release, which was not yet available in the standard repositories at the time of writing. Instead, you'll need to install it from source. Before doing so, check that you have the prerequisites to download and compile the program.
sudo yum install wget gcc pcre-static pcre-devel -y
Download the source code with the command below. You can check if there’s a newer version available at the HAProxy download page and then replace the download link in the wget command with the latest.
wget http://www.haproxy.org/download/1.6/src/haproxy-1.6.3.tar.gz -O ~/haproxy.tar.gz
Once the download is complete, extract the files using the following
tar xzvf ~/haproxy.tar.gz -C ~/
Change into the directory.
cd ~/haproxy-1.6.3
Then compile the program for your system.
make TARGET=linux2628
And finally install HAProxy itself.
sudo make install
To complete the installation, use the following commands to copy the binary and the init script into place.
sudo cp /usr/local/sbin/haproxy /usr/sbin/
sudo cp ~/haproxy-1.6.3/examples/haproxy.init /etc/init.d/haproxy
sudo chmod 755 /etc/init.d/haproxy
Create these directories and the statistics file for HAProxy to record in.
sudo mkdir -p /etc/haproxy
sudo mkdir -p /run/haproxy
sudo mkdir -p /var/lib/haproxy
sudo touch /var/lib/haproxy/stats
Then add a new user for HAProxy.
sudo useradd -r haproxy
After the installation you can double check the installed version number with the following
sudo haproxy -v
HA-Proxy version 1.6.3 2015/12/25
Copyright 2000-2015 Willy Tarreau <willy@haproxy.org>
In this case the version is 1.6.3, as shown in the example output above.
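If you want HAProxy to start automatically at boot, enable the init script you copied earlier (on CentOS 7, chkconfig still works through the systemd compatibility layer):
sudo chkconfig haproxy on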

Configuring the load balancer

Setting up HAProxy for load balancing is a quite straightforward process. Basically, all you need to do is tell HAProxy what kind of connections it should be listening for and which servers it should relay the connections to. This is done by creating a configuration file, /etc/haproxy/haproxy.cfg, with the defining settings. You can read about the configuration options in the HAProxy documentation if you wish to find out more.
Open the configuration file for editing, for example using vi with the following command
sudo vi /etc/haproxy/haproxy.cfg
Add the following sections to the file. Replace <server name> with whatever you want to call your servers on the statistics page, and <private IP> with the private IPs of the servers you wish to direct the web traffic to. You can check the private IPs in your UpCloud Control Panel, on the Private network tab under the Network menu.
global
   log /dev/log local0
   log /dev/log local1 notice
   chroot /var/lib/haproxy
   stats socket /run/haproxy/admin.sock mode 660 level admin
   stats timeout 30s
   user haproxy
   group haproxy
   daemon

defaults
   log global
   mode http
   option httplog
   option dontlognull
   timeout connect 5000
   timeout client 50000
   timeout server 50000

frontend http_front
   bind *:80
   stats uri /haproxy?stats
   default_backend http_back

backend http_back
   balance roundrobin
   server <server name> <private IP>:80 check
   server <server name> <private IP>:80 check
This defines a layer 4 load balancer with a front-end named http_front listening on port 80, which directs the traffic to the default back-end named http_back. The additional stats uri /haproxy?stats enables the statistics page at that specified address. Configuring the servers in the back-end section allows HAProxy to use those servers for load balancing, whenever available, according to the round-robin algorithm.
The balancing algorithms are used to decide which server at the back-end each connection is transferred to. Some of the useful options include the following:
  • Roundrobin: Each server is used in turns according to their weights. This is the smoothest and fairest algorithm when the servers’ processing time remains equally distributed. This algorithm is dynamic, which allows server weights to be adjusted on the fly.
  • Leastconn: The server with the lowest number of connections is chosen. Round-robin is performed between servers with the same load. Using this algorithm is recommended with long sessions, such as LDAP, SQL, TSE, etc, but it’s not very well suited for short sessions such as HTTP.
  • First: The first server with available connection slots receives the connection. The servers are chosen from the lowest numeric identifier to the highest, which defaults to the server’s position in the farm. Once a server reaches its maxconn value, the next server is used.
  • Source: The source IP address is hashed and divided by the total weight of the running servers to designate which server will receive the request. This way the same client IP address will always reach the same server while the servers stay the same.
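For example, switching the back-end to the least-connections algorithm is a one-line change to the balance directive (a sketch using the same placeholder servers as above):
backend http_back
   balance leastconn
   server <server name> <private IP>:80 check
   server <server name> <private IP>:80 check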
Another possibility is to configure the load balancer to work on layer 7, which can be useful when parts of your web application are located on different hosts. This can be accomplished by making the connection transfer conditional on, for example, the URL.
frontend http_front
   bind *:80
   stats uri /haproxy?stats
   acl url_blog path_beg /blog
   use_backend blog_back if url_blog
   default_backend http_back

backend http_back
   balance roundrobin
   server <server name> <private IP>:80 check
   server <server name> <private IP>:80 check

backend blog_back
   server <server name> <private IP>:80 check
The front-end declares an ACL rule named url_blog that matches all connections whose path begins with /blog, and use_backend specifies that connections matching the url_blog condition should be served by the back-end named blog_back.
On the back-end side, the configuration sets up two server groups: http_back, like before, and a new one called blog_back that specifically serves connections to domain.com/blog.
After making the configurations, save the file and restart HAProxy with the following
sudo systemctl restart haproxy
If you get any errors or warnings at start up, check the configuration for any mistypes and that you’ve created all the necessary files and folders, then try restarting again.
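You can also validate the configuration file without restarting the service by running HAProxy in check mode:
sudo haproxy -f /etc/haproxy/haproxy.cfg -c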

Testing the setup

With HAProxy configured and running, open your load balancer server's public IP in a web browser and check that you get connected to your back-end correctly. The stats uri parameter in the configuration enables the statistics page at the defined address.
http://<load balancer public IP>/haproxy?stats
When you load the statistics page and all of your servers are listed in green, your configuration was successful!
In case your load balancer does not reply, check that HTTP connections are not getting blocked by the firewall. Since you most likely deployed a fresh install of CentOS 7 for this project, the host is rather restrictive by default. You can use the following commands to add these rules and to restart the firewall.
sudo firewall-cmd --permanent --zone=public --add-service=http
sudo firewall-cmd --permanent --zone=public --add-port=8181/tcp
sudo firewall-cmd --reload
The statistics page contains some helpful information to keep track of your web hosts including up- and downtimes and session counts. If a server is listed in red, check that the server is powered on and that you can ping it from the load balancer machine.

Installing HAProxy on Ubuntu 14.04

HAProxy is a network software application that offers high availability, load balancing, and proxying for TCP and HTTP network applications. It is suited for high-traffic sites and powers many websites. This article will show you how to install and set up HAProxy on Ubuntu 14.04.
Although HAProxy has several prominent features, this article focuses on how to set up HAProxy to "proxy" your web application.

Installing HAProxy

Since Ubuntu 14.04 does not ship with HAProxy 1.5 (latest stable release at time of writing), we will have to use a PPA to be able to install it using apt-get:
add-apt-repository ppa:vbernat/haproxy-1.5
Next, update the system:
apt-get update
apt-get dist-upgrade
Now install HAProxy with the following command:
apt-get install haproxy
If everything is successful, then you have finished installing HAProxy and can proceed to the next step.

Configuring HAProxy

The HAProxy configuration file is split into two sections, "global" and "proxies". The first deals with process-wide configuration, while the second consists of the defaults, frontend, and backend sections.

Global Section

With your favorite text editor, open /etc/haproxy/haproxy.cfg and you will notice the predefined sections: "global" and "defaults". The first thing that you may want to do is increase maxconn to a reasonable size, as this setting limits the number of concurrent connections that HAProxy allows. Too many concurrent connections may cause your web service to crash under the load of requests. You will need to adjust the size to see what works for you. In the global section, add or change maxconn to 3072.
In the defaults section, add the following lines under mode http:
option forwardfor
option http-server-close
This will add X-Forwarded-For headers to each request, reduce latency between HAProxy and the backend servers, and preserve persistent client connections.

Proxies Section

Frontend and Backend
Commonly, the first step is to set up a frontend to handle incoming HTTP connections. Add the following:
frontend http-frontend
    bind public_ip:80
    reqadd X-Forwarded-Proto:\ http
    default_backend wwwbackend
Note: Be sure to replace public_ip with your domain or your public IP. Otherwise, this entire setup will not work.
After you have finished configuring the frontend, you can now add your backend by adding the following lines to the end of your configuration:
backend wwwbackend
    server 1-www private_ip_1:80 check
    server 2-www private_ip_2:80 check
    server 3-www private_ip_3:80 check
The backend configuration used here creates three servers named X-www pointing at private_ip_X:80 (replace X with 1–3, and replace private_ip_X with your private or public IP). This will allow you to load balance between the servers (assuming you have more than one). The check option makes the load balancer perform health checks on each server.
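Optionally, you can also expose HAProxy's built-in statistics page by adding a listen section to the end of the configuration. A minimal sketch; the port and the admin credentials here are placeholders you should change:
listen stats
    bind *:8080
    stats enable
    stats uri /stats
    stats auth admin:changeme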
When you are done, save the configuration file, then restart HAProxy by running:
service haproxy restart
If everything is working, then you will be able to connect to http://public_ip/ (replacing it with your Vultr VPS IP) and view your website.

77 useful Linux commands and utilities

alias
The alias command lets a user define a shortcut, an abbreviated name that the shell expands into a longer command or command sequence.
How to use the alias command in Linux.
apt-get
Apt-get is a tool to automatically update a Debian machine and get and install Debian packages/programs.
How to manage software on Ubuntu Server with "aptitude" and "apt-get".
Understanding the Debian archives and apt-get.
Inside the Red Hat and Debian package management differences.

Aspell
GNU Aspell is a free and open source spell checker designed to replace Ispell. It can either be used as a library or as an independent spell checker.
How to use Aspell to check spelling.
AWK, Gawk
A programming-language tool used to manipulate text. The language of the AWK utility resembles the shell-programming language in many areas, although AWK's syntax is very much its own.
Learn how to use the AWK utility.

Gawk is the GNU Project's version of the AWK programming language.
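For example, this one-liner prints the first field of every line in /etc/passwd, using ":" as the field separator:
    awk -F: '{ print $1 }' /etc/passwd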

bzip2
A portable, fast open source program used to compress and decompress files at high rates.
How to use bzip2 in Linux.
More on how to use the bzip2 compression program.

cat
A Unix/Linux command that can read, create or concatenate text files, most commonly used for displaying the contents of files.
See how to use cat to display contents of a file in Linux.
An article on what you can do with the cat command.
cd
The cd command changes the current directory in Linux, and can toggle between directories conveniently. It is similar to the CD and CHDIR commands in MS-DOS.
See more on how to use the cd command to change directories.
chmod
Chmod changes the access mode (permissions) of one or more files. Only the owner of a file or a privileged user may change the mode.
See examples of changing the permissions of files using chmod.
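For example, to give the owner of a script execute permission while leaving the other permissions unchanged:
    chmod u+x script.sh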
chown
Chown changes file or group ownership, and has options to change ownership of all objects within a directory tree, and view information on objects processed.
Learn how to change file ownership with chown.
cmp
The cmp utility compares two files of any type and writes the results to the standard output. By default, cmp is silent if the files are the same; if they differ, the byte and line number at which the first difference occurred is reported.
See IBM's examples for using cmp.
comm
Comm compares lines common to the sorted files file1 and file2. Output is in three columns, from left to right: lines unique to file1, lines unique to file2, and lines common to both files.
More on comparing lines with comm.
Read a brief tutorial on using comm.
cp
The cp command copies files and directories, and copies can be made simultaneously to another directory if the copy is under a different name.
Find out how to copy Linux files and directories with the cp command.
cpio
Cpio copies files into or out of a cpio or tar archive, which is a file that contains other files plus information about them, such as their file name, owner, timestamps, and access permissions. The archive can be another file on the disk, a magnetic tape, or a pipe. Cpio has three operating modes, and is a more efficient alternative to tar.
Learn how to use cpio when moving files in a Unix-to-Linux port.
See how to back up files with cpio.
CRON
CRON is a Linux system process that will execute a program at a preset time. To use CRON, a user must prepare a text file that describes the program to be executed and the times that CRON should execute them. Then, the crontab program is used to load the text file that describes the CRON jobs into CRON.
Using CRON to execute programs at specific times.

date
Date displays or sets a system's date and time. Also a useful way to output/print current information when working in a script file.
A few more examples from IBM on setting date and time with date.
declare
Declare declares variables, gives them attributes, or modifies properties of variables.
Examples of declaring variables with declare.
df
Df displays the amount of disk space available on the file system containing each file name argument. With no file name, available space on all currently mounted file systems is shown.
More on using df to display the amount of disk space available.

echo
Echo allows a user to repeat, or "echo," a string variable to standard output.
More on using the Echo command with shell scripts.
enable
Enable will stop or start printers or classes.
Examples of how to enable LP printers.
env
Env runs a program in a modified environment, or displays the current environment and its variables.
Examples of changing environment variables using env.
eval
Eval evaluates several arguments and concatenates them into a single command, and then reports on that argument's status.
More on concatenating arguments with eval.
exec
Exec replaces the parent process by whatever command is typed. This command treats its arguments as the specification of one or more sub processes to execute.
More examples of replacing parent processes with exec.
exit
The exit command terminates a script, and can return a value to the parent script.
More on terminating scripts with exit.
expect
Expect talks to other interactive programs according to a script, and waits for a response, often from any string that matches a given pattern.
Using expect for responses.
export
In the shell, export marks variables and functions to be passed to the environment of subsequently executed commands, making their values available to child processes.
Examples of exporting data from a database with export.

find
Find searches the directory tree to locate particular groups of files that meet specified conditions, using options such as -name and -type, -exec and -size, and -mtime and -user.
Efficiently locating files with find.
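For example, to list the .log files under /var/log that were modified within the last 7 days:
    find /var/log -name "*.log" -mtime -7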
for, while
For and while are used to execute or loop items repeatedly as long as conditions are met.
More on looping items with the for command.
More on looping items with the while command.
free
Free displays the total amount of free and used physical memory and swap space in the system, as well as the buffers and cache used by the kernel.
Learn how to use the free command to optimize a computer's memory.

gawk
See AWK.
grep
Grep searches file(s) for a given character string or pattern and prints the lines that match. One method of searching for text within files in Linux.
Examples of searching with grep.
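For example, to search all files under /etc recursively for the string "hostname", ignoring case:
    grep -ri "hostname" /etc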
gzip
Gzip is the GNU project's open source program used for file compression, compressing web pages on the server end for decompression in the browser. Popular for streaming media compression, and can concatenate and compress several streams simultaneously.
Examples of using gzip for compressing files.

ifconfig
Ifconfig is used to configure the kernel-resident network interfaces. It is used at boot time to set up interfaces. After that, it is usually only needed when debugging or when system tuning is needed.
Examples of using ifconfig to configure a network.
Using ifconfig to detect Linux network configuration problems.
ifup
Ifup configures a network interface/enables a network connection.
More on the ifup command in configuring network interfaces.
ifdown
Ifdown shuts down a network interface/disables a network connection.
More on shutting down networks with ifdown.

less, more
The less command lets an admin scroll through configuration and error log files, displaying text files one screen at a time, with backward and forward movement available in files. It offers more mobility within files than more.
View several different file types with less.

Similar to less, more pages through text one screen at a time, but is more limited in moving in files.
See a few examples of displaying files with more.
locate, slocate
Locate reads one or more databases and writes file names matching patterns to output.
Finding files/directories efficiently with locate.

Like locate, slocate, or secure locate, provides a way to index and quickly search for files, but also securely stores file permissions and ownership so unauthorized users will be unable to view such files.
See an example of using slocate as a quick secure way to index files.
lft
Lft is similar to traceroute in determining connection routes, but gives a lot more information for debugging connections or finding where a box/system is. It displays route packets and file types.
More on displaying route packets with lft.
ln
The ln command creates new names for a file by hard linking, letting multiple users share one file.
Examples of hard linking files with ln.
A few more examples of using ln.
ls
The ls command lists files and directories within the current working directory, and admins can determine when configuration files were last edited.
The ls command is also discussed in this tip.
Examples of listing files and directories with ls.

man
Short for "manual," man allows a user to format and display the user manual built into Linux distributions, which documents commands and other aspects of the system.
The man command is also discussed in this tip.
See how to use the man command.
See examples of formatting man pages.
mc
A visual shell, text-based file manager for Unix systems.
An extensive guide to managing files with mc.
more
See less.

neat
Neat is a GNOME GUI admin tool which allows admins to specify information needed to set up a network card, among other features.
Setting up an NTL Cable Modem using neat.
Where neat falls in when building a network between Unix and Linux systems.
netconfig, netcfg
Netconfig configures a network, enables network products and displays a series of screens that ask for configuration information.
Configuring networks using Red Hat netcfg.
netstat
Netstat provides information and statistics about protocols in use and current TCP/IP network connections. A helpful forensic tool in figuring out which processes and programs are active on a computer and involved in networked communications.
More on checking network statuses with the netstat command.
nslookup
Nslookup allows a user to enter a host name and find the corresponding IP address. A reverse of the process to find the host name is also possible.
More from Microsoft on how to find IP addresses with nslookup.

od
Od is used to dump binary files in octal (or hex, binary) format to standard output.
Examples of dumping files with od.
More on od from IBM.

passwd
Passwd updates a user's authentication tokens (changes the current password).
Some IBM examples on changing passwords with passwd.
ping
Ping allows a user to verify that a particular IP address exists and can accept requests. Can be used to test connectivity and determine response time, and ensure that a host computer the user is trying to reach is actually operating.
Examples from IBM of using ping to verify IP addresses.
ps
Ps reports statuses of current processes in a system.
Some examples of using the ps command.
pwd
The pwd (print working directory) command displays the name of the current working directory. A basic Linux command.
Learn the differences between $PATH and pwd.
Using pwd to print the current working directory.

read
Read is used to read lines of text from standard input and assign values of each field in the input line to shell variables for further processing.
Examples from IBM on using read.
RPM
Red Hat Package Manager (RPM) is a command-line driven program capable of installing, uninstalling and managing software packages in Linux.
A white paper on using RPM.
The Differences of yum and RPM.
Examples of installing packages with RPM.
rsync
Rsync syncs data from one disk or file to another across a network connection. Similar to rcp, but has more options.
A tip on backing up data with rsync.
How to use rsync to back up a directory in Linux.

screen
The GNU screen utility is a terminal multiplexor in which a user can use a single terminal window to run multiple terminal applications or windows.
A tutorial on running multiple windows and other uses of screen.
A tip on the uses of screen.
sdiff
Sdiff finds differences between two files by producing a side-by-side listing indicating lines that are different. It then merges the files and outputs results to outfile.
Example of contrasting files with sdiff.
More examples from IBM on the sdiff command.
sed
Sed is a stream editor that is used to filter text in a pipeline, distinguishing it from other editors. Sed takes text input and performs operation(s) on it and outputs the modified text. Typically used for extracting part of a file using pattern matching or substituting multiple occurrences of a string within a file.
More on extracting and replacing parts of a file with sed.
Several more examples from IBM on using sed for filtering.
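For example, to replace every occurrence of "foo" with "bar" in a file and print the result to standard output:
    sed 's/foo/bar/g' file.txt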
shutdown
Shutdown is a command that turns off the computer and can be combined with variables such as -h for halt after shutdown or -r for reboot after shutdown.
Shut down or halt a computer with shutdown.
slocate
See locate.
Snort
Snort is an open source network intrusion detection system and packet sniffer that monitors network traffic, looking at each packet to detect dangerous payloads or suspicious anomalies. Based on libpcap.
Stopping hackers with Snort.
More from Red Hat on using Snort.
sort
Used to sort lines of text alphabetically or numerically according to fields; supports multiple sort keys.
Examples of sorting through lines of text with the sort command.
More examples of sort with multiple sort keys.
sudo
Sudo allows a system admin to give certain users the ability to run some (or all) commands at the root level, and logs all commands and arguments.
A tutorial on giving permissions to users with the sudo command.
SSH
SSH is a command interface used for securely gaining access to a remote computer, and is used by network admins to control servers remotely.
A comprehensive tutorial on secure access to remote computers with SSH.

tar
The tar program provides the ability to create archives from a number of specified files, or extract files from such an archive.
Examples of creating archives with tar.
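For example, to create a gzip-compressed archive of a directory and then list its contents:
    tar -czvf backup.tar.gz ~/documents
    tar -tzvf backup.tar.gz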
TOP
Top displays the tasks on the system that take up the most resources, updated in real time. It can sort tasks by CPU usage, memory usage and runtime.
Monitoring system processes with TOP.
tr
Used to translate or delete characters from a text stream, writing to standard output, but does not accept file names as arguments -- only inputs from standard input.
Examples from IBM of translating characters with tr.
traceroute
Traceroute determines and records a route through the Internet between two computers and is useful for troubleshooting network/router issues. If the domain does not work or is not available, an IP can be tracerouted.
A tutorial on using traceroute to determine network issues.

uname
Uname displays the name of the current operating system and can print additional information about the system.
Examples of viewing information on the current operating system with uname.
uniq
Uniq compares adjacent lines in a file, and removes/reports any duplicated lines.
Removing duplicate lines with the uniq command.
A tip from IBM on removing redundant lines with uniq.

vi
Vi is a text editor that allows a user to control the system by solely using the keyboard instead of a combination of mouse selections and keystrokes.
An entire guide to using vi to easily control a system with the keyboard.
vmstat
Vmstat is used to get a snapshot of everything in a system, reporting information on such items as processes, memory, paging, and cpu activity. A good method for admins in determining where issues/slowdown in a system may be occurring.
How to keep an eye on Linux performance with vmstat and others.
Examples of viewing system memory usage with vmstat.

wc
wc counts the number of words, lines and characters of text files, and produces a count for multiple files if several files are selected.
More from IBM on displaying word counts with wc.
wget
Wget is a network utility that retrieves files from the Web supporting http, https and ftp protocols. It works non-interactively, in the background, while a user is logged off. Can create local versions of remote websites, re-creating directories of original sites.
Examples of creating mirror images of sites with wget.

Easier Methods to Get the Perfect Tech Resume for Entrepreneurs

“A powerful resume should leap off the page saying, ‘Me! I’m the one you want to hire!’” advises software engineer Gayle Laakmann McDowell in her book The Google Resume: How to Prepare for a Career and Land a Job at Apple, Microsoft, Google, or Any Top Tech Company. She says that every line in these documents should have value and contribute to convincing the employer to hire you. That said, below are 15 tips from McDowell and others on creating the perfect tech resume.
1. Focus on accomplishments: Focus less on your job duties in your last job and more on what you actually accomplished, with an emphasis on tangible results (increased app sales revenues by 20 percent, developed software that reduced costs by 10 percent, etc.).

2. Quantify results: Avoid saying general things like “improved customer satisfaction,” “increased company profits,” or “reduced number of bugs.” Instead, provide quantifiable metrics that demonstrate how your work helped your company save money, reduce costs, improve customer service, etc.
3. Target your resume: Gone are the days of sending one generic resume to hundreds of companies. You should target each resume to the specific job listing and company.
4. Don’t get too technical: Technical terms, sales and marketing slang, and acronyms that are commonly used at one company may be like a foreign language to recruiters or hiring managers at other companies. Make your resume universally understood by using industry-recognized terminology and explaining anything that recruiters might find confusing.
5. Proofread: We've all heard the stats about hiring managers tossing resumes that have just one typo. Although tech companies tend to be more forgiving, that's no reason to submit a grammatically incorrect, misspelled, and otherwise poorly presented resume.
6. Be clear, and structure your resume well: Try to think like a recruiter when creating your resume. Provide the information recruiters want so that they don’t throw your resume in the trash pile. For example, if you worked as a software engineer at a top company such as Microsoft or Intel, stress the company name rather than your job title, since that will impress the recruiter the most.
7. Ditch the “objective”:  Use an Objective in your resume only if you are straight out of college or want to bring attention to the fact that you want to transition to a new role (for example, moving from a position in software engineering to one in sales). An Objective can also be a drawback because your stated job interest (mobile software developer) might convince the recruiter that you’re not interested in other lucrative and rewarding positions (user interface engineer, Web developer, etc.) he or she needs to fill.

8. Don’t be vague in your “summary”:  If you use a Summary section, be sure that it’s filled with key accomplishments (backed up by hard numbers), not vague pronouncements about your detail-oriented personality, strong work ethic, etc. Some people rename this section “Summary and Key Accomplishments.”

9. Think accomplishments over duties: Work experience is a key component of your resume, but it should not feature a comprehensive list of all the jobs that you’ve held (especially if you’ve worked in the industry for years or had many jobs). List the most important positions that will show the hiring manager that you’re qualified for the new job. Provide the largest amount of detail for your current or most recent job (or the one that is most applicable to showing that you’re qualified for the new position). Be sure to list your accomplishments, rather than just job duties. Again, think about what the hiring manager wants to see to convince him or her to call you in for an interview.

10. Minimize your “education” as you gain experience: Professional experience matters more than education in the tech industry, but it’s important that the Education section effectively conveys your educational background. If you have a nontraditional degree that recruiters may not be familiar with, be sure to offer a one- or two-sentence description of the major. Recent graduates should list their GPA only if it’s at least 3.0 on a 4.0 scale (of course, omitting your GPA may raise a red flag with the recruiter). Recent graduates should also list any college activities or awards that they believe will help them land the job, but they shouldn’t list everything they did while in school. Finally, the rule of thumb is that the Education section shrinks as you gain experience. Eventually, it will simply list the bare essentials such as university name, location, dates attended, degree earned, etc.
11. Don’t forget the skills: Tech workers should be sure to include a Skills section on their resume. This section should list software expertise, programming languages, foreign languages, and other applicable skills, but it’s a good idea to skip basic skills (such as Microsoft Word) that many applicants have. The key is to list skills that will help you land the job.

12. Go big, and keep the little for later: When considering what to include on your resume, focus on the “big,” and save the “little” for the job interview. This means you should detail big, eye-catching accomplishments such as new products and technologies that you helped develop, major employers (such as Google or Amazon) that you worked for, major customers that you interacted with, and increases in sales, profits, or productivity that you contributed to. Be ready to provide the details regarding these accomplishments and background information during the actual interview.
13. Use keywords: At its employment web site, Microsoft advises applicants to detail on their resume how their experiences (leadership roles, work duties, school activities, etc.) helped them to grow as a person and as a professional. This is a good approach, since you always want to show that you are evolving as a person and eager to learn new skills. Also, use keywords that match those listed in the job announcement. For example, if you’re applying for a position in e-marketing and search engine optimization, then your resume should include these terms. This will help you get noticed by resume-scanning software and advance past the first screening stage.
14. Use your name: If you send your resume as an attachment, don’t name it “resume.doc” or “resume.pdf.” That’s the surest way for your resume to get lost among the thousands of other submissions. Instead, name the file starting with your last name, then your first name, then the date. And add the job identification number if one is available.
15. Use tools and follow the directions: Some companies such as Microsoft offer resume-building tools for job applicants at their web sites. These tools will help you determine what you should and should not include in your resume. Be sure to use these tools, if offered. And follow instructions to the letter. Google, for example, requires applicants to submit their resumes in PDF, Microsoft Word, or text formats. It also requires that all application materials for U.S. jobs be submitted in English.