NGINX Load Balancer – Proxy for Rancher

We needed an NGINX proxy to handle SSL offload for Docker containers. I wanted to run the NGINX proxy in Rancher, so I built the Docker and Rancher compose files and the folder structure needed to add it to my Rancher Catalog. I use GitLab as the host for my git repository, and it is linked to my Rancher Server under Admin > Settings.

Prepare Folders and Files

[Screenshot: nginx catalog folder structure]
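The screenshot showed the catalog folder layout. As a sketch, assuming the standard Rancher 1.x catalog convention (the entry name nginx-proxy is my choice; the numbered subfolder is the template version):

```
templates/
└── nginx-proxy/
    ├── config.yml
    └── 0/
        ├── docker-compose.yml
        └── rancher-compose.yml
```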

Contents of config.yml

name: Nginx Proxy
description: |
  Created by Laurie Kepford.  This will pick up its config from /efs/data/nginx/sites-enabled/
version: 1
category: Load Balancing
maintainer: Laurie Kepford
license: Free
projectURL: # A URL related to the catalog entry

Contents of docker-compose.yml

nginx-pub:
  ports:
  - 443:8443/tcp
  labels:
    io.rancher.container.pull_image: always
    io.rancher.scheduler.affinity:host_label: name=rancherpool-pub  ## Ensures the nginx container runs only on the hosts I have labeled as public.
  image: nginx
  volumes:  ## The purpose and contents of these folders are explained below.
  - /efs/data/nginx/sites-enabled:/etc/nginx/conf.d:ro
  - /efs/data/Certificates:/etc/nginx/certs:ro
  - /efs/data/nginx/ssl/dhparam.pem:/etc/ssl/certs/dhparam.pem

Contents of rancher-compose.yml

.catalog:
  name: "NGINX Proxy"
  version: "1"
  description: "NGINX Latest"
  uuid: "nginx-0"
  minimum_rancher_version: "v1.0"
nginx-pub:
  scale: 2

Files To Be Loaded into the Container

In order for the container to work, several files are loaded into the container at run time. I am using an Amazon Elastic File System (EFS) share that is pre-mounted on all of my Rancher host servers. This way I don't have to use sidekick data containers, and all of my data gets backed up to S3 hourly.

Sites Enabled

- /efs/data/nginx/sites-enabled:/etc/nginx/conf.d:ro

This line in my docker-compose.yml file mounts my nginx configuration directory in read-only mode. I placed the required file in the indicated folder. I have only one site, so I named the file default.conf. Here are the contents of that file:

server {

   listen 8443 ssl;
   server_name targetserver.domain.com;
   ssl_certificate "/etc/nginx/certs/domain.full.chain.crt";
   ssl_certificate_key "/etc/nginx/certs/myserver.key";
   ssl_session_cache  builtin:1000  shared:SSL:10m;
   ssl_session_timeout 10m;
   ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
   ssl_ciphers 'ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4';
   ssl_prefer_server_ciphers on;
   ssl_dhparam /etc/ssl/certs/dhparam.pem;  ##most likely this file will not exist and you will need to upload it. See below.

   resolver 10.30.0.2;
   set $upstream_endpoint https://stack.container.environment.domain.com:8080;

   location / {
       proxy_pass $upstream_endpoint;
       proxy_http_version 1.1;
       proxy_set_header X-Real-IP $remote_addr;
       proxy_set_header Upgrade $http_upgrade;
       proxy_set_header Connection 'upgrade';
       proxy_set_header Host $host;
       proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
       proxy_cache_bypass $http_upgrade;
   }
}

Certificates

- /efs/data/Certificates:/etc/nginx/certs:ro

This line loads my certificate files into the container at /etc/nginx/certs.
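For reference, the default.conf above expects these two files to exist on the EFS share (these are the names used in my config; yours will differ):

```
/efs/data/Certificates/
├── domain.full.chain.crt   # full certificate chain (server cert + intermediates)
└── myserver.key            # private key matching the certificate
```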

dhparam.pem

- /efs/data/nginx/ssl/dhparam.pem:/etc/ssl/certs/dhparam.pem

This line loads the dhparam.pem file into NGINX. It contains Diffie-Hellman parameters, which the DHE ciphers in the list above require; NGINX kept complaining until I provided it. The contents of that file are:

-----BEGIN DH PARAMETERS-----
MIICCAKCAgEA0cs0b734lSFxL4IwV0wfmHeEVXBxpfX5uII5Wn3PhcPk91e9yeUN
H6Oae3bx8GFm3BXGsYRIdDvjHgVJv950JFivM2x2h9028maN7p6gETp8L85ov0jq
+LZLpV0SpXPt8cM6zgn69KdktNfWWFcqnAGhYXaYHhJJjasApe+ODvzfhmgJ5eZd
3ouY8I08RqzOiVd+GQLhjpdi2xuQZSW2e4w0FuBGfLNnUKKiWqcONC8aToEWUM17
6xwXcT8fnlqHzqLstcszh6duwsuP/s7F96W2VpiQpTDfrHvs54RiO7+DH/KLx/mc
hEHcLjKn25fFKWBi+UXioTYDye0AJeaA6sGvgpuipdohUpA1ftFPhQaS+i6rVeWD
G8i1/q4//a6lxdyzpAji8tOhlriu2n0yrJTb7ntd5ct1gZ4xC5DRsWvme5Ihlh1L
0uDzhwqf6VXbceqVwcaYzYrrdyYWKUyMPQKPqxrY88CpofXhmF6dTYaJFj4R9ofg
DnuHABbxr45io4Cdxk4W9PFypuHblz5l0Dmvr5Cu9WoxOCNiojwxP4X8Fg8YmcLH
oFIMT5T8C7tcaLtWa6jkg536QozxcRTvj90r9/V5l8ZvHJYu9PnqdMM7TtOtTdQv
ocAQGvzDjEkh1DpLHv+3KVCSkktzvbBoCvnHLkUwvnmFpq+ZbVOhirsCAQI=
-----END DH PARAMETERS-----

I had to create this file.
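If you need to create your own, OpenSSL can generate the parameters (this can take a while at larger sizes); copy the result to /efs/data/nginx/ssl/dhparam.pem so the compose file can mount it:

```shell
# Generate 2048-bit Diffie-Hellman parameters for the DHE ciphers
openssl dhparam -out dhparam.pem 2048
```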

Push to GitLab

git add .
git commit -m "Nginx"
git push origin master

Launch from Catalog

Once I did my push, I refreshed my catalog and the new listing was visible.

[Screenshot: the NGINX Proxy entry in the Rancher catalog]

Behind The Scenes Magic

There are some other components running in my Rancher environment that make this work. One of them is the EFS file system that I mentioned earlier. The other two are Jobber, which does the backups to S3, and Rancher's Route53 DNS service.

Jobber is a Docker container that runs a cron job every hour and backs up all the data on EFS to S3, using the AWS CLI. Link to jobber github project.
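I have not shown my Jobber config here, but a jobfile along these lines would do the hourly sync (the bucket name and schedule are placeholders, and the exact jobfile syntax depends on your Jobber version; check the Jobber docs):

```yaml
version: 1.4
jobs:
  BackupEfsToS3:
    cmd: aws s3 sync /efs/data s3://my-backup-bucket/efs-data
    time: '0 0 *'       # sec min hour -- run at the top of every hour
    onError: Continue
```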

Rancher's Route53 DNS service is key to making this work. It creates a DNS A record for any container that has an exposed port. When you set this up, make sure you set the TTL to a low number; I use 60 seconds. The DNS name it generates is based on the stack, the container name, and the Rancher environment name.

There are two lines in my default.conf file that read like this:

resolver 10.30.0.2;
set $upstream_endpoint https://stack.container.environment.domain.com:8080;

There are two kinds of magic happening here. The DNS name is automatically generated and updated any time the container launches; it refers to the container running my target app. The resolver line (resolver 10.30.0.2;) tells NGINX which DNS server to use to resolve that name. Without it, NGINX resolves the name only once at startup, which will most likely cause it to send your web traffic to an old IP address. For information on why this works, see this post.
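The variable is what makes the difference: with a literal hostname in proxy_pass, NGINX resolves it once at startup and caches the IP; assigning the URL to a variable forces a lookup at request time through the configured resolver. A minimal sketch of the two forms:

```nginx
# Resolved once, at startup -- becomes a stale IP if the container moves:
# proxy_pass https://stack.container.environment.domain.com:8080;

# Re-resolved at request time, honoring the (low) TTL on the A record:
resolver 10.30.0.2 valid=60s;
set $upstream_endpoint https://stack.container.environment.domain.com:8080;
proxy_pass $upstream_endpoint;
```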

Conclusion

I realize there are other ways to accomplish this scenario. I am not an expert on NGINX, but I did learn a lot during this exercise. I could have added some variables to the Docker and Rancher compose files, etc. But this worked for me.

 

http://cloudlady911.com/index.php/2016/10/20/nginx-load-balancer-proxy-for-rancher/