NGINX Load Balancer – Proxy for Rancher

We needed an NGINX proxy to handle SSL offload for Docker containers.  I wanted to run the NGINX proxy in Rancher, so I built the docker-compose and rancher-compose files and the folder structure needed to add it to my Rancher Catalog.  I use GitLab as the host for the Git repository, and I have it linked to my Rancher Server under Admin > Settings.

Prepare Folders and Files
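The catalog entry needs a specific folder layout before Rancher will pick it up.  A minimal sketch, assuming the template folder is named nginx-proxy (the Rancher 1.x catalog convention puts config.yml at the template root and the compose files in numbered version folders):

```shell
# Sketch of the catalog repo layout; "nginx-proxy" is an assumed template name.
mkdir -p templates/nginx-proxy/0

# Catalog metadata lives at the template root.
touch templates/nginx-proxy/config.yml

# Each numbered subfolder is one version of the template.
touch templates/nginx-proxy/0/docker-compose.yml
touch templates/nginx-proxy/0/rancher-compose.yml
```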


Contents of config.yml

name: Nginx Proxy
description: |
  Created by Laurie Kepford.  This will pick up its config from /efs/data/nginx/sites-enabled/
version: 1
category: Load Balancing
maintainer: Laurie Kepford
license: Free
projectURL: # A URL related to the catalog entry

Contents of docker-compose.yml

 nginx-proxy:
   ports:
   - 443:8443/tcp
   labels:
     io.rancher.container.pull_image: always
     io.rancher.scheduler.affinity:host_label: name=rancherpool-pub  ##This ensures that my nginx container will run on the servers I have labeled as being public.
   image: nginx
   volumes:  ##The purpose and contents of these folders are explained below
   - /efs/data/nginx/sites-enabled:/etc/nginx/conf.d:ro
   - /efs/data/Certificates:/etc/nginx/certs:ro
   - /efs/data/nginx/ssl/dhparam.pem:/etc/ssl/certs/dhparam.pem

Contents of rancher-compose.yml

 .catalog:
   name: "NGINX Proxy"
   version: "1"
   description: "NGINX Latest"
   uuid: "nginx-0"
   minimum_rancher_version: "v1.0"
 nginx-proxy:
   scale: 2

Files To Be Loaded into the Container

In order for the container to work, several files are loaded into the container at run time.  I am using an Amazon Elastic File System (EFS) share that is pre-mounted to all of my Rancher host servers.  This way, I don’t have to use sidekick data containers, and all of my data gets backed up to S3 hourly.

Sites Enabled

- /efs/data/nginx/sites-enabled:/etc/nginx/conf.d:ro

This line, in my docker-compose.yml file, loads my NGINX config in read-only mode.  I have placed the required file in the indicated folder.  I have only one site, so I named the file default.conf.  Here are the contents of that file:

server {

   listen 8443;
   ssl on;
   ssl_certificate "/etc/nginx/certs/domain.full.chain.crt";
   ssl_certificate_key "/etc/nginx/certs/myserver.key";
   ssl_session_cache  builtin:1000  shared:SSL:10m;
   ssl_session_timeout 10m;
   ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
   ssl_prefer_server_ciphers on;
   ssl_dhparam /etc/ssl/certs/dhparam.pem;  ##most likely this file will not exist and you will need to upload it. See below.

   set $upstream_endpoint;

   location / {
       proxy_pass $upstream_endpoint;
       proxy_http_version 1.1;
       proxy_set_header X-Real-IP $remote_addr;
       proxy_set_header Upgrade $http_upgrade;
       proxy_set_header Connection 'upgrade';
       proxy_set_header Host $host;
       proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
       proxy_cache_bypass $http_upgrade;
    }
}


- /efs/data/Certificates:/etc/nginx/certs:ro

This line loads my certificate files into the container at /etc/nginx/certs.


- /efs/data/nginx/ssl/dhparam.pem:/etc/ssl/certs/dhparam.pem

This line loads the dhparam.pem file (Diffie-Hellman parameters, referenced by the ssl_dhparam directive above) into NGINX.  I am not sure why it’s needed, but NGINX kept complaining about it.  I had to create this file; its contents are a standard PEM-encoded parameter block.
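One way to create the file is with OpenSSL (the output then gets copied to the /efs/data/nginx/ssl/ path mounted above):

```shell
# Generate 2048-bit Diffie-Hellman parameters; this can take a while.
openssl dhparam -out dhparam.pem 2048
```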

Push to GitLab

git add .
git commit -m "Nginx"
git push origin master

Launch from Catalog

Once I did my push, I refreshed my catalog and the new listing was visible.


Behind The Scenes Magic

There are some other components running in my Rancher environment that make this work.  One of them is the EFS file system that I mentioned earlier.  The other two are Jobber, which does the backups to S3, and Rancher’s Route53 DNS service.

Jobber is a Docker container that runs a cron job every hour and backs up all the data on EFS to S3 using the AWS CLI. Link to jobber github project.

Rancher’s Route53 DNS service is key to making this work.  It creates a DNS A record for any container that has an exposed port.  When you set this up, make sure you set the TTL to a low number; I use 60 seconds.  The DNS name it generates is based on the stack name, the container name, and the Rancher environment name.

There are two lines in my default.conf file that read like this:


  resolver;
  set $upstream_endpoint;

There are two kinds of magic happening here. The DNS name is automatically generated and updated any time the container launches; it refers to the container that is running my target app.  The resolver line (resolver;) is where you put the DNS server you want NGINX to use to resolve the address.  Without this line, NGINX resolves the DNS name only once, when the configuration is loaded, which will most likely cause it to send your web traffic to an old IP address.  For information on why this works see this post.
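A minimal sketch of the pattern, with placeholder values (the resolver IP and upstream name below are illustrative, not from my setup):

```nginx
   resolver 10.0.0.2 valid=60s;   ##placeholder: the DNS server NGINX should query
   set $upstream_endpoint https://myapp.mystack.myenv.example.com:8080;   ##placeholder generated DNS name
   location / {
       proxy_pass $upstream_endpoint;   ##using a variable forces NGINX to re-resolve, honoring the TTL
   }
```

Because proxy_pass gets its target from a variable rather than a literal hostname, NGINX looks the name up at request time through the configured resolver instead of caching the IP at startup.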


I realize there are other ways to accomplish this scenario.  I am not an expert on NGINX, but I did learn a lot during this exercise.  I could have added some variables to the docker-compose and rancher-compose files, etc.  But this worked for me.

