Serving QueueMetrics through an NGINX proxy

You may want to serve QueueMetrics through an NGINX front-end. This is often a win because:

  • You can run multiple distinct services on the same virtual host.

  • It is very easy to set up SSL and to secure the instance for the public internet.

  • You can have flexible caching of static resources.

  • You have a simple choke point to trace all requests going through, and can easily implement more complex configurations.

  • You do not need to learn and edit Tomcat’s own configuration.

It is trivial to set up a proxy; QueueMetrics requires a number of headers so that it can reliably rebuild both an "internal" URL and a public-facing URL as needed.

We will show an example where your QueueMetrics is located at http://127.0.0.1:8080/queuemetrics/ and your server name is qm.myserver.com.

Nginx will listen on ports 80 and 443. If anything else is already listening on these ports, you will need to adapt the configuration accordingly.
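
One way to check what is already bound to those ports, assuming a Linux system with the iproute2 tools, is:

# Show which processes, if any, are already listening on ports 80 and 443
ss -ltnp | grep -E ':(80|443) '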

Prerequisites

  • A working QueueMetrics instance

  • Nginx version 1.12+

NGINX configuration

How your '/etc/nginx/nginx.conf' file should look:

user nginx nginx;
worker_processes auto;

error_log /var/log/nginx/error.log info;

pid /run/nginx.pid;

events {
    worker_connections 1024;
    use epoll;
}

http {
        include /etc/nginx/mime.types;
        default_type application/octet-stream;
        log_format main
                '$remote_addr - $remote_user [$time_local] '
                '"$request" $status $bytes_sent '
                '"$http_referer" "$http_user_agent" '
                '"$gzip_ratio"';
        client_header_timeout 10m;
        client_body_timeout 10m;
        send_timeout 10m;
        connection_pool_size 256;
        client_header_buffer_size 1k;
        request_pool_size 4k;
        gzip off;
        output_buffers 1 32k;
        postpone_output 1460;
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 75 20;
        ignore_invalid_headers on;
        large_client_header_buffers 4 128k;
        server_tokens off;

        include /etc/nginx/conf.d/queuemetrics_http.conf;
        include /etc/nginx/conf.d/queuemetrics_https.conf;
}

How your '/etc/nginx/conf.d/queuemetrics_http.conf' should look:

server {
	listen 80;
	server_name qm.myserver.com;

	access_log /var/log/nginx/qm.myserver.com.access_log main;
	error_log /var/log/nginx/qm.myserver.com.error_log info;
	root /var/www;

	# IF YOU WANT TO FORCE SSL, UNCOMMENT BELOW
	# rewrite ^ https://$server_name$request_uri? permanent;

	# you don't want your queuemetrics to be indexed on google searches...
	add_header X-Robots-Tag "noindex";

	location /queuemetrics {
		proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
		proxy_set_header X-Forwarded-Host $host;
		proxy_set_header X-Forwarded-Port $server_port;
		proxy_set_header X-Forwarded-Proto $scheme;
		proxy_set_header X-Forwarded-Server $host;
		proxy_set_header X-Forwarded-Webapp /queuemetrics;
		proxy_set_header X-Real-IP $remote_addr;
		proxy_set_header X-Url-Scheme $scheme;
		proxy_pass http://127.0.0.1:8080/queuemetrics;
		proxy_read_timeout 240s;
	}
}

How your '/etc/nginx/conf.d/queuemetrics_https.conf' should look:

server {
	listen 443 ssl http2;
	server_name qm.myserver.com;

	access_log /var/log/nginx/qm.myserver.com.access_log main;
	error_log /var/log/nginx/qm.myserver.com.error_log info;

	ssl_certificate /etc/ssl/nginx/qm.myserver.com.crt;
	ssl_certificate_key /etc/ssl/nginx/qm.myserver.com.key;
	ssl_dhparam /etc/ssl/nginx/dhparam;

	ssl_protocols TLSv1.2;
	ssl_prefer_server_ciphers on;
	ssl_ciphers "EECDH+aRSA+AESGCM !EECDH+ECDSA+AESGCM !EECDH+ECDSA+SHA384 !EECDH+ECDSA+SHA256 !EECDH+aRSA+SHA384 !EECDH+aRSA+SHA256 !EDH+aRSA !aNULL !eNULL !LOW !MD5 !EXP !PSK !SRP !DSS !RC4 !EECDH+aRSA+RC4 !3DES";
	ssl_session_cache shared:SSL:50m;
	ssl_session_timeout 5m;
	add_header Strict-Transport-Security "max-age=31536000; preload" always;

	root /var/www;

	# you don't want your queuemetrics to be indexed on google searches...
	add_header X-Robots-Tag "noindex";

	location /queuemetrics {
		proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
		proxy_set_header X-Forwarded-Host $host;
		proxy_set_header X-Forwarded-Port $server_port;
		proxy_set_header X-Forwarded-Proto $scheme;
		proxy_set_header X-Forwarded-Server $host;
		proxy_set_header X-Forwarded-Ssl on;
		proxy_set_header X-Forwarded-Webapp /queuemetrics;
		proxy_set_header X-Real-IP $remote_addr;
		proxy_set_header X-Url-Scheme $scheme;
		proxy_pass http://127.0.0.1:8080/queuemetrics;
		proxy_read_timeout 240s;
	}
}

The file '/etc/ssl/nginx/dhparam' can be generated with:

openssl dhparam -dsaparam -out /etc/ssl/nginx/dhparam 4096

Make sure that the directory '/etc/ssl/nginx' exists before issuing the command.
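
If it does not exist yet, creating it first is a one-liner:

# Create the target directory for the Diffie-Hellman parameters and the certificates
mkdir -p /etc/ssl/nginx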

Self-signed SSL certificate

An alternative to buying an SSL certificate, or obtaining one from Let's Encrypt, is to create your own self-signed certificate. Since browsers like Google Chrome will complain if the Subject Alternative Name is missing, we will generate the certificate in a slightly different way than most examples you will find on the web.

You need to create a file called 'config.ssl' in whatever place you find convenient, because it is only a temporary file.

How your 'config.ssl' should look:

[req]
distinguished_name = req_distinguished_name
x509_extensions = v3_req
prompt = no

[req_distinguished_name]
C = CH
ST = Switzerland
L = Stabio
O = queuemetrics self-signed
OU = queuemetrics self-signed
CN = qm.myserver.com

[v3_req]
keyUsage = keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names

[alt_names]
DNS.1 = qm.myserver.com
DNS.2 = www.qm.myserver.com

How to issue the certificate:

openssl req \
		-x509 \
		-sha512 \
		-nodes \
		-days 3650 \
		-newkey rsa:4096 \
		-keyout /etc/ssl/nginx/qm.myserver.com.key \
		-out /etc/ssl/nginx/qm.myserver.com.crt \
		-config config.ssl

Be sure that you issue the command from the same directory that contains your 'config.ssl' file. You then need to restart Nginx to apply the changes you have made to the configuration.
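
Before restarting, you may want to confirm that the Subject Alternative Name actually made it into the certificate, and let Nginx validate the configuration first. The restart command below assumes a systemd-based distribution; adapt it to your init system.

# Show the Subject Alternative Name section of the freshly generated certificate
openssl x509 -in /etc/ssl/nginx/qm.myserver.com.crt -noout -text | grep -A1 "Subject Alternative Name"

# Validate the configuration, then restart Nginx only if the test passes
nginx -t && systemctl restart nginx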

Redirect the requests from Tomcat to Nginx

Tomcat still listens on port 8080 and is still reachable there, so at this point you may want to redirect those requests to Nginx. This is also useful if you want to keep the links that clients have stored in their bookmarks compatible with the new URL served by Nginx.

The iptables 'PREROUTING' chain will do the job with the following rule:

iptables -t nat -A PREROUTING -p tcp --dport 8080 -j REDIRECT --to-port 80
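
Note that this rule does not survive a reboot unless you save it with your distribution's firewall tooling (for example the iptables-services or iptables-persistent packages). You can verify that it is in place with:

# List the NAT PREROUTING rules; the REDIRECT entry for port 8080 should appear here
iptables -t nat -L PREROUTING -n --line-numbers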

Troubleshooting

If you go to 'System diagnostic tools' then 'View Configuration' and then open the menu for 'HTTP Configuration', you will be able to see:

  • 'C: localWebappUrl' is the "inner" address where Tomcat is serving the webapp, e.g. http://1.2.3.4:8080/queuemetrics

  • 'Q: getPublicQmBaseUrl' is the "public" address that QueueMetrics is being served from, e.g. https://qm.myserver.com/queuemetrics

  • 'C: realRemoteIP' is the IP address of your client, as forwarded by the proxy

  • 'Header: …' records show the HTTP headers that came with the request, both those sent by your browser and those added by the proxy.
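
A quick external sanity check, assuming the DNS name already resolves to the proxy host, is to request the application both through Nginx and directly from Tomcat and compare the responses:

# Through the proxy; -k skips certificate validation, useful with a self-signed certificate
curl -skI https://qm.myserver.com/queuemetrics/

# Directly against Tomcat, bypassing Nginx (run this on the server itself)
curl -sI http://127.0.0.1:8080/queuemetrics/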

Further developments

  • If you need the webapp to work under a different public name, you need to rewrite cookies so that QueueMetrics can place them with the correct path (see the sketch after this list).

  • You may want to cache static resources, so that they are served directly by Nginx (see the sketch after this list).

  • By default, we are providing a configuration with 'gzip off' because of the BREACH vulnerability (http://breachattack.com/#howitworks). You may want to enable gzip compression at your own risk.
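
As a starting point for the first two items above, the fragment below is a minimal sketch, not a drop-in configuration: it assumes a hypothetical public path '/qm' mapped onto the internal '/queuemetrics' context, and it uses '/queuemetrics/img/' purely as an example of a static path worth caching; adapt both to your installation. The 'proxy_cache_path' line belongs in the 'http' block of 'nginx.conf', while the 'location' blocks go inside the HTTPS server block shown earlier.

# In the http block of nginx.conf: a small cache zone for static resources (sizes are arbitrary examples)
proxy_cache_path /var/cache/nginx/qm levels=1:2 keys_zone=qmstatic:10m max_size=100m inactive=1d;

# Hypothetical: expose the webapp under /qm while Tomcat still serves /queuemetrics
location /qm/ {
	# repeat the proxy_set_header directives used for /queuemetrics above,
	# presumably adjusting X-Forwarded-Webapp to the new public path
	proxy_cookie_path /queuemetrics /qm;
	proxy_pass http://127.0.0.1:8080/queuemetrics/;
}

# Example: let Nginx serve static resources from its own cache
location /queuemetrics/img/ {
	proxy_cache qmstatic;
	proxy_cache_valid 200 1d;
	proxy_pass http://127.0.0.1:8080/queuemetrics/img/;
}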