Nginx Reverse Proxying by Domain
[Divide and Conquer]
For a while now I’ve been using a home server to host a couple of websites, determining which site loads depending on which domain or subdomain is being requested. However, with all of the sites hosted on the same machine, taking down one site for maintenance meant taking down the rest as well. This week, I decided to fix this by separating the various sites onto different systems. That way, if I need to take down one server for maintenance, the rest can stay online.
In order to accomplish this task, I decided to set up a primary system as my network’s default host, so that any incoming traffic that connects to my home IP will be routed to that host. This host server would then use nginx as a reverse proxy to route traffic wherever it needed to go, differentiating between internal destinations based on the domains requested by external clients.
I could have simply used my router’s port-forwarding option, but this wouldn’t work for my purposes: ports 80 and 443 can each be forwarded to only a single host, and I want multiple web hosts behind the same IP, with their traffic forwarded dynamically based on the requested domain name.
This article will quickly describe the process I went through to accomplish this task. It will be a particularly short article, since I’m writing this primarily as a reference for myself, should I ever need to go back and do this all again. However, as always, I hope that this article will be useful to others as well.
Layout
For the purposes of this guide, I’ll refer to three separate systems, each with a different internal IP:
- Default Host: 10.10.10.1
- Sub Host A: 10.10.10.2
- Sub Host B: 10.10.10.3
I’ve also got two different subdomains, both of which are pointing at my home’s external IP:
- a.mydomain.com
- b.mydomain.com
In your network, you could use entirely separate domains instead of subdomains, if you wished. For example, if you had domain1.com and domain2.com both pointing at the same IP, you could use this technique to route their traffic in the same way that I will be routing by subdomain.
The goal is to set up the network so that incoming connections to either of the subdomains are routed through Default Host, which will direct the traffic to either Sub Host A or Sub Host B, depending on which subdomain is being requested by the client.
Setup
For my Default Host, I’m using a Raspberry Pi. This machine is running Raspbian Stretch. However, the version of nginx packaged with Stretch is only v1.10, and for my purposes I need a more recent version. (The ssl_preread feature used below requires nginx 1.11.5 or newer.) Fortunately, at the time of this writing, the Raspbian Buster repositories are available for use, despite Buster having not yet been released. Buster’s nginx is v1.14, which will work just fine for our purposes.
To pull from the Buster repositories, I simply sudo vi /etc/apt/sources.list and change stretch to buster, like so:
deb http://raspbian.raspberrypi.org/raspbian/ buster main contrib non-free rpi
# Uncomment line below then 'apt-get update' to enable 'apt-get source'
#deb-src http://raspbian.raspberrypi.org/raspbian/ buster main contrib non-free rpi
After this, I perform a sudo apt update followed by sudo apt install nginx -y. (In my case, I also performed a sudo apt upgrade -y, which took a long time and may not be advisable just yet, considering that Buster still hasn’t been released.) Once everything is said and done, I’ve got a working nginx v1.14 installation on my Raspbian machine.
Configuration
Throughout this tutorial, I use the vi text editor. This is because vi is most familiar to me, but you are welcome to use whatever editor you wish.
In order to route all traffic to the right machines, I’ll need to alter the /etc/nginx/nginx.conf file. But before I do that, I want to ensure that all connections to port 80 are redirected to port 443, so that clients visiting websites on this domain are routed to the HTTPS version of each page. I’ve already set up the SSL certificates on Sub Host A and Sub Host B, so the Default Host doesn’t have to store any certificates locally. These certificates were obtained via Let’s Encrypt prior to this network layout reconfiguration, but if I need to get new certificates (for example, if I buy a new domain name), I can do so after the reconfiguration is complete.
To route all traffic from port 80 to port 443, I will need to alter the /etc/nginx/sites-available/default file. First, in case I wish to roll back my changes, I make a backup of the file:
user@host:~ $ cd /etc/nginx/sites-available/
user@host:/etc/nginx/sites-available $ sudo cp default default.old
This ensures that if I screw something up, I can revert the change by copying default.old over the default file.
Now I need to wipe the default file and start a fresh one. To do this, I type sudo rm default && sudo vi default and hit enter, then type the following into the new default file:
server {
    listen 80;
    listen [::]:80;
    return 301 https://$host$request_uri;
}
This configuration tells nginx to reroute all incoming unencrypted HTTP connections to their respective HTTPS equivalents. Once I’ve finished editing the default file, I save and exit the editor.
Next, I’ll need to set up the /etc/nginx/nginx.conf file. To do this, I cd /etc/nginx, then sudo cp nginx.conf nginx.conf.old to create a backup in case I need to restore the original file. Next, I use sudo vi nginx.conf to edit the file. I scroll down past the end of the http section and, after its closing brace, add the following. (The stream block is a top-level block, a sibling of http, so it must not be nested inside the http section.)
stream {
    # Dynamic proxy HTTPS connections.
    map $ssl_preread_server_name $name {
        a.mydomain.com a_backend;
        b.mydomain.com b_backend;
    }

    upstream a_backend {
        server 10.10.10.2;
    }

    upstream b_backend {
        server 10.10.10.3;
    }

    server {
        listen 443;
        listen [::]:443;
        proxy_pass $name;
        ssl_preread on;
    }
}
This section accomplishes three things. First, in the map subsection, it assigns a different backend to each incoming subdomain. Second, in the upstream subsections, it tells nginx where to find the target server for each backend. Finally, in the server subsection, it tells nginx to listen on port 443 (the HTTPS port) on both IPv4 and IPv6 and proxy the traffic to whichever backend server is attached to the requested subdomain. In this manner, anyone connecting to a.mydomain.com or b.mydomain.com will connect to the Default Host, which will determine the requested subdomain and transparently proxy the connection to the appropriate internal server.
It is important to note that all of this is wrapped up in a stream section. The stream declaration tells nginx to simply create a direct TCP stream proxy between the client and the respective internal host, rather than act as an HTTPS server itself. This allows the internal hosts to act as their own servers and manage their own SSL keys, without the Default Host having to manage any keys.
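The reason this works without any keys on the proxy is that the client announces its destination hostname in the clear: the SNI (Server Name Indication) extension of the TLS ClientHello is sent before encryption begins, so ssl_preread can peek at it. The following Python sketch (my own minimal illustration, not nginx code; real ClientHello messages carry many more extensions) builds a toy ClientHello and pulls the server name back out of it, roughly the way the preread step does:

```python
# A minimal sketch of what nginx's ssl_preread does: read the requested
# hostname out of the SNI extension of a TLS ClientHello, which travels
# in cleartext before any encryption starts. Illustrative only.
import struct

def build_client_hello(hostname):
    """Build a bare-bones ClientHello record carrying only an SNI extension."""
    name = hostname.encode()
    sni_entry = b"\x00" + struct.pack("!H", len(name)) + name      # type 0 = host_name
    sni_list = struct.pack("!H", len(sni_entry)) + sni_entry
    ext = struct.pack("!HH", 0, len(sni_list)) + sni_list          # extension 0 = server_name
    exts = struct.pack("!H", len(ext)) + ext
    body = (
        b"\x03\x03"                           # client_version (TLS 1.2)
        + b"\x00" * 32                        # random
        + b"\x00"                             # session_id: empty
        + struct.pack("!H", 2) + b"\x13\x01"  # one cipher suite
        + b"\x01\x00"                         # one compression method (null)
        + exts
    )
    handshake = b"\x01" + struct.pack("!I", len(body))[1:] + body  # type 1 = ClientHello
    return b"\x16\x03\x01" + struct.pack("!H", len(handshake)) + handshake

def extract_sni(record):
    """Return the SNI hostname from a ClientHello record, or None."""
    if record[0] != 0x16:                     # not a TLS handshake record
        return None
    pos = 5                                   # skip the 5-byte record header
    if record[pos] != 0x01:                   # not a ClientHello
        return None
    pos += 4                                  # handshake type + 3-byte length
    pos += 2 + 32                             # client_version + random
    pos += 1 + record[pos]                    # session_id
    (cs_len,) = struct.unpack_from("!H", record, pos)
    pos += 2 + cs_len                         # cipher suites
    pos += 1 + record[pos]                    # compression methods
    (ext_total,) = struct.unpack_from("!H", record, pos)
    pos += 2
    end = pos + ext_total
    while pos < end:                          # walk the extension list
        ext_type, ext_len = struct.unpack_from("!HH", record, pos)
        pos += 4
        if ext_type == 0:                     # server_name extension
            (name_len,) = struct.unpack_from("!H", record, pos + 3)
            return record[pos + 5 : pos + 5 + name_len].decode()
        pos += ext_len
    return None

print(extract_sni(build_client_hello("a.mydomain.com")))  # prints a.mydomain.com
```

Because only this one unencrypted field is inspected, the proxy never needs to decrypt anything; the TLS handshake proper happens end-to-end with the chosen backend.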
Enabling Remote Administration
At this point, I could restart nginx and traffic would be proxied as expected. However, I still have one more problem to tackle: administration of the internal hosts. If I’m on the road (which I often am) and I need to update one of my internal websites, I need to be able to connect to that internal host. To do this, I prefer to use SSH. However, with the current configuration, the Default Host is the only host to which I can connect via SSH, meaning that I’d have to make a secondary connection from the Default Host to one of the internal hosts.
While it’s not much trouble to make two SSH connections, I’d rather be able to connect to the internal hosts without having to SSH into the Default Host first. Fortunately, I can add some port-forwarding declarations to the nginx.conf file, within the same stream block that I used to proxy the HTTPS traffic. To do this, I simply sudo vi nginx.conf and add the following lines to the stream block:
# General port-forwarding.
server {
    listen 2201;
    listen [::]:2201;
    proxy_pass 10.10.10.1:22;
}

server {
    listen 2202;
    listen [::]:2202;
    proxy_pass 10.10.10.2:22;
}
These added server subsections tell nginx to forward connections made to ports 2201 and 2202 to the SSH ports of the Default Host and Sub Host A, respectively, matching the proxy_pass targets above. (I use ports 2201 and 2202 because it’s easy to remember that they point to port 22 on machine 01 and port 22 on machine 02, respectively. However, you can assign these ports however you want.) Once I’ve finished editing the document, the stream section should look like this:
stream {
    # Dynamic proxy HTTPS connections.
    map $ssl_preread_server_name $name {
        a.mydomain.com a_backend;
        b.mydomain.com b_backend;
    }

    upstream a_backend {
        server 10.10.10.2;
    }

    upstream b_backend {
        server 10.10.10.3;
    }

    server {
        listen 443;
        listen [::]:443;
        proxy_pass $name;
        ssl_preread on;
    }

    # General port-forwarding.
    server {
        listen 2201;
        listen [::]:2201;
        proxy_pass 10.10.10.1:22;
    }

    server {
        listen 2202;
        listen [::]:2202;
        proxy_pass 10.10.10.2:22;
    }
}
Once this configuration is complete, I write the changes to disk and close the file. Then, I restart the nginx service with sudo service nginx restart. (Running sudo nginx -t first is a handy way to catch configuration errors before the restart.) If no errors are reported, everything appears to have been updated successfully. Now, all I have to do is test the configuration by visiting each of the two subdomains and by attempting to SSH into ports 2201 and 2202. If everything checks out, then we’re done!
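As a convenience, the forwarded SSH ports can be given aliases in the ~/.ssh/config file on the machine I’m connecting from. The host aliases below are hypothetical, and mydomain.com stands in for whatever name resolves to the home network’s external IP:

```
# ~/.ssh/config on the remote client. Ports 2201 and 2202 are the
# nginx stream forwards to machine 01 and machine 02, respectively.
Host home-01
    HostName mydomain.com
    Port 2201

Host home-02
    HostName mydomain.com
    Port 2202
```

With this in place, a plain ssh home-02 connects straight through the forwarded port to the internal machine.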
Future Expansion
Adding new subdomains or forwarding additional ports is quite simple. To point a new domain at a different host, I can simply add the domain to the map subsection, create a new upstream section for it, and, if I want SSH access to that machine, add a new server entry pointing to its SSH port. And if I need to proxy a port for any other reason (for example, to enable access to RDP on an internal machine), I can forward that port in the same manner as the SSH ports.
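For instance, suppose I added a hypothetical c.mydomain.com pointing at a new internal machine at 10.10.10.4 and wanted RDP access to it as well. The additions to the stream block would look something like this (the name and address are illustrative):

```nginx
# New entry inside the existing map subsection:
#     c.mydomain.com c_backend;

upstream c_backend {
    server 10.10.10.4;
}

# Forward RDP (port 3389) to the new machine.
server {
    listen 3389;
    listen [::]:3389;
    proxy_pass 10.10.10.4:3389;
}
```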
Conclusion
That’s all there is to it! I’ve successfully configured nginx as a transparent proxy so that I can provide access to multiple internal systems via a single external IP.
An interesting note: should someone port-scan my IP, they would see ports 22, 80, and 443 open, but they’d also see 2201 and 2202, as well as 3389 if I enable RDP on an internal Windows machine. Each open port could be pointing to a different system, and should the scanner grab banners, they might be presented with a bounty of strange information. At first it might appear as if one system were providing banners for both Linux and Windows services. But a clever attacker might recognize that I’ve enabled port-forwarding, and from this, they could determine that I’ve got both Linux and Windows systems within my network.