Tutorial on Routing Multiple Subdomains in Nginx for DevOps


By squashlabs, Last Updated: June 30, 2023


Reverse Proxy and its Purpose in Nginx

A reverse proxy is a server that sits between client devices and web servers, forwarding client requests to the appropriate server. Nginx is a popular web server and reverse proxy commonly used in DevOps environments.

The purpose of a reverse proxy in Nginx is to improve performance, security, and scalability of web applications. It can distribute incoming client requests to multiple backend servers, balancing the load and preventing any single server from being overwhelmed.

In addition to load balancing, reverse proxies can also handle SSL termination, caching, compression, and other advanced features. Nginx provides a wide range of configuration options to customize reverse proxy behavior.
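As a quick illustration of those features, the following sketch enables response caching and gzip compression for a proxied location. The cache path, zone name, and backend upstream are illustrative assumptions:

```nginx
http {
    # Cache location and "mycache" zone name are illustrative values
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=mycache:10m max_size=1g;

    # Compress common text-based response types
    gzip on;
    gzip_types text/plain text/css application/json;

    upstream backend {
        server backend1.example.com;
    }

    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_cache mycache;
            proxy_cache_valid 200 10m;  # cache successful responses for 10 minutes
            proxy_pass http://backend;
        }
    }
}
```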

Let's take a look at an example of configuring a reverse proxy in Nginx:

http {
    upstream backend {
        server backend1.example.com;
        server backend2.example.com;
        server backend3.example.com;
    }

    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://backend;
        }
    }
}

In this example, we define an upstream block called "backend" that lists the backend servers. The proxy_pass directive is used in the location block to specify the upstream server group. The reverse proxy will distribute incoming requests to the backend servers defined in the upstream block.


Configuring Load Balancing for Multiple Subdomains in Nginx

Load balancing distributes incoming network traffic across multiple servers to optimize resource utilization, maximize throughput, and ensure high availability. Nginx provides flexible load balancing capabilities that can be configured to serve multiple subdomains.

To configure load balancing for multiple subdomains in Nginx, define a separate server block for each subdomain. Each server block can reference its own upstream server group, or, as in the example below, several subdomains can share a single group.

Here's an example configuration for load balancing multiple subdomains in Nginx:

http {
    upstream backend {
        server backend1.example.com;
        server backend2.example.com;
        server backend3.example.com;
    }

    server {
        listen 80;
        server_name subdomain1.example.com;

        location / {
            proxy_pass http://backend;
        }
    }

    server {
        listen 80;
        server_name subdomain2.example.com;

        location / {
            proxy_pass http://backend;
        }
    }

    server {
        listen 80;
        server_name subdomain3.example.com;

        location / {
            proxy_pass http://backend;
        }
    }
}

In this example, we define three server blocks for three subdomains: subdomain1.example.com, subdomain2.example.com, and subdomain3.example.com. Each server block specifies the server name and uses the proxy_pass directive to forward requests to the backend server group defined in the upstream block.
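The subdomains can also be routed to entirely separate backend pools by giving each one its own upstream block. A sketch with illustrative hostnames:

```nginx
http {
    upstream app1 {
        server app1-backend1.example.com;
        server app1-backend2.example.com;
    }

    upstream app2 {
        server app2-backend1.example.com;
    }

    server {
        listen 80;
        server_name subdomain1.example.com;

        location / {
            proxy_pass http://app1;
        }
    }

    server {
        listen 80;
        server_name subdomain2.example.com;

        location / {
            proxy_pass http://app2;
        }
    }
}
```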

Techniques for SSL Termination with Nginx

SSL termination is the process of decrypting encrypted HTTPS traffic at the proxy server and forwarding it as unencrypted HTTP traffic to the backend servers. Nginx can be configured to handle SSL termination, providing a secure connection between clients and the reverse proxy.

There are several approaches to SSL termination with Nginx, including self-signed certificates, certificates issued by Let's Encrypt, and wildcard certificates that cover every subdomain.
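A wildcard certificate, for instance, lets one server block terminate TLS for every subdomain. The certificate paths below are illustrative, and the backend upstream is assumed to be defined elsewhere:

```nginx
server {
    listen 443 ssl;
    # A wildcard certificate covers all first-level subdomains
    server_name *.example.com;

    ssl_certificate     /etc/nginx/ssl/wildcard.example.crt;  # illustrative paths
    ssl_certificate_key /etc/nginx/ssl/wildcard.example.key;

    location / {
        proxy_pass http://backend;
    }
}
```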

Here's an example of configuring SSL termination with Nginx using a self-signed certificate:

http {
    upstream backend {
        server backend1.example.com;
        server backend2.example.com;
    }

    server {
        listen 443 ssl;
        server_name example.com;

        ssl_certificate /etc/nginx/ssl/example.crt;
        ssl_certificate_key /etc/nginx/ssl/example.key;

        location / {
            proxy_pass http://backend;
        }
    }
}

In this example, we define a server block that listens on port 443 for HTTPS traffic. The ssl_certificate and ssl_certificate_key directives specify the paths to the SSL certificate and private key files, respectively. Nginx decrypts incoming requests, and the proxy_pass directive forwards them to the backend upstream group over plain HTTP.
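SSL termination is typically paired with a companion server block that redirects plain HTTP traffic to HTTPS; a minimal sketch:

```nginx
server {
    listen 80;
    server_name example.com;

    # Redirect all plain-HTTP requests to HTTPS, preserving the host, path, and query string
    return 301 https://$host$request_uri;
}
```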

Redirecting Subdomains to Different URLs using Nginx

Nginx provides a flexible way to redirect subdomains to different URLs using the rewrite directive. This lets you change the target URL based on which subdomain was requested.

Here's an example of redirecting subdomains to different URLs using Nginx:

http {
    server {
        listen 80;
        server_name subdomain1.example.com;

        location / {
            rewrite ^/(.*)$ http://example.com/subdomain1/$1 permanent;
        }
    }

    server {
        listen 80;
        server_name subdomain2.example.com;

        location / {
            rewrite ^/(.*)$ http://example.com/subdomain2/$1 permanent;
        }
    }

    server {
        listen 80;
        server_name subdomain3.example.com;

        location / {
            rewrite ^/(.*)$ http://example.com/subdomain3/$1 permanent;
        }
    }
}

In this example, we define three server blocks for three subdomains: subdomain1.example.com, subdomain2.example.com, and subdomain3.example.com. Each server block uses the rewrite directive to redirect requests to a different URL. The ^/(.*)$ regular expression captures the request path and appends it to the new URL, and the permanent flag makes Nginx return an HTTP 301 (permanent) redirect.
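For simple whole-site redirects like these, the return directive is a lighter-weight alternative to rewrite because it avoids regular-expression matching. An equivalent sketch for the first subdomain:

```nginx
server {
    listen 80;
    server_name subdomain1.example.com;

    # 301 = permanent redirect; $request_uri keeps the original path and query string
    return 301 http://example.com/subdomain1$request_uri;
}
```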


Syntax for URL Rewriting in Nginx

URL rewriting is a powerful feature in Nginx that allows you to modify or redirect incoming URLs based on specified rules. Rewrite rules are built from the rewrite directive, regular expressions, and optional flags.

The basic syntax for URL rewriting in Nginx is as follows:

location / {
    rewrite regex replacement [flag];
}

- regex is a regular expression that matches the part of the URL you want to rewrite.

- replacement is the new URL or the replacement string for the matched part.

- flag is an optional flag that modifies the behavior of the rewrite rule.

Here's an example of URL rewriting in Nginx:

location / {
    rewrite ^/blog/(.*)$ /articles/$1 last;
}

In this example, the ^/blog/(.*)$ regular expression matches any URI that starts with "/blog/" and captures the rest of the URI as a group. The /articles/$1 replacement rewrites the URI internally to "/articles/" followed by the captured group; no redirect is sent to the client. The last flag stops processing the current set of rewrite rules and restarts the search for a location matching the rewritten URI.
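The other common flags are break, redirect, and permanent. The locations and paths below are illustrative sketches of each:

```nginx
location /legacy/ {
    # break: stop rewriting and process the rewritten URI within this location
    rewrite ^/legacy/(.*)$ /static/$1 break;
}

location /old-blog/ {
    # redirect: send the client an external 302 (temporary) redirect
    rewrite ^/old-blog/(.*)$ /blog/$1 redirect;
}

location /old-docs/ {
    # permanent: send the client an external 301 (permanent) redirect
    rewrite ^/old-docs/(.*)$ /docs/$1 permanent;
}
```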

Understanding Server Blocks in Nginx

Server blocks, also known as virtual hosts, are used in Nginx to define separate configurations for different domains or subdomains. Each server block can have its own configuration directives, including the listening port, server name, SSL settings, and location blocks.

Here's an example of a server block in Nginx:

server {
    listen 80;
    server_name example.com;

    location / {
        root /var/www/html;
        index index.html;
    }
}

In this example, we define a server block that listens on port 80 for requests to example.com. The location block specifies the document root directory and the default index file.

Nginx supports multiple server blocks, allowing you to host multiple websites or subdomains on the same server. Each server block can have its own configuration directives and can be customized to meet the specific requirements of the website or subdomain.
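When several server blocks share one server, it is common to add a catch-all block for requests whose Host header matches none of the configured names; a minimal sketch:

```nginx
server {
    # default_server handles requests that match no other server_name
    listen 80 default_server;
    server_name _;

    # 444 is an Nginx-specific code that closes the connection without a response
    return 444;
}
```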

How Upstream Servers Work in Nginx

Upstream servers in Nginx are used for load balancing and proxying requests to backend servers. An upstream server group is defined using the upstream directive, and it can include one or more servers.

Here's an example of defining an upstream server group in Nginx:

http {
    upstream backend {
        server backend1.example.com;
        server backend2.example.com;
        server backend3.example.com;
    }

    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://backend;
        }
    }
}

In this example, we define an upstream server group called "backend" that includes three backend servers. The server directive is used to specify the backend servers by their domain names or IP addresses.

When a request is received by Nginx, it selects an upstream server from the server group based on the configured load balancing algorithm. By default, Nginx uses a round-robin algorithm to distribute requests evenly among the backend servers.
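Other balancing methods and per-server parameters can be configured inside the upstream block. The weights and hostnames below are illustrative:

```nginx
upstream backend {
    # least_conn: send each request to the server with the fewest active connections
    least_conn;

    server backend1.example.com weight=3;  # receives roughly 3x the traffic
    server backend2.example.com;
    server backend3.example.com backup;    # used only when the other servers are unavailable
}
```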

Purpose of the proxy_pass Directive in Nginx

The proxy_pass directive in Nginx is used to forward client requests to a specified backend server. It is commonly used in reverse proxy configurations to proxy requests to backend web servers or application servers.

Here's an example of using the proxy_pass directive in Nginx:

http {
    upstream backend {
        server backend1.example.com;
        server backend2.example.com;
    }

    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://backend;
        }
    }
}

In this example, the proxy_pass directive is used in the location block to specify where client requests should be forwarded; the http://backend argument refers to the upstream server group of that name.

The proxy_pass directive can also point at a specific IP address and port, or at a UNIX domain socket, and it accepts variables so that the backend can be chosen dynamically based on request parameters or headers.
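Sketches of these two forms, with illustrative addresses and socket paths:

```nginx
location /app/ {
    # Proxy to a specific IP address and port
    proxy_pass http://127.0.0.1:8080;
}

location /socket/ {
    # Proxy to a UNIX domain socket (the socket path is illustrative)
    proxy_pass http://unix:/var/run/app.sock:/;
}
```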


Setting Custom Headers with the proxy_set_header Directive in Nginx

The proxy_set_header directive in Nginx is used to set custom headers in requests that are proxied to backend servers. It allows you to add, modify, or remove headers in the request before it is sent to the backend server.

Here's an example of using the proxy_set_header directive in Nginx:

http {
    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://backend;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}

In this example, the proxy_set_header directive is used to set the X-Forwarded-For and X-Real-IP headers in the request. The $proxy_add_x_forwarded_for and $remote_addr variables populate the values of these headers, respectively.

The proxy_set_header directive can be used to set any custom header in the request. It can also be used to remove headers by setting them to an empty value.
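For example, the Host header is commonly passed through, and a header can be stripped by assigning it an empty value (the backend upstream is assumed to be defined elsewhere):

```nginx
location / {
    proxy_pass http://backend;

    # Pass the original Host header through to the backend
    proxy_set_header Host $host;

    # Setting a header to an empty value removes it from the proxied request
    proxy_set_header Accept-Encoding "";
}
```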

Exploring Wildcard DNS and its Use with Nginx

Wildcard DNS is a feature that allows you to create a DNS record that matches all subdomains of a given domain. It is commonly used to simplify the configuration of web servers and reverse proxies like Nginx.

To use wildcard DNS with Nginx, create a single wildcard DNS record for the domain, using an asterisk (*) in the hostname field so that it resolves all subdomains to your server.

Here's an example of using wildcard DNS with Nginx:

http {
    server {
        listen 80;
        server_name *.example.com;

        location / {
            proxy_pass http://backend;
        }
    }
}

In this example, the server_name directive is set to *.example.com, which matches all subdomains of example.com. Any request to a subdomain of example.com will be handled by this server block and forwarded to the backend server specified in the proxy_pass directive.
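A regex server_name can go one step further and capture which subdomain was requested. The X-Subdomain header name below is an illustrative assumption:

```nginx
server {
    listen 80;
    # A regex server_name can capture the subdomain into a named variable
    server_name ~^(?<subdomain>.+)\.example\.com$;

    location / {
        proxy_pass http://backend;
        # Forward the captured subdomain to the backend (header name is illustrative)
        proxy_set_header X-Subdomain $subdomain;
    }
}
```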

