Reverse Proxy Feature of NGINX Server

Joseph Steven Yakubu
6 min read · Jan 17, 2022

What is NGINX?

NGINX, pronounced “Engine X”, was originally developed as a lightweight, high-performance, open-source web server to address the increasing concurrency demands of the modern web. The server focuses on high performance, high concurrency, and low resource usage, especially when serving static files. All of this is possible because it uses an asynchronous, non-blocking, event-driven connection-handling model.

NGINX is more than just a web server; at its core it is a reverse proxy server, which is the feature I will zero in on. It gained much of its popularity as a web server, however, because it solved the C10k problem, a limitation of web servers at the time: they could not reliably serve more than ten thousand (10k) clients concurrently.

NGINX came onto the scene by building on this understanding of concurrency problems. It met competitors already in the market, such as the Apache HTTP Server (httpd), which was more flexible; however, server administrators preferred NGINX because it could handle a higher number of concurrent requests and delivered static content at blazing speed with minimal resource usage.

How NGINX Handles Requests


The basic architecture of NGINX comprises a master process and worker processes. The master reads the configuration files, creates the listening sockets, and manages the worker processes, while the workers accept and process the client requests. Each worker runs a single thread that uses an event-driven architecture to handle thousands of requests asynchronously at a time.

This is made possible by a fast, non-blocking event loop based on the Reactor design pattern, which continuously checks for and processes events while using few resources (CPU and memory).

The reactor design pattern is an event handling pattern for handling service requests delivered concurrently to a service handler by one or more inputs. The service handler then demultiplexes the incoming requests and dispatches them synchronously to the associated request handlers.
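To make this concrete, here is a minimal, hypothetical nginx.conf sketch of the master/worker model described above; the numbers and paths are illustrative placeholders, not recommendations.

worker_processes auto;          # the master spawns one worker per CPU core

events {
    worker_connections 1024;    # each single-threaded worker multiplexes up to 1024 connections
    # use epoll;                # on Linux, NGINX picks an efficient event mechanism by itself
}

http {
    server {
        listen 80;
        root /var/www/html;     # static files are served without blocking on any single request
    }
}

With worker_processes set to auto, one worker starts per core, and each worker’s event loop watches all of its connections at once instead of dedicating a thread to every request.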


What is a Reverse Proxy?


A reverse proxy is a server that accepts client requests for data, forwards those requests to backend servers, and returns the results to the client as if it were the proxy itself that had processed them. The client communicates directly only with the reverse proxy; it’s unaware of the backend server that actually processes the request.
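In NGINX terms, this boils down to a proxy_pass directive. The snippet below is a minimal sketch; the domain name and the backend address are hypothetical.

server {
    listen 80;
    server_name example.com;                         # hypothetical domain

    location / {
        proxy_pass http://127.0.0.1:3000;            # hypothetical backend application
        proxy_set_header Host $host;                 # preserve the original Host header
        proxy_set_header X-Real-IP $remote_addr;     # pass the real client address to the backend
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

The client only ever talks to port 80 on the proxy; the backend on port 3000 never appears in the response.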

Besides the above use case, reverse proxies also grant many other benefits. The sections below discuss some of their major advantages.

Load Balancing

There is a limit to the amount of incoming client traffic a single server can handle, especially when the number of requests runs into millions per day. In cases like these, the best approach is to distribute client requests across multiple upstream servers: either evenly across servers hosting the same content, which improves performance and redundancy and removes a single point of failure, or to whichever server is optimized for the specific function being requested.

A reverse proxy receives incoming traffic before it reaches the intended server and uses an algorithm to distribute it among a pool of servers, preventing any one server from being overloaded or crashing, which would degrade site functionality or cause downtime. The reverse proxy then gathers the responses from the pool of servers and delivers them to the respective clients. Because this is their most common use, reverse proxies are often referred to as load balancers.
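A minimal, hypothetical sketch of this in NGINX uses an upstream block; the pool name, addresses, and balancing method below are placeholders.

upstream app_servers {                   # hypothetical pool of identical backend servers
    least_conn;                          # send each request to the server with the fewest active connections
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
    server 10.0.0.13:8080 backup;        # only used when the other servers are unavailable
}

server {
    listen 80;

    location / {
        proxy_pass http://app_servers;   # NGINX spreads the requests across the pool
    }
}

Without least_conn, NGINX defaults to round-robin; other methods such as ip_hash are available depending on how sticky the sessions need to be.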

Global Server Load Balancing (GSLB)

GSLB is an advanced load-balancing method that distributes website traffic among many servers placed strategically around the world. It is made possible by Anycast DNS, a traffic-routing technique used for speedy delivery of website content that advertises the same IP addresses from multiple nodes; the reverse proxy infrastructure then picks the server node based on metrics such as the fastest travel time between client and server.

GSLB not only improves a site’s reliability and security, it also reduces the time it takes to serve website content to the client by using the shortest path, thereby enhancing the user experience. GSLB can be set up manually on your own servers, but this feature is usually taken care of by dedicated Content Delivery Networks (CDNs).

Enhanced Security

An NGINX reverse proxy can be used to hide or conceal the IP addresses and other details of the backend servers where the website content is hosted. Since the client takes the reverse proxy to be the web server, the backend servers can maintain their anonymity, which increases security significantly: the reverse proxy serves as a single point of entry to the servers behind it.

Since the reverse proxy receives every request, attackers and hackers will find it hard to target the backend servers with threats like DDoS (distributed denial of service) attacks, in which a malicious attempt is made to render an online service or resource unavailable.

A reverse proxy also makes it easier to remove malware or handle takedowns, since it is the proxy that gets attacked or infected in place of the backend servers. Firewalls can also be used to tighten security further.
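Two common hardening touches at the proxy layer are shown in the hypothetical sketch below: hiding backend details from responses and rate-limiting clients. The zone name, rates, and backend address are placeholders.

limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;   # in the http {} block: at most 10 req/s per client IP

server {
    listen 80;
    server_tokens off;                          # don't reveal the NGINX version in responses

    location / {
        limit_req zone=perip burst=20;          # absorb short bursts, reject sustained floods
        proxy_hide_header X-Powered-By;         # strip headers that expose the backend technology
        proxy_pass http://127.0.0.1:3000;       # hypothetical backend
    }
}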

Powerful Caching

An NGINX reverse proxy can be used for web acceleration through caching, which means temporarily storing both static and dynamic content. This can reduce the time taken to load web content, since client requests are served directly by the reverse proxy instead of waiting for a round trip to the backend server. That invariably delivers better performance, resulting in a faster website experience, and also reduces the workload on the backend servers.

For instance, if the website is hosted in the USA and a client makes a request from Africa, the cached website can be served by a reverse proxy in Africa instead of making a round trip to the web server in the USA. This makes the website load faster. NGINX’s FastCGI cache is a good example of this.
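A minimal, hypothetical proxy-cache sketch looks like the following; the cache path, zone name, sizes, and expiry times are placeholders.

proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=static_cache:10m
                 max_size=1g inactive=60m;                # in the http {} block

server {
    listen 80;

    location / {
        proxy_cache static_cache;
        proxy_cache_valid 200 302 10m;                    # keep successful responses for 10 minutes
        proxy_cache_valid 404 1m;                         # cache "not found" only briefly
        add_header X-Cache-Status $upstream_cache_status; # HIT or MISS, handy for debugging
        proxy_pass http://127.0.0.1:3000;                 # hypothetical backend
    }
}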

Superior Compression

An NGINX reverse proxy can compress the responses to client requests for both static and dynamic web content. Besides the benefit of caching discussed earlier, compressing responses further speeds up the load time for web content and reduces the amount of bandwidth required to move the response to the client.
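Gzip compression is switched on with a handful of directives, typically in the http {} block; the values in this sketch are illustrative, not tuned recommendations.

gzip              on;
gzip_comp_level   5;                     # trade a little CPU for noticeably smaller responses
gzip_min_length   256;                   # don't bother compressing tiny responses
gzip_proxied      any;                   # also compress responses relayed from backend servers
gzip_types        text/css application/javascript application/json image/svg+xml;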

Optimized SSL Encryption

Encrypting and decrypting SSL/TLS requests for each client can be highly taxing for the origin server. A reverse proxy can take up this task to free up the origin server’s resources for other important tasks, like serving content.

Another advantage of offloading SSL/TLS encryption and decryption is that it reduces latency for clients that are geographically distant from the origin server.
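Terminating TLS at the proxy looks roughly like the sketch below; the domain, certificate paths, and backend address are hypothetical.

server {
    listen 443 ssl;
    server_name example.com;                            # hypothetical domain

    ssl_certificate     /etc/nginx/certs/example.com.crt;
    ssl_certificate_key /etc/nginx/certs/example.com.key;

    location / {
        # the hop to the backend stays on the internal network as plain HTTP,
        # so the origin server never pays the encryption/decryption cost
        proxy_pass http://10.0.0.11:8080;
        proxy_set_header X-Forwarded-Proto https;       # tell the backend the client used HTTPS
    }
}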

Monitoring and Logging Traffic

A reverse proxy sees every request that goes through it, so you can use it as a central hub to monitor and log traffic. Even if you use multiple web servers to host your website’s components, putting a reverse proxy in front makes it easier to monitor all the incoming and outgoing data for your site.
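Here is a hypothetical logging sketch that records which upstream handled each request and how long it took; the log format name, file path, and upstream name are placeholders.

log_format upstream_log '$remote_addr - [$time_local] "$request" '
                        '$status to=$upstream_addr took=$upstream_response_time';   # in the http {} block

server {
    listen 80;
    access_log /var/log/nginx/proxy_access.log upstream_log;   # one central log for every backend

    location / {
        proxy_pass http://app_servers;   # hypothetical upstream pool
    }
}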

Thanks for taking the time to read through my article.

I crave your indulgence to leave a comment with suggestions or additions on how this article could be better. I am open to learning, and I also welcome constructive criticism.

I hope it was a great read. Thank you in advance for leaving claps 👏 to show your support and appreciation for my effort.
