NGINX Config Rewind: Serverion Revives the Lost Art of Proxy Cache Tuning

Want faster websites and lower server loads? NGINX proxy caching is your solution. By storing frequently requested content, it speeds up delivery and reduces strain on your origin servers. Serverion shares practical tips to optimize your cache setup for better performance and reliability.

Key Takeaways:

  • Serve stale content: Use cached responses during server downtime with proxy_cache_use_stale.
  • Background updates: Refresh cache entries without disrupting users using proxy_cache_background_update.
  • Prevent overloads: Avoid overwhelming your origin server with proxy_cache_lock.

Example Setup:

proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m
                 max_size=10g inactive=60m use_temp_path=off;
proxy_cache my_cache;
proxy_cache_use_stale updating;
proxy_cache_background_update on;
proxy_cache_lock on;

These settings ensure fast responses, efficient resource use, and reliable content delivery. Whether you’re running a small VPS or a high-traffic server, these techniques can help you get the most out of NGINX proxy caching.

NGINX Proxy Caching Fundamentals

Serverion’s cache-tuning techniques rely on core principles of NGINX proxy caching, which involves storing and serving copies of origin content. The system uses three main components: the cache path, a shared memory zone, and a cache manager that removes expired or least-recently-used (LRU) files when the cache reaches its limit.

NGINX Proxy Cache Operation

When NGINX processes a request, it first checks its shared memory zone to see if the requested content is already cached. This in-memory lookup allows for quick determination of cache hits or misses. For reference, a 1 MB keys zone can store approximately 8,000 cache keys[1].

Here’s how the caching process works:

  • NGINX hashes the request to create a unique cache key.
  • It checks the shared memory zone for that key.
  • If the key is found (cache hit), the content is served directly from the cache.
  • If the key is not found (cache miss), the content is fetched from the origin server and stored in the cache for future use.

Serverion optimizes performance by ensuring efficient key lookups and organizing the cache storage using directory hierarchies.
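As a concrete illustration of the hashing step, the NGINX documentation's example hash shows how a request maps to an on-disk file under a levels=1:2 hierarchy (the path and zone name below are illustrative):

```nginx
# With levels=1:2, NGINX takes the MD5 of the cache key and builds the
# directory path from its trailing characters. For the documented example
# hash b7f54b2df7773722d382f4809d650b6e, the cached file lands at:
#   /var/cache/nginx/e/b6/b7f54b2df7773722d382f4809d650b6e
# (last character "e" = first-level directory, preceding "b6" = second level)
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=demo_cache:10m;
```

Splitting files across subdirectories this way keeps any single directory from accumulating enough entries to slow down filesystem lookups.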

Core Cache Elements

  • proxy_cache_path: specifies the cache storage location; determines where and how content is cached on disk.
  • proxy_cache: activates caching for specific requests; enables caching within a server or location block.
  • keys_zone: allocates shared memory for cache keys; allows fast in-memory hit/miss lookups.
  • inactive: defines how long unused items stay in the cache; controls freshness and eviction timing.

To maximize performance, use a two-level directory hierarchy (levels=1:2) to prevent the filesystem slowdowns caused by storing too many files in a single directory. Additionally, set use_temp_path=off so cached files are written directly to their final location, avoiding an extra copy step and its I/O overhead.

By default, NGINX respects cache directives from the origin server: it only stores responses that include an Expires header with a future date or a Cache-Control header with a max-age value greater than zero. This behavior can be overridden with proxy_cache_valid or proxy_ignore_headers.
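If the origin omits those headers, nothing is cached unless you supply fallback lifetimes with proxy_cache_valid; the durations below are illustrative:

```nginx
# Fallback lifetimes, applied when the origin response carries no
# Expires or Cache-Control header (origin headers take precedence).
proxy_cache_valid 200 302 10m;   # cache successful responses for 10 minutes
proxy_cache_valid 404      1m;   # cache "not found" briefly to absorb retries
```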

You can now apply these principles in your NGINX proxy cache setup.

[1] NGINX documentation: A 1 MB keys zone stores data for about 8,000 keys.

NGINX Proxy Cache Setup Guide

Learn how to configure and optimize NGINX proxy caching step by step.

Cache Parameter Settings

The foundation of NGINX proxy cache setup is the proxy_cache_path directive. Here’s an example configuration:

proxy_cache_path /var/cache/nginx
                 levels=1:2
                 keys_zone=my_cache:10m
                 max_size=10g
                 inactive=60m
                 use_temp_path=off;

This configuration creates a two-level directory structure, allocates 10 MB for the keys_zone (enough for approximately 80,000 keys), sets a maximum cache size of 10 GB, and defines an inactive timeout of 60 minutes.

You can also include these optional directives for better control:

  • proxy_cache_use_stale: serves stale content if origin servers are unavailable or slow.
  • proxy_cache_revalidate: uses conditional GET requests (If-Modified-Since) to check whether expired content is still valid.
  • proxy_cache_background_update: refreshes stale entries in the background while the stale copy is served.
  • proxy_cache_lock: allows only one request at a time to populate a missing cache entry, preventing a stampede on the origin server.
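Taken together, these directives typically live in a single location block; here is a minimal sketch (the upstream name backend is illustrative):

```nginx
location / {
    proxy_pass http://backend;                      # illustrative upstream name
    proxy_cache my_cache;
    proxy_cache_use_stale error timeout updating;   # serve stale on origin trouble
    proxy_cache_revalidate on;                      # conditional GETs for expired items
    proxy_cache_background_update on;               # refresh stale entries off the request path
    proxy_cache_lock on;                            # one fill request per cache key
}
```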

After defining these parameters, allocate memory and disk space based on your expected traffic.

Cache Size Management

To effectively size your cache, consider both memory and disk usage. Here’s how:

  • Memory Zone Sizing Size the keys_zone parameter of proxy_cache_path to match the number of objects you expect to cache. At roughly 8,000 keys per MB, 100 MB covers about 800,000 entries:
    keys_zone=enterprise_cache:100m  # a parameter of proxy_cache_path, not a standalone directive
  • Disk Space Allocation Cap the total on-disk cache size with max_size in proxy_cache_path:
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=enterprise_cache:100m max_size=10g inactive=24h use_temp_path=off; 

Once these parameters are set, you’re ready to initialize and enable your cache.

Cache Initialization

After fine-tuning your parameters and sizing, follow these steps to activate caching:

  1. Use the proxy_cache_path directive from the example above and add proxy_cache my_cache to your configuration.
  2. Enable caching within the relevant server or location block:
    proxy_cache my_cache; 
  3. Optionally, include any of the fine-tuning directives mentioned earlier to enhance performance.
  4. Monitor the cache status by adding a custom header:
    add_header X-Cache-Status $upstream_cache_status; 
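The steps above can be combined into one minimal sketch; the upstream name backend and the listen port are illustrative:

```nginx
http {
    # Step 1: define the cache storage and shared memory zone.
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m
                     max_size=10g inactive=60m use_temp_path=off;

    server {
        listen 80;

        location / {
            proxy_pass http://backend;      # illustrative upstream
            # Step 2: enable caching in this location block.
            proxy_cache my_cache;
            # Step 4: expose HIT/MISS/etc. for monitoring.
            add_header X-Cache-Status $upstream_cache_status;
        }
    }
}
```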

This setup ensures your cache is ready to handle traffic efficiently while maintaining flexibility for adjustments.

Enterprise NGINX Cache Management

Once your cache path and parameters are set, it’s time to scale your setup to handle enterprise-level traffic effectively.

Cache Hit Rate Optimization

To improve cache efficiency, enable features like conditional requests and background updates:

proxy_cache_revalidate on;
proxy_cache_background_update on;
proxy_cache_use_stale updating;

Prevent overwhelming your origin server by configuring these settings:

proxy_cache_lock on;
proxy_cache_lock_timeout 5s;
proxy_cache_min_uses 2;

For high-traffic environments, distribute the cache load across multiple storage devices to enhance performance:

# Each zone below must be defined by its own proxy_cache_path entry
# pointing at a different device, e.g.:
#   proxy_cache_path /data/cache1 levels=1:2 keys_zone=cache1:10m;
split_clients "${request_uri}" $disk {
    20%  "cache1";
    20%  "cache2";
    20%  "cache3";
    20%  "cache4";
    *    "cache5";
}
proxy_cache $disk;  # select the zone chosen by split_clients

Once your cache is optimized for performance, focus on securing it to handle sensitive content.

Cache Security Controls

To keep sensitive or per-user responses out of the shared cache, bypass caching for flagged requests. Note that proxy_ignore_headers Cache-Control makes NGINX disregard the origin's caching headers entirely, so use it deliberately:

proxy_cache_bypass $http_pragma;
proxy_cache_bypass $cookie_nocache;
proxy_ignore_headers Cache-Control;
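A common variant, sketched here with an assumed session cookie name of sessionid, keeps authenticated traffic out of the shared cache entirely:

```nginx
# Requests carrying an Authorization header or a session cookie are
# neither served from the cache (proxy_cache_bypass) nor written to it
# (proxy_no_cache). The cookie name "sessionid" is illustrative.
proxy_cache_bypass $http_authorization $cookie_sessionid;
proxy_no_cache     $http_authorization $cookie_sessionid;
```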

For personalized content or cookie-based requests, adjust the cache key and supported methods:

proxy_cache_key     "$host$request_uri$cookie_user";  # vary cache entries per user
proxy_cache_methods GET HEAD POST;                    # caching POST is unusual; confirm responses are shareable

After securing your cache, ensure you’re continuously monitoring its performance.

Cache Performance Tracking

Monitor cache behavior using status definitions to fine-tune your setup:

  • MISS: the response was not in the cache and was fetched from the origin.
  • HIT: the response was served from the cache.
  • EXPIRED: the cached entry had expired, so a fresh response was fetched from the origin.
  • STALE: a stale entry was served because the origin could not be reached (proxy_cache_use_stale).
  • UPDATING: stale content served while an update is in progress.
  • REVALIDATED: cached content was revalidated with the origin server (proxy_cache_revalidate).
  • BYPASS: the cache was bypassed for this request (proxy_cache_bypass).

Analyze the X-Cache-Status metrics regularly and adjust directives to align with traffic patterns for optimal results.
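One way to collect those metrics is to log $upstream_cache_status on every request; the log format and file names below are illustrative:

```nginx
# Record the cache status alongside each request so hit ratios can be
# computed from the access log.
log_format cache_log '$remote_addr "$request" $upstream_cache_status';
access_log /var/log/nginx/cache.log cache_log;
```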

Serverion's NGINX Cache Configuration

Serverion customizes NGINX cache settings based on the specific needs of each workload. By using core directives, they optimize cache configurations differently for VPS and dedicated servers.

Cache Paths by Workload

VPS Workloads

For VPS setups, this configuration strikes a balance between memory efficiency and fast response times:

proxy_cache_path /data/nginx/cache levels=1:2
                 keys_zone=SERVERCACHE:10m max_size=10g
                 inactive=60m use_temp_path=off;
proxy_cache_key "$scheme$request_method$host$request_uri";
proxy_cache_valid 200 302 60m;
proxy_cache_valid 404 1m;

The keys_zone size is set to accommodate approximately 80,000 keys.

Dedicated Servers

For high-traffic applications on dedicated servers, Serverion uses a distributed caching system across multiple SSDs:

proxy_cache_path /cache1 levels=1:2 keys_zone=cache1:10m;
proxy_cache_path /cache2 levels=1:2 keys_zone=cache2:10m;
proxy_cache_path /cache3 levels=1:2 keys_zone=cache3:10m;

split_clients "${request_uri}" $cachezone {
    33%  "cache1";
    33%  "cache2";
    *    "cache3";
}

proxy_cache $cachezone;  # use the zone chosen by split_clients

This setup distributes cache writes evenly across three SSDs using the split_clients directive.

Specific values for these configurations are derived from Serverion’s Cache Parameter Reference Table.

Infrastructure Settings

To further enhance performance, NGINX worker settings are adjusted to efficiently handle cache input and output:

worker_processes auto;       # one worker per CPU core
worker_cpu_affinity auto;    # pin each worker to its own core

events {
    worker_connections 1024; # per-worker connection limit; belongs in the events block
}

These adjustments ensure that cached responses are delivered with maximum efficiency.

Summary: NGINX Cache Tuning Results

Serverion improved performance and reliability across its hosting systems through detailed proxy cache adjustments. By refining the cache hierarchy, managing freshness settings, and optimizing header processing, they maintained seamless content delivery. Real-time X-Cache-Status metrics enabled IT teams to tune cache settings effectively, leading to quicker response times, less strain on origin servers, and better availability for enterprise operations.
