Architecture – Squid handling of concurrent cache misses

We use a Squid cache to offload traffic from our web server; that is, it is set up as a reverse proxy that responds to inbound requests before they reach our web server.


When we are hit with a burst of concurrent requests for the same resource that is not in the cache, Squid proxies all of them to our web ("origin") server. For us, this behavior is not ideal: our origin server struggles to satisfy N identical requests at the same time.

Instead, we want the first request to be proxied to the origin server, the remaining requests to be queued at the Squid layer, and then, when the origin server responds to the first request, Squid to satisfy all of them from that single response.

Does anyone know how to configure Squid to do this?

We have read the documentation many times and searched the web for this topic, but can't figure out how to do it.

We also use Akamai, and interestingly this is its default behavior. (However, Akamai has so many edge nodes that even with Akamai's super-node feature enabled, we still see a large number of concurrent requests to the origin during certain traffic peaks.)

For some other caches this behavior is explicitly configurable. For example, the Ehcache documentation offers the option: "Concurrent cache misses: a cache miss will cause the filter chain, upstream of the caching filter, to be processed. To avoid threads requesting the same key performing useless duplicate work, those threads block behind the first thread."

Some people call this behavior a "blocking cache," because subsequent concurrent requests block on the first request until it completes or times out.
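To make the "blocking cache" idea concrete, here is a minimal, hypothetical sketch in Python (not Squid's actual implementation): a per-key lock ensures only the first thread performs the expensive fetch, while concurrent threads for the same key block and then reuse its result.

```python
import threading

class BlockingCache:
    """Sketch of a 'blocking cache': concurrent requests for the same
    key wait for the first requester to fill the entry, so the backend
    (the 'origin') is hit only once per key."""

    def __init__(self, loader):
        self._loader = loader           # key -> value; stands in for the origin fetch
        self._values = {}               # completed cache entries
        self._locks = {}                # one in-flight lock per key
        self._guard = threading.Lock()  # protects the two dicts

    def get(self, key):
        with self._guard:
            if key in self._values:
                return self._values[key]        # cache hit, no blocking
            lock = self._locks.setdefault(key, threading.Lock())
        with lock:                              # later threads block here
            with self._guard:
                if key in self._values:         # filled while we were waiting
                    return self._values[key]
            value = self._loader(key)           # only the first thread gets here
            with self._guard:
                self._values[key] = value
                self._locks.pop(key, None)      # in-flight marker no longer needed
            return value
```

Squid's collapsed forwarding achieves the same effect for HTTP requests at the proxy layer rather than in application threads.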


Thanks for taking a look at my newbie question!

Oliver

You are looking for collapsed forwarding:
http://www.squid-cache.org/Versions/v2/2.7/cfgman/collapsed_forwarding.html

It is available in Squid 2.6 and 2.7, but not yet in 3.x.
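For reference, enabling it in squid.conf is a single directive. A minimal reverse-proxy sketch might look like this; the site name and origin address are placeholders, not real values:

```
# Collapse concurrent misses for the same URL into a single origin fetch
collapsed_forwarding on

# Illustrative reverse-proxy ("accelerator") setup with a placeholder origin
http_port 80 accel defaultsite=www.example.com
cache_peer 192.0.2.10 parent 80 0 no-query originserver
```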

When something exists in the cache but is stale, you may also be interested in stale-while-revalidate:
http://www.mnot.net/blog/2007/12/12/stale


