Stale cache control directives with TeslaHTTPCache
The Elixir HTTP caching libraries have been updated to version 0.4.0 with support for the “stale”
cache-control directives.
These two directives were added by RFC 5861 (HTTP Cache-Control Extensions for Stale Content).
They allow serving stale content in particular cases. Content is considered stale when its
freshness lifetime has expired. For instance, with the max-age
response header:
cache-control: max-age=60
this page will be considered stale after 1 minute and, by default, no longer served by caches.
However, stale pages remain in the cache for what is called a “grace” period.
http_cache, the base Erlang HTTP caching library used by higher-level libraries, instructs
stores to keep cached responses an additional 2 minutes after their expiration. Stores may, or
may not, sweep them sooner.
These stale pages can actually be reused thanks to two additional cache-control directives:
- stale-if-error
- stale-while-revalidate
Let’s explore how to use them with
TeslaHTTPCache
.
For this article, we’ll use the following minimal Phoenix server:
server.exs
Application.put_env(:phoenix, :json_library, Jason)

Application.put_env(:sample, SamplePhoenix.Endpoint, [
  http: [ip: {127, 0, 0, 1}, port: 8080],
  server: true,
  secret_key_base: String.duplicate("a", 64)
])

Mix.install([
  {:plug_cowboy, "~> 2.5"},
  {:jason, "~> 1.0"},
  {:phoenix, "~> 1.6"}
])

defmodule SamplePhoenix.SampleController do
  use Phoenix.Controller

  def index(conn, _) do
    conn
    |> put_resp_header("cache-control", <CACHE-CONTROL-DIRECTIVES>)
    |> send_resp(200, "Here some good cacheable content!")
  end
end

defmodule Router do
  use Phoenix.Router

  pipeline :browser do
    plug :accepts, ["html"]
  end

  scope "/", SamplePhoenix do
    pipe_through :browser

    get "/", SampleController, :index
  end
end

defmodule SamplePhoenix.Endpoint do
  use Phoenix.Endpoint, otp_app: :sample

  plug Router
end

{:ok, _} = Supervisor.start_link([SamplePhoenix.Endpoint], strategy: :one_for_one)

Process.sleep(:infinity)
The stale-if-error
cache-control directive
The stale-if-error=<seconds>
directive indicates that a stale page can be served if the origin
returns an error (or simply doesn’t respond). The RFC says:
In this context, an error is any situation that would result in a 500, 502, 503, or 504 HTTP response status code being returned.
TeslaHTTPCache
looks for a stale response whenever:
- the origin returns such an error code
- the origin is unreachable (or any problem occurs at the TCP/TLS/SSL level)
Note that the stale-if-error
directive can be used by both the client and the server: it is
both a request and a response directive.
Let’s set up our server with:
def index(conn, _) do
  conn
  |> put_resp_header("cache-control", "max-age=0, stale-if-error=600")
  |> send_resp(200, "Here some good cacheable content!")
end
max-age=0
means we actually don’t want cached content to be served for nominal requests. We could of
course instruct the cache to serve it for a few minutes, but in this case we want the server to
always serve fresh content, except when there is a problem.
Let’s try to request our server with Tesla:
iex> [{TeslaHTTPCache, %{store: :http_cache_store_process}}] \
...> |> Tesla.client() \
...> |> Tesla.get("http://localhost:8080")
{:ok,
%Tesla.Env{
method: :get,
url: "http://localhost:8080",
query: [],
headers: [
{"cache-control", "max-age=0, stale-if-error=600"},
{"date", "Sat, 17 May 2025 16:05:32 GMT"},
{"server", "Cowboy"},
{"content-length", "33"}
],
body: "Here some good cacheable content!",
status: 200,
...
}}
iex> [{TeslaHTTPCache, %{store: :http_cache_store_process}}] \
...> |> Tesla.client() \
...> |> Tesla.get("http://localhost:8080")
{:ok,
%Tesla.Env{
method: :get,
url: "http://localhost:8080",
query: [],
headers: [
{"cache-control", "max-age=0, stale-if-error=600"},
{"date", "Sat, 17 May 2025 16:05:33 GMT"},
{"server", "Cowboy"},
{"content-length", "33"}
],
body: "Here some good cacheable content!",
status: 200,
...
}}
No age
header in sight - it means no cached content was served. Now let’s stop our server and
observe the result:
iex> [{TeslaHTTPCache, %{store: :http_cache_store_process}}] \
...> |> Tesla.client() \
...> |> Tesla.get("http://localhost:8080")
{:ok,
%Tesla.Env{
method: :get,
url: "http://localhost:8080",
query: [],
headers: [
{"cache-control", "max-age=0, stale-if-error=600"},
{"date", "Sat, 17 May 2025 16:05:33 GMT"},
{"server", "Cowboy"},
{"content-length", "33"},
{"age", "63"}
],
body: "Here some good cacheable content!",
status: 200,
...
}}
Cached content was served, as shown by the {"age", "63"}
header. Without the stale-if-error
directive, we would have instead received the following response:
iex> [{TeslaHTTPCache, %{store: :http_cache_store_process}}] \
...> |> Tesla.client() \
...> |> Tesla.get("http://localhost:8080")
{:error, :econnrefused}
We could have used a request header instead, especially if the server doesn’t add its own
stale-if-error
directive:
iex> [{TeslaHTTPCache, %{store: :http_cache_store_process}}] \
...> |> Tesla.client() \
...> |> Tesla.get("http://localhost:8080")
{:error, :econnrefused}
iex> [{TeslaHTTPCache, %{store: :http_cache_store_process}}] \
...> |> Tesla.client() \
...> |> Tesla.get("http://localhost:8080", headers: [{"cache-control", "stale-if-error=600"}])
{:ok,
%Tesla.Env{
method: :get,
url: "http://localhost:8080",
query: [],
headers: [
{"cache-control", "max-age=0"},
{"date", "Sat, 17 May 2025 17:15:12 GMT"},
{"server", "Cowboy"},
{"content-length", "33"},
{"age", "18"}
],
body: "Here some good cacheable content!",
status: 200,
...
}}
Note, however, that when using the request header, the caching system has no clue how long to
keep stale responses. Use http_cache’s
default_grace
option to fine-tune the retention time if needed:
[{TeslaHTTPCache, %{store: :http_cache_store_process, default_grace: 3600}}] |> Tesla.client() |> ...
This cache-control directive is ideal for protecting your app from failures of third-party APIs, as long as their content is cacheable.
The stale-while-revalidate
cache-control directive
The stale-while-revalidate
directive is a response directive only. It instructs the cache that it
can serve stale content, but only if it asynchronously reloads the requested page in the background.
Let’s configure our server’s cache-control header as follows: max-age=10, stale-while-revalidate=600
.
It means the page is considered fresh for 10 seconds; after that, a request triggers a refresh of the page in the background, although the client making that request receives the expired version.
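As in the stale-if-error example earlier, we update the controller action in our server script accordingly:

```elixir
def index(conn, _) do
  conn
  # The page is fresh for 10 seconds; a stale copy may then be served for up
  # to 10 minutes, provided the cache revalidates it in the background
  |> put_resp_header("cache-control", "max-age=10, stale-while-revalidate=600")
  |> send_resp(200, "Here some good cacheable content!")
end
```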
Let’s request the page twice within 10 seconds:
iex> [{TeslaHTTPCache, %{store: :http_cache_store_memory}}] \
...> |> Tesla.client() \
...> |> Tesla.get("http://localhost:8080")
{:ok,
%Tesla.Env{
method: :get,
url: "http://localhost:8080",
query: [],
headers: [
{"cache-control", "max-age=10, stale-while-revalidate=600"},
{"date", "Sat, 17 May 2025 17:38:50 GMT"},
{"server", "Cowboy"},
{"content-length", "33"}
],
body: "Here some good cacheable content!",
status: 200,
...
}}
iex> [{TeslaHTTPCache, %{store: :http_cache_store_memory}}] \
...> |> Tesla.client() \
...> |> Tesla.get("http://localhost:8080")
{:ok,
%Tesla.Env{
method: :get,
url: "http://localhost:8080",
query: [],
headers: [
{"cache-control", "max-age=10, stale-while-revalidate=600"},
{"date", "Sat, 17 May 2025 17:38:50 GMT"},
{"server", "Cowboy"},
{"content-length", "33"},
{"age", "7"}
],
body: "Here some good cacheable content!",
status: 200,
...
}}
The second request has an age
header - cached content was returned.
Now let’s make two more requests after the 10-second lifetime:
iex> [{TeslaHTTPCache, %{store: :http_cache_store_memory}}] \
...> |> Tesla.client() \
...> |> Tesla.get("http://localhost:8080")
{:ok,
%Tesla.Env{
method: :get,
url: "http://localhost:8080",
query: [],
headers: [
{"cache-control", "max-age=10, stale-while-revalidate=600"},
{"date", "Sat, 17 May 2025 17:38:50 GMT"},
{"server", "Cowboy"},
{"content-length", "33"},
{"age", "20"}
],
body: "Here some good cacheable content!",
status: 200,
...
}}
iex> [{TeslaHTTPCache, %{store: :http_cache_store_memory}}] \
...> |> Tesla.client() \
...> |> Tesla.get("http://localhost:8080")
{:ok,
%Tesla.Env{
method: :get,
url: "http://localhost:8080",
query: [],
headers: [
{"cache-control", "max-age=10, stale-while-revalidate=600"},
{"date", "Sat, 17 May 2025 17:39:09 GMT"},
{"server", "Cowboy"},
{"content-length", "33"},
{"age", "4"}
],
body: "Here some good cacheable content!",
status: 200,
...
}}
The age
header shows that stale-while-revalidate
was effectively used:
- the first request received the stale response (20 seconds old) and triggered a refresh in the background
- the second request received the fresh response that had just been fetched asynchronously
Don’t hesitate to set large values for stale-while-revalidate
if that suits your needs
(days or weeks!).
There are (at least) two use-cases for this directive:
- when, for SEO reasons, you need to answer as fast as possible, even if the content is slightly outdated
- when you need your cache to always be hot: in this case you can warm up your cache on startup and
then rely on
stale-while-revalidate
to always have an entry in the cache for a given resource, letting your users trigger the refresh mechanism for you
Since stale-while-revalidate
is a response directive, you might need to set it manually if the
target server doesn’t return it. You can simply write a Tesla middleware that looks like this:
defmodule MyApp.Utils.TeslaMiddleware.SetStaleWhileRevalidate do
  @behaviour Tesla.Middleware

  @impl true
  def call(env, next, _opts) do
    case Tesla.run(env, next) do
      {:ok, %Tesla.Env{status: status} = env} when status in 200..299 ->
        {:ok, Tesla.put_header(env, "cache-control", "max-age=600, stale-while-revalidate=3600")}

      other ->
        other
    end
  end
end
and then add it to your middlewares:
defp client() do
  Tesla.client([
    Tesla.Middleware.Logger,
    {Tesla.Middleware.BaseUrl, "https://my-api.example.com"},
    Tesla.Middleware.JSON,
    {TeslaHTTPCache, %{store: :http_cache_store_memory}},
    MyApp.Utils.TeslaMiddleware.SetStaleWhileRevalidate
  ])
end
PlugHTTPCache
support and PlugLoopback
It turns out PlugHTTPCache
was updated as well to support
the stale-while-revalidate
directive. It’s enabled by default, and all you have to do is set
the stale-while-revalidate
directive in your controllers.
And… that’s it.
Unless there are some more goodies?
Indeed.
In order to implement stale-while-revalidate
, we need to asynchronously launch a new, almost
identical request to the same endpoint. The only difference is that we disallow returning a stale
response (using the max-stale=0
cache-control directive).
This is why the PlugLoopback
library was created. It helps with creating new %Plug.Conn{}
structs either:
- based on existing ones
- from a Phoenix endpoint
Copying an existing conn
can be performed with the
PlugLoopback.replay/1
function.
PlugHTTPCache
uses it to support stale-while-revalidate
:
Task.start(fn ->
  conn
  |> PlugLoopback.replay()
  |> Plug.Conn.update_req_header("cache-control", "max-stale=0", &(&1 <> ", max-stale=0"))
  |> add_validator(cached_headers, "last-modified", "if-modified-since")
  |> add_validator(cached_headers, "etag", "if-none-match")
  |> PlugLoopback.run()
end)
It’s also possible to create a new conn from an existing endpoint, using
PlugLoopback.from_phoenix_endpoint/1
. This might be useful for debugging your Phoenix endpoints in the shell.
Let’s run it on plug_http_cache_demo
:
iex> PlugHTTPCacheDemoWeb.Endpoint
...> |> PlugLoopback.from_phoenix_endpoint()
...> |> PlugLoopback.get("/fibo?number=42")
...> |> PlugLoopback.run()
%Plug.Conn{
adapter: {PlugLoopback.Adapter, :...},
assigns: %{},
body_params: %{},
cookies: %{
"_plug_http_cache_demo_key" => "SFMyNTY.g3QAAAABbQAAAAdhYi10ZXN0bQAAAAFh.u9bHKaB74qw8pyCyCX_T_urMcxmOfpYNhwaG5Hm8uIo"
},
halted: true,
host: "localhost",
method: "GET",
owner: #PID<0.469.0>,
params: %{"number" => "42"},
path_info: ["fibo"],
path_params: %{},
port: 4000,
private: %{
:phoenix_endpoint => PlugHTTPCacheDemoWeb.Endpoint,
:plug_session_info => :write,
:plug_session_fetch => :done,
:plug_session => %{"ab-test" => "a"},
:before_send => [#Function<0.98198498/1 in Plug.CSRFProtection.call/2>,
#Function<2.127108782/1 in Phoenix.Controller.fetch_flash/2>,
#Function<0.9035112/1 in Plug.Session.before_send/2>,
#Function<0.106864063/1 in Plug.Telemetry.call/2>,
#Function<1.39858237/1 in Phoenix.LiveReloader.before_send_inject_reloader/3>],
PlugHTTPCacheDemoWeb.Router => {[], %{}},
:phoenix_router => PlugHTTPCacheDemoWeb.Router,
:phoenix_flash => %{},
:phoenix_format => "html",
:phoenix_root_layout => {PlugHTTPCacheDemoWeb.LayoutView, :root}
},
query_params: %{"number" => "42"},
query_string: "number=42",
remote_ip: {127, 0, 0, 1},
req_cookies: %{},
req_headers: [{"ab-test", "a"}],
request_path: "/fibo",
resp_body: "<!DOCTYPE html>\n<html lang=\"en\">\n <head>\n <meta charset=\"utf-8\">\n <meta http-equiv=\"X-UA-Compatible\" content=\"IE=edge\">\n <meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\">\n<meta content=\"MDsVPityH1w4ITsjNBcCKiBxIVQLJQZyjpQrm_l8uStUrDmuN5d6Gu_E\" name=\"csrf-token\">\n<title data-suffix=\" · Phoenix Framework\">PhoenixHttpCacheDemo · Phoenix Framework</title>\n <link phx-track-static rel=\"stylesheet\" href=\"/assets/app.css\">\n <script defer phx-track-static type=\"text/javascript\" src=\"/assets/app.js\"></script>\n </head>\n <body>\n <header>\n <section class=\"container\">\n <nav>\n <ul>\n <li><a href=\"/\">Home</a></li>\n <li><a href=\"/dashboard\">LiveDashboard</a></li>\n </ul>\n </nav>\n <a href=\"https://phoenixframework.org/\" class=\"phx-logo\">\n <img src=\"/images/phoenix.png\" alt=\"Phoenix Framework Logo\">\n </a>\n </section>\n </header>\n<main class=\"container\">\n <p class=\"alert alert-info\" role=\"alert\"></p>\n <p class=\"alert alert-danger\" role=\"alert\"></p>\n<section class=\"phx-hero\">\n <h1>Welcome to PhoenixHTTPCache demo!</h1>\n</section>\n\n<section>\n <h2>Info</h2>\n <p style=\"overflow-wrap: break-word;\">\n fibo(42) = 267914296\n </p>\n</section>\n</main>\n <iframe hidden height=\"0\" width=\"0\" src=\"/phoenix/live_reload/frame\"></iframe></body>\n</html>",
resp_cookies: %{
"_plug_http_cache_demo_key" => %{
value: "SFMyNTY.g3QAAAABbQAAAAdhYi10ZXN0bQAAAAFh.u9bHKaB74qw8pyCyCX_T_urMcxmOfpYNhwaG5Hm8uIo"
}
},
resp_headers: [
{"set-cookie",
"_plug_http_cache_demo_key=SFMyNTY.g3QAAAABbQAAAAdhYi10ZXN0bQAAAAFh.u9bHKaB74qw8pyCyCX_T_urMcxmOfpYNhwaG5Hm8uIo; path=/; HttpOnly"},
{"content-type", "text/html; charset=utf-8"},
{"vary", "ab-test, accept-encoding"},
{"cache-control", "s-maxage=1200, public"},
{"x-request-id", "GEBlfW_qdCwsxewAAAFE"},
{"x-frame-options", "SAMEORIGIN"},
{"x-xss-protection", "1; mode=block"},
{"x-content-type-options", "nosniff"},
{"x-download-options", "noopen"},
{"x-permitted-cross-domain-policies", "none"},
{"cross-origin-window-policy", "deny"},
{"content-length", "1297"},
{"age", "30"}
],
scheme: :http,
script_name: [],
secret_key_base: :...,
state: :sent,
status: 200
}
Note that the response body is returned, which is not always the case with other adapters
(PlugLoopback
has its own): it’s usually unnecessary, since the response body is sent through
the adapter to the HTTP client.
An obvious use-case for this library, used along with PlugHTTPCache
, would be to warm up your
cache on application startup. Assuming you have a sitemap and those pages are cacheable, you
could, for example, launch a process at startup that:
- gets your sitemap.xml using PlugLoopback and decodes it
- generates a stream of URLs from the sitemap
- uses Task.async_stream/3 to request each URL with PlugLoopback and lets PlugHTTPCache cache the responses
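A minimal sketch of such a warmup process, assuming a hypothetical MyAppWeb.Endpoint module and a naive regex-based sitemap parser (a real implementation would use a proper XML library):

```elixir
defmodule MyApp.CacheWarmer do
  # Hypothetical cache-warming task following the steps above.
  # MyAppWeb.Endpoint and the sitemap handling are assumptions, not
  # part of the PlugLoopback API.
  def warm do
    # 1. Fetch the sitemap through PlugLoopback
    sitemap =
      MyAppWeb.Endpoint
      |> PlugLoopback.from_phoenix_endpoint()
      |> PlugLoopback.get("/sitemap.xml")
      |> PlugLoopback.run()

    # 2. Extract the URLs (naive <loc> extraction)
    ~r|<loc>(.*?)</loc>|
    |> Regex.scan(sitemap.resp_body, capture: :all_but_first)
    |> List.flatten()
    # 3. Request each URL; PlugHTTPCache caches the responses
    |> Task.async_stream(
      fn url ->
        MyAppWeb.Endpoint
        |> PlugLoopback.from_phoenix_endpoint()
        |> PlugLoopback.get(URI.parse(url).path || "/")
        |> PlugLoopback.run()
      end,
      max_concurrency: 4
    )
    |> Stream.run()
  end
end
```

Such a function could be called from a one-off task started in your application's supervision tree.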
PlugLoopback
supports replaying JSON and URL-encoded request bodies, but it’s still very much a v0.1.0
at this time. Contributions are welcome!
A final word
Happy caching!