This improves performance since we now hit the backend just once to read
all keys, as long as the cache adapter implements fetch-multi support,
like Dalli does.
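For reference, the fetch-multi semantics this relies on can be sketched with a Hash-backed store (a hypothetical `Store` class for illustration; real adapters like Dalli batch the read into a single network round trip):

```ruby
# Minimal sketch of fetch_multi semantics: one batched read for all
# keys, then a write only for each missing key.
class Store
  attr_reader :reads

  def initialize
    @data = {}
    @reads = 0
  end

  # Read every key in a single operation, yielding each missing key
  # so its value can be computed and stored.
  def fetch_multi(*keys)
    @reads += 1
    keys.each_with_object({}) do |key, result|
      @data[key] = yield(key) unless @data.key?(key)
      result[key] = @data[key]
    end
  end
end

store = Store.new
store.fetch_multi(:x, :y, :z) { |key| "value for #{key}" }
store.reads # => 1
```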
For example:
json.cache! :x do
  json.x true
end

json.cache! :y do
  json.y true
end

json.cache! :z do
  json.z true
end
This example was hitting memcached 6 times on cache miss:
1. read x
2. write x
3. read y
4. write y
5. read z
6. write z
And 3 times on cache hit:
1. read x
2. read y
3. read z
After this change, 4 times on cache miss:
1. read multi x,y,z
2. write x
3. write y
4. write z
And 1 time on cache hit:
1. read multi x,y,z
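The difference between the two strategies can be sketched with a counting stub (a hypothetical `BackendStub`, not jbuilder's actual cache code): instead of a read/write pair per key, collect all keys first, issue one read multi, then write only the misses.

```ruby
# Counting stub that records every backend operation.
class BackendStub
  attr_reader :ops

  def initialize
    @data = {}
    @ops = []
  end

  def read(key)
    @ops << :read
    @data[key]
  end

  def read_multi(*keys)
    @ops << :read_multi
    @data.slice(*keys)
  end

  def write(key, value)
    @ops << :write
    @data[key] = value
  end
end

# Before: one read plus one write per key on a cold cache.
before = BackendStub.new
%i[x y z].each { |k| before.read(k) || before.write(k, true) }
before.ops.size # => 6

# After: a single read_multi, then writes only for the misses.
after = BackendStub.new
hits = after.read_multi(:x, :y, :z)
(%i[x y z] - hits.keys).each { |k| after.write(k, true) }
after.ops.size # => 4
```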
Note that when different options are given, one read multi is made per
distinct set of options, i.e.:
json.cache! :x do
  json.x true
end

json.cache! :y do
  json.y true
end

json.cache! :z, expires_in: 10.minutes do
  json.z true
end

json.cache! :w, expires_in: 10.minutes do
  json.w true
end
In the case of cache miss:
1. read multi x,y
2. write x
3. write y
4. read multi z,w
5. write z
6. write w
In the case of cache hit:
1. read multi x,y
2. read multi z,w
That's because the Rails.cache.fetch_multi signature only accepts a
single set of options for all the given keys.
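The grouping step can be sketched as follows (hypothetical data; jbuilder does the equivalent before issuing one fetch_multi per group — `600` stands in for `10.minutes`, which needs ActiveSupport):

```ruby
# Each entry pairs a cache key with its options.
entries = [
  [:x, {}],
  [:y, {}],
  [:z, { expires_in: 600 }],
  [:w, { expires_in: 600 }]
]

# Group keys by their options hash: one fetch_multi call per group.
groups = entries.group_by { |_key, options| options }
groups.map { |options, pairs| [pairs.map(&:first), options] }
# => [[[:x, :y], {}], [[:z, :w], {:expires_in=>600}]]
```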
Lastly, nested cache! calls are allowed and are handled recursively to
accomplish the same behavior, i.e.:
json.cache! :x do
  json.x true

  json.cache! :y do
    json.y true
  end

  json.cache! :z do
    json.z true
  end
end

json.cache! :w do
  json.w true
end
In the case of cache miss:
1. read multi x,w
2. read multi y,z
3. write y
4. write z
5. write x
6. write w
In the case of cache hit:
1. read multi x,w
The same options rule applies: with different options, expect one read
multi per distinct set of options.
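The recursive behavior amounts to batching one level of nesting at a time, which can be sketched like this (a hypothetical key tree, not jbuilder's internal representation): the outer keys (:x, :w) are fetched in one read multi, and on a miss their blocks are rendered, exposing the nested keys (:y, :z) for the next batch.

```ruby
# Each node is [key, children]; children are only discovered when the
# parent's block is rendered on a cache miss.
tree = [
  [:x, [[:y, []], [:z, []]]],
  [:w, []]
]

# Collect one read-multi batch per nesting level.
batches = []
level = tree
until level.empty?
  batches << level.map(&:first)               # keys fetched together
  level = level.flat_map { |_key, kids| kids } # descend one level
end
batches # => [[:x, :w], [:y, :z]]
```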
This is the result of an investigation into an application that was
spending 15% of its request time hitting memcached multiple times.
By using this algorithm we were able to reduce the memcached time to 1%
of the request.
Thanks to @samflores for helping with the initial idea.