Caches are essential for speeding up systems worldwide. Storing frequently accessed data closer to users reduces latency and alleviates strain on backend resources, resulting in faster response times and enhanced user experiences. From internet browsing to finance and healthcare, caching plays a vital role in optimizing efficiency and ensuring swift access to critical information. With the rise of edge computing and IoT, caching is becoming even more indispensable for delivering content and services globally, bolstering performance and scalability.
Improving website performance usually involves introducing a caching layer with an eviction policy such as LRU (Least Recently Used) or LFU (Least Frequently Used). Such a cache minimizes server load and expedites page loading by keeping frequently visited content, such as articles and photos, in memory. It works on the principle of removing the least-used content while keeping relevant content accessible. This improves user satisfaction, speeds up the website, and uses fewer server resources.
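As a quick illustration of that eviction principle, here is a tiny LRU sketch using cachetools (the keys and pages are made up); the rest of this post applies the closely related LFU policy to an actual website:

import cachetools

# A tiny LRU cache: the entry that has gone unused the longest is evicted first.
recent = cachetools.LRUCache(maxsize=2)

recent['home'] = '<html>home</html>'
recent['blog'] = '<html>blog</html>'
_ = recent['home']                    # 'home' was just used, so 'blog' is now the oldest
recent['shop'] = '<html>shop</html>'  # cache is full: 'blog' is evicted

print('blog' in recent)               # False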
How to add LFU cache?
First of all, install the cachetools package.
pip install cachetools
Add the following lines to the Python file where the cache will be applied.
import cachetools
cache = cachetools.LFUCache(maxsize=2048)
* cachetools: This is the name of the module being imported; it provides various caching utilities and classes.
* LFUCache: An LFU (Least Frequently Used) cache is a type of cache that evicts the least frequently used items first when the cache reaches its capacity.
* maxsize=2048: The maximum number of items that the cache can hold is specified by this option. In this instance, the cache's maximum size is set to 2048 items. Once this limit is reached, the least frequently used items are evicted from the cache to make room for new entries. A short sketch of this eviction behavior follows below.
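Before wiring this into the website code, here is a minimal standalone sketch of how the LFU eviction behaves (the keys and values are purely illustrative):

import cachetools

# A tiny LFU cache just for illustration.
demo_cache = cachetools.LFUCache(maxsize=2)

demo_cache['a'] = 'first page'
demo_cache['b'] = 'second page'
_ = demo_cache['a']             # 'a' is now used more often than 'b'
demo_cache['c'] = 'third page'  # cache is full: the least frequently used key ('b') is evicted

print('b' in demo_cache)        # False
print(demo_cache.get('a'))      # first page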
"Create a CACHE_TEMPLATES list to specify the pages you want to apply caching to, for example:
CACHE_TEMPLATES = [
'website.homepage',
'website.contactus',
]
Check whether the template key is in CACHE_TEMPLATES or not, as in the sketch below.
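A minimal sketch of that check, assuming template_key holds the XML ID of the template being rendered (the is_cacheable helper is only illustrative):

CACHE_TEMPLATES = [
    'website.homepage',
    'website.contactus',
]

def is_cacheable(template_key):
    """Return True if this template is configured for caching."""
    return template_key in CACHE_TEMPLATES

print(is_cacheable('website.homepage'))  # True
print(is_cacheable('website.aboutus'))   # False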
Create a cache key
Build a unique cache key that identifies the page, based on values such as the template key, the content, the language, and the write date.
In an LFU cache, each data entry is assigned a unique cache key. This key serves as the identifier for the stored item, allowing for efficient retrieval and management. When the cache reaches capacity, the least frequently used entry is evicted, ensuring optimal performance and resource utilization.
E.g.: cache_key = f"{values}.{template_key}.{lang}.{write_date}"
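As a rough sketch, with hypothetical values standing in for the real rendering context, the key could be assembled like this:

# Hypothetical values; in the real code they come from the request and the view record.
url = '/contactus'
template_key = 'website.contactus'
lang = 'en_US'
write_date = '2024-05-01 10:30:00'

cache_key = f"{url}.{template_key}.{lang}.{write_date}"
print(cache_key)  # /contactus.website.contactus.en_US.2024-05-01 10:30:00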
Get data from the cache
cache_result = cache.get(cache_key)
* cache: This refers to the cache object, the LFU (Least Frequently Used) cache instance initialized earlier in the code.
* get(cache_key): This is a method call on the cache object. The .get() method looks up the supplied cache_key and returns the value stored for it; if the cache has no value linked to the key, it returns None.
* cache_key: This is the key used to store and retrieve values from the cache. It is generated from several aspects of the page, such as the template, the language, and the write date.
Set data in the cache
cache[cache_key] = content
* cache: This is the LFUCache object created using cachetools. It's a Least Frequently Used (LFU) cache, which means that when the cache reaches its maximum size (maxsize), it evicts the least frequently used items to make space for new ones.
* cache_key: This is a string that uniquely identifies the data stored in the cache.
* content: This variable holds the rendered content of the template. Storing the rendered output avoids repeating the work and reduces page rendering time.
* cache[cache_key] = content: This line assigns the content to the cache under the key cache_key. The next time the same template is requested with the same key, the content is retrieved from the cache instead of re-rendering the template, which avoids redundant rendering operations. A combined sketch of the get-and-set pattern follows below.
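Putting the get and set steps together, the caching pattern looks roughly like this (render_page is a hypothetical stand-in for the actual template rendering call):

import cachetools

cache = cachetools.LFUCache(maxsize=2048)

def get_or_render(cache_key, render_page):
    """Return cached content if present, otherwise render it once and cache it."""
    cached = cache.get(cache_key)
    if cached is not None:
        return cached           # cache hit: skip rendering entirely
    content = render_page()     # cache miss: render once...
    cache[cache_key] = content  # ...and remember it for next time
    return content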
Here is the full code showing how to apply the cache to a website using cachetools:
# -*- coding: utf-8 -*-
from odoo import models

import cachetools

# Cache configuration
cache = cachetools.LFUCache(maxsize=2048)

# Templates to cache
CACHE_TEMPLATES = [
    'website.homepage',
    'website.contactus',
]


class QWeb(models.AbstractModel):
    _inherit = 'ir.qweb'

    def _render(self, template, values=None, **options):
        """
        To fine-tune use --log-handler "odoo.addons.website_cache:DEBUG"
        """
        if isinstance(template, int):
            template_d = self.env['ir.ui.view'].browse(template)
            template_key = template_d.key
        else:
            template_key = str(template)
            template_d = self.env['ir.ui.view'].search([('key', '=', template)])
        settings = template_key in CACHE_TEMPLATES
        # values may be None, so guard before reading the request from it
        request = values.get('request', False) if values else False
        write_date = template_d.write_date
        if request and write_date and settings:
            # The write date is part of the key, so editing the view invalidates old entries
            cache_key = f"{request.httprequest.url}_{template_key}_{write_date}_{settings}"
            cache_key = cache_key.replace('http:', '').replace('https:', '').replace('/', '_')
            cached_content = cache.get(cache_key)
            if cached_content is not None:
                return cached_content
            content = super(QWeb, self)._render(template, values, **options)
            cache[cache_key] = content
            return content
        return super(QWeb, self)._render(template, values, **options)
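Because the view's write_date is part of the cache key, editing a page automatically produces a new key, and stale entries simply age out through LFU eviction. If you ever need to inspect or reset the cache by hand, cachetools exposes the usual mapping interface; the helper names below are only an illustration, not part of the module above:

def cache_stats():
    """Report how full the page cache currently is."""
    return {'items': cache.currsize, 'maxsize': cache.maxsize}

def flush_page_cache():
    """Drop every cached page, e.g. after a deployment or a bulk content update."""
    cache.clear()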