tlc.service.tlc_lrucache#
Module Contents#
Classes#
Class | Description
---|---
`LRUCacheBackendConfig` | LRUCache backend configuration.
`LRUCacheBackend` | In-memory LRU cache backend.
`LRUCache` | LRU cache where you can control how many slots are available, the maximum memory to use for the cache, and a time-out for cached items.
Functions#
Function | Description
---|---
`LRUFuncCache` | Decorator to add an LRU cache to a function.
Data#
Data | Description
---|---
`LRU_STATS_KEY` |
API#
- tlc.service.tlc_lrucache.LRU_STATS_KEY = __stats__#
- class tlc.service.tlc_lrucache.LRUCacheBackendConfig#
Bases: `pydantic.BaseModel`
LRUCache backend configuration.
- class tlc.service.tlc_lrucache.LRUCacheBackend(config: tlc.service.tlc_lrucache.LRUCacheBackendConfig)#
Bases: `starlite.cache.base.CacheBackendProtocol`

In-memory LRU cache backend.

Initializes the `LRUCacheBackend` with the given configuration.
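To illustrate the behavior such a backend provides, here is a minimal, self-contained sketch, assuming the backend follows Starlite's `CacheBackendProtocol` shape (async `get`/`set`/`delete`). The class name `SketchLRUBackend` and the `max_entries` parameter are illustrative stand-ins, not the real `LRUCacheBackend` API:

```python
import asyncio
from collections import OrderedDict
from typing import Any

class SketchLRUBackend:
    """Illustrative stand-in for an in-memory LRU cache backend."""

    def __init__(self, max_entries: int = 128) -> None:
        self._max_entries = max_entries
        self._store: OrderedDict[str, Any] = OrderedDict()

    async def get(self, key: str) -> Any:
        # Move the key to the most-recently-used position on access.
        if key in self._store:
            self._store.move_to_end(key)
            return self._store[key]
        return None

    async def set(self, key: str, value: Any, expiration: int = 0) -> None:
        # Insert, then evict the least-recently-used entry when full.
        self._store[key] = value
        self._store.move_to_end(key)
        if len(self._store) > self._max_entries:
            self._store.popitem(last=False)

    async def delete(self, key: str) -> None:
        self._store.pop(key, None)

async def demo() -> str:
    backend = SketchLRUBackend(max_entries=2)
    await backend.set("a", 1)
    await backend.set("b", 2)
    await backend.get("a")     # "a" becomes most recently used
    await backend.set("c", 3)  # evicts "b", the least recently used
    return "b evicted" if await backend.get("b") is None else "b kept"

print(asyncio.run(demo()))  # → b evicted
```

The key design point is that a plain `OrderedDict` with `move_to_end` on every access is enough to track recency, so eviction is simply "pop the first item".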
- tlc.service.tlc_lrucache.LRUFuncCache(max_entries: int, max_memory_in_bytes: int, time_threshold_in_seconds: float, time_out_in_seconds: float = 0.0) -> Callable#
Decorator to add an LRU cache to a function.
The decorator controls the number of cache slots (`max_entries`) and the maximum memory used for cached elements (`max_memory_in_bytes`).
In addition, `time_threshold_in_seconds` sets how long a function execution must take before its result is cached. This avoids caching results that are fast to compute or retrieve, reserving the cache for slower items.
The `time_out_in_seconds` parameter sets how long each cached item remains valid. If set to 0, items never expire.
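The threshold and time-out semantics described above can be sketched with a simplified pure-Python decorator. This is an illustrative reimplementation, not the library's code: memory accounting (`max_memory_in_bytes`) is omitted, and the name `sketch_lru_func_cache` is hypothetical:

```python
import time
from functools import wraps
from typing import Any, Callable

def sketch_lru_func_cache(
    max_entries: int,
    time_threshold_in_seconds: float,
    time_out_in_seconds: float = 0.0,
) -> Callable:
    """Cache a function's results, but only when they were slow to compute."""

    def decorator(func: Callable) -> Callable:
        cache: dict[tuple, tuple[float, Any]] = {}

        @wraps(func)
        def wrapper(*args: Any) -> Any:
            now = time.monotonic()
            if args in cache:
                stored_at, value = cache[args]
                # time_out of 0.0 means cached entries never expire.
                if time_out_in_seconds == 0.0 or now - stored_at < time_out_in_seconds:
                    return value
                del cache[args]  # entry expired
            start = time.monotonic()
            result = func(*args)
            elapsed = time.monotonic() - start
            # Only cache results that were slow enough to be worth keeping.
            if elapsed >= time_threshold_in_seconds:
                if len(cache) >= max_entries:
                    cache.pop(next(iter(cache)))  # evict the oldest entry
                cache[args] = (now, result)
            return result

        return wrapper

    return decorator

@sketch_lru_func_cache(max_entries=64, time_threshold_in_seconds=0.01)
def slow_square(x: int) -> int:
    time.sleep(0.02)  # slow enough to cross the caching threshold
    return x * x

slow_square(3)         # computed and cached (took longer than 0.01 s)
print(slow_square(3))  # → 9, served from the cache
```

Measuring the call's elapsed time before deciding whether to store it is what keeps cheap calls from crowding out expensive ones.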
- class tlc.service.tlc_lrucache.LRUCache(max_entries: int, max_memory: int, time_out: float = 0.0)#
LRU cache where you can control how many slots are available (`max_entries`), the maximum memory to use for the cache (`max_memory`), and a time-out (`time_out`) for cached items.
The `stats()` method returns a dictionary of important statistics about the cache. The `clear()` method empties the cache and resets all statistics.
- LRUEntry = None#
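A minimal sketch of the cache interface described above, assuming a `stats()`/`clear()` pair as documented. The class name, the `put` method, and the exact statistics keys (`entries`, `hits`, `misses`) are assumptions for illustration; memory limits and time-outs are omitted:

```python
from collections import OrderedDict
from typing import Any

class SketchLRUCache:
    """Illustrative stand-in for an LRU cache with stats() and clear()."""

    def __init__(self, max_entries: int) -> None:
        self._max_entries = max_entries
        self._store: OrderedDict[Any, Any] = OrderedDict()
        self._hits = 0
        self._misses = 0

    def get(self, key: Any) -> Any:
        if key in self._store:
            self._hits += 1
            self._store.move_to_end(key)  # mark as most recently used
            return self._store[key]
        self._misses += 1
        return None

    def put(self, key: Any, value: Any) -> None:
        self._store[key] = value
        self._store.move_to_end(key)
        if len(self._store) > self._max_entries:
            self._store.popitem(last=False)  # evict least recently used

    def stats(self) -> dict:
        # Snapshot of the cache's bookkeeping counters.
        return {"entries": len(self._store), "hits": self._hits, "misses": self._misses}

    def clear(self) -> None:
        # Empty the cache and reset all statistics.
        self._store.clear()
        self._hits = 0
        self._misses = 0

cache = SketchLRUCache(max_entries=2)
cache.put("a", 1)
cache.get("a")
cache.get("missing")
print(cache.stats())  # → {'entries': 1, 'hits': 1, 'misses': 1}
cache.clear()
print(cache.stats())  # → {'entries': 0, 'hits': 0, 'misses': 0}
```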