by Bozho | Jun 15, 2020 | Aggregated, cache, Developer tips, spring
In a third post about cache managers in spring (over a long period of time), I’d like to expand on the previous two by showing how to configure multiple cache managers that dynamically create caches. Spring has CompositeCacheManager which, in theory, should allow using more than one cache manager. It works by asking the underlying cache managers whether they have a cache with the requested name or not. The problem with that is when you need dynamically-created caches, based on some global configuration. And that’s the common scenario, when you don’t want to manually define caches, but instead want to just add @Cacheable and have spring (and the underlying cache manager) create the cache for you with some reasonable defaults. That’s great until you need more than one cache manager. For example – one for a local cache and one for a distributed cache. In many cases a distributed cache is needed; however, not all method calls need to be distributed – some can be local to the instance that handles them, and you don’t want to burden your distributed cache with stuff that can be kept locally. Whether you can configure a distributed cache provider to designate some caches as local, even though they’re handled by the distributed cache provider – maybe, but I don’t guarantee it will be trivial. So, faced with that issue, I had to devise a simple mechanism for designating some caches as “distributed” and some as “local”. Using CompositeCacheManager alone would not do it, so I extended the distributed cache manager (in this case, Hazelcast, but it can be done with...
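The full post extends the Hazelcast cache manager itself; that code is truncated in this excerpt. As a minimal sketch of the idea only – a delegating CacheManager that routes a configured set of cache names (the set and class names here are hypothetical) to a local manager and everything else to the distributed one – it might look like this:

```java
import java.util.Collection;
import java.util.HashSet;
import java.util.Set;
import org.springframework.cache.Cache;
import org.springframework.cache.CacheManager;

// Hypothetical sketch: caches whose names are configured as "local" are
// created by the local cache manager; all others go to the distributed
// (e.g. Hazelcast-backed) cache manager. Both delegates create caches
// dynamically on first access, so @Cacheable keeps working without
// manual cache definitions.
public class LocalAwareCacheManager implements CacheManager {

    private final CacheManager localCacheManager;
    private final CacheManager distributedCacheManager;
    private final Set<String> localCacheNames;

    public LocalAwareCacheManager(CacheManager localCacheManager,
                                  CacheManager distributedCacheManager,
                                  Set<String> localCacheNames) {
        this.localCacheManager = localCacheManager;
        this.distributedCacheManager = distributedCacheManager;
        this.localCacheNames = localCacheNames;
    }

    @Override
    public Cache getCache(String name) {
        // Route by name; the chosen delegate creates the cache lazily
        if (localCacheNames.contains(name)) {
            return localCacheManager.getCache(name);
        }
        return distributedCacheManager.getCache(name);
    }

    @Override
    public Collection<String> getCacheNames() {
        Collection<String> names = new HashSet<>(localCacheManager.getCacheNames());
        names.addAll(distributedCacheManager.getCacheNames());
        return names;
    }
}
```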
by Bozho | Apr 8, 2017 | Aggregated, cache, Developer tips, distributed cache
What’s a distributed cache? A solution that is “deployed” in an application (typically a web application) and that makes sure data is loaded from memory, rather than from disk (which is much slower), in order to improve performance and response time. That looks easy if the cache is to be used on a single machine – you just load your most active data from the database in memory (e.g. a Guava Cache instance), and serve it from there. It becomes a bit more complicated when this has to work in a cluster – e.g. 5 application nodes serving requests to users in a round-robin fashion. You have to update the in-memory cache on all machines each time a piece of data is updated by a request to one of the machines. If you just load all the data in memory and don’t invalidate it, the cache won’t be “coherent” – it will have stale values and requests to different application nodes will have different results, which you most certainly want to avoid. Or you can have a single big cache server with tons of memory, but it can die – and that may disrupt smooth operation, so you’d want to have at least 2 machines in a cluster. You can get a distributed cache in different ways. To list a few: Infinispan (which I’ve covered previously), Terracotta/Ehcache, Hazelcast, Memcached, Redis, Cassandra, ElastiCache (by Amazon). The first three are Java-specific (all JCache compliant), but the rest can be used in any setup. Cassandra wasn’t initially meant to be a cache solution, but it can easily be used as such. All of...
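For the simple single-machine case mentioned above, a Guava cache that loads entries from the database on a miss and serves subsequent reads from memory could look roughly like this (UserDao and User are hypothetical application types, not from the original post):

```java
import java.util.concurrent.TimeUnit;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

// Single-node sketch: hot data is kept in memory and the database is hit
// only on a cache miss. In a cluster this alone is not enough - entries
// would have to be invalidated on every node when the data changes.
public class UserCache {

    private final LoadingCache<Long, User> cache;

    public UserCache(UserDao userDao) {
        this.cache = CacheBuilder.newBuilder()
                .maximumSize(10_000)                    // bound memory usage
                .expireAfterWrite(10, TimeUnit.MINUTES) // crude staleness control
                .build(new CacheLoader<Long, User>() {
                    @Override
                    public User load(Long id) {
                        // loaded once per key, then served from memory
                        return userDao.findById(id);
                    }
                });
    }

    public User getUser(long id) {
        return cache.getUnchecked(id);
    }
}
```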