


Engineering Team
2018-12-04
15 mins
We began by looking into which frameworks we could pair with Java Spring, which we were using for the microservice in question.
Thanks to this performance, Redis can be used as a database, cache, and message broker, wherever low latency is key.


Extremely fast read/write operations
Data persistence and replication
Support for multiple data structures (lists, sets, hashes, bitmaps)
Clustered setup for high availability and scaling
Whilst Jedis is simple to use and supports a vast number of Redis features, it is not thread-safe and therefore requires connection pooling in multi-threaded environments. Connection pooling, however, comes at the cost of maintaining a physical connection per Jedis instance, which increases the number of Redis connections.
This asynchronous model enables your threads to remain productive, performing other tasks whilst waiting for I/O. However, there are cases where synchronous access might be preferable, for instance when tasks are very short-lived or require immediate data consistency.
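Lettuce's asynchronous commands return RedisFuture objects, which implement CompletionStage, so the shape of the model can be sketched with a plain CompletableFuture and no Redis server at all. The fetchFromRedis method below is a stand-in for illustration, not a real Lettuce call:

```java
import java.util.concurrent.CompletableFuture;

public class AsyncSketch {

    // Stand-in for an async Redis read; a real Lettuce call would return a RedisFuture instead.
    static CompletableFuture<String> fetchFromRedis(String key) {
        return CompletableFuture.supplyAsync(() -> {
            try {
                Thread.sleep(50); // simulate network I/O
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return "value-for-" + key;
        });
    }

    public static void main(String[] args) {
        CompletableFuture<String> future = fetchFromRedis("user:42");
        // The calling thread stays free to do other work while the "I/O" completes...
        String progress = "did other work";
        // ...and only blocks (or attaches a callback) once the value is actually needed.
        System.out.println(progress + ", then got: " + future.join());
    }
}
```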
To begin, create a new configuration class and annotate it as follows:
@Configuration
@EnableConfigurationProperties(RedisProperties.class)
public class RedisConfig {
}
This will generate the configuration bean with the corresponding properties. Next, define the host and port for your Redis server in your application properties and inject them into the configuration class:
@Value("${redis.host}")
private String redisHost;
@Value("${redis.port}")
private int redisPort;
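The matching entries in application.properties might look like this (the values are illustrative):

```properties
redis.host=localhost
redis.port=6379
```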
We’ll then create beans for our client resources and a standalone configuration. We’ll also define client options.
@Bean(destroyMethod = "shutdown")
ClientResources clientResources() {
    return DefaultClientResources.create();
}

@Bean
public RedisStandaloneConfiguration redisStandaloneConfiguration() {
    return new RedisStandaloneConfiguration(redisHost, redisPort);
}

@Bean
public ClientOptions clientOptions() {
    return ClientOptions.builder()
        .disconnectedBehavior(ClientOptions.DisconnectedBehavior.REJECT_COMMANDS)
        .autoReconnect(true)
        .build();
}
Here, the client options specify that Redis will reject any commands if disconnected, whilst automatically attempting to reconnect should the connection fail.
Having multiple connections available is advantageous because:
It enables concurrent Redis communication from multiple threads.
It avoids creating connections on the fly, improving performance.
Configuration is defined once and reused.
It simplifies the setup of Redis clusters.
@Bean
LettucePoolingClientConfiguration lettucePoolConfig(ClientOptions options, ClientResources dcr) {
    return LettucePoolingClientConfiguration.builder()
        .poolConfig(new GenericObjectPoolConfig())
        .clientOptions(options)
        .clientResources(dcr)
        .build();
}

@Bean
public RedisConnectionFactory connectionFactory(
        RedisStandaloneConfiguration redisStandaloneConfiguration,
        LettucePoolingClientConfiguration lettucePoolConfig) {
    return new LettuceConnectionFactory(redisStandaloneConfiguration, lettucePoolConfig);
}
@Bean
@ConditionalOnMissingBean(name = "redisTemplate")
@Primary
public RedisTemplate<Object, Object> redisTemplate(RedisConnectionFactory redisConnectionFactory) {
    RedisTemplate<Object, Object> template = new RedisTemplate<>();
    template.setConnectionFactory(redisConnectionFactory);
    return template;
}
By marking the redisTemplate bean as @Primary, we ensure that this particular instance is used wherever multiple qualifying beans exist.
We also use Redis as the backing store for ShedLock, so that scheduled tasks acquire a distributed lock and run on only one service instance at a time. First, inject the relevant properties:
@Value("${app.environment}")
private String ENV;
@Value("${taskScheduler.poolSize}")
private int tasksPoolSize;
@Value("${taskScheduler.defaultLockMaxDurationMinutes}")
private int lockMaxDuration;
@Bean
public LockProvider lockProvider(RedisConnectionFactory connectionFactory) {
    return new RedisLockProvider(connectionFactory, ENV);
}

@Bean
public ScheduledLockConfiguration taskSchedulerLocker(LockProvider lockProvider) {
    return ScheduledLockConfigurationBuilder
        .withLockProvider(lockProvider)
        .withPoolSize(tasksPoolSize)
        .withDefaultLockAtMostFor(Duration.ofMinutes(lockMaxDuration))
        .build();
}
The RedisTemplate provides convenient methods for saving and retrieving various collection types (hashes, lists, sets, etc.). We can add/get/delete from cache very easily as long as we define a collection and a key:
// add
template.opsForHash().put(collection, hkey, OBJECT_MAPPER.writeValueAsString(object));
// delete
template.opsForHash().delete(collection, hkey);
// get
OBJECT_MAPPER.readValue(
    String.valueOf(template.opsForHash().get(collection, hkey)),
    ValueType.class // ValueType is a placeholder for the cached object's class
);
To check if a Redis connection is available:
template.getConnectionFactory().getConnection().ping() != null
And the repository class:
@Repository
@Slf4j
public class CacheRepository<T> implements DataCacheRepository<T> {

    @Autowired
    RedisTemplate<Object, Object> template; // and we're in business

    private static final ObjectMapper OBJECT_MAPPER;
    private static final TimeZone DEFAULT_TIMEZONE = TimeZone.getTimeZone("UTC");

    static {
        OBJECT_MAPPER = new ObjectMapper();
        OBJECT_MAPPER.setTimeZone(DEFAULT_TIMEZONE);
    }

    // implement methods
    @Override
    public boolean add(String collection, String hkey, T object) {
        try {
            String jsonObject = OBJECT_MAPPER.writeValueAsString(object);
            template.opsForHash().put(collection, hkey, jsonObject);
            return true;
        } catch (Exception e) {
            log.error("Unable to add object of key {} to cache collection '{}': {}",
                    hkey, collection, e.getMessage());
            return false;
        }
    }

    @Override
    public boolean delete(String collection, String hkey) {
        try {
            template.opsForHash().delete(collection, hkey);
            return true;
        } catch (Exception e) {
            log.error("Unable to delete entry {} from cache collection '{}': {}", hkey, collection, e.getMessage());
            return false;
        }
    }

    @Override
    public T find(String collection, String hkey, Class<T> tClass) {
        try {
            String jsonObj = String.valueOf(template.opsForHash().get(collection, hkey));
            return OBJECT_MAPPER.readValue(jsonObj, tClass);
        } catch (Exception e) {
            if (e.getMessage() == null) {
                log.error("Entry '{}' does not exist in cache", hkey);
            } else {
                log.error("Unable to find entry '{}' in cache collection '{}': {}", hkey, collection, e.getMessage());
            }
            return null;
        }
    }
    @Override
    public Boolean isAvailable() {
        try {
            return template.getConnectionFactory().getConnection().ping() != null;
        } catch (Exception e) {
            log.warn("Redis server is not available at the moment.");
        }
        return false;
    }
}
Here, the @Autowired annotation injects the redisTemplate bean we created earlier. DataCacheRepository is an interface with simple add/find/delete methods. Instead of creating your own interface, you can also use a CRUD repository.
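For reference, a minimal version of DataCacheRepository, inferred from the overrides used above, could look like this (the real interface may differ):

```java
// Inferred from the repository implementation above; method names match the overrides there.
public interface DataCacheRepository<T> {

    /** Saves an object under the given collection and hash key; true on success. */
    boolean add(String collection, String hkey, T object);

    /** Removes the entry for the given collection and hash key; true on success. */
    boolean delete(String collection, String hkey);

    /** Returns the cached object, or null if it is missing or unreadable. */
    T find(String collection, String hkey, Class<T> tClass);

    /** True if the Redis server responds to a PING. */
    Boolean isAvailable();
}
```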
It’s straightforward to mock Redis calls when writing unit tests. For instance:
// you can also have this be false if you want to cover that case
when(potatoesRepository.isAvailable()).thenReturn(true);
// add method returns true if successful
when(potatoesRepository.addPotato(anyString(), any(Potato.class))).thenReturn(true);
// calls super class find method and returns null
when(potatoesRepository.findPotato(anyString())).thenReturn(null);
// test fail case
// ...
when(potatoesRepository.findPotato(anyString())).thenReturn(potato);
// test success case
//...
In this unit test, we're attempting to save an object to its respective collection on the Redis server. The key for said collection is a simple string. We trigger a failure case by returning a null object and a success case by returning a valid one. For the sake of exposition, we've included both cases in the same unit test.
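If you prefer not to mock, a small hand-rolled fake gives the same control over the success and failure paths. The class below is an illustrative stand-in, not part of the code above, and it stores plain strings rather than real Potato objects to stay self-contained:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative in-memory stand-in for the Redis-backed repository, keyed by a simple string.
class FakePotatoesCache {

    private final Map<String, String> store = new HashMap<>();
    private boolean available = true; // flip to false to exercise the failure path

    void setAvailable(boolean available) {
        this.available = available;
    }

    boolean isAvailable() {
        return available;
    }

    boolean addPotato(String key, String potato) {
        if (!available) {
            return false;
        }
        store.put(key, potato);
        return true;
    }

    String findPotato(String key) {
        return available ? store.get(key) : null;
    }
}
```

In a test, you toggle setAvailable(false) to drive the failure case and leave it true for the success case, with no mocking framework involved.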
There are several approaches to invalidating cached data:
Defining a TTL (time-to-live) for saved data
Triggering a delete call when data changes on the provider side
Scheduling a method to clean certain collections periodically
Using LRU (Least Recently Used) eviction policies (https://redis.io/topics/lru-cache)
You should select the approach based on the data you’re handling.
For instance, if objects have a defined lifespan, defining a TTL and letting Redis invalidate them automatically is often best. For objects that change based on user interaction or microservice events, a delete trigger is more suitable.
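To make the TTL option concrete, here is a stdlib-only toy that captures the semantics: each entry records an expiry instant and reads as absent once that instant passes. With Spring Data Redis you would instead set an expiry on the key itself (for example via the template's expire method); this sketch only illustrates the behaviour, not how Redis implements it:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A toy cache where every entry carries an expiry timestamp, mimicking a per-key TTL.
class TtlCache {

    private static final class Entry {
        final String value;
        final long expiresAt; // epoch millis after which the entry is considered gone

        Entry(String value, long expiresAt) {
            this.value = value;
            this.expiresAt = expiresAt;
        }
    }

    private final Map<String, Entry> store = new ConcurrentHashMap<>();

    void put(String key, String value, long ttlMillis) {
        store.put(key, new Entry(value, System.currentTimeMillis() + ttlMillis));
    }

    String get(String key) {
        Entry entry = store.get(key);
        if (entry == null) {
            return null;
        }
        if (System.currentTimeMillis() >= entry.expiresAt) {
            store.remove(key); // lazy eviction on read
            return null;
        }
        return entry.value;
    }
}
```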