Pipy + Redis + Sentinel = Highly Available Redis

Pipy is an open source programmable proxy for cloud, edge, and IoT. Its versatile nature allows it to be used in many scenarios. The main purpose of this article is to demonstrate how Pipy can serve as a TCP load balancer in front of a Redis master-slave (agent) replication setup, ensuring that client and application connections are always served.

Our goals

  • We want to implement and achieve automatic failover: when the designated Redis master node stops responding because of process termination, hardware failure, network hiccups, or some unknown reason, Sentinel automatically fails over the master role to one of the remaining agent nodes.
  • We want clients and applications connecting to Redis to be unaffected by the failover mechanism.
  • We want the whole failover mechanism to be fully transparent, requiring no changes to client or application configuration/settings.
  • We want all write requests (Redis SET or other user-configured commands) to be handled by the master node, without clients and applications knowing the topology in place.
  • We want all requests to be distributed equally across the available nodes, to make better use of allocated resources and achieve higher throughput.
  • ....

Load balancing, proxying TCP connections, ensuring that SET (or other user-configured) Redis commands are always routed to the master node, health probing, logging to the console, recording metrics, and so on are all handled by Pipy.
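The demo implements this logic as a Pipy (PipyJS) script; purely to illustrate the routing idea, here is a rough, self-contained sketch in TypeScript (Node.js). The host names, port, and write-command list are assumptions for illustration, and it decides the target once per connection, which is enough for redis-cli since each invocation opens a fresh connection:

// write-aware-proxy.ts -- conceptual sketch only; the demo's actual logic
// lives in a Pipy (PipyJS) script with health checks, logging, and metrics.
import * as net from "node:net";

const MASTER = { host: "redis-master", port: 6379 };
const REPLICAS = [
  { host: "redis-slave1", port: 6379 },
  { host: "redis-slave2", port: 6379 },
];
// Commands that must always go to the master (user-configurable in the demo).
const WRITE_COMMANDS = new Set(["SET", "DEL", "INCR", "EXPIRE", "LPUSH"]);

let rr = 0; // round-robin cursor

// Pull the command name out of a RESP array frame,
// e.g. "*3\r\n$3\r\nSET\r\n$5\r\nhello\r\n..." -> "SET".
function commandOf(frame: Buffer): string {
  const m = frame.toString("utf8").match(/^\*\d+\r\n\$\d+\r\n([A-Za-z]+)\r\n/);
  return m ? m[1].toUpperCase() : "";
}

net
  .createServer((client) => {
    client.once("data", (first) => {
      client.pause(); // hold further input until the upstream is ready
      // Decide once per connection: writes stick to the master,
      // everything else is balanced round-robin across all nodes.
      const nodes = [MASTER, ...REPLICAS];
      const target = WRITE_COMMANDS.has(commandOf(first))
        ? MASTER
        : nodes[rr++ % nodes.length];
      const upstream = net.connect(target.port, target.host, () => {
        upstream.write(first); // replay the buffered first frame
        client.pipe(upstream).pipe(client); // pipe() resumes the paused stream
      });
      upstream.on("error", () => client.destroy());
      client.on("error", () => upstream.destroy());
    });
  })
  .listen(6379);

A real proxy such as Pipy frames the RESP stream properly, so it can route every command individually and health-check the upstreams, as described above.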

High availability for the Redis deployment, including the master and agent nodes, is achieved with Redis Sentinel, which continuously performs monitoring, notification, and automatic failover if a master becomes unresponsive.
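For reference, a minimal Sentinel configuration looks like the following; the demo repo ships its own config, and the master name mymaster and the timing values here are illustrative:

# sentinel.conf -- minimal illustrative example
port 26379
sentinel resolve-hostnames yes          # needed to monitor by hostname (Redis >= 6.2)
sentinel monitor mymaster redis-master 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
sentinel parallel-syncs mymaster 1

The quorum of 2 means two of the three Sentinels must agree that the master is down before a failover is started.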

Quick introduction to technologies used

  • Pipy is an open source programmable proxy for cloud, edge, and IoT.
  • Redis is an open source, in-memory data store, benchmarked as the world’s fastest. Redis powers cutting edge applications and is known to enhance use cases such as real-time analytics, fast high-volume transactions, in-app social functionality, application job management, queuing and caching.
  • Redis Sentinel provides high availability for Redis when not using Redis Cluster.

Demo high-level deployment architecture

(Figure: high-level deployment architecture)

  • One Pipy proxy server acting as a TCP load balancer, keeping track of which Redis servers are available and ready to serve requests.
  • Three Redis servers providing master-agent replication.
  • Three Sentinel instances for a robust deployment.

Setup

For demonstration purposes, the demo code comes with a Docker Compose script to spin up the containers.

The full source of the demo is available at https://github.com/flomesh-io/pipy-demos/tree/main/pipy-redis-sentinel
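As an abridged sketch of what that Compose file sets up (service names match the container names shown later; the three Sentinel services are omitted here for brevity, so consult the repo for the real file):

# docker-compose.yml -- abridged sketch, Sentinel services omitted
version: "3"
services:
  redis-master:
    image: redis:alpine
  redis-slave1:
    image: redis:alpine
    command: redis-server --replicaof redis-master 6379
    depends_on: [redis-master]
  redis-slave2:
    image: redis:alpine
    command: redis-server --replicaof redis-master 6379
    depends_on: [redis-master]
  pipy-proxy:
    image: naqvis/pipy-pjs:0.22.0-31
    ports:
      - "6379:6379"   # expose the standard Redis port on the docker host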

Prerequisites

  1. Clone the Pipy Demos repo or download the demo code from pipy-redis-sentinel.
  2. Spin up the Pipy proxy, Redis, and Sentinel containers:

    $ docker-compose up -d
    

Testing

  1. Make sure all containers are up and running:
$ docker ps

CONTAINER ID   IMAGE                       COMMAND                  CREATED         STATUS         PORTS                                                 NAMES
0db7eba4f91d   sentinel                    "sentinel-entrypoint…"   6 seconds ago   Up 6 seconds   6379/tcp, 26379/tcp                                   redis_sentinel_3
e4a7b6b99074   sentinel                    "sentinel-entrypoint…"   6 seconds ago   Up 6 seconds   6379/tcp, 26379/tcp                                   redis_sentinel_2
4ec87846b1e3   naqvis/pipy-pjs:0.22.0-31   "/docker-entrypoint.…"   6 seconds ago   Up 5 seconds   6000/tcp, 0.0.0.0:6379->6379/tcp, :::6379->6379/tcp   pipy-proxy
8e81ddc5eb07   sentinel                    "sentinel-entrypoint…"   6 seconds ago   Up 4 seconds   6379/tcp, 26379/tcp                                   redis_sentinel_1
f1a533de6d41   redis:alpine                "docker-entrypoint.s…"   8 seconds ago   Up 6 seconds   6379/tcp                                              redis-slave2
a522c208b236   redis:alpine                "docker-entrypoint.s…"   8 seconds ago   Up 7 seconds   6379/tcp                                              redis-slave1
8065dec93c3d   redis:alpine                "docker-entrypoint.s…"   8 seconds ago   Up 7 seconds   6379/tcp                                              redis-master

Pipy listens on TCP port 6379 on the docker host, which is the standard Redis port, and load balances across the three Redis containers (one master and two slaves). Three Redis Sentinel containers provide a robust deployment and high availability for Redis.
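A quick sanity check that the proxy is accepting connections on the standard port:

$ redis-cli ping
PONG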

  2. Check that the Pipy service is routing traffic to all Redis nodes by executing the commands below. You don't need to provide the docker host IP and port information, as Pipy exposes the default Redis port 6379.
$ redis-cli info replication | grep role
role:master
$ redis-cli info replication | grep role
role:slave
$ redis-cli info replication | grep role
role:slave

Pipy ships with several load-balancing algorithms; the demo script is configured to use RoundRobinLoadBalancer, and you can see from the output above that Pipy sends requests to all configured Redis nodes in round-robin fashion.

  3. Now try some more Redis commands:
$ redis-cli set hello world
OK

$ redis-cli get hello
"world"

$ redis-cli set foo bar
OK

We configured the proxy to use the round-robin algorithm, yet all SET requests executed successfully. Had we simply followed the round-robin algorithm and forwarded requests equally to each node, the SET requests reaching the slave nodes would have failed with an error like:

(error) READONLY You can't write against a read only slave.
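You can reproduce that error by bypassing the proxy and writing to a replica container directly:

$ docker exec redis-slave1 redis-cli set hello world

which fails with the READONLY error above (newer Redis versions say "replica" instead of "slave").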

Failover Testing

  1. Let's pause the redis-master container to test automatic failover:
$ docker pause redis-master

Sentinel will automatically detect that the master is missing and choose a slave to promote as the new master. During health checks, Pipy will detect that the master node is down, mark it as unhealthy, and stop sending requests to it until the node becomes accessible again. Pipy will detect the new master node and forward all SET commands to the newly promoted master.
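You can also ask any Sentinel instance who the current master is (assuming the monitored master is named mymaster, as in the configuration sketch earlier):

$ docker exec redis_sentinel_1 redis-cli -p 26379 sentinel get-master-addr-by-name mymaster
1) "<ip-of-promoted-slave>"
2) "6379"

After the failover, the returned address points at the newly promoted node.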

  2. Run some Redis commands again to verify that Redis can still be accessed without any problems:
$ redis-cli set abc 1234
OK

$ redis-cli get abc
"1234"
  3. Bring back the paused container, and Sentinel will mark it as a slave node:
$ docker unpause redis-master

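To confirm, query the unpaused node directly; the former master should now report the slave role:

$ docker exec redis-master redis-cli info replication | grep role
role:slave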

That's it! You should now have everything you need to set up a fault-tolerant, highly available Redis deployment when not using Redis Cluster.

Performance test

We performed various tests with Pipy, HAProxy, Twemproxy, and a single Redis instance to measure the drop in throughput caused by putting a proxy in front of Redis. The following are the results of tests performed on a 4C8G VM running Ubuntu 20.04.4 LTS (Focal Fossa) ARM64 on our Mac M1 Max.
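The exact benchmark parameters aren't listed here; a typical redis-benchmark invocation against the proxy would look something like:

$ redis-benchmark -h 127.0.0.1 -p 6379 -t set,get -n 100000 -c 50 -q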

(Figure: benchmark results comparing Pipy, HAProxy, Twemproxy, and a direct Redis connection)

It is clear that adding any proxy in front of Redis incurs a performance penalty, but that is the cost one has to pay for the benefits a proxy brings.

Conclusion

You can achieve high availability and maximum protection against failure and disaster for Redis replication by combining infrastructure services with software configuration. With proper infrastructure setup and design, such as spreading nodes across different availability zones, we can achieve high availability for Redis master-slave replication. Redis Sentinel monitors the replication and performs the failover from master to slave.

Pipy is an open-source, extremely fast, and lightweight network traffic processor that can be used in a variety of use cases, ranging from edge routers, load balancing and proxying (forward/reverse), API gateways, and static HTTP servers to service mesh sidecars and many other applications. Pipy is in active development and is maintained by full-time committers and contributors; though still at an early version, it has been battle-tested and is in production use by several commercial clients.