Technical Architecture
CLB supports both intranet and extranet scenarios and offers two working modes: request proxy and message forwarding. This article introduces the basic architecture of each mode.
Terms
UVER: {{channelName}} Virtual Edge Router, the forwarding center for {{channelName}}'s public network traffic.
Message Forwarding
The message forwarding CLB is developed in-house on top of DPDK. It is deployed as a cluster with at least 4 servers per cluster (at least 2 servers in an overseas cluster) and achieves high availability through ECMP + BGP.
Intranet
The message forwarding CLB uses a forwarding mode similar to DR (Direct Routing). The architecture diagram of the intranet message forwarding is as follows:
Every server in a message forwarding CLB cluster announces the same VIP (Virtual IP) to the access switch it connects to. The access switch is configured with the ECMP algorithm, which load-balances traffic across the CLB servers, so that together they form a message forwarding CLB cluster. When a server in the cluster experiences a forwarding exception, it stops announcing routes over BGP; within three seconds the faulty server is removed from the cluster, preserving high availability. At the same time, the CLB cluster health check module raises an alarm to notify an engineer to intervene. In addition, the servers of a single CLB cluster are distributed across availability zones to provide cross-availability-zone high availability.

Within the message forwarding CLB, a dedicated module performs health checks on the backend nodes (currently only TCP/UDP port probes are supported) and reports their status. After a CLB forwarding server receives a business packet from the Client, it selects one of the healthy backend nodes, rewrites the destination MAC, and forwards the packet to that node. Throughout this process, the source and destination IPs of the packet remain unchanged.
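To make the forwarding path concrete, the sketch below models the probe-then-MAC-rewrite logic in Python with scapy. It is only an illustration: the real data plane is DPDK-based, all addresses are hypothetical, and a production implementation would cache health status rather than probe per packet.

```python
import socket

from scapy.all import Ether, sendp, sniff   # pip install scapy; requires root

VIP = "10.0.0.100"                             # hypothetical VIP of the cluster
CLB_MAC = "52:54:00:00:00:01"                  # hypothetical MAC of this CLB server
BACKENDS = {"10.0.0.11": "52:54:00:00:00:11",  # hypothetical backend IP -> MAC
            "10.0.0.12": "52:54:00:00:00:12"}
SERVICE_PORT = 80

def tcp_probe(host: str, port: int, timeout: float = 2.0) -> bool:
    """The kind of TCP port check the health module performs."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def dr_forward(pkt):
    healthy = [ip for ip in BACKENDS if tcp_probe(ip, SERVICE_PORT)]
    if not healthy:
        return                                  # no healthy node: drop
    backend = healthy[0]                        # a real CLB balances across them
    pkt[Ether].src = CLB_MAC
    pkt[Ether].dst = BACKENDS[backend]          # DR: only the L2 destination changes
    sendp(pkt, iface="eth0", verbose=False)     # source and destination IPs untouched

# Matching on our own MAC keeps re-emitted frames from being captured again.
sniff(iface="eth0", prn=dr_forward, filter=f"ip dst {VIP} and ether dst {CLB_MAC}")
```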
In the message forwarding mode, each backend node must bind the CLB's VIP (Virtual IP) address on its loopback (lo) interface and listen for the service on it, so that it can correctly process the packet and unicast the reply directly back to the Client. This is a typical DR flow, which is why backends behind an intranet message forwarding CLB see the Client's source IP directly.
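On the backend side, once the VIP has been configured on the lo interface (with ARP responses for it suppressed, as is typical in DR setups), the service simply listens on the VIP. A minimal sketch, assuming a hypothetical VIP of 10.0.0.100 and service port 80:

```python
import socket

VIP = "10.0.0.100"   # hypothetical CLB VIP, already bound on the lo interface

# Listening on the VIP lets the backend accept packets whose destination IP
# is the VIP; replies carry the VIP as source and go straight back to the Client.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind((VIP, 80))
srv.listen()

conn, addr = srv.accept()
print("source IP seen by the backend:", addr[0])  # the real Client IP in DR mode
```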
Extranet
The architecture diagram of the extranet message forwarding CLB is as follows:
Unlike the intranet message forwarding CLB, the extranet message forwarding CLB receives its traffic from the public network. Client traffic destined for the CLB enters a {{channelName}} POP point and reaches UVER ({{channelName}} Virtual Edge Router), which distributes it to the servers in the CLB cluster according to a consistent hashing algorithm. The subsequent process is similar to the intranet message forwarding CLB: the backend node must bind the CLB's EIP on the lo interface and listen on it, and reply packets are sent directly to UVER and returned to the Client over the Internet.
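Consistent hashing keeps each flow pinned to the same CLB server even as servers join or leave the cluster, so established connections are mostly undisturbed. A minimal sketch of such a ring (the MD5 hash and virtual-node count are illustrative choices, not UVER's actual implementation):

```python
import bisect
import hashlib

class HashRing:
    """Consistent hash ring with virtual nodes."""

    def __init__(self, servers, vnodes=100):
        self.vnodes = vnodes
        self._keys = []     # sorted hash positions on the ring
        self._nodes = {}    # hash position -> server
        for s in servers:
            self.add(s)

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add(self, server: str) -> None:
        for i in range(self.vnodes):
            h = self._hash(f"{server}#{i}")
            bisect.insort(self._keys, h)
            self._nodes[h] = server

    def remove(self, server: str) -> None:
        # Removing a server only remaps the flows that hashed to it.
        for i in range(self.vnodes):
            h = self._hash(f"{server}#{i}")
            self._keys.remove(h)
            del self._nodes[h]

    def pick(self, flow_key: str) -> str:
        # Walk clockwise to the first virtual node at or after the key's hash.
        h = self._hash(flow_key)
        idx = bisect.bisect(self._keys, h) % len(self._keys)
        return self._nodes[self._keys[idx]]

ring = HashRing(["clb-1", "clb-2", "clb-3", "clb-4"])
print(ring.pick("198.51.100.7:52100->203.0.113.10:443"))  # stable per flow
```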
In the extranet message forwarding CLB, a cluster health check module periodically probes the liveness of the servers; if a server is found to be faulty, it notifies UVER to exclude the abnormal server, preserving high availability. The extranet message forwarding CLB cluster is likewise deployed across availability zones to provide cross-availability-zone high availability.
Request Proxy
The request proxy CLB is developed on top of Nginx. It is deployed as a cluster with at least 4 servers per cluster (at least 2 servers in an overseas cluster).
Intranet
The architecture diagram of the intranet request proxy is as follows:
Unlike the DR mode used by the message forwarding CLB, the request proxy CLB works in proxy mode (i.e., FULLNAT mode). After receiving the Client's request, the intranet request proxy CLB terminates the Client's connection to the CLB IP and opens a new connection from the CLB's proxy IP to the Backend's real IP. The Backend (service node) therefore cannot see the Client IP directly and can only obtain it from the X-Forwarded-For header (in HTTP mode). In addition, the node health check module is integrated into the CLB process, so no separate health check module is needed.
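For illustration, here is how a backend behind the proxy might recover the Client IP from X-Forwarded-For. The header name is standard; the policy of trusting the leftmost entry assumes the CLB is the only proxy in front of the backend:

```python
def client_ip_from_xff(headers: dict) -> str | None:
    """Return the original client IP recorded by the proxy, if present.

    X-Forwarded-For is a comma-separated list: "client, proxy1, proxy2".
    The leftmost entry is the address the first proxy (here, the CLB) saw.
    """
    xff = headers.get("X-Forwarded-For")
    if not xff:
        return None
    return xff.split(",")[0].strip()

print(client_ip_from_xff({"X-Forwarded-For": "203.0.113.7, 10.2.3.4"}))
# -> 203.0.113.7
```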
The intranet request proxy CLB relies on ECMP + BGP for high availability. Each CLB server establishes a BGP session with the uplink switch through Quagga, and all servers in the same cluster announce the same VIP (Virtual IP) to that switch. The uplink switch distributes traffic evenly across the servers in the cluster using the ECMP algorithm. When a server fails, its BGP session is torn down within three seconds, removing the faulty server from the cluster so that the service continues to function normally.
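ECMP on the switch typically hashes each packet's flow 5-tuple, so all packets of one connection land on the same CLB server. A simplified software model of that selection (real switches use vendor-specific hardware hashing; the CRC32 here is only illustrative):

```python
import zlib

CLB_SERVERS = ["10.1.0.1", "10.1.0.2", "10.1.0.3", "10.1.0.4"]  # hypothetical cluster

def ecmp_next_hop(src_ip: str, src_port: int, dst_ip: str, dst_port: int,
                  proto: str = "tcp") -> str:
    # Hashing the 5-tuple means packets of the same flow always pick the
    # same server; removing a server from the list remaps affected flows.
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{proto}".encode()
    return CLB_SERVERS[zlib.crc32(key) % len(CLB_SERVERS)]

print(ecmp_next_hop("10.9.8.7", 40123, "10.1.0.100", 80))
```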
Extranet
The architecture diagram of the extranet request proxy is as follows:
Unlike the intranet request proxy CLB, extranet traffic comes from the public network. The Client's traffic to the request proxy CLB enters a {{channelName}} POP point, then reaches UVER ({{channelName}} Virtual Edge Router), which distributes it to the servers in the CLB cluster according to a consistent hashing algorithm. The subsequent process is similar to the intranet request proxy CLB.
In the extranet request proxy CLB, a cluster health check module periodically probes the liveness of the servers; if a server is found to be faulty, it notifies UVER to exclude the abnormal server, ensuring high availability. The extranet request proxy CLB cluster is likewise deployed across availability zones to provide cross-availability-zone high availability.
Mode Comparison
Compared with the request proxy CLB, the message forwarding CLB delivers higher forwarding performance and suits scenarios that demand it. The request proxy CLB, on the other hand, can process layer-7 data: it supports SSL offloading, domain-based forwarding, path-based forwarding, and so on, and its backend nodes do not need to additionally configure the VIP (Virtual IP).