Message Forwarding Mode Service Node Configuration
In “Message Forwarding Mode”, user requests are passed through the CLB directly to the backend, so the address the client accesses must also exist on the real backend service node. You therefore need to configure the internal/external network IP address of the load balancer on each backend service node. The configuration methods are as follows.
Linux Configuration Method
Note:
The file names in the example commands below can be modified according to actual needs.
“$VIP” in the commands and scripts should be replaced with the actual VIP of the CLB.
If multiple EIPs are bound to the CLB, each of them needs to be configured (see the sketch after this note).
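For example, when a CLB has several EIPs bound, each VIP gets its own loopback alias. Below is a minimal bash sketch for systems that use network-scripts (CentOS/Rocky style); the addresses and alias numbers are placeholders, and the per-OS details are in the table below.

```bash
#!/bin/bash
# Sketch: configure several CLB VIPs (placeholder addresses) as loopback aliases lo:1, lo:2, ...
# Assumes a network-scripts based system (CentOS/Rocky); replace the addresses with your actual VIPs/EIPs.
VIPS=("192.0.2.10" "192.0.2.11")

i=1
for vip in "${VIPS[@]}"; do
  cfg="/etc/sysconfig/network-scripts/ifcfg-lo:${i}"
  echo -e "DEVICE=lo:${i}\nIPADDR=${vip}\nNETMASK=255.255.255.255" > "${cfg}"
  ifup "lo:${i}"
  i=$((i + 1))
done
```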
Operating System | If the cloud host does not use cloud-init | If the cloud host uses cloud-init |
---|---|---|
CentOS 7 and below | 1. Create a virtual network card configuration file: `touch /etc/sysconfig/network-scripts/ifcfg-lo:1` <br>2. Add the following configuration to /etc/sysconfig/network-scripts/ifcfg-lo:1: <br>`DEVICE=lo:1` <br>`IPADDR=$VIP` <br>`NETMASK=255.255.255.255` <br>3. Start the virtual network card: `ifup lo:1` | Add the following content in UserData: <br>`#!/bin/bash` <br>`touch /etc/sysconfig/network-scripts/ifcfg-lo:1` <br>`echo -e "DEVICE=lo:1\nIPADDR=$VIP\nNETMASK=255.255.255.255" > /etc/sysconfig/network-scripts/ifcfg-lo:1` <br>`ifup lo:1` |
CentOS 8 and above | 1. Install network-scripts: `yum install network-scripts -y` <br>2. Create a virtual network card configuration file: `touch /etc/sysconfig/network-scripts/ifcfg-lo:1` <br>3. Add the following configuration to /etc/sysconfig/network-scripts/ifcfg-lo:1: <br>`DEVICE=lo:1` <br>`IPADDR=$VIP` <br>`NETMASK=255.255.255.255` <br>4. Start the virtual network card: `ifup lo:1` | Add the following content in UserData: <br>`#!/bin/bash` <br>`yum install network-scripts -y` <br>`touch /etc/sysconfig/network-scripts/ifcfg-lo:1` <br>`echo -e "DEVICE=lo:1\nIPADDR=$VIP\nNETMASK=255.255.255.255" > /etc/sysconfig/network-scripts/ifcfg-lo:1` <br>`ifup lo:1` |
Ubuntu 16.04 | 1. Create a virtual network card configuration file: `sudo touch /etc/network/interfaces.d/lo-cloud-init.cfg` <br>2. Add the following configuration to /etc/network/interfaces.d/lo-cloud-init.cfg: <br>`auto lo:1` <br>`iface lo:1 inet static` <br>`address $VIP` <br>`netmask 255.255.255.255` <br>3. Start the virtual network card: `sudo /etc/init.d/networking restart` | Add the following content in UserData: <br>`#!/bin/bash` <br>`sudo touch /etc/network/interfaces.d/lo-cloud-init.cfg` <br>`sudo echo -e "auto lo:1\niface lo:1 inet static\naddress $VIP\nnetmask 255.255.255.255" > /etc/network/interfaces.d/lo-cloud-init.cfg` <br>`sudo /etc/init.d/networking restart` |
Ubuntu 18.04 / Ubuntu 20.04 | 1. Create a virtual network card configuration file: `sudo touch /etc/netplan/lo-cloud-init.yaml` <br>2. Add the following configuration to /etc/netplan/lo-cloud-init.yaml (note the indentation on each line): <br>`network:` <br>`  ethernets:` <br>`    lo:` <br>`      addresses:` <br>`        - $VIP/32` <br>3. Apply the configuration: `sudo netplan apply` | Add the following content in UserData (note the indentation on each line): <br>`#!/bin/bash` <br>`sudo touch /etc/netplan/lo-cloud-init.yaml` <br>`sudo echo -e "network:\n  ethernets:\n    lo:\n      addresses:\n        - $VIP/32" > /etc/netplan/lo-cloud-init.yaml` <br>`sudo netplan apply` |
Debian 10.0 | 1. Create a virtual network card configuration file: `touch /etc/network/interfaces.d/lo-cloud-init` <br>2. Add the following configuration to /etc/network/interfaces.d/lo-cloud-init: <br>`auto lo:1` <br>`iface lo:1 inet static` <br>`address $VIP` <br>`netmask 255.255.255.255` <br>3. Start the virtual network card: `/etc/init.d/networking restart` | Add the following content in UserData: <br>`#!/bin/bash` <br>`touch /etc/network/interfaces.d/lo-cloud-init` <br>`echo -e "auto lo:1\niface lo:1 inet static\naddress $VIP\nnetmask 255.255.255.255" > /etc/network/interfaces.d/lo-cloud-init` <br>`/etc/init.d/networking restart` |
Rocky Linux 8.5 | 1. Install network-scripts: `yum install network-scripts -y` <br>2. Create a virtual network card configuration file: `touch /etc/sysconfig/network-scripts/ifcfg-lo:1` <br>3. Add the following configuration to /etc/sysconfig/network-scripts/ifcfg-lo:1: <br>`DEVICE=lo:1` <br>`IPADDR=$VIP` <br>`NETMASK=255.255.255.255` <br>4. Start the virtual network card: `ifup lo:1` | Add the following content in UserData: <br>`#!/bin/bash` <br>`yum install network-scripts -y` <br>`touch /etc/sysconfig/network-scripts/ifcfg-lo:1` <br>`echo -e "DEVICE=lo:1\nIPADDR=$VIP\nNETMASK=255.255.255.255" > /etc/sysconfig/network-scripts/ifcfg-lo:1` <br>`ifup lo:1` |
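Whichever method you use, you can check on the backend node that the VIP is actually present on the loopback interface, for example (192.0.2.10 is a placeholder for your VIP):

```bash
# List the addresses on the loopback interface; the VIP should appear with a /32 mask
ip addr show lo

# Or check for the specific VIP (placeholder address)
ip addr show lo | grep "192.0.2.10"
```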
Get Network Card VIP
For the internal network CLB, the $VIP here is the internal service IP address of the load balancer. For the external network CLB, it is the external service IP address of the load balancer (i.e., EIP). If you use automated script configuration, we recommend that you use APIs to get the VIP required for your configuration.
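As a sketch of such automation, the script below assumes a hypothetical CLI command `cloudcli clb describe-loadbalancer` that returns the load balancer's VIP; substitute your provider's actual CLB query API or CLI. The CentOS-style configuration shown is only one of the options from the table above.

```bash
#!/bin/bash
# Hypothetical sketch: obtain the CLB VIP via an API/CLI and configure it on lo:1.
# "cloudcli clb describe-loadbalancer" is a placeholder for the real query interface of your cloud provider.
VIP=$(cloudcli clb describe-loadbalancer --id "lb-example" --query "VipAddress" --output text)

echo -e "DEVICE=lo:1\nIPADDR=${VIP}\nNETMASK=255.255.255.255" > /etc/sysconfig/network-scripts/ifcfg-lo:1
ifup lo:1
```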
Windows Configuration Method
Step 1: Add lo Interface
In the “Device Manager”, select “Network Adapters” and click “Action”→“Add Legacy Hardware”→“Install the hardware that I manually select from a list” in the menu bar. Choose “Microsoft” as the manufacturer, select “Microsoft Loopback Adapter” as the network adapter, and click Next to complete the device creation.
Note: In Windows 8, Windows Server 2012 and newer versions, the “Microsoft Loopback Adapter” has been renamed to “Microsoft KM-TEST Loopback Adapter”.
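If the GUI wizard is inconvenient, the loopback adapter can usually also be installed from an elevated command prompt with Microsoft's DevCon utility. This is only a sketch and assumes DevCon (shipped with the Windows Driver Kit) is available on the host, which it is not by default.

```bat
rem Install the Microsoft (KM-TEST) Loopback Adapter using DevCon
devcon.exe install %WINDIR%\inf\netloop.inf *msloop
```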
Step 2: Configure lo Interface
For the internal network CLB, the IP of the lo interface is the internal service IP address of the load balancer; for the external network CLB, it is the external service IP address of the load balancer (i.e., the EIP). In the “Network and Sharing Center”, select Change adapter settings and configure this IP address on the lo interface.
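Alternatively, the VIP can be assigned to the loopback adapter from an elevated command prompt. In the sketch below, “Loopback” stands for the name of the adapter created in Step 1 and 192.0.2.10 is a placeholder for the actual VIP.

```bat
rem Assign the CLB VIP to the loopback adapter with a /32 (host) mask
netsh interface ipv4 add address "Loopback" 192.0.2.10 255.255.255.255
```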
Step 3: Activate lo Interface
Run the following commands in “cmd” (as an administrator), where $LOCAL represents the name of the local network interface and $LO represents the name of the loopback interface. They enable the weak host model so that the node can receive and send traffic addressed to the VIP held on the loopback interface.

```bat
@echo off
rem Allow the physical interface to receive and send packets whose IP belongs to another interface (weak host model)
netsh interface ipv4 set interface "$LOCAL" weakhostreceive=enabled
netsh interface ipv4 set interface "$LOCAL" weakhostsend=enabled
rem Apply the same settings to the loopback interface that carries the VIP
netsh interface ipv4 set interface "$LO" weakhostreceive=enabled
netsh interface ipv4 set interface "$LO" weakhostsend=enabled
pause
```
We recommend logging in via VNC to perform the Windows configuration. If the settings above do not take effect, restart the network adapter or the relevant service after running the netsh commands and check again. In essence, as long as the VIP of the load balancer is configured on the backend service instances, the setup works regardless of their operating system.
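To double-check the result, the weak host settings and the VIP can also be inspected with netsh, for example (using the same $LO placeholder as above):

```bat
rem Show the parameters of the loopback interface, including the weak host send/receive state
netsh interface ipv4 show interface "$LO"

rem List the IPv4 addresses on the loopback interface to confirm the VIP is present
netsh interface ipv4 show addresses "$LO"
```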