
Network Isolation Strategy - NetworkPolicy

In a UK8S cluster, all Pods are interconnected by default: any Pod can both receive requests from and send requests to any other Pod in the cluster.

In practice, however, network isolation is essential for business security. This article describes how to implement network isolation in UK8S.

Pre-installation Check

⚠️ Before installing the Calico network isolation plugin, make sure the CNI version is greater than or equal to 19.12.1; otherwise the installation will delete the existing network configuration on the Node and make the Pod network unavailable. For checking and upgrading the CNI version, please refer to: CNI Network Plugin Upgrade.

The Kubernetes version must be >= 1.16.4 and <= 1.24.12, and the cluster must have external network access in order to pull images hosted outside of Uhub.

Check whether the ipamd component is deployed in the cluster:

kubectl -n kube-system get ds cni-vpc-ipamd

If it is not deployed, you can skip the following check. If ipamd is in use, confirm that it has Calico network policy support enabled by using the following command to check whether the --calicoPolicyFlag parameter is true:

kubectl -n kube-system get ds cni-vpc-ipamd -o=jsonpath='{.spec.template.spec.containers[0].args}{"\t"}{"\n"}'

["--availablePodIPLowWatermark=3","--availablePodIPHighWatermark=50","--calicoPolicyFlag=true","--cooldownPeriodSeconds=30"]

If the parameter is not true, enable it with the following command:

kubectl -n kube-system patch ds cni-vpc-ipamd -p '{"spec":{"template":{"spec":{"containers":[{"name":"cni-vpc-ipamd","args":["--availablePodIPLowWatermark=3","--availablePodIPHighWatermark=50","--calicoPolicyFlag=true","--cooldownPeriodSeconds=30"]}]}}}}'
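The jsonpath output shown above can also be checked programmatically. A minimal Python sketch (the function name is illustrative, not part of any UK8S tooling):

```python
import json

def calico_policy_enabled(args_json: str) -> bool:
    """Return True if the ipamd container args enable Calico network policy."""
    args = json.loads(args_json)
    return "--calicoPolicyFlag=true" in args

# Example: the JSON array printed by the kubectl jsonpath query above
output = '["--availablePodIPLowWatermark=3","--availablePodIPHighWatermark=50","--calicoPolicyFlag=true","--cooldownPeriodSeconds=30"]'
print(calico_policy_enabled(output))  # True
```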

1. Plugin Installation

To implement network isolation in UK8S, the Felix and Typha components of Calico must be deployed. Both components are containerized and can be installed directly in UK8S with the kubectl command below.

kubectl apply -f https://docs.ucloud-global.com/uk8s/yaml/policy_calico-policy-only.yaml

2. NetworkPolicy Rule Analysis

After installing Calico's network isolation components, we can create NetworkPolicy objects in UK8S to control access to Pods, as shown below.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978

Below is a brief description of the function of each parameter:

  • spec.podSelector: Determines the scope of the NetworkPolicy, i.e., which Pods it applies to. The example above takes effect for Pods labeled role=db in the default namespace. Note that NetworkPolicy is a namespace-scoped resource object.

  • spec.ingress.from: Inbound access control, i.e., which sources are allowed to send requests. Three selector types are supported: IP block, namespace, and Pod. The example above allows requests from source addresses in 172.17.0.0/16 excluding 172.17.1.0/24, from any Pod in a namespace labeled project=myproject, or from Pods labeled role=frontend in the default namespace.

  • spec.ingress.ports: Declares the ports open to inbound access; if omitted, all ports are open by default. The example above allows access to port 6379 only. Together with from, it forms a logical AND: only sources permitted by the from rules may access TCP port 6379.

  • spec.egress: Declares the allowed destination addresses, analogous to from. The example above permits outbound requests only to IP addresses in the 10.0.0.0/24 subnet, and only to TCP port 5978 of those addresses.

From the description above, it should be clear that NetworkPolicy is a whitelist mechanism: once a NetworkPolicy selects a Pod, all traffic of the declared types is denied unless explicitly allowed.
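As a concrete illustration of this whitelist behavior, the following policy (a standard pattern from the upstream Kubernetes documentation, not specific to UK8S) denies all inbound traffic to every Pod in the default namespace until other policies explicitly allow it:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}     # empty selector: selects all Pods in the namespace
  policyTypes:
  - Ingress           # no ingress rules are listed, so all inbound traffic is denied
```

Because NetworkPolicies are additive, later policies that allow specific sources simply punch holes in this default deny.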

3. Examples

3.1 Limiting a Group of Pods to Only Access Resources Within the VPC (No External Network Access)

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: pod-egress-policy
spec:
  podSelector:
    matchLabels:
      pod: internal-only
  egress:
  - to:
    - ipBlock:
        cidr: 10.9.0.0/16  # Replace with your VPC CIDR

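Note that if the VPC CIDR in the policy above does not cover the regional public service segment, the selected Pods will also lose internal DNS and fail ULB health checks (see Section 4). A hedged variant that also allows the service segment (both CIDRs are example values; substitute your own VPC segment and your region's public service segment):

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: pod-egress-policy-with-service-segment
spec:
  podSelector:
    matchLabels:
      pod: internal-only
  egress:
  - to:
    - ipBlock:
        cidr: 10.9.0.0/16     # example VPC segment
    - ipBlock:
        cidr: 10.23.248.0/21  # example regional public service segment
```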
3.2 Limiting the Source IP of a Service Exposed to the Public Network

First, create an application that exposes its service to the public network via an external ULB4:

apiVersion: v1
kind: Service
metadata: 
  name: {{channelName}}-nginx
  labels:
    app: {{channelName}}-nginx
spec: 
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports: 
    - protocol: TCP
      port: 80
  selector:
    app: {{channelName}}-nginx
---
apiVersion: v1
kind: Pod
metadata:
  name: test-nginx
  labels:
    app: {{channelName}}-nginx
spec:
  containers:
  - name: nginx
    image: uhub.ucloud-global.com/ucloud/nginx:1.9.2
    ports:
    - containerPort: 80

After the application above is created, it can be accessed directly via the external ULB IP. Suppose we now want the application to be accessible only from the office environment, whose egress IP is assumed to be 106.10.10.10.

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: access-nginx
spec:
  podSelector:
    matchLabels:
      app: {{channelName}}-nginx
  ingress:
  - from:
    - ipBlock:
        cidr: 106.10.10.10/32  # When validating, change to your client's egress IP (or CIDR).
    - ipBlock:
        cidr: 10.23.248.0/21  # Regional public service segment; without it the ULB health check fails and the policy will not work as intended, see Section 4 below

4. Allow VPC Public Service Segment

The public service segment is mainly used for internal DNS, ULB health checks, etc. When configuring a NetworkPolicy, it is recommended to always allow your region's public service segment.

For the public service segments in each region, please refer to the VPC documentation: VPC Segment Usage Restrictions
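For reference, a standalone policy that allows ingress from the public service segment for all Pods in a namespace might look as follows (10.23.248.0/21 is the example segment used earlier; substitute the value for your region from the VPC documentation):

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-service-segment
spec:
  podSelector: {}             # all Pods in the namespace
  ingress:
  - from:
    - ipBlock:
        cidr: 10.23.248.0/21  # replace with your region's public service segment
```

Since NetworkPolicies are additive, this can be applied alongside stricter policies to keep ULB health checks working.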