Nginx Ingress

This article applies to K8S version 1.19+

What is Ingress

Ingress is the entry point for accessing services inside a Kubernetes cluster from the outside, providing Layer 7 (HTTP/HTTPS) load balancing for services inside the cluster.

Under normal circumstances, Services and Pods can only be accessed within the cluster via an IP address, and all traffic reaching the cluster boundary is either discarded or forwarded elsewhere. In the previous Service chapter, we described how to create a LoadBalancer type Service: with the help of the extended interface provided by Kubernetes, UK8S creates a load balancer (ULB) for that Service to receive external traffic and route it into the cluster. However, in scenarios such as microservices, where each Service would need its own load balancer, the management cost is clearly too high, so Ingress came into being.

We can understand Ingress as a "Service" for Services: it provides load balancing that proxies different backend Services. In an Ingress we can configure externally reachable URLs, load balancing, SSL, name-based virtual hosting, and so on.

Below we will understand the use of Ingress by deploying the Nginx Ingress Controller in UK8S.

I. Deployment of Ingress Controller

In order for Ingress to work properly, an Ingress Controller must be deployed in the cluster. Unlike other controllers, such as the Deployment controller, which run automatically as part of the kube-controller-manager binary when the cluster starts, the Ingress Controller has to be deployed separately. The Kubernetes community offers several Ingress Controllers to choose from, including:

  1. Nginx
  2. HAProxy
  3. Envoy
  4. Traefik

Here we choose Nginx as the Ingress Controller. Deploying the Nginx Ingress Controller is very easy: just execute the following command.

kubectl apply -f https://docs.ucloud-global.com/uk8s/yaml/ingress_nginx/mandatory_1.19.yaml
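
After applying, you can confirm that the controller Pod is up and running; the manifest creates its resources in the ingress-nginx namespace:

kubectl get pods -n ingress-nginx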

This file, mandatory_1.19.yaml, defines the Ingress Controller. We can download the yaml file to study it carefully. Here’s a brief rundown of what some of the yaml fields mean.

This yaml defines a set of Pod replicas that run the ingress-nginx-controller image. The main job of these Pods is to watch for changes to Ingress objects and the backend Services they proxy. When a user creates a new Ingress object, the controller generates a corresponding Nginx configuration file (the familiar /etc/nginx/nginx.conf) from the content of the Ingress object and starts an Nginx service with it. If the Ingress object is updated, the controller updates this configuration file. Note that if only a proxied Service object changes, the Nginx service managed by the controller does not need to be reloaded, because ingress-nginx-controller implements dynamic upstream configuration via Nginx Lua.
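
If you want to inspect the configuration the controller generates, you can read it from inside a controller Pod (replace the placeholder with a Pod name from your own cluster):

kubectl exec -n ingress-nginx <controller-pod-name> -- cat /etc/nginx/nginx.conf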

In addition, this yaml file defines a ConfigMap that ingress-nginx-controller uses to customize the Nginx configuration file, as shown in the example below:

apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.2.1
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  allow-snippet-annotations: 'true'
  map-hash-bucket-size: "128"
  ssl-protocols: SSLv2

Note that in a ConfigMap, keys and values only support strings, so values such as integers need to be wrapped in double quotes, for example "128" in map-hash-bucket-size above. Detailed information can be found at Nginx-Ingress-ConfigMap.

Essentially, this ingress-nginx-controller is an Nginx load balancer that updates itself according to changes to the Ingress object and the backend Service being proxied.

The container uses UTC by default.
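
If you want the controller to use the node's local time instead, one common approach, shown here only as a sketch (it is not part of the manifest above and assumes the controller container is named controller and that the node provides /etc/localtime), is to mount the node's timezone file into the controller container:

spec:
  template:
    spec:
      containers:
        - name: controller
          volumeMounts:
            - name: localtime
              mountPath: /etc/localtime
              readOnly: true
      volumes:
        - name: localtime
          hostPath:
            path: /etc/localtime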

II. External access to nginx ingress

Above, we deployed an nginx ingress controller in UK8S. To make it accessible from outside the cluster, we also created a LoadBalancer type Service, as shown below.

apiVersion: v1
kind: Service
metadata:
  annotations:
    "service.beta.kubernetes.io/ucloud-load-balancer-type": "inner"
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.2.1
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
      appProtocol: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller

The sole job of this Service is to expose ports 80 and 443 of all Pods carrying the ingress-nginx labels. We can get the external access entry of this Service in the following way.

It's important to note that the ULB created for the LoadBalancer Service in this example is in internal network mode. If you want a ULB on the external network, change the Service's metadata.annotations."service.beta.kubernetes.io/ucloud-load-balancer-type" to "outer". For more parameters, please read the official documentation - ULB Parameter Description.
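
For example, the annotation on the Service above would then be:

metadata:
  annotations:
    "service.beta.kubernetes.io/ucloud-load-balancer-type": "outer"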

$ kubectl get svc -n ingress-nginx
NAME                       TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                      AGE
ingress-nginx-controller   LoadBalancer   172.30.48.77   xxx.yy.xxx.yy   80:30052/TCP,443:31285/TCP   14m
......

After deploying the Ingress Controller and its required Service, we can now use it to proxy other Services inside the cluster.

III. Create two applications

In the yaml below, we define two applications based on the echo_nginx image, which mainly outputs some of nginx's own global variables.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app-1
  labels:
    app:  demo-app-1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-app-1
  template:
    metadata:
      labels:
        app: demo-app-1
    spec:
      containers:
        - name: demo-app-1
          image: uhub.ucloud-global.com/jenkins_k8s_cicd/echo_nginx:v11
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: demo-app-1-svc
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
  selector:
    app: demo-app-1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app-2
  labels:
    app:  demo-app-2
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo-app-2
  template:
    metadata:
      labels:
        app: demo-app-2
    spec:
      containers:
        - name: demo-app-2
          image: uhub.ucloud-global.com/jenkins_k8s_cicd/echo_nginx:v11
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: demo-app-2-svc
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
  selector:
    app: demo-app-2

We will save the above yaml as demo-app.yaml and create the applications with the following command.

kubectl apply -f demo-app.yaml
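
Before moving on, you can confirm that the Deployments and Services exist:

kubectl get deployments demo-app-1 demo-app-2
kubectl get services demo-app-1-svc demo-app-2-svc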

IV. Defining Ingress Objects

We have deployed the nginx ingress controller, exposed it to the external network, and created two applications. Next, we can define an Ingress object to proxy the two applications outside the cluster.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-app-ingress
spec:
  ingressClassName: nginx
  defaultBackend:
    service:
      name: demo-app-1-svc
      port:
        number: 80
  rules:
    - host: demo-app.example.com
      http:
        paths:
          - path: /demo-app-1
            pathType: Prefix
            backend:
              service:
                name: demo-app-1-svc
                port:
                  number: 80
          - path: /demo-app-2
            pathType: Prefix
            backend:
              service:
                name: demo-app-2-svc
                port:
                  number: 80

The above yaml file defines an Ingress object, where ingress.spec.rules is the set of proxy rules for the Ingress.

First, let's look at the host field. Its value must be a string in standard domain name format and cannot be an IP address. The value of the host field is the entry point of this Ingress: when a user accesses demo-app.example.com, they are actually accessing this Ingress object, and Kubernetes then uses the IngressRule to decide where to forward the request.

Next is the path field. You can simply think of each path here as corresponding to a backend Service. In our example, two paths are defined; they correspond to the Services of the demo-app-1 and demo-app-2 Deployments respectively (i.e., demo-app-1-svc and demo-app-2-svc).

Each HTTP rule contains the following information: a host (e.g., demo-app.example.com) and a list of paths (e.g., /demo-app-1 and /demo-app-2), where each path is associated with a backend (e.g., port 80 of demo-app-1-svc). An inbound request must match both the host and a path before the load balancer forwards the traffic to the corresponding backend.

When a request matches no rule's host and path, the default backend defined in ingress.spec.defaultBackend takes effect: the unmatched traffic is routed to the defaultBackend for handling.

We save the above yaml as ingress.yaml and create an ingress object directly with the following command:

kubectl apply -f ingress.yaml

Next, we can look at this Ingress object:

$ kubectl get ingresses.networking.k8s.io
NAME               CLASS    HOSTS                  ADDRESS   PORTS   AGE
demo-app-ingress   <none>   demo-app.example.com             80      5m39s
 
$ kubectl describe ingresses.networking.k8s.io demo-app-ingress
Name:             demo-app-ingress
Labels:           <none>
Namespace:        ingress-nginx
Address:
Default backend: demo-app-1-svc:80 (172.20.145.127:80,172.20.187.251:80)
Rules:
  Host                  Path  Backends
  ----                  ----  --------
  demo-app.example.com
                        /demo-app-1   demo-app-1-svc:80 (172.20.145.127:80,172.20.187.251:80)
                        /demo-app-2   demo-app-2-svc:80 (172.20.40.180:80,172.20.43.118:80,172.20.62.63:80)
Annotations:            <none>
Events:                 <none>

From the Rules section we can see that the Host we defined is demo-app.example.com, with two forwarding rules (Path) that route to demo-app-1-svc and demo-app-2-svc respectively.

Of course, in the Ingress yaml file, you can also define multiple Hosts to provide load balancing services for more domain names.

Next, we can access the applications we deployed earlier via the address and port of this Ingress. For example, when we visit http://demo-app.example.com/demo-app-2, it should be demo-app-2's Deployment that responds to the request. If you chose external network mode and bound an EIP when creating the LoadBalancer, you can access it directly from the external network; adding an entry to the local /etc/hosts file lets you access it by domain name. If it is in internal network mode, only resources within the VPC can access it through the Ingress.

$ cat /etc/hosts
......
 
xxx.yy.xxx.yy demo-app.example.com
$ curl http://demo-app.example.com/demo-app-2
Scheme: http
Server address: 172.20.43.118:80
Server name: demo-app-2-5f6c5df698-rvsmc
Date: 12/Jan/2022:02:56:38 +0000
URI: /demo-app-2
Request ID: ba34c07f5cc78e74629041df5568977a
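
To see the defaultBackend at work, request a path that matches neither rule; the response should come from a demo-app-1 Pod, because demo-app-1-svc is configured as the default backend:

$ curl http://demo-app.example.com/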

V. TLS Support

In the above ingress object, we did not specify a TLS certificate for the Host. The ingress controller supports encryption of the site by specifying a secret containing the TLS private key and certificate.

First, let's create a Secret containing tls.crt and tls.key. When generating the certificate, make sure the CN contains demo-app.example.com. (If you write the Secret manifest by hand, the certificate and key content must be base64-encoded; the kubectl create secret tls command below does this for you.)

$ HOST=demo-app.example.com
 
$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=${HOST}/O=${HOST}"
Generating a RSA private key
................................+++++
................................+++++
writing new private key to 'tls.key'
$ kubectl create secret tls demo-app-tls --key tls.key --cert tls.crt
secret/demo-app-tls created
 
$ kubectl describe secret demo-app-tls
Name:         demo-app-tls
Namespace:    ingress-nginx
Labels:       <none>
Annotations:  <none>
 
Type:  kubernetes.io/tls
 
Data
====
tls.crt:  1229 bytes
tls.key:  1708 bytes

Then, in the Ingress object, reference the Secret via the ingress.spec.tls field; the ingress controller will use it to encrypt the communication channel between the client and the Ingress.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-app-ingress
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - demo-app.example.com
      secretName: demo-app-tls
  defaultBackend:
    service:
      name: demo-app-1-svc
      port:
        number: 80
  rules:
    - host: demo-app.example.com
      http:
        paths:
          - path: /demo-app-1
            pathType: Prefix
            backend:
              service:
                name: demo-app-1-svc
                port:
                  number: 80
          - path: /demo-app-2
            pathType: Prefix
            backend:
              service:
                name: demo-app-2-svc
                port:
                  number: 80
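
Update ingress.yaml with the content above and apply it again:

kubectl apply -f ingress.yaml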

At this point, you can access the Ingress over HTTPS. Keep in mind that the certificate is self-signed, so when using tools such as curl you need to skip certificate verification. In a production environment, it is recommended to use a TLS certificate signed by a CA.

$ curl --insecure https://demo-app.example.com/demo-app-2
Scheme: http
Server address: 172.20.40.180:80
Server name: demo-app-2-5f6c5df698-pvr45
Date: 12/Jan/2022:03:19:58 +0000
URI: /demo-app-2
Request ID: 4cc35ddb1a301977f0477ded4c09d5df

VI. Setting Access Whitelist

In some scenarios, a business only allows access from specified IP addresses. This can be achieved with the annotation nginx.ingress.kubernetes.io/whitelist-source-range, whose value is a comma-separated list of CIDRs. An example follows:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-app-ingress
  annotations:
    nginx.ingress.kubernetes.io/whitelist-source-range: 172.16.0.0/16,172.18.0.0/16
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - demo-app.example.com
      secretName: demo-app-tls
  defaultBackend:
    service:
      name: demo-app-1-svc
      port:
        number: 80
  rules:
    - host: demo-app.example.com
      http:
        paths:
          - path: /demo-app-1
            pathType: Prefix
            backend:
              service:
                name: demo-app-1-svc
                port:
                  number: 80
          - path: /demo-app-2
            pathType: Prefix
            backend:
              service:
                name: demo-app-2-svc
                port:
                  number: 80
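
Apply the updated manifest, then verify from a client whose address is outside the allowed ranges; ingress-nginx should reject such requests with HTTP 403 (illustrative, your output may differ):

kubectl apply -f ingress.yaml

$ curl --insecure -o /dev/null -s -w "%{http_code}\n" https://demo-app.example.com/demo-app-2
403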