
Deploy Prometheus


⚠️ This document applies only to clusters of deployment version 1.22 and below. For clusters of version 1.22 and above, please use the Monitoring Center in the console (Cluster Details > Monitoring Center); refer to: Monitoring Center Overview.

Introduction

For a Kubernetes cluster, the objects that need to be monitored can be divided into the following categories:

  • Kubernetes System Components: Kubernetes' built-in system components generally include the apiserver, controller-manager, etcd, kubelet, etc. To ensure the normal operation of the cluster, we need to know their current operating status in real time.

  • Underlying Infrastructure: Resource status, kernel events, etc., of the cluster's nodes (virtual or physical machines).

  • Kubernetes Objects: Mainly the workload objects in Kubernetes, such as Deployment, DaemonSet, Pod, etc.

  • Application Indicators: Metrics the application itself cares about, such as HTTP request counts.

Deploy Prometheus

When deploying Prometheus in Kubernetes, in addition to doing it manually, CoreOS has open-sourced the Prometheus Operator and kube-prometheus projects, which make installing and deploying Prometheus in K8S extremely simple. Below we introduce how to deploy kube-prometheus in UK8S.

1、About Prometheus-Operator

At its core, the Prometheus Operator is a set of custom resource definitions (CRDs) together with a controller that implements them. The Operator's controller watches for changes to these custom resources under RBAC permissions and, based on the resource definitions, automates tasks such as deploying the Prometheus Server and managing its configuration.

In K8S, the smallest basic unit of monitoring metrics is the group of Pods behind a Service, which corresponds to a target in Prometheus. The prometheus-operator therefore abstracts a corresponding CRD type, "ServiceMonitor": a ServiceMonitor looks up the corresponding Service, and the Pods or Endpoints behind it, via spec.selector.matchLabels, and specifies the metrics URL path through spec.endpoints. Take CoreDNS below as an example: the namespace of the target objects to be scraped is kube-system, their label is k8s-app: kube-dns, and the port is metrics.

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    k8s-app: coredns
  name: coredns
  namespace: monitoring
spec:
  endpoints:
  - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    interval: 15s
    port: metrics
  jobLabel: k8s-app
  namespaceSelector:
    matchNames:
    - kube-system
  selector:
    matchLabels:
      k8s-app: kube-dns
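
To confirm that a ServiceMonitor like this will actually find targets, you can check that a Service carrying the selected label exists and has a matching named port. A quick check for the CoreDNS example above might look like this, assuming the standard kube-dns Service name (exact output varies by cluster):

# List Services in kube-system carrying the label selected by the ServiceMonitor
kubectl -n kube-system get svc -l k8s-app=kube-dns

# The ServiceMonitor's "port: metrics" must match one of the Service's named ports
kubectl -n kube-system get svc kube-dns -o jsonpath='{.spec.ports[*].name}'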

2、Preparatory Work

SSH to any Master node and clone the kube-prometheus project. The project originates from CoreOS's open-source kube-prometheus; compared with the original project, it mainly makes the following optimizations:

  • Change the data storage medium of Prometheus and AlertManager from emptyDir to UDisk, to enhance stability and avoid data loss;
  • Unify the image source to UHub, to avoid image pull failures;
  • Add a dedicated uk8s directory for configuring the controller-manager, scheduler, and etcd;
  • Organize the manifest files into directories for easy modification and reading.
yum install git -y
git clone --depth=1 -b kube-prometheus https://github.com/ucloud/uk8s-demo.git
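
After cloning, a quick look at the layout helps orient the following steps; the directory names in the comment below are the ones applied later in this document:

cd uk8s-demo
ls manifests
# Expect subdirectories such as operator/, alertmanager/, node-exporter/,
# kube-state-metrics/, grafana/, prometheus/, serviceMonitor/ and uk8s/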

3、Modify UK8S Specific File Configuration Parameters

In the manifests directory there is a uk8s directory. These configuration files are mainly used to manually create Endpoints and Services for the controller-manager, scheduler, and etcd in UK8S, so that the Prometheus Server can collect the monitoring data of these three components through ServiceMonitors.

cd uk8s-demo/manifests/uk8s

# Modify the following two files, replacing the IPs in them with the private IPs of your own UK8S Master nodes.
vi controllerManagerAndScheduler_ep.yaml
vi etcd_ep.yaml

4、Note

Step 3 above said to modify the two files controllerManagerAndScheduler_ep.yaml and etcd_ep.yaml; here is the reason. Since UK8S's etcd, Scheduler, and Controller-Manager are all deployed as binaries, in order to scrape their metrics through a ServiceMonitor we must create Service objects for them in K8S. However, because these three components are not Pods, we also need to manually create Endpoints for them.

apiVersion: v1
kind: Endpoints
metadata:
  labels:
    k8s-app: etcd
  name: etcd
  namespace: kube-system
subsets:
- addresses:
  - ip: 10.7.35.44    # Replace with the private IP of the master node
    nodeName: etc-master2
  ports:
  - name: port
    port: 2379
    protocol: TCP
- addresses:
  - ip: 10.7.163.60   # Ditto
    nodeName: etc-master1
  ports:
  - name: port
    port: 2379
    protocol: TCP
- addresses:
  - ip: 10.7.142.140  # Ditto
    nodeName: etc-master3
  ports:
  - name: port
    port: 2379
    protocol: TCP
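
An Endpoints object like this only takes effect when paired with a selector-less Service of the same name and namespace; Kubernetes then adopts the manually created Endpoints as that Service's backends. A minimal sketch of such a Service for etcd follows; the actual manifest shipped in the uk8s directory may differ in its details:

# Sketch: a selector-less Service named "etcd", paired with the manual Endpoints above
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: etcd
  name: etcd
  namespace: kube-system
spec:
  clusterIP: None
  ports:
  - name: port
    port: 2379
    protocol: TCP
EOF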

5、Deploy Prometheus Operator

First create a namespace named monitoring. After the namespace is created successfully, deploy the Operator directly. The Prometheus Operator starts as a Deployment and creates the several CRD objects mentioned earlier.

# Create the Namespace
kubectl apply -f 00namespace-namespace.yaml

# Create a Secret so that the Prometheus Server can fetch ETCD data
kubectl -n monitoring create secret generic etcd-certs \
  --from-file=/etc/kubernetes/ssl/ca.pem \
  --from-file=/etc/kubernetes/ssl/etcd.pem \
  --from-file=/etc/kubernetes/ssl/etcd-key.pem

# Create the Operator
kubectl apply -f operator/

# View the Operator's startup state
kubectl get po -n monitoring

# View the CRDs
kubectl get crd -n monitoring
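
If the Operator started correctly, the CRD list should now include the monitoring.coreos.com resource types. A spot check, assuming the Deployment is named prometheus-operator as in upstream kube-prometheus:

# CRDs are cluster-scoped; filter for the ones installed by the Operator
kubectl get crd | grep monitoring.coreos.com

# Block until the Operator Deployment reports ready
kubectl -n monitoring rollout status deploy/prometheus-operator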

6、Deploy Complete CRD Set

Next, deploy the complete set of components; the more critical ones include Prometheus Server, Grafana, AlertManager, the ServiceMonitors, and Node-Exporter. All of these images have been mirrored to UHub official images, so pull speed is relatively fast.

kubectl apply -f adapter/
kubectl apply -f alertmanager/
kubectl apply -f node-exporter/
kubectl apply -f kube-state-metrics/
kubectl apply -f grafana/
kubectl apply -f prometheus/
kubectl apply -f serviceMonitor/
kubectl apply -f uk8s/

We can use the following command to check the Pods' startup and image pull status.

kubectl -n monitoring get po

Since all the Services are of type ClusterIP by default, we change the Prometheus Service to type LoadBalancer for convenience of demonstration.

kubectl edit svc/prometheus-k8s -n monitoring
# Change to: type: LoadBalancer

kubectl get svc -n monitoring
# Get Prometheus Server's EXTERNAL-IP and port
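
If you prefer a non-interactive change, the same edit can be applied with kubectl patch; a sketch using the Service name shown above:

# Switch the Prometheus Service to a LoadBalancer without opening an editor
kubectl -n monitoring patch svc prometheus-k8s -p '{"spec":{"type":"LoadBalancer"}}'
kubectl -n monitoring get svc prometheus-k8s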

You can see that the monitoring metrics of all K8S components have been collected.

7、Monitor Application Indicators

Let's first deploy a set of Pods and a Service. The main process inside this image exposes metrics on port 8080.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: uhub.ucloud-global.com/uk8s_public/instrumented_app:latest
        ports:
        - name: web
          containerPort: 8080
---
kind: Service
apiVersion: v1
metadata:
  name: example-app
  labels:
    app: example-app
spec:
  selector:
    app: example-app
  ports:
  - name: web
    port: 8080
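
Before creating the ServiceMonitor, it can help to confirm that the Pods actually serve metrics. One way, assuming the manifests above were applied to the default namespace:

# Port-forward the example-app Deployment and fetch its metrics endpoint
kubectl port-forward deployment/example-app 8080:8080 &
curl -s http://127.0.0.1:8080/metrics | head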

Then create a ServiceMonitor to tell the Prometheus Server that it needs to monitor the metrics of the group of Pods behind the Service labeled app: example-app.

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app
  labels:
    team: frontend
spec:
  selector:
    matchLabels:
      app: example-app
  endpoints:
  - port: web
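
After the ServiceMonitor is applied, Prometheus reloads its scrape configuration automatically; it may take a scrape interval or two for the new targets to appear. You can confirm the new job via the Prometheus HTTP API, where <EXTERNAL-IP> and <PORT> stand for the LoadBalancer values obtained earlier:

# List active targets and filter for the example-app job
curl -s http://<EXTERNAL-IP>:<PORT>/api/v1/targets | grep example-app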

Open the Prometheus Server in a browser and go to the Targets page: you will find that the new targets are already being scraped, and that the corresponding scrape configuration has been generated and loaded automatically.

8、Note

This document applies only to Kubernetes versions 1.14 and above. If your Kubernetes version is below 1.14, you can use release-0.1.