Cluster Node Configuration Recommendation
1. Master Configuration Recommendation
The Master specification depends on the scale of the cluster: the larger the cluster, the higher the Master specification required. The recommended Master node configuration for different cluster scales is as follows:
| Node Scale | Master Specification |
|---|---|
| 1-10 nodes | >= 2 cores, 4 GB |
| 10-100 nodes | >= 4 cores, 8 GB |
| 100-250 nodes | >= 8 cores, 16 GB |
| 250-500 nodes | >= 16 cores, 32 GB |
| 500-1000 nodes | >= 32 cores, 64 GB |
| Over 1000 nodes | Contact us |
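For illustration, the table above can be expressed as a simple lookup. The helper function below is hypothetical (not part of UK8S); the thresholds and specifications come directly from the table:

```python
def recommend_master_spec(node_count: int) -> str:
    """Return the recommended Master specification for a given
    cluster size, following the table above (illustrative helper)."""
    tiers = [
        (10, ">=2 cores, 4 GB"),
        (100, ">=4 cores, 8 GB"),
        (250, ">=8 cores, 16 GB"),
        (500, ">=16 cores, 32 GB"),
        (1000, ">=32 cores, 64 GB"),
    ]
    for limit, spec in tiers:
        if node_count <= limit:
            return spec
    return "Contact us"

print(recommend_master_spec(50))    # falls in the 10-100 node tier
print(recommend_master_spec(1500))  # beyond 1000 nodes
```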
The default (and minimum) size of the UK8S Master node system disk is 40 GB, which is used to store etcd data and related configuration files.
If the cluster scale grows and the Master node configuration needs to be upgraded, change the configuration one node at a time on the cloud server node management page; all Master nodes must be upgraded to the same configuration.
Before upgrading the next Master node, make sure the other two Master nodes are in the Ready state and that all Kubernetes core components on the upgraded Master node are active. For troubleshooting Master node core components, please refer to: Common Fault Handling of Node
2. How to Choose the Size of Node Configuration
A UK8S cluster requires each Node to have a configuration of no less than 2C4G. The default (and minimum) system disk is 40 GB, which is used for storing related configuration files and similar data.
About the Node Resource Reservation Strategy
Before choosing the configuration of a UK8S Node, note that each Node reserves 1 GB of memory and 0.2 CPU cores by default to ensure stable operation of the system. These reserved resources are used by the operating system and Kubernetes-related service processes.
In addition, when available memory falls below 5%, the node starts evicting Pods according to Pod resource priority. The memory effectively available to Pods is therefore approximately {Memory of Node} - 1 GB - 5% (for example, on a 4 GB node, roughly 2.8 GB is available to Pods). The number of Pods that can be created on a single node is tied to the CPU core count: maximum Pods per node = CPU cores x 8 (for example, 2 cores support up to 16 Pods, and 4 cores up to 32 Pods).
Therefore, we recommend a Node configuration of at least 2C4G; this is the baseline configuration for normal cluster operation.
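The reservation arithmetic above can be sketched as follows. This is a minimal illustration of the formulas stated in this section (the function names are ours, not part of UK8S):

```python
def usable_pod_memory_gb(node_memory_gb: float) -> float:
    """Approximate memory available to Pods: node memory minus the
    1 GB system reservation minus the 5% eviction threshold."""
    return node_memory_gb - 1.0 - 0.05 * node_memory_gb

def max_pods(cpu_cores: int) -> int:
    """Maximum Pods per node: CPU cores x 8."""
    return cpu_cores * 8

print(usable_pod_memory_gb(4))  # ~2.8 GB usable on a 4 GB node
print(max_pods(2))              # 16 Pods on a 2-core node
```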
UK8S Node instances utilize the system disk for storage, but you can optionally mount a data disk during node creation (or later on the host). If a data disk is mounted, it will be used for storing Docker images locally. Otherwise, Docker images and other data will be stored on the system disk. Ensure sufficient disk space on the chosen storage location to prevent automatic cleanup of images or Pods due to insufficient space.
Production Environment Node Configuration Recommendations
When configuring Nodes for your UK8S production environment, several factors come into play, including the overall CPU core requirements of your cluster, your desired fault tolerance, and the specific types of workloads you’ll be running.
CPU Core Allocation
- Balancing Fault Tolerance and Resource Efficiency: A good starting point is to aim for a CPU core range of 4 to 32 cores per Node. This approach balances fault tolerance with efficient resource utilization.
- Determining Node Count: The number of Nodes you choose depends on your cluster’s total CPU core needs and your desired fault tolerance. For example, if your cluster requires 240 cores and you can tolerate a 10% failure rate, you might opt for 10 Nodes with 24 cores each. For higher fault tolerance (e.g., less than 5%), consider a larger number of Nodes with fewer cores.
- Avoiding Extreme Configurations: It’s best to avoid extremely small Node configurations (e.g., 2 cores). These can lead to inefficient resource allocation and make it challenging to schedule Pods that require more resources.
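The sizing rule from the bullets above can be sketched as a small calculation: pick enough Nodes that losing one stays within your failure tolerance, then size each Node accordingly, clamped to the recommended 4-32 core range. This is an illustrative sketch, not a UK8S API:

```python
import math

def plan_nodes(total_cores: int, max_loss_fraction: float,
               min_cores: int = 4, max_cores: int = 32):
    """Choose a node count so losing one node removes at most
    max_loss_fraction of capacity, then size each node to cover
    total_cores, clamped to the recommended per-Node core range."""
    node_count = math.ceil(1 / max_loss_fraction)
    cores_per_node = math.ceil(total_cores / node_count)
    cores_per_node = min(max(cores_per_node, min_cores), max_cores)
    return node_count, cores_per_node

# 240 total cores with a 10% failure tolerance -> 10 Nodes of 24 cores
print(plan_nodes(240, 0.10))
```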
CPU:Memory Ratio
- Optimizing for Workload Type: Choose Node configurations that match the CPU and memory requirements of your applications. For CPU-intensive workloads, consider a 1:2 CPU:Memory ratio. For Java applications, a 1:4 or 1:8 ratio might be suitable.
- Utilizing Node Affinity: For mixed workloads, label your Nodes and use node affinity to ensure Pods are scheduled to appropriate Nodes based on their resource needs.
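As a sketch of the node-affinity approach above: after labeling a Node (for example with `kubectl label nodes <node-name> workload-type=cpu-intensive`, where `workload-type` is a hypothetical label key of your choosing), a Pod can require that label via `nodeAffinity`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cpu-heavy-app          # hypothetical example Pod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: workload-type  # must match the label applied to the Node
            operator: In
            values: ["cpu-intensive"]
  containers:
  - name: app
    image: nginx
```

With this in place, the Pod is only scheduled onto Nodes carrying the matching label, keeping CPU-intensive workloads on the 1:2 CPU:Memory Nodes.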