This post covers upgrading a Kubernetes cluster with one master and two worker nodes.
0. ENV
0.1 Environment
CentOS 7.x;
Kubernetes v1.23.4 -> v1.23.5.
0.2 Upgrade notes
The cluster consists of 3 hosts: 1 master and 2 workers. This one-master, multi-worker upgrade is meant as a reference.
master node * 1: k8s3-master
worker node * 2: k8s3-node1, k8s3-node2
1. Upgrading a kubeadm cluster
This page explains how to upgrade a Kubernetes cluster created with kubeadm from version 1.22.x to 1.23.x, and from version 1.23.x to 1.23.y (where y > x). Skipping minor versions when upgrading is not supported.
For information about upgrading clusters created with older versions of kubeadm, see the following pages:
Upgrading a kubeadm cluster from 1.21 to 1.22
Upgrading a kubeadm cluster from 1.20 to 1.21
Upgrading a kubeadm cluster from 1.19 to 1.20
Upgrading a kubeadm cluster from 1.18 to 1.19
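The "no skipping minor versions" rule can be checked mechanically before touching the cluster. A minimal sketch; the helper name `minor_skew_ok` and the sample versions are ours:

```shell
# Return success if upgrading from $1 to $2 stays within one minor version
# (and is not a downgrade across minors). Expects "vMAJOR.MINOR.PATCH".
minor_skew_ok() {
  from_minor=$(echo "$1" | cut -d. -f2)
  to_minor=$(echo "$2" | cut -d. -f2)
  skew=$((to_minor - from_minor))
  [ "$skew" -ge 0 ] && [ "$skew" -le 1 ]
}

minor_skew_ok v1.23.4 v1.23.5 && echo "patch upgrade: supported"
minor_skew_ok v1.22.9 v1.23.5 && echo "one minor step: supported"
minor_skew_ok v1.21.0 v1.23.5 || echo "skips a minor: not supported"
```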
1.1 The upgrade workflow at a high level is:
Upgrade the primary control plane node
Upgrade additional control plane nodes
Upgrade worker nodes
1.2 Before you begin
Make sure you read the release notes carefully.
The cluster should use a static control plane and etcd Pods, or external etcd.
Make sure to back up any important components, such as app-level state stored in a database. kubeadm upgrade does not touch your workloads, only components internal to Kubernetes, but backups are always a good practice.
Swap must be disabled.
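The swap requirement can be verified with a quick check. A sketch, assuming a Linux host; the helper name is ours, and it takes the swap table file as an argument so it can be exercised against sample data:

```shell
# Count active swap entries in a /proc/swaps-style table (header row skipped).
swap_active_count() {
  tail -n +2 "${1:-/proc/swaps}" | grep -c . || true
}

if [ -r /proc/swaps ]; then
  if [ "$(swap_active_count /proc/swaps)" -eq 0 ]; then
    echo "swap is disabled"
  else
    echo "swap still enabled: run swapoff -a and remove swap from /etc/fstab"
  fi
fi
```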
1.3 Additional information
Draining nodes is required before a minor-version kubelet upgrade. Control plane nodes could be running CoreDNS Pods or other critical workloads.
All containers are restarted after the upgrade, because the container spec hash value changes.
1.4 Determine which version to upgrade to
[root@k8s3-master ~]# yum list --showduplicates kubeadm --disableexcludes=kubernetes | tail -5
kubeadm.x86_64 1.23.1-0 kubernetes
kubeadm.x86_64 1.23.2-0 kubernetes
kubeadm.x86_64 1.23.3-0 kubernetes
kubeadm.x86_64 1.23.4-0 kubernetes
kubeadm.x86_64 1.23.5-0 kubernetes
# Find the latest 1.23 version in the list.
# It should look like 1.23.x-0, where x is the latest patch version; here that is 1.23.5-0.
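Picking the latest patch release from that listing can also be scripted with a version-aware sort (GNU `sort -V`). The hard-coded list below mirrors the yum output above and stands in for the real pipeline:

```shell
# Select the highest version string on stdin using version-aware sorting.
latest_patch() {
  sort -V | tail -n 1
}

# In practice you would feed it the real listing, e.g.:
#   yum list --showduplicates kubeadm --disableexcludes=kubernetes \
#     | awk '/^kubeadm/{print $2}' | latest_patch
printf '1.23.1-0\n1.23.5-0\n1.23.3-0\n1.23.4-0\n' | latest_patch   # → 1.23.5-0
```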
2. Upgrading control plane nodes
The upgrade procedure on control plane nodes should be executed one node at a time. First pick a control plane node to upgrade; it must have the /etc/kubernetes/admin.conf file.
2.1 Upgrade the first control plane node (master)
1) Check the version before upgrading
kubeadm is currently at v1.23.4:
[root@k8s3-master ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.4", GitCommit:"e6c093d87ea4cbb530a7b2ae91e54c0842d8308a", GitTreeState:"clean", BuildDate:"2022-02-16T12:36:57Z", GoVersion:"go1.17.7", Compiler:"gc", Platform:"linux/amd64"}
2) Upgrade kubeadm
# Replace x in 1.23.x-0 with the latest patch version, e.g. 1.23.5-0.
yum install -y kubeadm-1.23.x-0 --disableexcludes=kubernetes
For example:
[root@k8s3-master ~]# yum install -y kubeadm-1.23.5-0 --disableexcludes=kubernetes
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.cn99.com
* extras: mirrors.163.com
* updates: mirrors.163.com
docker-ce-stable | 3.5 kB 00:00:00
extras | 2.9 kB 00:00:00
kubernetes | 1.4 kB 00:00:00
updates | 2.9 kB 00:00:00
(1/2): docker-ce-stable/7/x86_64/primary_db | 75 kB 00:00:00
(2/2): updates/7/x86_64/primary_db | 14 MB 00:00:01
Resolving Dependencies
--> Running transaction check
---> Package kubeadm.x86_64 0:1.23.4-0 will be updated
---> Package kubeadm.x86_64 0:1.23.5-0 will be an update
--> Finished Dependency Resolution
Dependencies Resolved
=======================================================================================================================================================================
Package Arch Version Repository Size
=======================================================================================================================================================================
Updating:
kubeadm x86_64 1.23.5-0 kubernetes 9.0 M
Transaction Summary
=======================================================================================================================================================================
Upgrade 1 Package
Total download size: 9.0 M
Downloading packages:
Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
ab0e12925be5251baf5dd3b31493663d46e4a7b458c7a5b6b717f4ae87a81bd4-kubeadm-1.23.5-0.x86_64.rpm | 9.0 MB 00:00:01
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Updating : kubeadm-1.23.5-0.x86_64 1/2
Cleanup : kubeadm-1.23.4-0.x86_64 2/2
Verifying : kubeadm-1.23.5-0.x86_64 1/2
Verifying : kubeadm-1.23.4-0.x86_64 2/2
Updated:
kubeadm.x86_64 0:1.23.5-0
Complete!
3) Verify that the download works and that kubeadm is at the expected version
[root@k8s3-master ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.5", GitCommit:"c285e781331a3785a7f436042c65c5641ce8a9e9", GitTreeState:"clean", BuildDate:"2022-03-16T15:57:37Z", GoVersion:"go1.17.8", Compiler:"gc", Platform:"linux/amd64"}
4) Verify the upgrade plan
[root@k8s3-master ~]# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.23.4
[upgrade/versions] kubeadm version: v1.23.5
[upgrade/versions] Target version: v1.23.5
[upgrade/versions] Latest version in the v1.23 series: v1.23.5
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT CURRENT TARGET
kubelet 3 x v1.23.4 v1.23.5
Upgrade to the latest version in the v1.23 series:
COMPONENT CURRENT TARGET
kube-apiserver v1.23.4 v1.23.5
kube-controller-manager v1.23.4 v1.23.5
kube-scheduler v1.23.4 v1.23.5
kube-proxy v1.23.4 v1.23.5
CoreDNS v1.8.6 v1.8.6
etcd 3.5.1-0 3.5.1-0
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.23.5
_____________________________________________________________________
The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.
API GROUP CURRENT VERSION PREFERRED VERSION MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io v1alpha1 v1alpha1 no
kubelet.config.k8s.io v1beta1 v1beta1 no
This command checks that your cluster can be upgraded, and fetches the version you can upgrade to. It also shows a table with the component config version states.
Note:
kubeadm upgrade also automatically renews the certificates that it manages on this node. To opt out of certificate renewal, pass the flag --certificate-renewal=false. For more information, see the certificate management guide.
If kubeadm upgrade plan shows any component configs that require manual upgrade, users must provide a config file with replacement configs to kubeadm upgrade apply via the --config command line flag. Failing to do so will cause kubeadm upgrade apply to exit with an error and not perform the upgrade.
5) Choose a target version to upgrade to, and run the appropriate command
# Replace x with the patch version you picked for this upgrade.
sudo kubeadm upgrade apply v1.23.x
For example:
[root@k8s3-master ~]# kubeadm upgrade apply v1.23.5
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.23.5"
[upgrade/versions] Cluster version: v1.23.4
[upgrade/versions] kubeadm version: v1.23.5
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y # enter y at the prompt to proceed with the upgrade
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.23.5"...
Static pod: kube-apiserver-k8s3-master hash: 60d6f7f08025ad69a13175257bd12a3c
Static pod: kube-controller-manager-k8s3-master hash: c2c9eb05f770971ecadf4a3d8c492c76
Static pod: kube-scheduler-k8s3-master hash: b44520e65d70fc9f7561987697e23dd1
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-k8s3-master hash: 7ed44305c7f8ede6716c56ac3bb36a31
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Current and new manifests of etcd are equal, skipping upgrade
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests1089949560"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-03-28-13-17-14/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-k8s3-master hash: 60d6f7f08025ad69a13175257bd12a3c
Static pod: kube-apiserver-k8s3-master hash: 60d6f7f08025ad69a13175257bd12a3c
Static pod: kube-apiserver-k8s3-master hash: 60d6f7f08025ad69a13175257bd12a3c
Static pod: kube-apiserver-k8s3-master hash: 8ed2c9c15af224292afbcc55990e788d
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-03-28-13-17-14/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-k8s3-master hash: c2c9eb05f770971ecadf4a3d8c492c76
...
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-03-28-13-17-14/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-k8s3-master hash: b44520e65d70fc9f7561987697e23dd1
...
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade/postupgrade] Applying label node-role.kubernetes.io/control-plane='' to Nodes with label node-role.kubernetes.io/master='' (deprecated)
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
# Once the command finishes, you should see the following two lines:
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.23.5". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
6) Manually upgrade your CNI provider plugin
Your Container Network Interface (CNI) provider should have its own upgrade instructions. Check the addons page to find your CNI provider and see whether additional upgrade steps are required.
This step is not required on additional control plane nodes if the CNI provider runs as a DaemonSet.
2.2 Upgrade additional control plane nodes (skipped)
This environment has 1 master and 2 workers, so there are no additional control plane nodes and this step is skipped. With multiple masters, it must be carried out.
For the other control plane nodes the procedure is the same as for the first one, but use:
sudo kubeadm upgrade node
instead of:
sudo kubeadm upgrade apply
Also, calling kubeadm upgrade plan and upgrading the CNI provider plugin are not needed on those nodes.
2.3 Drain the node (skipped)
This environment has 1 master and 2 workers, so there are no additional control plane nodes and this step is skipped.
Prepare the node for the upgrade by marking it unschedulable and evicting the workloads:
# Replace <node-to-drain> with the name of the node you are draining.
kubectl drain <node-to-drain> --ignore-daemonsets
2.4 Upgrade kubelet and kubectl
1) On the master node, install the latest kubelet and kubectl
# Replace x in 1.23.x-0 with the latest patch version; here 1.23.5-0 is used directly.
yum install -y kubelet-1.23.5-0 kubectl-1.23.5-0 --disableexcludes=kubernetes
2) Restart kubelet
sudo systemctl daemon-reload
sudo systemctl restart kubelet
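After the restart it can take a short while before the node reports the new kubelet version. A small retry helper can poll for that; the function name and the timings are ours, and the kubectl line is shown only as a usage comment:

```shell
# Retry a command until it succeeds or the attempt budget runs out.
# Usage: retry_until <attempts> <sleep_seconds> <command> [args...]
retry_until() {
  attempts=$1; delay=$2; shift 2
  i=0
  while [ "$i" -lt "$attempts" ]; do
    "$@" && return 0
    i=$((i + 1))
    sleep "$delay"
  done
  return 1
}

# Example (requires cluster access):
#   retry_until 30 5 sh -c \
#     "kubectl get node k8s3-master -o jsonpath='{.status.nodeInfo.kubeletVersion}' | grep -q v1.23.5"
```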
3) Check the version on the master after the upgrade
[root@k8s3-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s3-master Ready control-plane,master 20d v1.23.5
k8s3-node1 Ready <none> 20d v1.23.4
k8s3-node2 Ready <none> 20d v1.23.4
2.5 Uncordon the node (skipped)
This environment has 1 master and 2 workers, so there are no additional control plane nodes and this step is skipped.
Bring the node back online by marking it schedulable:
# Replace <node-to-drain> with the name of your node.
kubectl uncordon <node-to-drain>
3. Upgrading worker nodes
The upgrade procedure on worker nodes should be executed one node at a time, or a few nodes at a time, without compromising the minimum required capacity for running your workloads. Here we upgrade one worker node at a time.
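The per-worker steps below (upgrade kubeadm, kubeadm upgrade node, drain, upgrade kubelet/kubectl, uncordon) can be sketched as a loop. This is a dry run that only echoes the commands; the hostnames and the assumption that the workers are reachable over ssh are ours:

```shell
# Dry-run sketch of the one-worker-at-a-time upgrade described in this section.
# Replace each echo with the real command to actually perform the upgrade.
upgrade_worker() {
  node=$1; ver=$2
  echo "ssh $node yum install -y kubeadm-$ver --disableexcludes=kubernetes"
  echo "ssh $node kubeadm upgrade node"
  echo "kubectl drain $node --ignore-daemonsets"
  echo "ssh $node yum install -y kubelet-$ver kubectl-$ver --disableexcludes=kubernetes"
  echo "ssh $node 'systemctl daemon-reload && systemctl restart kubelet'"
  echo "kubectl uncordon $node"
}

for n in k8s3-node1 k8s3-node2; do
  upgrade_worker "$n" 1.23.5-0   # strictly one node at a time
done
```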
3.1 Upgrade kubeadm
# Replace x in 1.23.x-0 with the latest patch version.
yum install -y kubeadm-1.23.x-0 --disableexcludes=kubernetes
For example:
[root@k8s3-node1 ~]# yum install -y kubeadm-1.23.5-0 --disableexcludes=kubernetes
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.cn99.com
* extras: mirrors.cn99.com
* updates: mirrors.cn99.com
base | 3.6 kB 00:00:00
docker-ce-stable | 3.5 kB 00:00:00
extras | 2.9 kB 00:00:00
kubernetes | 1.4 kB 00:00:00
updates | 2.9 kB 00:00:00
(1/2): docker-ce-stable/7/x86_64/primary_db | 75 kB 00:00:00
(2/2): updates/7/x86_64/primary_db | 14 MB 00:00:02
Resolving Dependencies
--> Running transaction check
---> Package kubeadm.x86_64 0:1.23.4-0 will be updated
---> Package kubeadm.x86_64 0:1.23.5-0 will be an update
--> Finished Dependency Resolution
Dependencies Resolved
=======================================================================================================================================================================
Package Arch Version Repository Size
=======================================================================================================================================================================
Updating:
kubeadm x86_64 1.23.5-0 kubernetes 9.0 M
Transaction Summary
=======================================================================================================================================================================
Upgrade 1 Package
Total download size: 9.0 M
Downloading packages:
Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
ab0e12925be5251baf5dd3b31493663d46e4a7b458c7a5b6b717f4ae87a81bd4-kubeadm-1.23.5-0.x86_64.rpm | 9.0 MB 00:00:01
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Updating : kubeadm-1.23.5-0.x86_64 1/2
Cleanup : kubeadm-1.23.4-0.x86_64 2/2
Verifying : kubeadm-1.23.5-0.x86_64 1/2
Verifying : kubeadm-1.23.4-0.x86_64 2/2
Updated:
kubeadm.x86_64 0:1.23.5-0
Complete!
3.2 Call "kubeadm upgrade"
For worker nodes, the following command upgrades the local kubelet configuration:
[root@k8s3-node1 ~]# kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks
[preflight] Skipping prepull. Not a control plane node.
[upgrade] Skipping phase. Not a control plane node.
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
3.3 Drain the node
Prepare the node for maintenance by marking it unschedulable and evicting the workloads:
# Replace <node-to-drain> with the name of the node you are draining.
kubectl drain <node-to-drain> --ignore-daemonsets
For example:
[root@k8s3-master ~]# kubectl drain k8s3-node2 --ignore-daemonsets
node/k8s3-node2 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-c8jg7, kube-system/kube-proxy-nkqjm
evicting pod dev/nginx-6fdfb7f689-kxcnp
evicting pod default/myweb-5864d87b67-pt4jp
evicting pod dev/nginx-6fdfb7f689-9knx2
evicting pod default/mysql-9b877f47-6jb9r
pod/nginx-6fdfb7f689-kxcnp evicted
pod/myweb-5864d87b67-pt4jp evicted
pod/nginx-6fdfb7f689-9knx2 evicted
pod/mysql-9b877f47-6jb9r evicted
node/k8s3-node2 drained
3.4 Upgrade kubelet and kubectl
1) Install the kubelet and kubectl packages
# Replace x in 1.23.x-0 with the latest patch version.
yum install -y kubelet-1.23.x-0 kubectl-1.23.x-0 --disableexcludes=kubernetes
For example:
[root@k8s3-node1 ~]# yum install -y kubelet-1.23.5-0 kubectl-1.23.5-0 --disableexcludes=kubernetes
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.163.com
* extras: mirrors.cn99.com
* updates: mirrors.cn99.com
Resolving Dependencies
--> Running transaction check
---> Package kubectl.x86_64 0:1.23.4-0 will be updated
---> Package kubectl.x86_64 0:1.23.5-0 will be an update
---> Package kubelet.x86_64 0:1.23.4-0 will be updated
---> Package kubelet.x86_64 0:1.23.5-0 will be an update
--> Finished Dependency Resolution
Dependencies Resolved
=======================================================================================================================================================================
Package Arch Version Repository Size
=======================================================================================================================================================================
Updating:
kubectl x86_64 1.23.5-0 kubernetes 9.5 M
kubelet x86_64 1.23.5-0 kubernetes 21 M
Transaction Summary
=======================================================================================================================================================================
Upgrade 2 Packages
Total download size: 30 M
Downloading packages:
Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
(1/2): 96b208380314a19ded917eaf125ed748f5e2b28a3cc8707a10a76a9f5b61c0df-kubectl-1.23.5-0.x86_64.rpm | 9.5 MB 00:00:01
(2/2): d39aa6eb38a6a8326b7e88c622107327dfd02ac8aaae32eceb856643a2ad9981-kubelet-1.23.5-0.x86_64.rpm | 21 MB 00:00:02
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total 12 MB/s | 30 MB 00:00:02
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Updating : kubectl-1.23.5-0.x86_64 1/4
Updating : kubelet-1.23.5-0.x86_64 2/4
Cleanup : kubectl-1.23.4-0.x86_64 3/4
Cleanup : kubelet-1.23.4-0.x86_64 4/4
Verifying : kubelet-1.23.5-0.x86_64 1/4
Verifying : kubectl-1.23.5-0.x86_64 2/4
Verifying : kubelet-1.23.4-0.x86_64 3/4
Verifying : kubectl-1.23.4-0.x86_64 4/4
Updated:
kubectl.x86_64 0:1.23.5-0 kubelet.x86_64 0:1.23.5-0
Complete!
2) Restart kubelet
sudo systemctl daemon-reload
sudo systemctl restart kubelet
3.5 Uncordon the node
Bring the node back online by marking it schedulable:
# Replace <node-to-drain> with the name of your node.
kubectl uncordon <node-to-drain>
For example:
[root@k8s3-master ~]# kubectl uncordon k8s3-node1
node/k8s3-node1 uncordoned
4. Verify the status of the cluster
After the kubelet has been upgraded on all nodes, verify that all nodes are available again by running the following command from anywhere kubectl can access the cluster:
[root@k8s3-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s3-master Ready control-plane,master 20d v1.23.5
k8s3-node1 Ready <none> 20d v1.23.5
k8s3-node2 Ready <none> 20d v1.23.5
The STATUS column should show Ready for all nodes, and the VERSION column should show v1.23.5 everywhere.
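That final check can be automated by parsing the `kubectl get nodes` table. A sketch; the function name is ours, and it reads the table on stdin so it can be shown here without cluster access:

```shell
# Succeed only if every node row shows STATUS "Ready" and the expected
# version in the last (VERSION) column. The header row is skipped.
all_nodes_ready_at() {
  tail -n +2 | awk -v v="$1" '$2 != "Ready" || $NF != v { bad = 1 } END { exit bad }'
}

# Against a live cluster:
#   kubectl get nodes | all_nodes_ready_at v1.23.5 && echo "cluster upgraded"
```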
5. Recovering from a failure state
If kubeadm upgrade fails and does not roll back, for example because of an unexpected shutdown during execution, you can run kubeadm upgrade again. This command is idempotent and eventually makes sure that the actual state is the desired state you declare. To recover from a bad state, you can also run kubeadm upgrade apply --force without changing the version that your cluster is running.
During upgrade, kubeadm writes the following backup folders under /etc/kubernetes/tmp:
kubeadm-backup-etcd-<date>-<time>
kubeadm-backup-manifests-<date>-<time>
kubeadm-backup-etcd contains a backup of the local etcd member data for this control plane node. If the etcd upgrade fails and the automatic rollback does not work, the contents of this folder can be copied to /var/lib/etcd for a manual repair. If external etcd is used, this backup folder will be empty.
kubeadm-backup-manifests contains a backup of the static Pod manifest files for this control plane node. If the upgrade fails and the automatic rollback does not work, the contents of this folder can be copied to /etc/kubernetes/manifests for a manual restore. If for some reason a component's manifest is unchanged between the pre- and post-upgrade versions, kubeadm does not write a backup file for it.
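Restoring from those backup folders is a plain file copy. A minimal sketch; `restore_manifests` is our name, and the timestamped directory in the usage comment is the example from the log above:

```shell
# Manual rollback: copy backed-up static Pod manifests back in place.
# The kubelet watches /etc/kubernetes/manifests and recreates the Pods.
restore_manifests() {
  src=$1; dst=$2
  cp -a "$src"/. "$dst"/   # copy the directory contents, preserving attributes
}

# On the affected control plane node, as root (example timestamp):
#   restore_manifests /etc/kubernetes/tmp/kubeadm-backup-manifests-2022-03-28-13-17-14 \
#                     /etc/kubernetes/manifests
```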
6. How it works
6.1 kubeadm upgrade apply does the following
Checks that your cluster is in an upgradeable state:
The API server is reachable
All nodes are in the Ready state
The control plane is healthy
Enforces the version skew policies.
Makes sure the control plane images are available or available to pull to the machine.
Generates replacements and/or uses user-supplied overwrites if component configs require version upgrades.
Upgrades the control plane components, or rolls back if any of them fails to come up.
Applies the new CoreDNS and kube-proxy manifests and makes sure that all necessary RBAC rules are created.
Creates new certificate and key files for the API server, backing up the old files if they are about to expire in 180 days.
6.2 On additional control plane nodes, kubeadm upgrade node does the following
Fetches the kubeadm ClusterConfiguration from the cluster.
Optionally backs up the kube-apiserver certificate.
Upgrades the static Pod manifests for the control plane components.
Upgrades the kubelet configuration for this node.
6.3 On worker nodes, kubeadm upgrade node does the following
Fetches the kubeadm ClusterConfiguration from the cluster.
Upgrades the kubelet configuration for this node.
7. Official references
https://kubernetes.io/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/#k8s-install-versions-1
This post is meant for sharing and discussion; corrections and feedback are welcome.
Author: Wang Kun, WeChat official account: rundba. Reposting is welcome; please credit the source.