K8s: Fixing kubeadm Cluster Certificate Expiration and Upgrading the Cluster in One Go


Author: 李毓 (Li Yu)

Clusters installed with kubeadm have one big pitfall: certificate expiration. The certificates involved include those for the apiserver, kubelet, etcd, proxy, and so on. The problem does not exist with a binary installation, because there the certificates are created by hand and can be adjusted at will. Since kubeadm generates them automatically, they have to be dealt with after the fact.
There are generally three ways to handle it. The first is a cluster upgrade: by upgrading Kubernetes, the certificates get renewed along the way. The second is to modify the source code, that is, recompile kubeadm with a longer certificate lifetime. The third is to regenerate the certificates directly.
Since Kubernetes v1.18.8 has just been released, I want to take this opportunity to walk through the two-birds-with-one-stone operation of upgrading the cluster and renewing the certificates at the same time.
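The rest of this post takes the first route. For reference, the third route (regenerating the certificates in place) can also be done with kubeadm itself. Here is a minimal sketch for a v1.18 control-plane node, where the subcommand still lives under "kubeadm alpha" (newer releases moved it to "kubeadm certs"); the backup directory name is just an illustration:

# Check which certificates are close to expiring
kubeadm alpha certs check-expiration

# Renew every certificate managed by kubeadm (apiserver, etcd, front-proxy,
# and the client certs embedded in the kubeconfig files)
kubeadm alpha certs renew all

# The control-plane static pods must be restarted to pick up the new certs,
# for example by briefly moving their manifests out of the manifests directory
mkdir -p /tmp/manifests-backup
mv /etc/kubernetes/manifests/*.yaml /tmp/manifests-backup/
sleep 20
mv /tmp/manifests-backup/*.yaml /etc/kubernetes/manifests/

# Refresh the local kubectl credentials from the renewed admin.conf
cp /etc/kubernetes/admin.conf ~/.kube/config

Note that moving the manifests briefly takes the control plane down, so this is best done in a maintenance window.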

Environment:

Three machines: one master and two worker nodes, all running v1.18.6.

[root@adm-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
adm-master   Ready    master   37d   v1.18.6
adm-node1    Ready    <none>   37d   v1.18.6
adm-node2    Ready    <none>   37d   v1.18.6

Check the certificate validity period:

[root@adm-master ~]# openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text |grep ' Not '
            Not Before: Aug  1 13:41:05 2020 GMT
            Not After : Aug  1 13:41:05 2021 GMT
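
openssl only shows one certificate at a time. kubeadm can list the expiration of everything it manages in a single command (still under alpha in v1.18), and the kubelet client certificate, which is rotated separately, typically lives under /var/lib/kubelet/pki:

# List expiration dates for all certificates managed by kubeadm
kubeadm alpha certs check-expiration

# The kubelet client certificate is rotated on its own schedule
openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -noout -dates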

First, let's check the kubeadm version:

[root@adm-master ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:56:34Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

As you can see, it matches the cluster version.
Use kubeadm upgrade plan to view the upgrade plan:

[root@adm-master ~]# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.18.0
[upgrade/versions] kubeadm version: v1.18.6
I0907 23:36:36.404303   26584 version.go:252] remote version is much newer: v1.19.0; falling back to: stable-1.18
[upgrade/versions] Latest stable version: v1.18.8
[upgrade/versions] Latest stable version: v1.18.8
W0907 23:36:52.285681   26584 version.go:102] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.18.txt": Get https://storage.googleapis.com/kubernetes-release/release/stable-1.18.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W0907 23:36:52.285711   26584 version.go:103] falling back to the local client version: v1.18.6
[upgrade/versions] Latest version in the v1.18 series: v1.18.6
[upgrade/versions] Latest version in the v1.18 series: v1.18.6

Upgrade to the latest version in the v1.18 series:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.18.0   v1.18.6
Controller Manager   v1.18.0   v1.18.6
Scheduler            v1.18.0   v1.18.6
Kube Proxy           v1.18.0   v1.18.6
CoreDNS              1.6.7     1.6.7
Etcd                 3.4.3     3.4.3-0

You can now apply the upgrade by executing the following command:

    kubeadm upgrade apply v1.18.6

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     3 x v1.18.6   v1.18.8

Upgrade to the latest stable version:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.18.0   v1.18.8
Controller Manager   v1.18.0   v1.18.8
Scheduler            v1.18.0   v1.18.8
Kube Proxy           v1.18.0   v1.18.8
CoreDNS              1.6.7     1.6.7
Etcd                 3.4.3     3.4.3-0

You can now apply the upgrade by executing the following command:

    kubeadm upgrade apply v1.18.8

Note: Before you can perform this upgrade, you have to update kubeadm to v1.18.8.
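
As the plan output notes, the supported path is to upgrade the kubeadm package itself before applying. On a yum-based system that would look roughly like this, using the same package naming that appears later in this post:

# Upgrade kubeadm on the master first, then apply the control-plane upgrade
yum install -y kubeadm-1.18.8-0
kubeadm version                  # should now report v1.18.8
kubeadm upgrade apply v1.18.8    # no --force needed

Below, kubeadm is instead left at v1.18.6 and the version check is bypassed with --force, which kubeadm permits but warns about.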

Start the upgrade:

[root@adm-master ~]# kubeadm upgrade apply v1.18.8
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.18.8"
[upgrade/versions] Cluster version: v1.18.0
[upgrade/versions] kubeadm version: v1.18.6
[upgrade/version] FATAL: the --version argument is invalid due to these errors:

    - Specified version to upgrade to "v1.18.8" is higher than the kubeadm version "v1.18.6". Upgrade kubeadm first using the tool you used to install kubeadm

Can be bypassed if you pass the --force flag
To see the stack trace of this error execute with --v=5 or higher
[root@adm-master ~]# kubeadm upgrade apply v1.18.8 --force
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.18.8"
[upgrade/versions] Cluster version: v1.18.0
[upgrade/versions] kubeadm version: v1.18.6
[upgrade/version] Found 1 potential version compatibility errors but skipping since the --force flag is set: 

    - Specified version to upgrade to "v1.18.8" is higher than the kubeadm version "v1.18.6". Upgrade kubeadm first using the tool you used to install kubeadm
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.18.8"...
Static pod: kube-apiserver-adm-master hash: 6861528d68248c9d1178280d3594d8db
Static pod: kube-controller-manager-adm-master hash: 871d07b6ec226107d162a636bad7f0aa
Static pod: kube-scheduler-adm-master hash: 35d86d4a27d4ed2186b3ab641e946a02
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/etcd] Non fatal issue encountered during upgrade: the desired etcd version for this Kubernetes version "v1.18.8" is "3.4.3-0", but the current etcd version is "3.4.3". Won't downgrade etcd, instead just continue
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests642601366"
W0907 23:42:33.736477   28017 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-09-07-23-42-25/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-adm-master hash: 6861528d68248c9d1178280d3594d8db
Static pod: kube-apiserver-adm-master hash: 253cf8d5be40058a076cd11584613b96
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-09-07-23-42-25/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-adm-master hash: 871d07b6ec226107d162a636bad7f0aa
Static pod: kube-controller-manager-adm-master hash: 628316be9d303769769e096bfd3537e4
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-09-07-23-42-25/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-adm-master hash: 35d86d4a27d4ed2186b3ab641e946a02
Static pod: kube-scheduler-adm-master hash: 7363eeef53899d60c792412f58124026
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.18.8". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.

Note the "Renewing ... certificate" lines in the output above: that is the step where the upgrade renews the certificates. Now check the image versions and the cluster version:

[root@adm-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
adm-master   Ready    master   37d   v1.18.6
adm-node1    Ready    <none>   37d   v1.18.6
adm-node2    Ready    <none>   37d   v1.18.6
[root@adm-master ~]# docker images
REPOSITORY                                                        TAG                 IMAGE ID            CREATED             SIZE
registry.aliyuncs.com/google_containers/kube-proxy                v1.18.8             0fb7201f92d0        3 weeks ago         117MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.18.8             6a979351fe5e        3 weeks ago         162MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.18.8             92d040a0dca7        3 weeks ago         173MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.18.8             6f7135fb47e0        3 weeks ago         95.3MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.18.0             43940c34f24f        5 months ago        117MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.18.0             d3e55153f52f        5 months ago        162MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.18.0             74060cea7f70        5 months ago        173MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.18.0             a31f78c7c8ce        5 months ago        95.3MB
registry.cn-shenzhen.aliyuncs.com/carp/flannel                    v0.11               f60e29a33f27        6 months ago        52.6MB
registry.aliyuncs.com/google_containers/pause                     3.2                 80d28bedfe5d        6 months ago        683kB
registry.aliyuncs.com/google_containers/coredns                   1.6.7               67da37a9a360        7 months ago        43.8MB
registry.aliyuncs.com/google_containers/etcd                      3.4.3-0             303ce5db0e90        10 months ago       288MB

The nodes still report v1.18.6 even though the v1.18.8 images have been pulled, because the version shown by kubectl get nodes comes from the kubelet; kubelet and kubectl need to be upgraded as well.

[root@adm-master ~]# yum install -y kubelet-1.18.8-0 kubeadm-1.18.8-0 kubectl-1.18.8-0

systemctl daemon-reload
systemctl restart kubelet


[root@adm-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
adm-master   Ready    master   37d   v1.18.8
adm-node1    Ready    <none>   37d   v1.18.6
adm-node2    Ready    <none>   37d   v1.18.6

The master has been updated, but the nodes have not. Run the same steps on each node:

yum install -y kubelet-1.18.8-0
systemctl daemon-reload
systemctl restart kubelet
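
In a production cluster you would normally drain each node before restarting its kubelet and uncordon it afterwards, so workloads are moved off first. A sketch, run from the master and using this cluster's node names:

# Evict workloads from the node before upgrading its kubelet
kubectl drain adm-node1 --ignore-daemonsets

# ... run the yum install and kubelet restart on adm-node1, then
# put the node back into scheduling
kubectl uncordon adm-node1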

Check again:

[root@adm-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
adm-master   Ready    master   37d   v1.18.8
adm-node1    Ready    <none>   37d   v1.18.8
adm-node2    Ready    <none>   37d   v1.18.8

The cluster upgrade is complete.
While we're at it, check the certificate dates:

[root@adm-master ~]#  openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text |grep ' Not '
            Not Before: Aug  1 13:41:05 2020 GMT
            Not After : Sep  7 15:42:34 2021 GMT

The expiration date has changed: it is now one year from the time of the upgrade instead of one year from the original installation.
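
The upgrade log above also showed renewals for apiserver-kubelet-client, front-proxy-client and apiserver-etcd-client. A quick loop prints the new expiry date of every certificate under /etc/kubernetes/pki (the CA certificates keep their original ten-year validity and are not renewed by the upgrade):

# Print the expiry date of every certificate kubeadm keeps on the master
for crt in /etc/kubernetes/pki/*.crt /etc/kubernetes/pki/etcd/*.crt; do
    echo "== $crt"
    openssl x509 -in "$crt" -noout -enddate
done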

And with that, the two-birds-with-one-stone plan is complete.
