
Beyond "I Can Only Deploy a Cluster" Series: Kubeadm Deployment and ETCD Operations


Contents

I. Kubeadm Deployment


1. Basic steps

2. Additions

II. Common ETCD Operations

1. Kubernetes shell autocompletion

2. Copying the etcdctl command-line tool

3. Common etcdctl operations

(1) List etcd cluster members

(2) Check etcd endpoint status

(3) Set a key

(4) etcd snapshots and restore

(5) Production-grade etcd backups

I. Kubeadm Deployment

1. Basic steps

https://segmentfault.com/a/1190000019465098

2. Additions

I won't walk through the basic install; there are plenty of guides online. One snag: the flannel manifest URL was unreachable from my network, so I've pasted the whole file below. I also used to be unsure whether a resource's apiVersion should be v1 or v1beta1; kubectl explain pod (or just -oyaml on an existing object) settles that.
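For example, here is a quick way to check which group/version a resource is served under on your own cluster (plain kubectl, nothing cluster-specific assumed):

$ kubectl explain pod | head -n 2                 # prints KIND: Pod / VERSION: v1
$ kubectl explain podsecuritypolicy | head -n 2   # on a cluster of this vintage: VERSION: policy/v1beta1
$ kubectl api-versions                            # lists every group/version the apiserver serves

With that settled, here is the flannel manifest I saved off: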

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.2
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: quay.io/coreos/flannel:v0.15.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.15.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
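
Assuming the manifest above is saved locally as kube-flannel.yml (the filename is my choice), applying and verifying it looks like:

$ kubectl apply -f kube-flannel.yml
$ kubectl -n kube-system get pods -l app=flannel -o wide   # one kube-flannel-ds pod per node
$ kubectl get nodes                                        # nodes go Ready once the CNI config is in place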

II. Common ETCD Operations

1. Kubernetes shell autocompletion

Autocompleting kubectl subcommands and resource names saves a lot of typing:

yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

2. Copying the etcdctl command-line tool

etcdctl feels a lot like the redis CLI; both are key-value stores and many commands map closely.

$ kubectl -n kube-system exec etcd-k8s-master -- which etcdctl
$ kubectl -n kube-system cp etcd-k8s-master:/usr/local/bin/etcdctl /usr/bin/etcdctl
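
A quick sanity check that the copied binary runs (output varies with the etcd image version):

$ ETCDCTL_API=3 etcdctl version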

3. Common etcdctl operations


(1) List etcd cluster members

# The first run produces this warning; switch the API version to v3
WARNING:
  Environment variable ETCDCTL_API is not set; defaults to etcdctl v2.
  Set environment variable ETCDCTL_API=3 to use v3 API or ETCDCTL_API=2 to use v2 API.
$ export ETCDCTL_API=3
# Every etcdctl call needs the client certs attached, so make an alias
$ alias etcdctl='etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key'

[root@k8s-master ~]# etcdctl member list -w table
+------------------+---------+------------+----------------------------+----------------------------+
|        ID        | STATUS  |    NAME    |         PEER ADDRS         |        CLIENT ADDRS        |
+------------------+---------+------------+----------------------------+----------------------------+
| 49c374033081590d | started | k8s-master | https://192.168.0.121:2380 | https://192.168.0.121:2379 |
+------------------+---------+------------+----------------------------+----------------------------+

(2) Check etcd endpoint status

$ etcdctl endpoint status -w table

$ etcdctl endpoint health -w table
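
The alias above only talks to the local member. For a multi-member cluster you can query every endpoint in one go; the hostnames below are placeholders, and the leading backslash bypasses the alias so this endpoint list is the one that counts:

$ \etcdctl --endpoints=https://etcd-1:2379,https://etcd-2:2379,https://etcd-3:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
    --key=/etc/kubernetes/pki/etcd/healthcheck-client.key \
    endpoint status -w table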

(3) Set a key

# Like redis, you can set and read a key by hand
$ etcdctl put luffy 1
$ etcdctl get luffy

List every key:
$ etcdctl get / --prefix --keys-only
# etcd is where every cluster change lands, so it effectively tracks all resources.
# Keys follow the layout /registry/<resource-type>/<namespace>/<object-name>.
# Watching a pod's key, for example, streams the key plus the pod's serialized content on each change:
$ etcdctl watch /registry/pods/kube-system/coredns-5644d7b6d9-7gw6t
# A prefix get on the same key dumps its current value:
$ etcdctl get /registry/pods/kube-system/coredns-5644d7b6d9-7gw6t --prefix

Read the data behind a specific key:
$ etcdctl get /registry/pods/jenkins/sonar-postgres-7fc5d748b6-gtmsb
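
One caveat: Kubernetes stores values in etcd3 as binary protobuf, so raw get output is mostly unreadable; piping through strings is a crude but handy way to peek at the text fields:

$ etcdctl get /registry/pods/kube-system/coredns-5644d7b6d9-7gw6t --print-value-only | strings | head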

(4) etcd snapshots and restore

Take a data snapshot (worth scheduling as a cron job):
$ etcdctl snapshot save `hostname`-etcd_`date +%Y%m%d%H%M`.db
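
A minimal crontab entry for that; cron doesn't read shell aliases, so the cert flags are spelled out, and % must be escaped in crontab (paths are illustrative):

0 2 * * * root ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key snapshot save /var/lib/etcd_backup/$(hostname)-etcd_$(date +\%Y\%m\%d\%H\%M).db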

Restore from a snapshot:
# 1. Stop etcd and the apiserver (see the sketch below)
# 2. Move the current data directory out of the way
$ mv /var/lib/etcd/ /tmp
# 3. Restore into a fresh data dir — point at the snapshot file you actually saved
#    (re-evaluating `date` here, as in the save one-liner, would name a file that doesn't exist)
$ etcdctl snapshot restore <your-snapshot>.db --data-dir=/var/lib/etcd/
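
On a kubeadm control plane, etcd and the apiserver run as static pods, so "stopping" them means moving their manifests aside; a sketch of the full dance (the snapshot filename is illustrative):

$ mkdir -p /etc/kubernetes/manifests-stopped
$ mv /etc/kubernetes/manifests/etcd.yaml /etc/kubernetes/manifests/kube-apiserver.yaml /etc/kubernetes/manifests-stopped/   # kubelet stops both static pods
$ mv /var/lib/etcd /tmp/etcd-old
$ etcdctl snapshot restore ./k8s-master-etcd_202111010200.db --data-dir=/var/lib/etcd
$ mv /etc/kubernetes/manifests-stopped/*.yaml /etc/kubernetes/manifests/    # kubelet brings them back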


Cluster recovery:
https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/recovery.md

(5) Production-grade etcd backups

# Production setup: a CronJob per node takes scheduled backups
# etcd-db-bak: /var/lib/etcd_backup
# etcd-cert: /etc/etcd/pki
# etcd-bin: pod-name/usr/local/bin/etcd
# firewalld: /usr/lib/firewalld/services/etcd-client.xml
# yaml: /home/install/k8s-self/template/master/k8s-etcd-backup.yaml
# shell: /home/install/k8s-self/scripts/etcd/afterInstall.sh 36-zhu
# The CronJobs below back up etcd data on a schedule
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: k8s-etcd-backup-0
  namespace: kube-system
spec:
  # timezone is same as controller manager, default is UTC
  # 18:00 UTC works out to 02:00 Beijing time
  schedule: "12 18 * * *"
  concurrencyPolicy: Replace     # Allow: runs may overlap; Forbid: skip a run while one is active; Replace: cancel the running job and start the new one
  failedJobsHistoryLimit: 2      # failed runs to keep (default 1)
  successfulJobsHistoryLimit: 2  # successful runs to keep (default 3); with 3 CronJobs you see up to 6 completed pods
  startingDeadlineSeconds: 3600  # a run that cannot start within this window is recorded as failed
  jobTemplate:                   # Job template the CronJob controller stamps out
    spec:
      template:
        metadata:
          labels:
            app: k8s-etcd-backup
        spec:
          # Taints are declared on nodes, tolerations on pods; this toleration lets the pod land on a
          # NoSchedule-tainted master (check a node with: kubectl describe node <name> | grep Taints)
          tolerations:
          - key: node-role.kubernetes.io/master
            operator: Exists
            effect: NoSchedule
          affinity:
            nodeAffinity:
              # Hard affinity: a mandatory scheduling rule; if it can't be satisfied the pod stays Pending
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:               # multiple terms are ORed: satisfying any one is enough
                - matchExpressions:              # multiple expressions are ANDed: all must hold
                  - key: kubernetes.io/hostname  # pin to the node labeled kubernetes.io/hostname=k8s-hostname-node1
                    operator: In                 # In: the label value must be in the list
                    values:
                    - k8s-hostname-node1         # each backup job is bound one-to-one to a node
          containers:
          - name: k8s-etcd-backup
            image: harborIP/kubernetes/etcd:3.4.3-0
            imagePullPolicy: IfNotPresent
            resources:
              requests:
                cpu: "0"
                memory: "0"
              limits:
                cpu: 1000m
                memory: 1Gi
            env:
            - name: ENDPOINTS
              value: "https://k8s-node1:2379"
            command:
            - /bin/sh
            - -c
            - |
              # -e: abort on the first failing command; -x: echo each command (debug mode)
              set -ex
              rm -rf /data/backup/tmp
              mkdir -p /data/backup/tmp && test -d /data/backup/tmp || exit 1
              export backupfilename=`date +"%Y%m%d%H%M%S"`
              # make sure all three cert files exist, otherwise clean up and bail
              test -f /certs/ca.pem || (rm -rf /data/backup/tmp && exit 1)
              test -f /certs/client.pem || (rm -rf /data/backup/tmp && exit 1)
              test -f /certs/client-key.pem || (rm -rf /data/backup/tmp && exit 1)
              # take the etcd snapshot, then pack it under a timestamped name
              ETCDCTL_API=3 /usr/local/bin/etcdctl \
                --endpoints=$ENDPOINTS \
                --cacert=/certs/ca.pem \
                --cert=/certs/client.pem \
                --key=/certs/client-key.pem \
                --command-timeout=1800s \
                snapshot save /data/backup/tmp/etcd-snapshot.db && \
              cd /data/backup/tmp; tar -czf /data/backup/etcd-snapshot-${backupfilename}.tar.gz * && \
              cd -; rm -rf /data/backup/tmp
              # fail the job if anything above went wrong
              if [ $? -ne 0 ]; then
                exit 1
              fi
              # keep only the 7 newest archives
              count=0
              for file in `ls -t /data/backup/*tar.gz`
              do
                count=`expr $count + 1`
                if [ $count -gt 7 ]; then
                  rm -rf $file
                fi
              done
            volumeMounts:             # paths inside the container
            - name: master-backup
              mountPath: /data/backup
            - name: etcd-certs
              mountPath: /certs
            - name: timezone
              mountPath: /etc/localtime
              readOnly: true
          volumes:                    # host directories backing the mounts
          - name: master-backup       # backup destination
            hostPath:
              path: /var/lib/etcd_backup
          - name: etcd-certs          # etcd client certificates
            hostPath:
              path: /etc/etcd/pki
          - name: timezone            # host timezone file
            hostPath:
              path: /etc/localtime
          restartPolicy: Never        # the pod exits when the job is done; no restart
          hostNetwork: true
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: k8s-etcd-backup-1
  namespace: kube-system
spec:
  # timezone is same as controller manager, default is UTC
  schedule: "12 19 * * *"
  concurrencyPolicy: Replace
  failedJobsHistoryLimit: 2
  successfulJobsHistoryLimit: 2
  startingDeadlineSeconds: 3600
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: k8s-etcd-backup
        spec:
          tolerations:
          - key: node-role.kubernetes.io/master
            operator: Exists
            effect: NoSchedule
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: kubernetes.io/hostname
                    operator: In
                    values:
                    - k8s-hostname-master
          containers:
          - name: k8s-etcd-backup
            image: harborIP/kubernetes/etcd:3.4.3-0
            imagePullPolicy: IfNotPresent
            resources:
              requests:
                cpu: "0"
                memory: "0"
              limits:
                cpu: 1000m
                memory: 1Gi
            env:
            - name: ENDPOINTS
              value: "https://k8s-master:2379"
            command:
            - /bin/sh
            - -c
            - |
              set -ex
              rm -rf /data/backup/tmp
              mkdir -p /data/backup/tmp && test -d /data/backup/tmp || exit 1
              export backupfilename=`date +"%Y%m%d%H%M%S"`
              test -f /certs/ca.pem || (rm -rf /data/backup/tmp && exit 1)
              test -f /certs/client.pem || (rm -rf /data/backup/tmp && exit 1)
              test -f /certs/client-key.pem || (rm -rf /data/backup/tmp && exit 1)
              ETCDCTL_API=3 /usr/local/bin/etcdctl \
                --endpoints=$ENDPOINTS \
                --cacert=/certs/ca.pem \
                --cert=/certs/client.pem \
                --key=/certs/client-key.pem \
                --command-timeout=1800s \
                snapshot save /data/backup/tmp/etcd-snapshot.db && \
              cd /data/backup/tmp; tar -czf /data/backup/etcd-snapshot-${backupfilename}.tar.gz * && \
              cd -; rm -rf /data/backup/tmp
              if [ $? -ne 0 ]; then
                exit 1
              fi
              # delete old files beyond the newest 7
              count=0
              for file in `ls -t /data/backup/*tar.gz`
              do
                count=`expr $count + 1`
                if [ $count -gt 7 ]; then
                  rm -rf $file
                fi
              done
            volumeMounts:
            - name: master-backup
              mountPath: /data/backup
            - name: etcd-certs
              mountPath: /certs
            - name: timezone
              mountPath: /etc/localtime
              readOnly: true
          volumes:
          - name: master-backup
            hostPath:
              path: /var/lib/etcd_backup
          - name: etcd-certs
            hostPath:
              path: /etc/etcd/pki
          - name: timezone
            hostPath:
              path: /etc/localtime
          restartPolicy: Never
          hostNetwork: true
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: k8s-etcd-backup-2
  namespace: kube-system
spec:
  # timezone is same as controller manager, default is UTC
  schedule: "12 20 * * *"
  concurrencyPolicy: Replace
  failedJobsHistoryLimit: 2
  successfulJobsHistoryLimit: 2
  startingDeadlineSeconds: 3600
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: k8s-etcd-backup
        spec:
          tolerations:
          - key: node-role.kubernetes.io/master
            operator: Exists
            effect: NoSchedule
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: kubernetes.io/hostname
                    operator: In
                    values:
                    - k8s-hostname-node2
          containers:
          - name: k8s-etcd-backup
            image: harborIP/kubernetes/etcd:3.4.3-0
            imagePullPolicy: IfNotPresent
            resources:
              requests:
                cpu: "0"
                memory: "0"
              limits:
                cpu: 1000m
                memory: 1Gi
            env:
            - name: ENDPOINTS
              value: "https://k8s-node2:2379"
            command:
            - /bin/sh
            - -c
            - |
              set -ex
              rm -rf /data/backup/tmp
              mkdir -p /data/backup/tmp && test -d /data/backup/tmp || exit 1
              export backupfilename=`date +"%Y%m%d%H%M%S"`
              test -f /certs/ca.pem || (rm -rf /data/backup/tmp && exit 1)
              test -f /certs/client.pem || (rm -rf /data/backup/tmp && exit 1)
              test -f /certs/client-key.pem || (rm -rf /data/backup/tmp && exit 1)
              ETCDCTL_API=3 /usr/local/bin/etcdctl \
                --endpoints=$ENDPOINTS \
                --cacert=/certs/ca.pem \
                --cert=/certs/client.pem \
                --key=/certs/client-key.pem \
                --command-timeout=1800s \
                snapshot save /data/backup/tmp/etcd-snapshot.db && \
              cd /data/backup/tmp; tar -czf /data/backup/etcd-snapshot-${backupfilename}.tar.gz * && \
              cd -; rm -rf /data/backup/tmp
              if [ $? -ne 0 ]; then
                exit 1
              fi
              # delete old files beyond the newest 7
              count=0
              for file in `ls -t /data/backup/*tar.gz`
              do
                count=`expr $count + 1`
                if [ $count -gt 7 ]; then
                  rm -rf $file
                fi
              done
            volumeMounts:
            - name: master-backup
              mountPath: /data/backup
            - name: etcd-certs
              mountPath: /certs
            - name: timezone
              mountPath: /etc/localtime
              readOnly: true
          volumes:
          - name: master-backup
            hostPath:
              path: /var/lib/etcd_backup
          - name: etcd-certs
            hostPath:
              path: /etc/etcd/pki
          - name: timezone
            hostPath:
              path: /etc/localtime
          restartPolicy: Never
          hostNetwork: true
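
To roll this out, apply the file (name taken from the comment header above) and watch for completed runs. Note the batch/v1beta1 CronJob API shown here was removed in Kubernetes 1.25; on newer clusters switch the apiVersion to batch/v1:

$ kubectl apply -f k8s-etcd-backup.yaml
$ kubectl -n kube-system get cronjob
$ kubectl -n kube-system get pods -l app=k8s-etcd-backup
$ ls -lh /var/lib/etcd_backup/    # on each pinned node: etcd-snapshot-<timestamp>.tar.gz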

