To run EFK in production, we need to prepare the corresponding resources: memory, persistent storage, pinning ES to fixed nodes, and so on. Node pinning comes for free with local volumes. Most customers do not have storage like Ceph RBD; typically there is only NAS, but NAS cannot satisfy Elasticsearch on either file-system semantics or performance, and ES also requires a StorageClass. From a performance standpoint, local volumes are the right fit.
Using NFS storage as a volume or a persistent volume (or via NAS such as Gluster) is not supported for Elasticsearch storage, as Lucene relies on file system behavior that NFS does not supply. Data corruption and other problems can occur.
Create the local-storage project
oc new-project local-storage
Install the Local Storage Operator
Operators → OperatorHub → Local Storage Operator → click Install → select the local-storage namespace → click Subscribe.
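If you prefer the CLI over the console, the same install can be expressed as an OperatorGroup plus a Subscription. A minimal sketch, assuming the default redhat-operators catalog source and the 4.4 channel:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: local-operator-group
  namespace: local-storage
spec:
  targetNamespaces:
  - local-storage
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: local-storage-operator
  namespace: local-storage
spec:
  channel: "4.4"            # match your OCP minor version
  installPlanApproval: Automatic
  name: local-storage-operator
  source: redhat-operators  # assumes the default catalog source
  sourceNamespace: openshift-marketplace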
Check the pod status
# oc -n local-storage get pods
NAME                                      READY   STATUS    RESTARTS   AGE
local-storage-operator-7cd4799b4b-6bzg4   1/1     Running   0          12h
Add a disk to each of the three ES nodes (mine is sdb at 50G; 200G is recommended), then create localvolume.yaml.
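Before creating it, you can first confirm the new disk is actually visible on each node; a quick check, using worker02 as an example:

oc debug node/worker02.ocp44.cluster1.com -- chroot /host lsblk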
The LocalVolume selects the ES nodes via nodeSelector, and configures the disk device, file system, and storageClass:
apiVersion: "local.storage.openshift.io/v1"
kind: "LocalVolume"
metadata:
  name: "local-disks"
  namespace: "local-storage"
spec:
  nodeSelector:
    nodeSelectorTerms:
    - matchExpressions:
      - key: kubernetes.io/hostname
        operator: In
        values:
        - worker02.ocp44.cluster1.com
        - worker03.ocp44.cluster1.com
        - worker04.ocp44.cluster1.com
  storageClassDevices:
  - storageClassName: "local-sc"
    volumeMode: Filesystem
    fsType: xfs
    devicePaths:
    - /dev/sdb
Create it
oc create -f localvolume.yaml
Check the pods
# oc get all -n local-storage
NAME                                          READY   STATUS    RESTARTS   AGE
pod/local-disks-local-diskmaker-7p448         1/1     Running   0          43m
pod/local-disks-local-diskmaker-grkjx         1/1     Running   0          43m
pod/local-disks-local-diskmaker-lmknj         1/1     Running   0          43m
pod/local-disks-local-provisioner-5s9nk       1/1     Running   0          43m
pod/local-disks-local-provisioner-hv42l       1/1     Running   0          43m
pod/local-disks-local-provisioner-tzlkt       1/1     Running   0          43m
pod/local-storage-operator-7cd4799b4b-6bzg4   1/1     Running   0          12h

NAME                             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/local-storage-operator   ClusterIP   172.30.93.34

NAME                                            DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE
daemonset.apps/local-disks-local-diskmaker      3         3         3       3            3
daemonset.apps/local-disks-local-provisioner    3         3         3       3            3

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/local-storage-operator   1/1     1            1           12h

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/local-storage-operator-7cd4799b4b   1         1         1       12h
Check the PVs
# oc get pv
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
local-pv-2337578c   50Gi       RWO            Delete           Available           local-sc                4m42s
local-pv-77162aba   50Gi       RWO            Delete           Available           local-sc                4m38s
local-pv-cc7b7951   50Gi       RWO            Delete           Available           local-sc                4m46s
PV contents
oc get pv local-pv-2337578c -oyaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: local-volume-provisioner-worker02.ocp44.cluster1.com-e1f9a639-6872-43d7-b53c-d6255b3d7976
  creationTimestamp: "2020-05-25T15:29:46Z"
  finalizers:
  - kubernetes.io/pv-protection
  labels:
    storage.openshift.com/local-volume-owner-name: local-disks
    storage.openshift.com/local-volume-owner-namespace: local-storage
  name: local-pv-2337578c
  resourceVersion: "5661501"
  selfLink: /api/v1/persistentvolumes/local-pv-2337578c
  uid: 7f72ebb4-7212-4f0f-9f1a-d0af103ed70e
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 50Gi
  local:
    fsType: xfs
    path: /mnt/local-storage/local-sc/sdb
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker02.ocp44.cluster1.com
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-sc
  volumeMode: Filesystem
status:
  phase: Available
Check the storage class
# oc get sc
NAME       PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-sc   kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false                  11h
Storage class contents
# oc get sc local-sc -oyaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  creationTimestamp: "2020-05-25T04:09:31Z"
  labels:
    local.storage.openshift.io/owner-name: local-disks
    local.storage.openshift.io/owner-namespace: local-storage
  name: local-sc
  resourceVersion: "5273371"
  selfLink: /apis/storage.k8s.io/v1/storageclasses/local-sc
  uid: 0c625dad-3879-43b1-9b0a-f0606de91e5a
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
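Note volumeBindingMode: WaitForFirstConsumer: a PVC against local-sc stays Pending until a pod that uses it is scheduled, which lets the scheduler place the pod on the node that actually holds the disk. You can observe this with a throwaway claim (the claim name here is mine, for illustration only):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-local-claim
  namespace: local-storage
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: local-sc
  resources:
    requests:
      storage: 48Gi

oc get pvc -n local-storage will show it as Pending (waiting for first consumer); delete it again before deploying cluster logging.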
Install the Elasticsearch Operator: Operators → OperatorHub → Elasticsearch Operator → click Install → for Installation Mode select All namespaces on the cluster → for Installed Namespace select openshift-operators-redhat → check Enable operator recommended cluster monitoring on this namespace → choose an Update Channel and Approval Strategy → click Subscribe → verify under Operators → Installed Operators that the Elasticsearch Operator status is Succeeded.
Install the Cluster Logging Operator: Operators → OperatorHub → Cluster Logging Operator → click Install → for Installation Mode select A specific namespace on the cluster → for Installed Namespace select openshift-logging → check Enable operator recommended cluster monitoring on this namespace → choose an Update Channel and Approval Strategy → click Subscribe → verify the status under Installed Operators → check the pod status under Workloads → Pods.
Administration → Custom Resource Definitions → ClusterLogging → Custom Resource Definition Overview page → Instances → click Create ClusterLogging, with the following content:
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  managementState: "Managed"
  logStore:
    type: "elasticsearch"
    elasticsearch:
      nodeCount: 3
      storage:
        storageClassName: local-sc
        size: 48G
      resources:
        limits:
          cpu: "4"
          memory: "16Gi"
        requests:
          cpu: "4"
          memory: "16Gi"
      redundancyPolicy: "SingleRedundancy"
  visualization:
    type: "kibana"
    kibana:
      replicas: 1
  curation:
    type: "curator"
    curator:
      schedule: "30 3 * * *"
  collection:
    logs:
      type: "fluentd"
      fluentd: {}
For ES the main knobs are the node count, the storageClass name, the storage size, and the resource quota (give it as much memory as possible). With three nodes, SingleRedundancy (one replica per primary shard) is sufficient; higher redundancy consumes a lot of storage, so adjust to your situation. Curator is scheduled here to run a cleanup at 3:30 every day; by default it removes data older than 30 days, and it can be configured per index or per project index: https://docs.openshift.com/container-platform/4.4/logging/config/cluster-logging-curator.html
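Per-project retention is configured through the curator ConfigMap in openshift-logging (oc edit configmap/curator). A minimal sketch following the curator doc linked above; the project names are examples:

config.yaml: |
  myapp-dev:       # delete this project's indices after 1 day
    delete:
      days: 1
  .operations:     # operations indices
    delete:
      weeks: 8
  .defaults:       # anything not matched above
    delete:
      days: 30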
EFK placement can be controlled either with taints/tolerations or with a nodeSelector. In my case node binding is already achieved through local volumes, so no extra binding is needed. One caveat with taints/tolerations: once a node is tainted, some infra pods get evicted, e.g. the dns and machine-config-daemon pods, because they carry no toleration for the taint we applied. I checked and their operators expose no tolerations setting either; you could patch their DaemonSets directly, and the change would not be reverted, but that is non-standard and may cause problems.
tolerations
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: openshift-logging
spec:
  managementState: "Managed"
  logStore:
    type: "elasticsearch"
    elasticsearch:
      nodeCount: 1
      tolerations:
      - key: "logging"
        operator: "Exists"
        effect: "NoExecute"
        tolerationSeconds: 6000
      resources:
        limits:
          memory: 8Gi
        requests:
          cpu: 100m
          memory: 1Gi
      storage: {}
      redundancyPolicy: "ZeroRedundancy"
  visualization:
    type: "kibana"
    kibana:
      tolerations:
      - key: "logging"
        operator: "Exists"
        effect: "NoExecute"
        tolerationSeconds: 6000
      resources:
        limits:
          memory: 2Gi
        requests:
          cpu: 100m
          memory: 1Gi
      replicas: 1
  curation:
    type: "curator"
    curator:
      tolerations:
      - key: "logging"
        operator: "Exists"
        effect: "NoExecute"
        tolerationSeconds: 6000
      resources:
        limits:
          memory: 200Mi
        requests:
          cpu: 100m
          memory: 100Mi
      schedule: "*/5 * * * *"
  collection:
    logs:
      type: "fluentd"
      fluentd:
        tolerations:
        - key: "logging"
          operator: "Exists"
          effect: "NoExecute"
          tolerationSeconds: 6000
        resources:
          limits:
            memory: 2Gi
          requests:
            cpu: 100m
            memory: 1Gi
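These tolerations only take effect if the logging nodes actually carry a matching taint; since the tolerations use operator Exists, the taint value is arbitrary. For example, reusing a node name from this cluster:

oc adm taint nodes worker02.ocp44.cluster1.com logging=true:NoExecute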
nodeSelector
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
....
spec:
  collection:
    logs:
      fluentd:
        resources: null
      type: fluentd
  curation:
    curator:
      nodeSelector:
        node-role.kubernetes.io/infra: ''
      resources: null
      schedule: 30 3 * * *
    type: curator
  logStore:
    elasticsearch:
      nodeCount: 3
      nodeSelector:
        node-role.kubernetes.io/infra: ''
      redundancyPolicy: SingleRedundancy
      resources:
        limits:
          cpu: 500m
          memory: 16Gi
        requests:
          cpu: 500m
          memory: 16Gi
      storage: {}
    type: elasticsearch
  managementState: Managed
  visualization:
    kibana:
      nodeSelector:
        node-role.kubernetes.io/infra: ''
      proxy:
        resources: null
      replicas: 1
      resources: null
    type: kibana
....
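For these nodeSelectors to match anything, the target nodes need the infra role label first, e.g.:

oc label node worker02.ocp44.cluster1.com node-role.kubernetes.io/infra=''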
Even after pinning a few nodes to ES, ordinary application pods can still be scheduled onto them. You can therefore give the real application nodes an app label and inject a nodeSelector into the project template: newly created projects then land on the real application nodes, with no need to set a nodeSelector in every deployment. A sketch follows.
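A minimal sketch of that approach, assuming the application nodes are labeled node-role.kubernetes.io/app (the label name is my choice; see the project-creation doc in the references): generate the bootstrap project template, add the node-selector annotation to its Project object, then point project creation at the template.

oc adm create-bootstrap-project-template -o yaml > template.yaml

Inside template.yaml, annotate the Project object (excerpt):

objects:
- apiVersion: project.openshift.io/v1
  kind: Project
  metadata:
    annotations:
      openshift.io/node-selector: "node-role.kubernetes.io/app="
    name: ${PROJECT_NAME}

oc create -f template.yaml -n openshift-config
oc edit project.config.openshift.io/cluster   # set spec.projectRequestTemplate.name: project-request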
If ES sits on storage like Ceph RBD instead, you do need a nodeSelector or taints, otherwise the ES pods will drift across nodes. The same applies to Prometheus.
https://docs.openshift.com/container-platform/4.4/logging/config/cluster-logging-tolerations.html
https://docs.openshift.com/container-platform/4.4/logging/cluster-logging-moving-nodes.html
https://docs.openshift.com/container-platform/4.4/applications/projects/configuring-project-creation.html
https://docs.openshift.com/container-platform/4.4/networking/configuring-networkpolicy.html#nw-networkpolicy-creating-default-networkpolicy-objects-for-a-new-project
https://access.redhat.com/solutions/4946861