Prometheus Operator Data Persistence
Earlier, when we restarted the Prometheus Pods after fixing the permissions, you may have noticed that the previously collected data was gone. This is because the Prometheus instance created through the prometheus CRD does not persist its data. We can confirm this by looking at how the generated Prometheus Pod mounts its volumes:
kubectl get pod prometheus-k8s-0 -n monitoring -o yaml
......
volumeMounts:
- mountPath: /etc/prometheus/config_out
  name: config-out
  readOnly: true
- mountPath: /prometheus
  name: prometheus-k8s-db
......
volumes:
......
- emptyDir: {}
  name: prometheus-k8s-db
......
As you can see, the Prometheus data directory /prometheus is actually mounted as an emptyDir volume. The lifecycle of data in an emptyDir is tied to the Pod's lifecycle, so when the Pod is deleted the data goes with it, which is exactly why the old data disappeared after the Pod was recreated. Production monitoring data obviously needs to be persisted, and the prometheus CRD provides a way to configure this. Since the Prometheus instance is ultimately deployed through a StatefulSet controller, we persist its data via a StorageClass. First, create a StorageClass object:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: prometheus-data-db
provisioner: fuseim.pri/ifs
Here we declare a StorageClass object with provisioner: fuseim.pri/ifs, because our cluster uses NFS as the storage backend and the nfs-client-provisioner we created earlier in the course sets its PROVISIONER_NAME to fuseim.pri/ifs. This name must match the provisioner exactly and cannot be changed arbitrarily. Save the file as prometheus-storageclass.yaml and apply it:
[root@k8s-master manifests]# vim prometheus-storageclass.yaml
[root@k8s-master manifests]# kubectl apply -f prometheus-storageclass.yaml
storageclass.storage.k8s.io/prometheus-data-db created
[root@k8s-master manifests]#
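If you want to double-check which provisioner name your NFS provisioner was actually deployed with before relying on it, the quick check below is a sketch that assumes the provisioner runs as a Deployment named nfs-client-provisioner in the default namespace; adjust the name and namespace to your own cluster:
# Confirm the new StorageClass exists and note its provisioner
kubectl get storageclass prometheus-data-db
# Inspect the PROVISIONER_NAME env var of the NFS provisioner
# (deployment name and namespace are assumptions, adjust as needed)
kubectl get deployment nfs-client-provisioner -o yaml | grep -A 1 PROVISIONER_NAME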
Then add the following configuration under spec in the prometheus CRD resource object:
storage:
  volumeClaimTemplate:
    spec:
      storageClassName: prometheus-data-db
      resources:
        requests:
          storage: 10Gi
[root@k8s-master manifests]# kubectl get crd -n monitoring
NAME                                    CREATED AT
alertmanagers.monitoring.coreos.com     2019-10-08T08:02:15Z
podmonitors.monitoring.coreos.com       2019-10-08T08:02:15Z
prometheuses.monitoring.coreos.com      2019-10-08T08:02:15Z
prometheusrules.monitoring.coreos.com   2019-10-08T08:02:15Z
servicemonitors.monitoring.coreos.com   2019-10-08T08:02:16Z
[root@k8s-master manifests]# cat prometheus-prometheus.yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  labels:
    prometheus: k8s
  name: k8s
  namespace: monitoring
spec:
  alerting:
    alertmanagers:
    - name: alertmanager-main
      namespace: monitoring
      port: web
  storage:
    volumeClaimTemplate:
      spec:
        storageClassName: prometheus-data-db
        resources:
          requests:
            storage: 10Gi
  baseImage: quay.io/prometheus/prometheus
  nodeSelector:
    beta.kubernetes.io/os: linux
  replicas: 2
  secrets:
  - etcd-certs
  additionalScrapeConfigs:
    name: additional-configs
    key: prometheus-additional.yaml
  resources:
    requests:
      memory: 400Mi
  ruleSelector:
    matchLabels:
      prometheus: k8s
      role: alert-rules
  securityContext:
    fsGroup: 2000
    runAsNonRoot: true
    runAsUser: 1000
  serviceAccountName: prometheus-k8s
  serviceMonitorNamespaceSelector: {}
  serviceMonitorSelector: {}
  version: v2.11.0
[root@k8s-master manifests]# kubectl apply -f prometheus-prometheus.yaml
prometheus.monitoring.coreos.com/k8s unchanged
[root@k8s-master manifests]#
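Once the change is applied, the Operator reconciles the underlying StatefulSet so that it carries the new volumeClaimTemplate and the Pods come back with PVC-backed volumes. The commands below are a quick way to follow this; the StatefulSet name prometheus-k8s is inferred from the Pod names, so verify it in your cluster:
# Confirm the volumeClaimTemplate was propagated to the StatefulSet
kubectl get statefulset prometheus-k8s -n monitoring -o jsonpath='{.spec.volumeClaimTemplates[*].spec.storageClassName}'
# Wait for the Pods to be recreated with the new volumes
kubectl rollout status statefulset/prometheus-k8s -n monitoring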
[root@k8s-master manifests]# kubectl get pv -n monitoring | grep prometheus-k8s-db
monitoring-prometheus-k8s-db-prometheus-k8s-0-pvc-f318725c-a645-40a6-ba9f-01c274c0e603   10Gi   RWO   Delete   Bound   monitoring/prometheus-k8s-db-prometheus-k8s-0   prometheus-data-db   36s
monitoring-prometheus-k8s-db-prometheus-k8s-1-pvc-e6824b03-0bc9-4ad3-84e3-ec143002d0e4   10Gi   RWO   Delete   Bound   monitoring/prometheus-k8s-db-prometheus-k8s-1   prometheus-data-db   36s
[root@k8s-master manifests]# kubectl get pvc -n monitoring
NAME                                 STATUS   VOLUME                                                                                    CAPACITY   ACCESS MODES   STORAGECLASS         AGE
prometheus-k8s-db-prometheus-k8s-0   Bound    monitoring-prometheus-k8s-db-prometheus-k8s-0-pvc-f318725c-a645-40a6-ba9f-01c274c0e603   10Gi       RWO            prometheus-data-db   42s
prometheus-k8s-db-prometheus-k8s-1   Bound    monitoring-prometheus-k8s-db-prometheus-k8s-1-pvc-e6824b03-0bc9-4ad3-84e3-ec143002d0e4   10Gi       RWO            prometheus-data-db   42s
[root@k8s-master manifests]#
Now if we look at the Prometheus Pod's data directory again, we can see that it is bound to a PVC object:
kubectl get pod prometheus-k8s-0 -n monitoring -o yaml
......
volumeMounts:
- mountPath: /etc/prometheus/config_out
  name: config-out
  readOnly: true
- mountPath: /prometheus
  name: prometheus-k8s-db
......
volumes:
......
- name: prometheus-k8s-db
  persistentVolumeClaim:
    claimName: prometheus-k8s-db-prometheus-k8s-0
......
Now even if the Pod is killed, the data will no longer be lost.
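To convince yourself, you can delete one of the replicas and check that the StatefulSet recreates it attached to the same PVC; this is a minimal sketch of such a check, and previously scraped series should still be visible in the Prometheus UI afterwards:
# Delete one replica; the StatefulSet controller recreates a Pod with the same name
kubectl delete pod prometheus-k8s-0 -n monitoring
# The PVC stays Bound and the recreated Pod mounts it again
kubectl get pvc prometheus-k8s-db-prometheus-k8s-0 -n monitoring
kubectl get pod prometheus-k8s-0 -n monitoring -o jsonpath='{.spec.volumes[?(@.name=="prometheus-k8s-db")].persistentVolumeClaim.claimName}'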