Kubernetes Prometheus Operator Deployment Guide (PodMonitor Approach)
Architecture Upgrade Comparison
Before we begin, here is how the "supermarket management model" changes with this upgrade: previously, every scrape target was registered centrally in one global Prometheus ConfigMap (one store manager stocking every shelf); after the upgrade, each service ships its own PodMonitor resource alongside its code (every vendor manages its own shelf), and the Operator assembles the final configuration automatically.
Phase 1: Environment Preparation
Verify the environment and make sure kubectl can reach the cluster:
# Check node status
kubectl get nodes
# Create the namespace
kubectl create namespace monitoring
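As an optional sanity check, confirm the namespace exists before moving on:
# The namespace should be listed with STATUS Active
kubectl get namespace monitoring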
Phase 2: Building the Foundation (RBAC & CRDs)
2.1 Check Whether the CRDs Are Installed
The Prometheus Operator depends on Custom Resource Definitions (CRDs); they are what allow the Operator to recognize new resource types such as PodMonitor.
# Check the core Prometheus Operator CRDs
kubectl get crd prometheuses.monitoring.coreos.com
kubectl get crd podmonitors.monitoring.coreos.com
kubectl get crd servicemonitors.monitoring.coreos.com
kubectl get crd alertmanagers.monitoring.coreos.com
Example of successful output:
NAME                                     CREATED AT
podmonitors.monitoring.coreos.com        2026-02-04T03:07:13Z
NAME                                     CREATED AT
prometheuses.monitoring.coreos.com       2026-02-04T05:41:33Z
NAME                                     CREATED AT
servicemonitors.monitoring.coreos.com    2026-02-04T03:07:19Z
NAME                                     CREATED AT
alertmanagers.monitoring.coreos.com      2026-02-04T05:41:28Z
If the commands return resource information, the CRDs are installed. If you see No resources found or error: the server doesn't have a resource type..., they are not. (On most purchased/managed clusters they are already present, because the cluster usually ships with the official Prometheus components.)
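If you prefer a single command, you can list every Operator-related CRD at once (a convenience sketch):
# List all CRDs in the monitoring.coreos.com API group in one go
kubectl get crd | grep monitoring.coreos.com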
2.2 Install the CRDs (If Not Already Installed)
Fetch the official manifests:
# Create a working directory
mkdir -p ~/prom-op && cd ~/prom-op
# Clone the official repository (pinned to the stable release v0.13.0)
git clone --depth 1 --branch v0.13.0 https://github.com/prometheus-operator/kube-prometheus.git
cd kube-prometheus/manifests
If you cannot reach GitHub, download the code ahead of time.
Install the CRDs and base permissions (order matters: the CRDs must be installed first, or the resources that depend on them will fail):
# 1. Create the Namespace (same name as the one created earlier, but with extra
#    compatibility labels; prefer this one. If the namespace already exists,
#    use kubectl apply instead of create so the labels are merged in)
kubectl create -f namespace.yaml
# 2. Install all CRDs (the PodMonitor, ServiceMonitor, Alertmanager definitions, etc.)
kubectl create -f setup/
# Give the CRDs a few seconds to become established
sleep 5
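Instead of a fixed sleep, you can block until the CRDs are actually accepted by the API server (a sketch; extend the list to whichever CRDs you just applied):
# Wait until the key CRDs report the Established condition
kubectl wait --for=condition=Established \
  crd/podmonitors.monitoring.coreos.com \
  crd/servicemonitors.monitoring.coreos.com \
  --timeout=60s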
Install RBAC and the other base components:
# Install RBAC and the other base components (ServiceAccount, ClusterRole, etc.)
kubectl create -f .
Important:
The /manifests directory contains not only the prometheus-operator components but also every YAML file used in the steps below. If this stack starts correctly, it already satisfies an enterprise-grade configuration, and all of the remaining steps can be skipped.
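If you did deploy the full manifests, a quick check that the stack came up (pod names vary slightly by kube-prometheus version):
# All pods in the monitoring namespace should reach Running/Completed
kubectl get pods -n monitoring
# The Operator-managed custom resources should also exist
kubectl get prometheus,alertmanager -n monitoring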
2.3 Create the RBAC Objects (If Not Installed via the Manifests)
Create the file 01-rbac.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus-sa
  namespace: monitoring
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus-role
rules:
- apiGroups: [""]
  resources: ["nodes", "nodes/proxy", "services", "endpoints", "pods", "configmaps"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["extensions", "networking.k8s.io"]
  resources: ["ingresses"]
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus-role
subjects:
- kind: ServiceAccount
  name: prometheus-sa
  namespace: monitoring
Run:
kubectl apply -f 01-rbac.yaml
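You can verify the binding took effect by impersonating the ServiceAccount (a quick sanity check):
# Should print "yes" if the ClusterRoleBinding is wired up correctly
kubectl auth can-i list pods --all-namespaces \
  --as=system:serviceaccount:monitoring:prometheus-sa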
Phase 3: Deploy the Prometheus Instance (Replacing the Old Deployment)
Instead of writing a Deployment, we create a Prometheus custom resource that tells the Operator: "I want a Prometheus that looks like this."
Create the file 03-prometheus-instance.yaml:
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: k8s
  namespace: monitoring
  labels:
    prometheus: k8s
spec:
  # 1. Bind the ServiceAccount
  serviceAccountName: prometheus-sa
  # 2. Scrape-scope configuration (important!)
  # Tells the Operator to automatically load every PodMonitor and ServiceMonitor
  # in the current namespace (an empty selector {} matches all of them)
  podMonitorSelector: {}
  serviceMonitorSelector: {}
  ruleSelector: {}
  # 3. Version and image
  version: v2.55.1
  image: quay.io/prometheus/prometheus:v2.55.1
  # 4. Resource limits
  resources:
    requests:
      memory: 400Mi
      cpu: 200m
    limits:
      memory: 2Gi
      cpu: 1000m
  # 5. Storage
  storage:
    volumeClaimTemplate:
      spec:
        storageClassName: nfs-storage  # [Important] Replace with your actual StorageClass name
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 50Gi
  # 6. Retention policy
  retention: 15d
  # 7. Namespace selectors (allow monitoring across all namespaces)
  # Without these, only the monitoring namespace is watched by default
  podMonitorNamespaceSelector: {}
  serviceMonitorNamespaceSelector: {}
Deploy it:
kubectl apply -f 03-prometheus-instance.yaml
Verify:
# Watch for the StatefulSet to be created and start running
kubectl get statefulset -n monitoring
kubectl get pods -n monitoring -w
# You should see prometheus-k8s-0 reach Running
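To look at the UI before any Service or Ingress is set up, you can port-forward to the governing Service the Operator creates for the StatefulSet (named prometheus-operated by convention):
# Expose the Prometheus UI locally at http://localhost:9090
kubectl port-forward -n monitoring svc/prometheus-operated 9090:9090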
Phase 4: Deploy Node Exporter
Identical to the vanilla setup; create the file 04-node-exporter.yaml:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: monitoring
  labels:
    app: node-exporter
spec:
  selector:
    matchLabels:
      app: node-exporter
  template:
    metadata:
      labels:
        app: node-exporter
    spec:
      hostNetwork: true
      hostPID: true
      tolerations:
      - effect: NoSchedule
        operator: Exists
      containers:
      - name: node-exporter
        image: prom/node-exporter:v1.8.2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9100
          hostPort: 9100
        args:
        - "--path.procfs=/host/proc"
        - "--path.sysfs=/host/sys"
        - "--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($|/)"
        volumeMounts:
        - name: proc
          mountPath: /host/proc
          readOnly: true
        - name: sys
          mountPath: /host/sys
          readOnly: true
        resources:
          requests:
            memory: "50Mi"
            cpu: "100m"
          limits:
            memory: "100Mi"
            cpu: "200m"
      volumes:
      - name: proc
        hostPath:
          path: /proc
      - name: sys
        hostPath:
          path: /sys
Run:
kubectl apply -f 04-node-exporter.yaml
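Verify that one exporter pod is running on every node (a quick check; replace <node IP> with any node's address):
# DESIRED should equal the number of nodes, all READY
kubectl get daemonset node-exporter -n monitoring
# hostNetwork: true exposes port 9100 directly on each node
curl -s http://<node IP>:9100/metrics | head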
Phase 5: Deploy Grafana
Largely the same as the vanilla setup; create the file 05-grafana.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-data
  namespace: monitoring
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-datasources
  namespace: monitoring
data:
  datasources.yaml: |
    apiVersion: 1
    datasources:
    - name: Prometheus
      type: prometheus
      url: http://prometheus-k8s.monitoring.svc:9090
      isDefault: true
      access: proxy
      editable: true
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      containers:
      - name: grafana
        image: grafana/grafana:10.4.2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3000
        env:
        - name: GF_SECURITY_ADMIN_PASSWORD
          value: "admin123"
        volumeMounts:
        - name: grafana-storage
          mountPath: /var/lib/grafana
        - name: datasources
          mountPath: /etc/grafana/provisioning/datasources
        resources:
          requests:
            memory: "256Mi"
            cpu: "200m"
          limits:
            memory: "1Gi"
            cpu: "1000m"
      volumes:
      - name: grafana-storage
        persistentVolumeClaim:
          claimName: grafana-data
      - name: datasources
        configMap:
          name: grafana-datasources
---
apiVersion: v1
kind: Service
metadata:
  name: grafana-service
  namespace: monitoring
spec:
  selector:
    app: grafana
  ports:
  - port: 3000
    targetPort: 3000
  type: NodePort
Run:
kubectl apply -f 05-grafana.yaml
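To find out which NodePort was assigned (a quick lookup sketch):
# Look for a mapping like 3000:3XXXX/TCP, then browse to http://<node IP>:3XXXX
kubectl get svc grafana-service -n monitoring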
Manually Configure the Data Source (If Auto-Provisioning Did Not Take Effect)
Open http://<node IP>:<Grafana NodePort> and log in (user: admin, password: admin123).
Click Connections → Data Sources → Add data source → choose Prometheus.
In the URL field, enter http://prometheus-k8s.monitoring.svc:9090 (note the service name is now prometheus-k8s, not prometheus-service; that Service comes from the kube-prometheus manifests, so if you only created the Prometheus CR yourself, point at the Operator-generated prometheus-operated Service instead).
Click Save & Test.
Import Dashboards
Click Dashboards → New → Import
Enter a template ID, such as 1860 (Node Exporter Full) or 315 (Kubernetes Cluster)
Click Load, select the Prometheus data source, and click Import
Phase 6: The New Model for Application Monitoring (PodMonitor)
No more editing a global ConfigMap, and no more slapping a generic monitor=true label on Pods. Each application now owns its own monitoring manifest: a PodMonitor.
Scenario: Monitoring the ezcloud-mqtt-auth Service
Assume the service is deployed in the ucs-os namespace, its container port is named prot-8800, and its metrics path is /actuator/prometheus.
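Before writing the PodMonitor, it helps to confirm the Pod labels and the container port name it will match (a sketch; the 8800 port number is an assumption inferred from the port name):
# Inspect the labels and port names of the target pods
kubectl get pods -n ucs-os -l aws-app=ezcloud-mqtt-auth \
  -o jsonpath='{.items[0].spec.containers[0].ports}'
# Expected to include something like {"containerPort":8800,"name":"prot-8800"}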
Step 1: Write the PodMonitor
Create the file ezcloud-mqtt-auth-monitor.yaml (it can live in the application's code repository and ship with every release):
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: ezcloud-mqtt-auth-monitor
  namespace: monitoring  # The monitoring rule itself lives in the monitoring namespace
  labels:
    release: prometheus  # Optional, useful for filtering
spec:
  # 1. Which namespace(s) to look for Pods in
  namespaceSelector:
    matchNames:
    - ucs-os
  # 2. Which Pods to match (by label)
  selector:
    matchLabels:
      aws-app: ezcloud-mqtt-auth
      # No more monitor=true labels; this match is far more precise
  # 3. How to scrape
  podMetricsEndpoints:
  - port: prot-8800  # [Key] Matches the container port NAME, not the number!
    path: /actuator/prometheus
    interval: 15s
Step 2: Apply the Configuration
kubectl apply -f ezcloud-mqtt-auth-monitor.yaml
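You can confirm the object was accepted right away (a quick check):
# The new PodMonitor should be listed immediately
kubectl get podmonitor -n monitoring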
Step 3: It Takes Effect Automatically
No Prometheus restart needed
No ConfigMap edits needed
Within seconds, the Operator detects the new PodMonitor, converts it into Prometheus scrape configuration, and triggers a reload
Verify:
Open the Prometheus UI → Status → Targets; you should see a new target group, monitoring/ezcloud-mqtt-auth-monitor/0, with its endpoints reporting UP.
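If the target does not show up, two checks usually pinpoint the problem (a troubleshooting sketch; it assumes the kube-prometheus Operator Deployment name prometheus-operator):
# 1. Look for selector or permission errors in the Operator logs
kubectl logs -n monitoring deploy/prometheus-operator | grep -i error
# 2. Query the active targets through a local port-forward
kubectl port-forward -n monitoring prometheus-k8s-0 9090:9090 &
curl -s 'http://localhost:9090/api/v1/targets?state=active' | grep ezcloud-mqtt-auth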