ceph -s shows the cluster in a warning state.
One PG is in the following state: active+clean+scrubbing+deep+repair.
Use ceph health detail to see the detailed cause and identify exactly which PG is abnormal.
[root@cvknode01 ~]# ceph health detail
HEALTH_WARN Degraded data redundancy: 1 pg repair
PG_DEGRADED Degraded data redundancy: 1 pg repair
pg 2.81 is active+clean+scrubbing+deep+repair, acting [13,30,6]
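As a cross-check (a minimal sketch; pg 2.81 is the id reported above), the same information can be pulled straight from the PG map:

# list PGs whose state mentions repair or deep scrub (columns: pgid, state, up/acting sets)
ceph pg dump pgs_brief | grep -E 'repair|scrubbing\+deep'
# confirm the up and acting OSD sets for the reported PG
ceph pg map 2.81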
With the PG id confirmed by the command above, run a pg query against it.
The relevant part of the ceph pg 2.81 query output is as follows:
"recovery_state": [
{
"name": "Started/Primary/Active",
"enter_time": "2025-05-14 14:32:52.349278",
"might_have_unfound": [
{
"osd": "6",
"status": "already probed"
},
{
"osd": "7",
"status": "not queried"
},
{
"osd": "30",
"status": "already probed"
}
],
"recovery_progress": {
"backfill_targets": [],
"waiting_on_backfill": [],
"last_backfill_started": "MIN",
"backfill_info": {
"begin": "MIN",
"end": "MIN",
"objects": []
},
"peer_backfill_info": [],
"backfills_in_flight": [],
"recovering": [],
"pg_backend": {
"pull_from_peer": [],
"pushing": []
}
},
"scrub": {
"scrubber.epoch_start": "4346",
"scrubber.active": true,
"scrubber.state": "NEW_CHUNK",
"scrubber.start": "2:8115ce2a:::rbd_data.1.2778318e1e8a0.000000000002b3a8:0",
"scrubber.end": "2:8115ce2a:::rbd_data.1.2778318e1e8a0.000000000002b3a8:0",
"scrubber.subset_last_update": "0'0",
"scrubber.deep": true,
"scrubber.seed": 4294967295,
"scrubber.waiting_on": 0,
"scrubber.waiting_on_whom": []
}
},
{
"name": "Started",
"enter_time": "2025-05-14 14:32:51.349996"
}
],
From this output, osd.6 and osd.30 are fine ("already probed"), while osd.7 is "not queried"; this suggests the disk behind osd.7 has had problems before.
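Before checking the hardware alarms, the host and physical device behind osd.7 can be located with commands like the following (a sketch; osd.7 is the id from this case, and the last command depends on the Ceph release supporting device tracking):

ceph osd find 7                  # host and CRUSH location of osd.7
ceph osd metadata 7              # includes the hostname and backing device names
ceph device ls-by-daemon osd.7   # physical device model/serial, if available in this release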
Checking the HDM on the host where that disk resides confirmed a disk pre-failure alarm.
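The pre-failure alarm can also be corroborated from the operating system on that host with smartctl (a sketch; /dev/sdX is a placeholder for the device identified above):

smartctl -H /dev/sdX   # overall SMART health self-assessment
smartctl -A /dev/sdX   # vendor attributes, e.g. reallocated and pending sector counts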
The on-site engineer contacted the storage side to investigate; the problem was confirmed, and the storage side recommended replacing the disk as soon as possible.
For now, we waited for the automatic repair to finish; once it completed, ceph -s reported the cluster health as OK again. The disk with the pre-failure alarm should still be replaced as soon as possible.
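The repair progress can be watched until the scrubber goes inactive, and the later disk replacement follows the usual OSD replacement steps (a generic upstream sketch; the vendor's own replacement procedure, if any, takes precedence, and osd.7 / pg 2.81 are the ids from this case):

# watch cluster health until the repair finishes
watch -n 60 ceph -s
# the repair is done once the scrubber reports inactive for this PG
ceph pg 2.81 query | grep -E '"scrubber\.(active|state)"'
# before pulling the disk, confirm it is safe to remove osd.7, then mark it out (Luminous and later)
ceph osd safe-to-destroy 7
ceph osd out 7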