There is a rabbit error on the faulty node: Error syncing pod, skipping: failed to "StartContainer" for "rabbit1" with CrashLoopBackOff: "Back-off 10s restarting failed container=rabbit1 pod=rabbit1rc-qdd6n_default(4d4b9b2c-ba2f-11f0-9b26-c2ab20da4d30)"
6m 6m 1 kubelet, 19.202.189.135 spec.containers{rabbit1} Normal Created Created container with id 54b2d39397b1402ad2c17268f165107d166e4a90f364990273dda4d6e4b73547
6m 6m 1 kubelet, 19.202.189.135 spec.containers{rabbit1} Normal Started Started container with id 54b2d39397b1402ad2c17268f165107d166e4a90f364990273dda4d6e4b73547
5m 5m 2 kubelet, 19.202.189.135 Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "rabbit1" with CrashLoopBackOff: "Back-off 20s restarting failed container=rabbit1 pod=rabbit1rc-qdd6n_default(4d4b9b2c-ba2f-11f0-9b26-c2ab20da4d30)"
5m 5m 1 kubelet, 19.202.189.135 spec.containers{rabbit1} Normal Created Created container with id b176f9c230cb784c3b152dfeef7aff2cc2b08f7285b7db5bc822a81a3729ce7d
5m 5m 1 kubelet, 19.202.189.135 spec.containers{rabbit1} Normal Started Started container with id b176f9c230cb784c3b152dfeef7aff2cc2b08f7285b7db5bc822a81a3729ce7d
4m 4m 3 kubelet, 19.202.189.135 Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "rabbit1" with CrashLoopBackOff: "Back-off 40s restarting failed container=rabbit1 pod=rabbit1rc-qdd6n_default(4d4b9b2c-ba2f-11f0-9b26-c2ab20da4d30)"
3m 3m 1 kubelet, 19.202.189.135 spec.containers{rabbit1} Normal Started Started container with id f970427bcb7b8a49afe565a6a2abb1590f3a92f2069eff4268e55caca53cab50
3m 3m 1 kubelet, 19.202.189.135 spec.containers{rabbit1} Normal Created Created container with id f970427bcb7b8a49afe565a6a2abb1590f3a92f2069eff4268e55caca53cab50
3m 2m 6 kubelet, 19.202.189.135 Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "rabbit1" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=rabbit1 pod=rabbit1rc-qdd6n_default(4d4b9b2c-ba2f-11f0-9b26-c2ab20da4d30)"
7m 1m 6 kubelet, 19.202.189.135 spec.containers{rabbit1} Normal Pulled Container image "cloud-base/rabbitmq-3.6.5:E3106H01-V300R001B01D029-RC3" already present on machine
1m 1m 1 kubelet, 19.202.189.135 spec.containers{rabbit1} Normal Created Created container with id 69d01cbf2521a80ead1b75b07ae91f1d0d6621e0b3c41c57b0f5339608deab44
1m 1m 1 kubelet, 19.202.189.135 spec.containers{rabbit1} Normal Started Started container with id 69d01cbf2521a80ead1b75b07ae91f1d0d6621e0b3c41c57b0f5339608deab44
6m 3s 20 kubelet, 19.202.189.135 spec.containers{rabbit1} Warning BackOff Back-off restarting failed container
1m 3s 7 kubelet, 19.202.189.135 Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "rabbit1" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=rabbit1 pod=rabbit1rc-qdd6n_default(4d4b9b2c-ba2f-11f0-9b26-c2ab20da4d30)"
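The Back-off values in the events above (10s, 20s, 40s, 1m20s, 2m40s) follow kubelet's CrashLoopBackOff behavior: the restart delay doubles after each failed start and is capped at 5 minutes. A minimal sketch of that progression (the function name and parameters are illustrative, not a kubelet API):

```python
# Model the CrashLoopBackOff delay sequence: doubles each restart, capped at 5 minutes.
# Mirrors the 10s -> 20s -> 40s -> 1m20s -> 2m40s sequence seen in the events.
def crashloop_delays(restarts, initial=10, cap=300):
    delays = []
    delay = initial
    for _ in range(restarts):
        delays.append(min(delay, cap))
        delay *= 2
    return delays

print(crashloop_delays(6))  # [10, 20, 40, 80, 160, 300]
```

After 2m40s the next delay would already hit the 5-minute cap, so the pod keeps restarting every 5 minutes until the underlying crash is fixed.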
First check the container logs to find out why it keeps crashing:
kubectl logs -f rabbit1rc-qdd6n -c rabbit1 -n default
Common causes include a corrupted /var/lib/rabbitmq directory, a syntax error in the configuration file, or a dependent service that is not ready yet. If persistent storage is suspect, check the PVC:
kubectl describe pvc <rabbitmq-pvc-name> -n default
Once the root cause is fixed, delete the pod so the replication controller recreates it:
kubectl delete pod rabbit1rc-qdd6n -n default
Also verify that cloud-base/rabbitmq-3.6.5:E3106H01-V300R001B01D029-RC3 is the officially adapted image. Then check the other pods:
kubectl get pods -n default
Confirm the Running status and restart counts of the web, db, and cache pods, and tail the web pod logs:
kubectl logs -f <web-pod-name> -n default
Look for errors such as "RabbitMQ connection failed". If necessary, clear the nginx cache and restart the web pod:
kubectl exec -it <web-pod-name> -n default -- rm -rf /var/cache/nginx/*
kubectl delete pod <web-pod-name> -n default
Finally, verify the workloads:
kubectl get deployments,statefulsets -n default
and confirm that each pod's READY state is 1/1 (check pod logs with kubectl logs <pod-name> -n default and node resources with kubectl top nodes).
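The final READY check can be automated. A minimal sketch that parses the plain-table output of kubectl get pods and flags pods that are not fully ready, not Running, or restarting frequently (the sample output, function name, and restart threshold are illustrative assumptions, not taken from a real cluster):

```python
# Parse `kubectl get pods` table output and flag unhealthy pods.
# SAMPLE is illustrative output, not real cluster data.
SAMPLE = """\
NAME             READY   STATUS             RESTARTS   AGE
rabbit1rc-qdd6n  0/1     CrashLoopBackOff   20         7m
web-abc12        1/1     Running            0          3d
"""

def unhealthy_pods(output, max_restarts=3):
    problems = []
    for line in output.strip().splitlines()[1:]:  # skip header row
        name, ready, status, restarts, _age = line.split()
        current, desired = ready.split("/")
        if current != desired or status != "Running" or int(restarts) > max_restarts:
            problems.append(name)
    return problems

print(unhealthy_pods(SAMPLE))  # ['rabbit1rc-qdd6n']
```

In practice you would feed it the real output, e.g. kubectl get pods -n default, and investigate any pod it reports.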
What component is sorb-rc-jrpsf? It is in the same state --