【MVS】TiDB installation and maintenance reports "error reading server preface: http2: frame too large"

Published 2025-02-26

Network Setup and Description

TiDB Cluster was deployed online using TiUP; the versions involved are V6.1.0, V6.1.5 and V8.5.1.


Alarm Information

1. The secure start (--init) reported an error, and the temporary password could not be displayed.

Checking afterwards showed that the cluster had in fact started normally; the password was then reset by starting the cluster normally and logging into the database (a sketch of this workaround follows the log below).

# tiup cluster start tidb-test --init

…………………………

+ [ Serial ] - UpdateTopology: cluster=tidb-test

{"level":"warn","ts":"2025-02-24T17:02:50.517+0800","logger":"etcd-client","caller":"v3@v3.5.7/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00011c1c0/192.168.169.41:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"error reading server preface: http2: frame too large\""}

Error: context deadline exceeded

Verbose debug logs has been written to /root/.tiup/logs/tiup-cluster-debug-2025-02-24-17-02-50.log.
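Since --init aborted before the password step completed, the workaround described above was to start the cluster normally and reset the root password from inside the database. A minimal sketch of that workaround, assuming the root password was never set because the --init run aborted (the TiDB server is the one listening on 192.168.169.41:4000 shown later in the configs; 'NewPass@123' is only a placeholder):

# tiup cluster start tidb-test
# mysql -h 192.168.169.41 -P 4000 -u root
mysql> ALTER USER 'root'@'%' IDENTIFIED BY 'NewPass@123';  -- placeholder password, replace with your own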

2. Reloading after modifying cluster parameters reported the following error:

tiup cluster reload tidb-test

---------------------

+ [ Serial ] - UpdateTopology: cluster=tidb-test

{"level":"warn","ts":"2025-02-26T10:14:05.749+0800","logger":"etcd-client","caller":"v3@v3.5.7/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000568000/192.168.169.41:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"error reading server preface: http2: frame too large\""}

 


Problem Description

1. The secure start that initializes the database reported an error and terminated abnormally, so the temporary password could not be displayed;

2. Routine maintenance operations, such as modifying cluster parameters, also reported errors.

 


Process Analysis

1. With the internet connection up, the operation reports "error reading server preface: http2: frame too large":

tiup cluster reload tidb-test


+ [ Serial ] - UpdateTopology: cluster=tidb-test

{"level":"warn","ts":"2025-02-26T10:14:05.749+0800","logger":"etcd-client","caller":"v3@v3.5.7/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000568000/192.168.169.41:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"error reading server preface: http2: frame too large\""}

Error: context deadline exceeded

Verbose debug logs has been written to /root/.tiup/logs/tiup-cluster-debug-2025-02-26-10-14-05.log.

The debug log contains the proxy IP address as well as the official site that has to be reached, so the suspicion is that the proxy network re-encapsulates the frames and causes the failure (a quick way to confirm the proxy settings in effect is sketched after the debug log below):

[root@tidbcluster tikv-20160]# cat /root/.tiup/logs/tiup-cluster-debug-2025-02-26-10-14-05.log
2025-02-26T10:13:48.848+0800 INFO Execute command {“command”: “tiup cluster reload tidb-test”}
2025-02-26T10:13:48.848+0800 DEBUG Environment variables {“env”: [“TIUP_HOME=/root/.tiup”, “TIUP_USER_INPUT_VERSION=”, “TIUP_VERSION=1.16.1”, “TIUP_COMPONENT_DATA_DIR=/root/.tiup/storage/cluster”, “TIUP_COMPONENT_INSTALL_DIR=/root/.tiup/components/cluster/v1.16.1”, “TIUP_TELEMETRY_STATUS=”, “TIUP_TELEMETRY_UUID=”, “TIUP_TELEMETRY_SECRET=”, “TIUP_WORK_DIR=/tidb-deploy”, “TIUP_TAG=Uds7nX3”, “TIUP_INSTANCE_DATA_DIR=/root/.tiup/data/Uds7nX3”, “XDG_SESSION_ID=946”, “HOSTNAME=tidbcluster”, “TERM=xterm”, “SHELL=/bin/bash”, “HISTSIZE=1000”, “SSH_CLIENT=192.168.110.7 54324 22”, “OLDPWD=/root”, “SSH_TTY=/dev/pts/0”, “http_proxy=192.168.118.199:808”, “USER=root”, “LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:.tar=01;31:.tgz=01;31:.arc=01;31:.arj=01;31:.taz=01;31:.lha=01;31:.lz4=01;31:.lzh=01;31:.lzma=01;31:.tlz=01;31:.txz=01;31:.tzo=01;31:.t7z=01;31:.zip=01;31:.z=01;31:.Z=01;31:.dz=01;31:.gz=01;31:.lrz=01;31:.lz=01;31:.lzo=01;31:.xz=01;31:.bz2=01;31:.bz=01;31:.tbz=01;31:.tbz2=01;31:.tz=01;31:.deb=01;31:.rpm=01;31:.jar=01;31:.war=01;31:.ear=01;31:.sar=01;31:.rar=01;31:.alz=01;31:.ace=01;31:.zoo=01;31:.cpio=01;31:.7z=01;31:.rz=01;31:.cab=01;31:.jpg=01;35:.jpeg=01;35:.gif=01;35:.bmp=01;35:.pbm=01;35:.pgm=01;35:.ppm=01;35:.tga=01;35:.xbm=01;35:.xpm=01;35:.tif=01;35:.tiff=01;35:.png=01;35:.svg=01;35:.svgz=01;35:.mng=01;35:.pcx=01;35:.mov=01;35:.mpg=01;35:.mpeg=01;35:.m2v=01;35:.mkv=01;35:.webm=01;35:.ogm=01;35:.mp4=01;35:.m4v=01;35:.mp4v=01;35:.vob=01;35:.qt=01;35:.nuv=01;35:.wmv=01;35:.asf=01;35:.rm=01;35:.rmvb=01;35:.flc=01;35:.avi=01;35:.fli=01;35:.flv=01;35:.gl=01;35:.dl=01;35:.xcf=01;35:.xwd=01;35:.yuv=01;35:.cgm=01;35:.emf=01;35:.axv=01;35:.anx=01;35:.ogv=01;35:.ogx=01;35:.aac=01;36:.au=01;36:.flac=01;36:.mid=01;36:.midi=01;36:.mka=01;36:.mp3=01;36:.mpc=01;36:.ogg=01;36:.ra=01;36:.wav=01;36:.axa=01;36:.oga=01;36:.spx=01;36:*.xspf=01;36:”, “MAIL=/var/spool/mail/root”, “PATH=/root/.tiup/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin”, “PWD=/tidb-deploy”, “LANG=en_US.UTF-8”, “https_proxy=192.168.118.199:808”, “HISTCOnTROL=ignoredups”, “SHLVL=1”, “HOME=/root”, “LOGNAME=root”, “XDG_DATA_DIRS=/root/.local/share/flatpak/exports/share:/var/lib/flatpak/exports/share:/usr/local/share:/usr/share”, “SSH_COnNECTION=192.168.110.7 54324 192.168.169.41 22”, “LESSOPEN=||/usr/bin/lesspipe.sh %s”, “XDG_RUNTIME_DIR=/run/user/0”, “DISPLAY=localhost:10.0”, “_=/root/.tiup/bin/tiup”, “TIUP_TELEMETRY_EVENT_UUID=7f9c9071-4737-4ef1-8df9-4c2bf9b77769”, “TIUP_MIRRORS=***.***”]}
2025-02-26T10:13:48.856+0800 DEBUG Initialize repository finished {“duration”: “7.248103ms”}
2025-02-26T10:13:55.745+0800 INFO + [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa.pub
2025-02-26T10:13:55.745+0800 DEBUG TaskBegin {“task”: “SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa.pub”}
2025-02-26T10:13:55.745+0800 DEBUG TaskFinish {“task”: “SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa.pub”}
2025-02-26T10:13:55.745+0800 DEBUG TaskBegin {“task”: “UserSSH: user=tidb, host=192.168.169.41\nUserSSH: user=tidb, host=192.168.169.41\nUserSSH: user=tidb, host=192.168.169.41\nUserSSH: user=tidb, host=192.168.169.41\nUserSSH: user=tidb, host=192.168.169.41\nUserSSH: user=tidb, host=192.168.169.41\nUserSSH: user=tidb, host=192.168.169.41\nUserSSH: user=tidb, host=192.168.169.41”}
2025-02-26T10:13:55.745+0800 INFO + [Parallel] - UserSSH: user=tidb, host=192.168.169.41
2025-02-26T10:13:55.745+0800 DEBUG TaskBegin {“task”: “UserSSH: user=tidb, host=192.168.169.41”}
2025-02-26T10:13:55.745+0800 DEBUG TaskFinish {“task”: “UserSSH: user=tidb, host=192.168.169.41”}
2025-02-26T10:13:55.745+0800 INFO + [Parallel] - UserSSH: user=tidb, host=192.168.169.41
2025-02-26T10:13:55.745+0800 DEBUG TaskBegin {“task”: “UserSSH: user=tidb, host=192.168.169.41”}
2025-02-26T10:13:55.745+0800 DEBUG TaskFinish {“task”: “UserSSH: user=tidb, host=192.168.169.41”}
2025-02-26T10:13:55.745+0800 INFO + [Parallel] - UserSSH: user=tidb, host=192.168.169.41
2025-02-26T10:13:55.745+0800 INFO + [Parallel] - UserSSH: user=tidb, host=192.168.169.41
2025-02-26T10:13:55.745+0800 DEBUG TaskBegin {“task”: “UserSSH: user=tidb, host=192.168.169.41”}
2025-02-26T10:13:55.745+0800 DEBUG TaskFinish {“task”: “UserSSH: user=tidb, host=192.168.169.41”}
2025-02-26T10:13:55.745+0800 INFO + [Parallel] - UserSSH: user=tidb, host=192.168.169.41
2025-02-26T10:13:55.745+0800 INFO + [Parallel] - UserSSH: user=tidb, host=192.168.169.41
2025-02-26T10:13:55.745+0800 DEBUG TaskBegin {“task”: “UserSSH: user=tidb, host=192.168.169.41”}
2025-02-26T10:13:55.745+0800 DEBUG TaskFinish {“task”: “UserSSH: user=tidb, host=192.168.169.41”}
2025-02-26T10:13:55.745+0800 INFO + [Parallel] - UserSSH: user=tidb, host=192.168.169.41
2025-02-26T10:13:55.745+0800 DEBUG TaskBegin {“task”: “UserSSH: user=tidb, host=192.168.169.41”}
2025-02-26T10:13:55.745+0800 DEBUG TaskFinish {“task”: “UserSSH: user=tidb, host=192.168.169.41”}
2025-02-26T10:13:55.745+0800 INFO + [Parallel] - UserSSH: user=tidb, host=192.168.169.41
2025-02-26T10:13:55.745+0800 DEBUG TaskBegin {“task”: “UserSSH: user=tidb, host=192.168.169.41”}
2025-02-26T10:13:55.745+0800 DEBUG TaskFinish {“task”: “UserSSH: user=tidb, host=192.168.169.41”}
2025-02-26T10:13:55.745+0800 DEBUG TaskBegin {“task”: “UserSSH: user=tidb, host=192.168.169.41”}
2025-02-26T10:13:55.745+0800 DEBUG TaskBegin {“task”: “UserSSH: user=tidb, host=192.168.169.41”}
2025-02-26T10:13:55.745+0800 DEBUG TaskFinish {“task”: “UserSSH: user=tidb, host=192.168.169.41”}
2025-02-26T10:13:55.745+0800 DEBUG TaskFinish {“task”: “UserSSH: user=tidb, host=192.168.169.41”}
2025-02-26T10:13:55.746+0800 DEBUG TaskFinish {“task”: “UserSSH: user=tidb, host=192.168.169.41\nUserSSH: user=tidb, host=192.168.169.41\nUserSSH: user=tidb, host=192.168.169.41\nUserSSH: user=tidb, host=192.168.169.41\nUserSSH: user=tidb, host=192.168.169.41\nUserSSH: user=tidb, host=192.168.169.41\nUserSSH: user=tidb, host=192.168.169.41\nUserSSH: user=tidb, host=192.168.169.41”}
2025-02-26T10:13:55.746+0800 INFO + [ Serial ] - UpdateTopology: cluster=tidb-test
2025-02-26T10:13:55.746+0800 DEBUG TaskBegin {“task”: “UpdateTopology: cluster=tidb-test”}
2025-02-26T10:14:05.749+0800 DEBUG TaskFinish {“task”: “UpdateTopology: cluster=tidb-test”, “error”: “context deadline exceeded”}
2025-02-26T10:14:05.749+0800 INFO Execute command finished {“code”: 1, “error”: “context deadline exceeded”, “errorVerbose”: “context deadline exceeded\ngithub.com/pingcap/errors.AddStack\n\tgithub.com/pingcap/errors@v0.11.5-0.20201126102027-b0a155152ca3/errors.go:174\ngithub.com/pingcap/errors.Trace\n\tgithub.com/pingcap/errors@v0.11.5-0.20201126102027-b0a155152ca3/juju_adaptor.go:15\ngithub.com/pingcap/tiup/pkg/cluster/manager.(*Manager).Reload\n\tgithub.com/pingcap/tiup/pkg/cluster/manager/reload.go:143\ngithub.com/pingcap/tiup/components/cluster/command.newReloadCmd.func1\n\tgithub.com/pingcap/tiup/components/cluster/command/reload.go:40\ngithub.com/spf13/cobra.(*Command).execute\n\tgithub.com/spf13/cobra@v1.6.1/command.go:916\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tgithub.com/spf13/cobra@v1.6.1/command.go:1044\ngithub.com/spf13/cobra.(*Command).Execute\n\tgithub.com/spf13/cobra@v1.6.1/command.go:968\ngithub.com/pingcap/tiup/components/cluster/command.Execute\n\tgithub.com/pingcap/tiup/components/cluster/command/root.go:297\nmain.main\n\tgithub.com/pingcap/tiup/components/cluster/main.go:23\nruntime.main\n\truntime/proc.go:267\nruntime.goexit\n\truntime/asm_amd64.s:1650”}
[root@tidbcluster tikv-20160]#
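The environment variables captured in the debug log above show http_proxy and https_proxy pointing to 192.168.118.199:808, which supports the suspicion that tiup's traffic is being routed through the proxy. A minimal sketch of how to confirm which proxy variables are in effect for the shell that runs tiup (standard OS commands, nothing tiup-specific; the file paths are only the usual places such variables get exported and may differ on your system):

# env | grep -i proxy
# grep -ri proxy /etc/profile /etc/profile.d/ ~/.bashrc 2>/dev/null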

 

2. With the internet connection cut off, re-running "tiup cluster reload tidb-test" to reload the configuration reports that the /timestamp.json file must be fetched from the official mirror:

Error: init config failed: 192.168.169.41:20160: fetch /timestamp.json from mirror(***.***) failed: download from ***.***/timestamp.json failed: Get "***.***/timestamp.json": dial tcp: lookup ***.*** on 192.168.1.27:53: read udp 192.168.169.41:41274->192.168.1.27:53: i/o timeout: check config failed

 

3. From this it can be concluded that when tiup reloads parameters it needs access to the timestamp.json file. The role of the timestamp file:

Request timestamp.json from the mirror and verify that the file is valid using the public key recorded in root.json;

Check whether the checksum of snapshot.json recorded in timestamp.json matches the checksum of the local snapshot.json; if they do not match, request the latest snapshot.json from the mirror and verify it against the public key recorded in root.json. (A quick way to check which mirror the local client points to, and whether it is reachable, is sketched after the reference link below.)

***.***/zh/tidb/stable/tiup-mirror-reference#%E5%AE%A2%E6%88%B7%E7%AB%AF%E5%B7%A5%E4%BD%9C%E6%B5%81%E7%A8%8B
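To see which mirror the local tiup client is pointing at and whether its timestamp.json is actually reachable from this host, a quick check can be run. This is only a sketch: it assumes the installed tiup provides the `tiup mirror show` subcommand and that it prints just the mirror address, which the second command then reuses:

# tiup mirror show
# curl -sv -o /dev/null $(tiup mirror show)/timestamp.json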

4. Maintenance operations such as reloading parameter files require tiup to access the tiup mirror to verify the validity and security of the environment. In this test environment, internet access goes through a proxy, and the proxy's re-encapsulation of the frames does not conform to what the client expects, which is what caused the problem.

 

 

 


Solution

Solution: set up a local mirror and disconnect the external network connection. After this, modifying tikv parameters and reloading no longer reports any error.

Tested: both the standard multi-host deployment on V6.1.0 and the single-host deployment on V8.5.1 were resolved this way.

1. Download a local tiup mirror and point this environment to the local source:

# tiup mirror clone /tidb-data/tiupmirror v8.5.1 --arch amd64 --os linux
# tiup mirror set /tidb-data/tiupmirror
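To confirm the switch took effect, the mirror address and component list can be checked; this is a sketch assuming the `tiup mirror show` and `tiup list` subcommands are available, in which case the first should now print /tidb-data/tiupmirror and the second should resolve component versions from the local mirror:

# tiup mirror show
# tiup list tidb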

2. The parameter reload was re-run and completed successfully. The log is as follows:

[root@tidbcluster ~]# tiup cluster reload tidb-test
Will reload the cluster tidb-test with restart policy is true, nodes: , roles: .
Do you want to continue? [y/N]:(default=N) y
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.169.41
+ [Parallel] - UserSSH: user=tidb, host=192.168.169.41
+ [Parallel] - UserSSH: user=tidb, host=192.168.169.41
+ [Parallel] - UserSSH: user=tidb, host=192.168.169.41
+ [Parallel] - UserSSH: user=tidb, host=192.168.169.41
+ [Parallel] - UserSSH: user=tidb, host=192.168.169.41
+ [Parallel] - UserSSH: user=tidb, host=192.168.169.41
+ [Parallel] - UserSSH: user=tidb, host=192.168.169.41
+ [ Serial ] - UpdateTopology: cluster=tidb-test
+ Refresh instance configs
  - Generate config pd -> 192.168.169.41:2379 ... Done
  - Generate config tikv -> 192.168.169.41:20160 ... Done
  - Generate config tikv -> 192.168.169.41:20161 ... Done
  - Generate config tikv -> 192.168.169.41:20162 ... Done
  - Generate config tidb -> 192.168.169.41:4000 ... Done
  - Generate config tiflash -> 192.168.169.41:9000 ... Done
  - Generate config prometheus -> 192.168.169.41:9090 ... Done
  - Generate config grafana -> 192.168.169.41:3000 ... Done
+ Refresh monitor configs
  - Generate config node_exporter -> 192.168.169.41 ... Done
  - Generate config blackbox_exporter -> 192.168.169.41 ... Done
+ [ Serial ] - Upgrade Cluster
Upgrading component tiflash
Restarting instance 192.168.169.41:9000
Restart instance 192.168.169.41:9000 success
Upgrading component pd
Restarting instance 192.168.169.41:2379
Restart instance 192.168.169.41:2379 success
Upgrading component tikv
Evicting 2 leaders from store 192.168.169.41:20160...
Still waiting for 2 store leaders to transfer...
Still waiting for 1 store leaders to transfer...
Still waiting for 1 store leaders to transfer...
Still waiting for 1 store leaders to transfer...
Still waiting for 1 store leaders to transfer...
Still waiting for 1 store leaders to transfer...
Still waiting for 1 store leaders to transfer...
Still waiting for 1 store leaders to transfer...
Still waiting for 1 store leaders to transfer...
Still waiting for 1 store leaders to transfer...
Still waiting for 1 store leaders to transfer...
Still waiting for 1 store leaders to transfer...
Still waiting for 1 store leaders to transfer...
Still waiting for 1 store leaders to transfer...
Still waiting for 1 store leaders to transfer...
Still waiting for 1 store leaders to transfer...
Still waiting for 1 store leaders to transfer...
Still waiting for 1 store leaders to transfer...
Still waiting for 1 store leaders to transfer...
Still waiting for 1 store leaders to transfer...
Restarting instance 192.168.169.41:20160
Restart instance 192.168.169.41:20160 success
Evicting 4 leaders from store 192.168.169.41:20161...
Still waiting for 4 store leaders to transfer...
Restarting instance 192.168.169.41:20161
Restart instance 192.168.169.41:20161 success
Evicting 5 leaders from store 192.168.169.41:20162...
Still waiting for 5 store leaders to transfer...
Still waiting for 5 store leaders to transfer...
Restarting instance 192.168.169.41:20162
Restart instance 192.168.169.41:20162 success
Upgrading component tidb
Restarting instance 192.168.169.41:4000
Restart instance 192.168.169.41:4000 success
Upgrading component prometheus
Restarting instance 192.168.169.41:9090
Restart instance 192.168.169.41:9090 success
Upgrading component grafana
Restarting instance 192.168.169.41:3000
Restart instance 192.168.169.41:3000 success
Stopping component node_exporter
Stopping instance 192.168.169.41
Stop 192.168.169.41 success
Stopping component blackbox_exporter
Stopping instance 192.168.169.41
Stop 192.168.169.41 success
Starting component node_exporter
Starting instance 192.168.169.41
Start 192.168.169.41 success
Starting component blackbox_exporter
Starting instance 192.168.169.41
Start 192.168.169.41 success
Reloaded cluster `tidb-test` successfully

Reference:

Set up a private mirror

***.***/zh/tidb/stable/tiup-mirror#%E6%90%AD%E5%BB%BA%E7%A7%81%E6%9C%89%E9%95%9C%E5%83%8F

 

