
The hung_task_timeout_secs parameter on Red Hat


This is a problem I ran into at a customer site. It is simple, but I had not seen it before, probably because we rarely install databases on Red Hat, so I am recording it here.
The customer has a server virtualized with VMware, on which a Red Hat VM was built. The VM was initially given 16 GB of memory; after another 12 GB of physical memory was added, the VM's memory was raised to 20 GB.
After the adjustment, the host side kept reporting the following error:

INFO: task qemu-kvm:32289 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
qemu-kvm      D 0000000000000003     0 32289      1 0x00000080
 ffff88027a3cdc88 0000000000000086 ffff88027a3cdc18 ffffffff8109be2f
 ffffffff81ed1878 ffff8802450c2ae0 ffff88027a3cdc38 0000000000000282
 ffff8802450c3098 ffff88027a3cdfd8 000000000000fb88 ffff8802450c3098
Call Trace:
 [<ffffffff8109be2f>] ? hrtimer_try_to_cancel+0x3f/0xd0
 [<ffffffff810aac17>] ? futex_wait+0x227/0x380
 [<ffffffff8150ed3e>] __mutex_lock_slowpath+0x13e/0x180
 [<ffffffff8150ebdb>] mutex_lock+0x2b/0x50
 [<ffffffff8111c381>] generic_file_aio_write+0x71/0x100
 [<ffffffffa0088fb1>] ext4_file_write+0x61/0x1e0 [ext4]
 [<ffffffff81180c9a>] do_sync_write+0xfa/0x140
 [<ffffffff81086fc2>] ? send_signal+0x42/0x80
 [<ffffffff81096c80>] ? autoremove_wake_function+0x0/0x40
 [<ffffffff810873d6>] ? group_send_sig_info+0x56/0x70
 [<ffffffff8108742f>] ? kill_pid_info+0x3f/0x60
 [<ffffffff8121baf6>] ? security_file_permission+0x16/0x20
 [<ffffffff81180f98>] vfs_write+0xb8/0x1a0
 [<ffffffff81181952>] sys_pwrite64+0x82/0xa0
 [<ffffffff8100b072>] system_call_fastpath+0x16/0x1b

After looking into the documentation, my understanding of this parameter is that it governs the kernel's hung-task check: a task that stays blocked for longer than the timeout is reported as hung.
The error message above also suggests a simple workaround, namely disabling the 120-second timeout: echo 0 > /proc/sys/kernel/hung_task_timeout_secs
I then asked the host engineer, whose suggestion was likewise to disable the warning as the message indicates.
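
For reference, a minimal sketch of that workaround (the echo line is taken verbatim from the message; reading the current value and persisting the setting via /etc/sysctl.conf are standard practice, not something from the original log):

# Check the current hung-task timeout (defaults to 120 seconds)
cat /proc/sys/kernel/hung_task_timeout_secs

# Disable the hung-task warning on the running kernel (0 = never time out)
echo 0 > /proc/sys/kernel/hung_task_timeout_secs

# To persist across reboots, add this line to /etc/sysctl.conf:
#   kernel.hung_task_timeout_secs = 0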

A follow-up inquiry produced the following explanation:
This is a known bug. By default Linux uses up to 40% of the available memory for file system caching.
After this mark has been reached, the file system flushes all outstanding data to disk, causing all following I/Os to go synchronous.
For flushing this data out to disk there is a time limit of 120 seconds by default.
In the case here the I/O subsystem is not fast enough to flush the data within 120 seconds.
This especially happens on systems with a lot of memory.

The problem is solved in later kernels and there is no "fix" from Oracle.
I fixed this by lowering the mark for flushing the cache from 40% to 10% by setting "vm.dirty_ratio=10" in /etc/sysctl.conf.
This setting does not influence overall database performance since you hopefully use Direct IO and bypass the file system cache completely.
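
To see this mechanism on a live system, the amount of dirty page-cache data and the current writeback thresholds can be inspected through the standard /proc and sysctl interfaces; a minimal sketch (generic commands, not taken from the original incident):

# Dirty data currently waiting to be written back
grep -E '^(Dirty|Writeback):' /proc/meminfo

# Current writeback thresholds, as a percentage of memory
sysctl vm.dirty_ratio vm.dirty_background_ratio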
In short, the explanation is that Linux sets aside up to 40% of available memory for the file-system cache; when that cached data has to be flushed, the I/O subsystem cannot keep up and the flush exceeds the 120-second limit, so the threshold was reduced from 40% to 10% to avoid the timeout.
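
For reference, a minimal sketch of applying the engineer's fix (vm.dirty_ratio=10 comes from the explanation above; sysctl -w applies it at runtime and sysctl -p reloads /etc/sysctl.conf):

# Apply on the running system without a reboot
sysctl -w vm.dirty_ratio=10

# Persist across reboots: add this line to /etc/sysctl.conf:
#   vm.dirty_ratio = 10
# then reload with:
sysctl -p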
