Linux command of the day: slabtop
I was a bit perplexed today trying to understand how 11.6 GB of the 16 GB of RAM on a production box was used up, as indicated by free, top, and htop. Yes, I know the kernel uses RAM for buffers and cache; I'm not including that. What struck me as odd was that no process on the box had a resident set size above 10 MB. However, while the used memory shown by these tools may not include the page cache, it does include many other things, notably the kernel slab cache.
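Before reaching for a dedicated tool, the slab total is visible directly in /proc/meminfo; a quick sketch (these field names are exposed by any reasonably modern kernel):

```shell
# Total slab usage plus its reclaimable/unreclaimable split,
# straight from /proc/meminfo (values are in kB)
grep -E '^(Slab|SReclaimable|SUnreclaim):' /proc/meminfo
```

If Slab accounts for most of the gap between "used" memory and the resident sets of your processes, slabtop will tell you which cache is responsible.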
This brings us to today's tool of the day: slabtop. Once I brushed off the cobwebs and remembered this tool existed, I quickly found the source of my memory "problem":
$ slabtop -o -s c | head -n15
Active / Total Objects (% used) : 14740085 / 16754440 (88.0%)
Active / Total Slabs (% used) : 901878 / 901878 (100.0%)
Active / Total Caches (% used) : 67 / 90 (74.4%)
Active / Total Size (% used) : 10704372.77K / 11666603.83K (91.8%)
Minimum / Average / Maximum Object : 0.01K / 0.70K / 8.00K
OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
11418016 10634618 93% 0.94K 671648 17 10746368K ext4_inode_cache
4220790 3221268 76% 0.19K 200990 21 803960K dentry
868140 695855 80% 0.10K 22260 39 89040K buffer_head
113876 76792 67% 0.55K 4067 28 65072K radix_tree_node
4725 4198 88% 0.62K 189 25 3024K inode_cache
17040 17040 100% 0.13K 568 30 2272K ext4_groupinfo_4k
37825 21066 55% 0.05K 445 85 1780K shared_policy_node
19380 19380 100% 0.08K 380 51 1520K sysfs_dir_cache
Yes, that is a 10.2 GB ext4 inode cache. In reality this is no big deal: any memory pressure will free up space in this cache as necessary, and it is no surprise that caching over 10 million inodes takes a bit of space. This particular machine has a disk holding almost 12 million data files, and I had just run some operations over them, so it is no real shocker that the kernel decided to cache the inodes. Nothing at all to worry about here, even though the figures reported by free and friends look a bit crazy.
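If you ever did want to evict these objects by hand, say to confirm the memory really is reclaimable, the kernel's drop_caches knob can do it; a sketch, assuming root:

```shell
# Flush dirty data first so clean objects can actually be dropped
sync
# 2 = reclaim slab objects (dentries and inodes); 3 would also drop
# the page cache. Writing to this file requires root.
echo 2 > /proc/sys/vm/drop_caches
```

This is purely diagnostic; in normal operation the kernel reclaims these objects on its own under memory pressure, so there is rarely a reason to run it.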
For reference, on a desktop machine without absurd amounts of files, the top results in slabtop are a bit more sane:
$ slabtop -o -s c | head -n 15
Active / Total Objects (% used) : 1734446 / 1975788 (87.8%)
Active / Total Slabs (% used) : 77080 / 77080 (100.0%)
Active / Total Caches (% used) : 79 / 103 (76.7%)
Active / Total Size (% used) : 670769.20K / 707993.47K (94.7%)
Minimum / Average / Maximum Object : 0.01K / 0.36K / 8.00K
OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
511258 511234 99% 0.94K 30074 17 481184K ext4_inode_cache
442848 442651 99% 0.19K 21088 21 84352K dentry
723372 527846 72% 0.10K 18548 39 74192K buffer_head
73052 47805 65% 0.55K 2609 28 41744K radix_tree_node
100810 92751 92% 0.05K 1186 85 4744K shared_policy_node
5575 4796 86% 0.62K 223 25 3568K inode_cache
4066 3434 84% 0.83K 214 19 3424K shmem_inode_cache
9360 9278 99% 0.25K 585 16 2340K kmalloc-256