Memory Resource Controller

NOTE: This document is hopelessly outdated and it asks for a complete
      rewrite. It still contains useful information, so we are keeping it
      here, but make sure to check the current code if you need a deeper
      understanding.

NOTE: The Memory Resource Controller is generically referred to as the
      memory controller in this document. Do not confuse the memory
      controller used here with the memory controller that is used in
      hardware.

(For editors)
In this document:
      When we mention a cgroup (cgroupfs's directory) with the memory
      controller, we call it a "memory cgroup". In git logs and source
      code you will see that patch titles and function names tend to use
      "memcg"; in this document, we avoid that abbreviation.

Benefits and Purpose of the memory controller

The memory controller isolates the memory behaviour of a group of tasks
from the rest of the system. The article on LWN [12] mentions some probable
uses of the memory controller. The memory controller can be used to

a. Isolate an application or a group of applications.
   Memory-hungry applications can be isolated and limited to a smaller
   amount of memory.
b. Create a cgroup with a limited amount of memory; this can be used
   as a good alternative to booting with mem=XXXX.
c. Virtualization solutions can control the amount of memory they want
   to assign to a virtual machine instance.
d. A CD/DVD burner could control the amount of memory used by the
   rest of the system to ensure that burning does not fail due to lack
   of available memory.
e. There are several other use cases; find one or use the controller just
   for fun (to learn and hack on the VM subsystem).

Current Status: linux-2.6.34-mmotm (development version of April 2010)

Features:
 - accounting of anonymous pages, file caches, and swap caches, and limiting them.
 - pages are linked to per-memcg LRU lists exclusively; there is no global LRU.
 - optionally, memory+swap usage can be accounted and limited.
 - hierarchical accounting.
 - soft limits.
 - moving (recharging) the account when a task migrates is selectable.
 - usage threshold notifier.
 - memory pressure notifier.
 - oom-killer disable knob and oom-notifier.
 - the root cgroup has no limit controls.

Kernel memory support is a work in progress, and the current version
provides basic functionality. (See Section 2.7.)

Brief summary of control files.

 tasks                              # attach a task (thread) and show the list of threads
 cgroup.procs                       # show the list of processes
 cgroup.event_control               # an interface for event_fd()
 memory.usage_in_bytes              # show current usage for memory
                                      (See 5.5 for details)
 memory.memsw.usage_in_bytes        # show current usage for memory+Swap
                                      (See 5.5 for details)
 memory.limit_in_bytes              # set/show limit of memory usage
 memory.memsw.limit_in_bytes        # set/show limit of memory+Swap usage
 memory.failcnt                     # show the number of times memory usage hit the limit
 memory.memsw.failcnt               # show the number of times memory+Swap usage hit the limit
 memory.max_usage_in_bytes          # show max memory usage recorded
 memory.memsw.max_usage_in_bytes    # show max memory+Swap usage recorded
 memory.soft_limit_in_bytes         # set/show soft limit of memory usage
 memory.stat                        # show various statistics
 memory.use_hierarchy               # set/show hierarchical accounting enabled
 memory.force_empty                 # trigger forced move of charges to the parent
 memory.pressure_level              # set memory pressure notifications
 memory.swappiness                  # set/show swappiness parameter of vmscan
                                      (See sysctl's vm.swappiness)
 memory.move_charge_at_immigrate    # set/show controls of moving charges
 memory.oom_control                 # set/show oom controls
 memory.numa_stat                   # show memory usage per numa node

 memory.kmem.limit_in_bytes         # set/show hard limit for kernel memory
 memory.kmem.usage_in_bytes         # show current kernel memory allocation
 memory.kmem.failcnt                # show the number of times kernel memory usage hit the limit
 memory.kmem.max_usage_in_bytes     # show max kernel memory usage recorded

 memory.kmem.tcp.limit_in_bytes     # set/show hard limit for tcp buffer memory
 memory.kmem.tcp.usage_in_bytes     # show current tcp buffer memory allocation
 memory.kmem.tcp.failcnt            # show the number of times tcp buffer memory usage hit the limit
 memory.kmem.tcp.max_usage_in_bytes # show max tcp buffer memory usage recorded

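As a quick, illustrative session (the mount point and group name follow the
setup in section 3; the printed values are examples only):

 # cat /sys/fs/cgroup/memory/0/memory.usage_in_bytes
 1216512
 # echo 4M > /sys/fs/cgroup/memory/0/memory.limit_in_bytes
 # cat /sys/fs/cgroup/memory/0/memory.failcnt
 0
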
1. History

The memory controller has a long history. A request for comments for the
memory controller was posted by Balbir Singh [1]. At the time the RFC was
posted there were several implementations for memory control. The goal of
the RFC was to build consensus and agreement for the minimal features
required for memory control. The first RSS controller was posted by Balbir
Singh [2] in Feb 2007. Pavel Emelianov [3][4][5] has since posted three
versions of the RSS controller. At OLS, at the resource management BoF,
everyone suggested that we handle both page cache and RSS together. Another
request was raised to allow user space handling of OOM. The current memory
controller is at version 6; it combines both mapped (RSS) and unmapped Page
Cache Control [11].

2. Memory Control

Memory is a unique resource in the sense that it is present in a limited
amount. If a task requires a lot of CPU processing, the task can spread
its processing over a period of hours, days, months or years, but with
memory, the same physical memory needs to be reused to accomplish the task.

The memory controller implementation has been divided into phases. These
are:

1. Memory controller
2. mlock(2) controller
3. Kernel user memory accounting and slab control
4. user mappings length controller

The memory controller is the first controller developed.

2.1. Design

The core of the design is a counter called the page_counter. The
page_counter tracks the current memory usage and limit of the group of
processes associated with the controller. Each cgroup has a memory
controller specific data structure (mem_cgroup) associated with it.

2.2. Accounting

             +--------------------+
             |     mem_cgroup     |
             |   (page_counter)   |
             +--------------------+
              /         ^        \
             /          |         \
    +---------------+   |    +---------------+
    |   mm_struct   |   |....|   mm_struct   |
    |               |   |    |               |
    +---------------+   |    +---------------+
                        |
                        +----------------+
                                         |
    +---------------+        +-----------+---+
    |     page      +------->|  page_cgroup  |
    |               |        |               |
    +---------------+        +---------------+

       (Figure 1: Hierarchy of Accounting)

Figure 1 shows the important aspects of the controller:

1. Accounting happens per cgroup.
2. Each mm_struct knows which cgroup it belongs to.
3. Each page has a pointer to its page_cgroup, which in turn knows the
   cgroup it belongs to.

The accounting is done as follows: mem_cgroup_charge_common() is invoked to
set up the necessary data structures and check if the cgroup that is being
charged is over its limit. If it is, then reclaim is invoked on the cgroup.
More details can be found in the reclaim section of this document. If
everything goes well, a page meta-data structure called page_cgroup is
updated. The page_cgroup is linked to the cgroup's own LRU.
(*) page_cgroup structures are allocated at boot/memory-hotplug time.

2.2.1 Accounting details

All mapped anon pages (RSS) and cache pages (Page Cache) are accounted.
Some pages which are never reclaimable and will not be on the LRU are not
accounted. We only account pages under usual VM management.

RSS pages are accounted at page fault unless they've already been accounted
for earlier. A file page is accounted as Page Cache when it is inserted
into the inode (radix-tree). While it is mapped into the page tables of
processes, duplicate accounting is carefully avoided.

An RSS page is unaccounted when it is fully unmapped. A Page Cache page is
unaccounted when it is removed from the radix-tree. Even if RSS pages are
fully unmapped (by kswapd), they may exist as SwapCache in the system until
they are really freed. Such SwapCache pages are also accounted.
A swapped-in page is not accounted until it is mapped.

Note: The kernel does swapin-readahead and reads multiple swap slots at
once. This means swapped-in pages may belong to tasks other than the one
causing the page fault, so we avoid accounting at swap-in I/O.

At page migration, accounting information is kept.

Note: we only account pages on the LRU because our purpose is to control
the amount of used pages; pages not on the LRU tend to be out of control
from the VM's point of view.

2.3 Shared Page Accounting

Shared pages are accounted on the basis of the first-touch approach. The
cgroup that first touches a page is accounted for the page. The principle
behind this approach is that a cgroup that aggressively uses a shared page
will eventually get charged for it (once it is uncharged from the cgroup
that brought it in -- this will happen on memory pressure).

But see section 8.2: when moving a task to another cgroup, its pages may
be recharged to the new cgroup, if move_charge_at_immigrate has been
chosen.

Exception: when CONFIG_MEMCG_SWAP is not used.
When you do swapoff and thereby force the swapped-out pages of shmem
(tmpfs) back into memory, the charges for those pages are accounted to the
caller of swapoff rather than to the users of the shmem.

2.4 Swap Extension (CONFIG_MEMCG_SWAP)

The Swap Extension allows you to record charges for swap. A swapped-in
page is charged back to the original page allocator if possible.

When swap is accounted, the following files are added:
 - memory.memsw.usage_in_bytes
 - memory.memsw.limit_in_bytes

memsw means memory+swap. Usage of memory+swap is limited by
memsw.limit_in_bytes.

Example: Assume a system with 4G of swap. A task which allocates 6G of
memory (by mistake) under a 2G memory limit will use up all the swap.
In this case, setting memsw.limit_in_bytes=3G will prevent this bad use of
swap (see the sketch at the end of this section). By using the memsw
limit, you can avoid a system OOM caused by swap shortage.

* Why 'memory+swap' rather than swap?
The global LRU (kswapd) can swap out arbitrary pages. Swapping a page out
just moves the charge from memory to swap; the usage of memory+swap does
not change. In other words, when we want to limit the usage of swap
without affecting the global LRU, a memory+swap limit is better than just
limiting swap, from an OS point of view.

* What happens when a cgroup hits memory.memsw.limit_in_bytes?
When a cgroup hits memory.memsw.limit_in_bytes, it is useless to swap out
pages of this cgroup, since that does not reduce the usage of memory+swap.
So swap-out is not done by the cgroup's own reclaim routine; file caches
are dropped instead. But, as mentioned above, the global LRU can still
swap memory out of the cgroup for the sanity of the system's memory
management state; this cannot be forbidden per cgroup.

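A minimal sketch of the example above (the mount point and group follow the
setup in section 3; the plain memory limit is set first because
memory.limit_in_bytes may not exceed memory.memsw.limit_in_bytes):

 # echo 2G > /sys/fs/cgroup/memory/0/memory.limit_in_bytes
 # echo 3G > /sys/fs/cgroup/memory/0/memory.memsw.limit_in_bytes

With this setup, the tasks in the group can use at most 2G of memory plus
1G of swap on top of it, instead of eating all 4G of swap.
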
2.5 Reclaim

Each cgroup maintains a per-cgroup LRU which has the same structure as the
global VM's. When a cgroup goes over its limit, we first try to reclaim
memory from the cgroup so as to make space for the new pages that the
cgroup has touched. If the reclaim is unsuccessful, an OOM routine is
invoked to select and kill the bulkiest task in the cgroup. (See 10. OOM
Control below.)

The reclaim algorithm has not been modified for cgroups, except that pages
that are selected for reclaim come from the per-cgroup LRU list.

NOTE: Reclaim does not work for the root cgroup, since we cannot set any
limits on the root cgroup.

Note2: When panic_on_oom is set to "2", the whole system will panic.

When an oom event notifier is registered, the event will be delivered.
(See the oom_control section.)

2.6 Locking

lock_page_cgroup()/unlock_page_cgroup() should not be called under
mapping->tree_lock.

Otherwise, the lock order is:
    PG_locked.
    mm->page_table_lock
    zone->lru_lock
    lock_page_cgroup.
In many cases, just lock_page_cgroup() is called.

The per-zone-per-cgroup LRU (the cgroup's private LRU) is guarded only by
zone->lru_lock; it has no lock of its own.

2.7 Kernel Memory Extension (CONFIG_MEMCG_KMEM)

With the Kernel Memory Extension, the Memory Controller is able to limit
the amount of kernel memory used by the system. Kernel memory is
fundamentally different from user memory, since it can't be swapped out,
which makes it possible to DoS the system by consuming too much of this
precious resource.

Kernel memory is not accounted at all until a limit is set on the group.
This allows existing setups to continue working without disruption. The
limit cannot be set if the cgroup has children, or if there are already
tasks in the cgroup. Attempting to set the limit under those conditions
will return -EBUSY. When use_hierarchy == 1 and a group is accounted, its
children will automatically be accounted regardless of their limit value.

Once a group has been limited, it keeps being accounted until it is
removed. The memory limitation itself can of course be removed by writing
-1 to memory.kmem.limit_in_bytes; in that case, kmem will be accounted but
not limited.

Kernel memory limits are not imposed on the root cgroup. Usage for the
root cgroup may or may not be accounted. The memory used is accumulated
into memory.kmem.usage_in_bytes, or in a separate counter when it makes
sense (currently only for tcp). The main "kmem" counter is fed into the
main counter, so kmem charges will also be visible from the user counter.

Currently no soft limit is implemented for kernel memory. It is future
work to trigger slab reclaim when those limits are reached.

2.7.1 Current Kernel Memory resources accounted

* stack pages: every process consumes some stack pages. By accounting them
as kernel memory, we prevent new processes from being created when kernel
memory usage is too high.

* slab pages: pages allocated by the SLAB or SLUB allocator are tracked. A
copy of each kmem_cache is created the first time the cache is touched
from inside the memcg. The creation is done lazily, so some objects can
still be skipped while the cache is being created. All objects in a slab
page should belong to the same memcg. This only fails to hold when a task
is migrated to a different memcg during a page allocation by the cache.

* sockets memory pressure: some socket protocols have memory pressure
thresholds. The Memory Controller allows them to be controlled
individually per cgroup, instead of globally.

* tcp memory pressure: sockets memory pressure for the tcp protocol.

2.7.2 Common use cases

Because the "kmem" counter is fed into the main user counter, kernel
memory can never be limited completely independently of user memory. Say
"U" is the user limit, and "K" the kernel limit. There are three possible
ways limits can be set:

    U != 0, K = unlimited:
    This is the standard memcg limitation mechanism already present before
    kmem accounting. Kernel memory is completely ignored.

    U != 0, K < U:
    Kernel memory is a subset of the user memory. This setup is useful in
    deployments where the total amount of memory per-cgroup is
    overcommitted. Overcommitting kernel memory limits is definitely not
    recommended, since the box can still run out of non-reclaimable
    memory. In this case, the admin could set up K so that the sum over
    all groups is never greater than the total memory, and freely set U at
    the cost of QoS. (A sketch of this setup follows this list.)
    WARNING: In the current implementation, memory reclaim will NOT be
    triggered for a cgroup when it hits K while staying below U, which
    makes this setup impractical.

    U != 0, K >= U:
    Kmem charges are also fed into the user counter, and reclaim is
    triggered for the cgroup for both kinds of memory. This setup gives
    the admin a unified view of memory, and it is also useful for people
    who just want to track kernel memory usage.

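An illustrative sketch of the "K < U" setup (paths as in section 3; the
sizes are examples). Remember from 2.7 that the kmem limit can only be set
while the group still has no tasks and no children, so it is set before
moving a task in:

 # mkdir /sys/fs/cgroup/memory/0
 # echo 512M > /sys/fs/cgroup/memory/0/memory.kmem.limit_in_bytes  (K)
 # echo 1G > /sys/fs/cgroup/memory/0/memory.limit_in_bytes         (U)
 # echo $$ > /sys/fs/cgroup/memory/0/tasks
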
3. User Interface

3.0. Configuration

a. Enable CONFIG_CGROUPS
b. Enable CONFIG_MEMCG
c. Enable CONFIG_MEMCG_SWAP (to use the swap extension)
d. Enable CONFIG_MEMCG_KMEM (to use the kmem extension)

3.1. Prepare the cgroups (see cgroups.txt, Why are cgroups needed?)

 # mount -t tmpfs none /sys/fs/cgroup
 # mkdir /sys/fs/cgroup/memory
 # mount -t cgroup none /sys/fs/cgroup/memory -o memory

3.2. Make the new group and move bash into it

 # mkdir /sys/fs/cgroup/memory/0
 # echo $$ > /sys/fs/cgroup/memory/0/tasks

Now that we're in the 0 cgroup, we can alter the memory limit:

 # echo 4M > /sys/fs/cgroup/memory/0/memory.limit_in_bytes

NOTE: We can use a suffix (k, K, m, M, g or G) to indicate values in kilo,
mega or gigabytes. (Here, Kilo, Mega and Giga are Kibibytes, Mebibytes and
Gibibytes.)

NOTE: We can write "-1" to reset *.limit_in_bytes (unlimited).
NOTE: We cannot set limits on the root cgroup any more.

 # cat /sys/fs/cgroup/memory/0/memory.limit_in_bytes
 4194304

We can check the usage:

 # cat /sys/fs/cgroup/memory/0/memory.usage_in_bytes
 1216512

A successful write to this file does not guarantee that the limit was set
to exactly the value written. This can be due to a number of factors, such
as rounding up to page boundaries or the total availability of memory on
the system. The user is required to re-read this file after a write to see
the value actually committed by the kernel:

 # echo 1 > memory.limit_in_bytes
 # cat memory.limit_in_bytes
 4096

The memory.failcnt field gives the number of times that the cgroup limit
was exceeded.

The memory.stat file gives accounting information. The numbers of caches,
RSS and Active/Inactive pages are shown.

4. Testing

For testing features and implementation, see memcg_test.txt.

Performance tests are also important. To see the pure overhead of the
memory controller, testing on tmpfs will give you good numbers, since the
overheads are small. Example: do a kernel make on tmpfs (sketched below).

Page-fault scalability is also important. When measuring a parallel page
fault test, a multi-process test may be better than a multi-threaded test
because the latter adds noise from shared objects/status.

But the above two are tests of extreme situations. Trying a usual test
under the memory controller is always helpful.

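A sketch of such an overhead test (the tmpfs size, group name, source path
and job count are all illustrative): run the build once inside a limited
memory cgroup and once outside it, and compare the times.

 # mount -t tmpfs -o size=2G none /mnt/tmpfs
 # cp -a /usr/src/linux /mnt/tmpfs/linux
 # echo $$ > /sys/fs/cgroup/memory/0/tasks
 # cd /mnt/tmpfs/linux
 # time make -j8
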
"Move charges at task migration" 441 442 4.3 Removing a cgroup 443 444 A cgroup can be removed by rmdir, but as discussed in sections 4.1 and 4.2, a 445 cgroup might have some charge associated with it, even though all 446 tasks have migrated away from it. (because we charge against pages, not 447 against tasks.) 448 449 We move the stats to root (if use_hierarchy==0) or parent (if 450 use_hierarchy==1), and no change on the charge except uncharging 451 from the child. 452 453 Charges recorded in swap information is not updated at removal of cgroup. 454 Recorded information is discarded and a cgroup which uses swap (swapcache) 455 will be charged as a new owner of it. 456 457 About use_hierarchy, see Section 6. 458 459 5. Misc. interfaces. 460 461 5.1 force_empty 462 memory.force_empty interface is provided to make cgroup's memory usage empty. 463 When writing anything to this 464 465 # echo 0 > memory.force_empty 466 467 the cgroup will be reclaimed and as many pages reclaimed as possible. 468 469 The typical use case for this interface is before calling rmdir(). 470 Because rmdir() moves all pages to parent, some out-of-use page caches can be 471 moved to the parent. If you want to avoid that, force_empty will be useful. 472 473 Also, note that when memory.kmem.limit_in_bytes is set the charges due to 474 kernel pages will still be seen. This is not considered a failure and the 475 write will still return success. In this case, it is expected that 476 memory.kmem.usage_in_bytes == memory.usage_in_bytes. 477 478 About use_hierarchy, see Section 6. 479 480 5.2 stat file 481 482 memory.stat file includes following statistics 483 484 # per-memory cgroup local status 485 cache - # of bytes of page cache memory. 486 rss - # of bytes of anonymous and swap cache memory (includes 487 transparent hugepages). 488 rss_huge - # of bytes of anonymous transparent hugepages. 489 mapped_file - # of bytes of mapped file (includes tmpfs/shmem) 490 pgpgin - # of charging events to the memory cgroup. The charging 491 event happens each time a page is accounted as either mapped 492 anon page(RSS) or cache page(Page Cache) to the cgroup. 493 pgpgout - # of uncharging events to the memory cgroup. The uncharging 494 event happens each time a page is unaccounted from the cgroup. 495 swap - # of bytes of swap usage 496 writeback - # of bytes of file/anon cache that are queued for syncing to 497 disk. 498 inactive_anon - # of bytes of anonymous and swap cache memory on inactive 499 LRU list. 500 active_anon - # of bytes of anonymous and swap cache memory on active 501 LRU list. 502 inactive_file - # of bytes of file-backed memory on inactive LRU list. 503 active_file - # of bytes of file-backed memory on active LRU list. 504 unevictable - # of bytes of memory that cannot be reclaimed (mlocked etc). 505 506 # status considering hierarchy (see memory.use_hierarchy settings) 507 508 hierarchical_memory_limit - # of bytes of memory limit with regard to hierarchy 509 under which the memory cgroup is 510 hierarchical_memsw_limit - # of bytes of memory+swap limit with regard to 511 hierarchy under which memory cgroup is. 512 513 total_<counter> - # hierarchical version of <counter>, which in 514 addition to the cgroup's own value includes the 515 sum of all hierarchical children's values of 516 <counter>, i.e. total_cache 517 518 # The following additional stats are dependent on CONFIG_DEBUG_VM. 519 520 recent_rotated_anon - VM internal parameter. (see mm/vmscan.c) 521 recent_rotated_file - VM internal parameter. 
5.3 swappiness

Overrides /proc/sys/vm/swappiness for the particular group. The tunable in
the root cgroup corresponds to the global swappiness setting.

Please note that, unlike during global reclaim, limit reclaim enforces
that a swappiness of 0 really prevents any swapping, even if swap storage
is available. This may lead to a memcg OOM kill if there are no file pages
to reclaim.

5.4 failcnt

A memory cgroup provides the memory.failcnt and memory.memsw.failcnt
files. This failcnt (failure count) shows the number of times the usage
counter hit its limit. When a memory cgroup hits a limit, failcnt
increases and the memory under it is reclaimed.

You can reset failcnt by writing 0 to the failcnt file:

 # echo 0 > .../memory.failcnt

5.5 usage_in_bytes

For efficiency, like other kernel components, the memory cgroup uses some
optimizations to avoid unnecessary cacheline false sharing.
usage_in_bytes is affected by this method and does not show the 'exact'
value of memory (and swap) usage; it is a fuzzy value for efficient
access. (Of course, it is synchronized when necessary.) If you want a more
exact memory usage, you should use the RSS+CACHE(+SWAP) value in
memory.stat (see 5.2).

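As a sketch, the fuzzy counter can be compared against the stat-based
value from inside the cgroup directory (swap is omitted here for brevity):

 # cat memory.usage_in_bytes
 # awk '/^(rss|cache) /{sum += $2} END {print sum}' memory.stat

The two numbers are usually close but need not match exactly.
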
5.6 numa_stat

This is similar to numa_maps but operates on a per-memcg basis. This is
useful for providing visibility into the NUMA locality information within
a memcg, since the pages are allowed to be allocated from any physical
node. One of the use cases is evaluating application performance by
combining this information with the application's CPU allocation.

Each memcg's numa_stat file includes "total", "file", "anon" and
"unevictable" per-node page counts, including "hierarchical_<counter>",
which sums up all hierarchical children's values in addition to the
memcg's own value.

The output format of memory.numa_stat is:

total=<total pages> N0=<node 0 pages> N1=<node 1 pages> ...
file=<total file pages> N0=<node 0 pages> N1=<node 1 pages> ...
anon=<total anon pages> N0=<node 0 pages> N1=<node 1 pages> ...
unevictable=<total unevictable pages> N0=<node 0 pages> N1=<node 1 pages> ...
hierarchical_<counter>=<counter pages> N0=<node 0 pages> N1=<node 1 pages> ...

The "total" count is the sum of file + anon + unevictable.

6. Hierarchy support

The memory controller supports a deep hierarchy and hierarchical
accounting. The hierarchy is created by creating the appropriate cgroups
in the cgroup filesystem. Consider, for example, the following cgroup
filesystem hierarchy:

               root
              /  |  \
             /   |   \
            a    b    c
                      | \
                      |  \
                      d   e

In the diagram above, with hierarchical accounting enabled, all memory
usage of e is accounted to its ancestors up to the root (i.e. c and root)
that have memory.use_hierarchy enabled. If one of the ancestors goes over
its limit, the reclaim algorithm reclaims from the tasks in the ancestor
and the children of the ancestor.

6.1 Enabling hierarchical accounting and reclaim

A memory cgroup disables the hierarchy feature by default. Support can be
enabled by writing 1 to the memory.use_hierarchy file of the root cgroup:

 # echo 1 > memory.use_hierarchy

The feature can be disabled by:

 # echo 0 > memory.use_hierarchy

NOTE1: Enabling/disabling will fail if either the cgroup already has other
       cgroups created below it, or if the parent cgroup has use_hierarchy
       enabled.

NOTE2: When panic_on_oom is set to "2", the whole system will panic in
       case of an OOM event in any cgroup.

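The diagram above could be built like this, assuming a freshly mounted
memory hierarchy with no pre-existing groups (a sketch; the 100M limit is
illustrative):

 # cd /sys/fs/cgroup/memory
 # echo 1 > memory.use_hierarchy
 # mkdir a b c
 # mkdir c/d c/e
 # echo 100M > c/memory.limit_in_bytes

Since children inherit use_hierarchy when they are created, memory used by
tasks in c/d and c/e now also counts against c's 100M limit.
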
7. Soft limits

Soft limits allow for greater sharing of memory. The idea behind soft
limits is to allow control groups to use as much of the memory as needed,
provided

a. there is no memory contention
b. they do not exceed their hard limit

When the system detects memory contention or low memory, control groups
are pushed back to their soft limits. If the soft limit of each control
group is very high, they are pushed back as much as possible to make sure
that one control group does not starve the others of memory.

Please note that soft limits are a best-effort feature; they come with no
guarantees, but the system does its best to make sure that when memory is
heavily contended for, memory is allocated based on the soft limit
hints/setup. Currently, soft limit based reclaim is set up such that it
gets invoked from balance_pgdat (kswapd).

7.1 Interface

Soft limits can be set up by using the following commands (in this example
we assume a soft limit of 256 MiB):

 # echo 256M > memory.soft_limit_in_bytes

If we want to change this to 1G, we can at any time use:

 # echo 1G > memory.soft_limit_in_bytes

NOTE1: Soft limits take effect over a long period of time, since they
       involve reclaiming memory for balancing between memory cgroups.
NOTE2: It is recommended to always set the soft limit below the hard
       limit, otherwise the hard limit will take precedence.

8. Move charges at task migration

Users can move charges associated with a task along with task migration,
that is, uncharge the task's pages from the old cgroup and charge them to
the new cgroup. This feature is not supported in !CONFIG_MMU environments
because of the lack of page tables.

8.1 Interface

This feature is disabled by default. It can be enabled (and disabled
again) by writing to memory.move_charge_at_immigrate of the destination
cgroup.

If you want to enable it:

 # echo (some positive value) > memory.move_charge_at_immigrate

Note: Each bit of move_charge_at_immigrate has its own meaning about what
      type of charges should be moved. See 8.2 for details.
Note: Charges are moved only when you move mm->owner, in other words, the
      leader of a thread group.
Note: If we cannot find enough space for the task in the destination
      cgroup, we try to make space by reclaiming memory. Task migration
      may fail if we cannot make enough space.
Note: Moving a large amount of charge can take several seconds.

And if you want to disable it again:

 # echo 0 > memory.move_charge_at_immigrate

8.2 Type of charges which can be moved

Each bit in move_charge_at_immigrate has its own meaning about what type
of charges should be moved. But in any case, it must be noted that the
charge for a page or a swap entry can be moved only when it is charged to
the task's current (old) memory cgroup.

 bit | what type of charges would be moved?
-----+------------------------------------------------------------------------
  0  | A charge of an anonymous page (or swap of it) used by the target
     | task. You must enable the Swap Extension (see 2.4) to enable the
     | move of swap charges.
-----+------------------------------------------------------------------------
  1  | A charge of file pages (normal file, tmpfs file (e.g. ipc shared
     | memory) and swaps of tmpfs files) mmapped by the target task.
     | Unlike the case of anonymous pages, file pages (and swaps) in the
     | range mmapped by the task will be moved even if the task hasn't
     | done a page fault, i.e. they might not be the task's "RSS", but
     | another task's "RSS" that maps the same file. The mapcount of the
     | page is ignored (the page can be moved even if
     | page_mapcount(page) > 1). You must enable the Swap Extension
     | (see 2.4) to enable the move of swap charges.

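For example, to move both types of charges (bit 0 for anonymous pages,
bit 1 for mmapped file pages) when a task migrates, write 3 (binary 11) to
the destination group before moving the task; the pid below is a
placeholder:

 # echo 3 > memory.move_charge_at_immigrate
 # echo <pid of thread-group leader> > tasks
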
8.3 TODO

- All moving-charge operations are done under cgroup_mutex. It is not good
  behavior to hold the mutex for too long, so we may need some trick.

9. Memory thresholds

Memory cgroups implement memory thresholds using the cgroups notification
API (see cgroups.txt). It allows you to register multiple memory and memsw
thresholds and get notifications when a threshold is crossed.

To register a threshold, an application must:
- create an eventfd using eventfd(2);
- open memory.usage_in_bytes or memory.memsw.usage_in_bytes;
- write a string like "<event_fd> <fd of memory.usage_in_bytes> <threshold>"
  to cgroup.event_control.

The application will be notified through the eventfd when memory usage
crosses the threshold in either direction.

This is applicable to both root and non-root cgroups.

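The same registration can be done from the shell with the
cgroup_event_listener helper shipped in the kernel tree (under
tools/cgroup/ in recent kernels), which performs the eventfd(2) and
cgroup.event_control steps described above. A sketch, run from inside the
cgroup directory, with an illustrative 5M threshold:

 # cgroup_event_listener memory.usage_in_bytes 5M

The helper blocks and reports each time the threshold is crossed.
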
10. OOM Control

The memory.oom_control file is for OOM notification and other controls.

Memory cgroups implement an OOM notifier using the cgroup notification API
(see cgroups.txt). It allows you to register multiple OOM notification
deliveries and get a notification when an OOM happens.

To register a notifier, an application must:
 - create an eventfd using eventfd(2)
 - open the memory.oom_control file
 - write a string like "<event_fd> <fd of memory.oom_control>" to
   cgroup.event_control

The application will be notified through the eventfd when an OOM happens.
OOM notification doesn't work for the root cgroup.

You can disable the OOM-killer by writing "1" to the memory.oom_control
file, as:

 # echo 1 > memory.oom_control

If the OOM-killer is disabled, tasks under the cgroup will hang/sleep in
the memory cgroup's OOM-waitqueue when they request accountable memory.

To let them run again, you have to relax the memory cgroup's OOM status by
    * enlarging the limit or reducing usage.
To reduce usage,
    * kill some tasks.
    * move some tasks to another group with account migration.
    * remove some files (on tmpfs?)

Then, the stopped tasks will work again.

Reading the file shows the current OOM status:
    oom_kill_disable    0 or 1
                        (if 1, the oom-killer is disabled)
    under_oom           0 or 1
                        (if 1, the memory cgroup is under OOM, and tasks
                        may be stopped.)

11. Memory Pressure

The pressure level notifications can be used to monitor the memory
allocation cost; based on the pressure, applications can implement
different strategies for managing their memory resources. The pressure
levels are defined as follows:

The "low" level means that the system is reclaiming memory for new
allocations. Monitoring this reclaiming activity might be useful for
maintaining cache levels. Upon notification, the program (typically an
"Activity Manager") might analyze vmstat and act in advance (e.g.
prematurely shut down unimportant services).

The "medium" level means that the system is experiencing medium memory
pressure: it might be swapping, paging out active file caches, etc. Upon
this event, applications may decide to further analyze
vmstat/zoneinfo/memcg or internal memory usage statistics and free any
resources that can be easily reconstructed or re-read from disk.

The "critical" level means that the system is actively thrashing; it is
about to run out of memory (OOM), or the in-kernel OOM killer may even be
about to trigger. Applications should do whatever they can to help the
system. It might be too late to consult vmstat or any other statistics, so
it is advisable to take immediate action.

The events are propagated upward until the event is handled, i.e. the
events are not pass-through. Here is what this means: for example, you
have three cgroups: A->B->C. Now you set up event listeners on cgroups A,
B and C, and suppose group C experiences some pressure. In this situation,
only group C will receive the notification, i.e. groups A and B will not
receive it. This is done to avoid excessive "broadcasting" of messages,
which disturbs the system and which is especially bad if we are low on
memory or thrashing. So, organize your cgroups wisely, or propagate the
events manually (or ask us to implement the pass-through events,
explaining why you would need them).

The file memory.pressure_level is only used to set up an eventfd. To
register a notification, an application must:

- create an eventfd using eventfd(2);
- open memory.pressure_level;
- write a string like "<event_fd> <fd of memory.pressure_level> <level>"
  to cgroup.event_control.

The application will be notified through the eventfd when memory pressure
is at the specified level (or higher). Read/write operations on
memory.pressure_level are not implemented.

Test:

Here is a small script example that makes a new cgroup, sets up a memory
limit, sets up a notification in the cgroup and then makes the child
cgroup experience critical pressure:

 # cd /sys/fs/cgroup/memory/
 # mkdir foo
 # cd foo
 # cgroup_event_listener memory.pressure_level low &
 # echo 8000000 > memory.limit_in_bytes
 # echo 8000000 > memory.memsw.limit_in_bytes
 # echo $$ > tasks
 # dd if=/dev/zero | read x

(Expect a bunch of notifications, and eventually, the oom-killer will
trigger.)

12. TODO

1. Make the per-cgroup scanner reclaim not-shared pages first.
2. Teach the controller to account for shared pages.
3. Start reclamation in the background when the limit is not yet hit but
   usage is getting close.

Summary

Overall, the memory controller has been a stable controller and has been
commented on and discussed quite extensively in the community.

References

1. Singh, Balbir. RFC: Memory Controller,
   http://lwn.net/Articles/206697/
2. Singh, Balbir. Memory Controller (RSS Control),
   http://lwn.net/Articles/222762/
3. Emelianov, Pavel. Resource controllers based on process cgroups,
   http://lkml.org/lkml/2007/3/6/198
4. Emelianov, Pavel. RSS controller based on process cgroups (v2),
   http://lkml.org/lkml/2007/4/9/78
5. Emelianov, Pavel. RSS controller based on process cgroups (v3),
   http://lkml.org/lkml/2007/5/30/244
6. Menage, Paul. Control Groups v10,
   http://lwn.net/Articles/236032/
7. Vaidyanathan, Srinivasan. Control Groups: Pagecache accounting and
   control subsystem (v3), http://lwn.net/Articles/235534/
8. Singh, Balbir. RSS controller v2 test results (lmbench),
   http://lkml.org/lkml/2007/5/17/232
9. Singh, Balbir. RSS controller v2 AIM9 results,
   http://lkml.org/lkml/2007/5/18/1
10. Singh, Balbir. Memory controller v6 test results,
    http://lkml.org/lkml/2007/8/19/36
11. Singh, Balbir. Memory controller introduction (v6),
    http://lkml.org/lkml/2007/8/17/69
12. Corbet, Jonathan. Controlling memory use in cgroups,
    http://lwn.net/Articles/243795/