Documentation for /proc/sys/vm/*    kernel version 2.6.29
(c) 1998, 1999, Rik van Riel <firstname.lastname@example.org>
(c) 2008 Peter W. Morreale <email@example.com>

For general info and legal blurb, please look in README.

==============================================================

This file contains the documentation for the sysctl files in
/proc/sys/vm and is valid for Linux kernel version 2.6.29.

The files in this directory can be used to tune the operation
of the virtual memory (VM) subsystem of the Linux kernel and
the writeout of dirty data to disk.

Default values and initialization routines for most of these
files can be found in mm/swap.c.

Currently, these files are in /proc/sys/vm:

- admin_reserve_kbytes
- block_dump
- compact_memory
- compact_unevictable_allowed
- dirty_background_bytes
- dirty_background_ratio
- dirty_bytes
- dirty_expire_centisecs
- dirty_ratio
- dirty_writeback_centisecs
- drop_caches
- extfrag_threshold
- hugepages_treat_as_movable
- hugetlb_shm_group
- laptop_mode
- legacy_va_layout
- lowmem_reserve_ratio
- max_map_count
- memory_failure_early_kill
- memory_failure_recovery
- min_free_kbytes
- min_slab_ratio
- min_unmapped_ratio
- mmap_min_addr
- mmap_rnd_bits
- mmap_rnd_compat_bits
- nr_hugepages
- nr_overcommit_hugepages
- nr_trim_pages (only if CONFIG_MMU=n)
- numa_zonelist_order
- oom_dump_tasks
- oom_kill_allocating_task
- overcommit_kbytes
- overcommit_memory
- overcommit_ratio
- page-cluster
- panic_on_oom
- percpu_pagelist_fraction
- stat_interval
- stat_refresh
- swappiness
- user_reserve_kbytes
- vfs_cache_pressure
- watermark_scale_factor
- zone_reclaim_mode

==============================================================

admin_reserve_kbytes

The amount of free memory in the system that should be reserved for users
with the capability cap_sys_admin.

admin_reserve_kbytes defaults to min(3% of free pages, 8MB).

That should provide enough for the admin to log in and kill a process,
if necessary, under the default overcommit 'guess' mode.

Systems running under overcommit 'never' should increase this to account
for the full Virtual Memory Size of the programs used to recover. Otherwise,
root may not be able to log in to recover the system.

How do you calculate a minimum useful reserve?

sshd or login + bash (or some other shell) + top (or ps, kill, etc.)

For overcommit 'guess', we can sum resident set sizes (RSS).
On x86_64 this is about 8MB.

For overcommit 'never', we can take the max of their virtual sizes (VSZ)
and add the sum of their RSS.
On x86_64 this is about 128MB.

Changing this takes effect whenever an application requests memory.
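
As a rough sketch of the 'never' mode arithmetic above (the process list
and the use of ps are assumptions; substitute whatever recovery tools you
actually rely on):

    # Sum of RSS plus the largest VSZ of the recovery tools, in kilobytes.
    ps -o rss=,vsz=,comm= -C sshd,bash,top | awk '
        { rss += $1; if ($2 > max_vsz) max_vsz = $2 }
        END { print rss + max_vsz, "kB" }'

    # Apply a value based on the result, e.g. 128MB:
    echo 131072 > /proc/sys/vm/admin_reserve_kbytes
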
==============================================================

block_dump

block_dump enables block I/O debugging when set to a nonzero value. More
information on block I/O debugging is in Documentation/laptops/laptop-mode.txt.

==============================================================

compact_memory

Available only when CONFIG_COMPACTION is set. When 1 is written to the file,
all zones are compacted such that free memory is available in contiguous
blocks where possible. This can be important, for example, in the allocation
of huge pages, although processes will also directly compact memory as
required.

==============================================================

compact_unevictable_allowed

Available only when CONFIG_COMPACTION is set. When set to 1, compaction is
allowed to examine the unevictable LRU (mlocked pages) for pages to compact.
This should be used on systems where stalls for minor page faults are an
acceptable trade-off for large contiguous free memory. Set to 0 to prevent
compaction from moving unevictable pages. The default value is 1.

==============================================================

dirty_background_bytes

Contains the amount of dirty memory at which the background kernel
flusher threads will start writeback.

Note: dirty_background_bytes is the counterpart of dirty_background_ratio.
Only one of them may be specified at a time. When one sysctl is written, it
is immediately taken into account to evaluate the dirty memory limits and
the other appears as 0 when read.

==============================================================

dirty_background_ratio

Contains, as a percentage of total available memory (free pages plus
reclaimable pages), the threshold at which the background kernel flusher
threads will start writing out dirty data.

The total available memory is not equal to total system memory.

==============================================================

dirty_bytes

Contains the amount of dirty memory at which a process generating disk
writes will itself start writeback.

Note: dirty_bytes is the counterpart of dirty_ratio. Only one of them may be
specified at a time. When one sysctl is written, it is immediately taken into
account to evaluate the dirty memory limits and the other appears as 0 when
read.

Note: the minimum value allowed for dirty_bytes is two pages (in bytes); any
value lower than this limit will be ignored and the old configuration will be
retained.

==============================================================

dirty_expire_centisecs

This tunable is used to define when dirty data is old enough to be eligible
for writeout by the kernel flusher threads. It is expressed in 100ths of a
second. Data which has been dirty in memory for longer than this interval
will be written out the next time a flusher thread wakes up.

==============================================================

dirty_ratio

Contains, as a percentage of total available memory (free pages plus
reclaimable pages), the threshold at which a process which is generating
disk writes will itself start writing out dirty data.

The total available memory is not equal to total system memory.

==============================================================

dirty_writeback_centisecs

The kernel flusher threads will periodically wake up and write 'old' data
out to disk. This tunable expresses the interval between those wakeups, in
100ths of a second.

Setting this to zero disables periodic writeback altogether.
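
The bytes/ratio exclusivity described above is easy to confirm from the
shell (the 64MB figure below is an illustration only):

    echo 67108864 > /proc/sys/vm/dirty_background_bytes  # 64MB absolute
    cat /proc/sys/vm/dirty_background_ratio              # now reads 0
    echo 10 > /proc/sys/vm/dirty_background_ratio        # back to a ratio
    cat /proc/sys/vm/dirty_background_bytes              # now reads 0
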
==============================================================

drop_caches

Writing to this will cause the kernel to drop clean caches, as well as
reclaimable slab objects like dentries and inodes. Once dropped, their
memory becomes free.

To free pagecache:
    echo 1 > /proc/sys/vm/drop_caches
To free reclaimable slab objects (includes dentries and inodes):
    echo 2 > /proc/sys/vm/drop_caches
To free slab objects and pagecache:
    echo 3 > /proc/sys/vm/drop_caches

This is a non-destructive operation and will not free any dirty objects.
To increase the number of objects freed by this operation, the user may run
'sync' prior to writing to /proc/sys/vm/drop_caches. This will minimize the
number of dirty objects on the system and create more candidates to be
dropped.

This file is not a means to control the growth of the various kernel caches
(inodes, dentries, pagecache, etc...). These objects are automatically
reclaimed by the kernel when memory is needed elsewhere on the system.

Use of this file can cause performance problems. Since it discards cached
objects, it may cost a significant amount of I/O and CPU to recreate the
dropped objects, especially if they were under heavy use. Because of this,
use outside of a testing or debugging environment is not recommended.

You may see informational messages in your kernel log when this file is
used:

    cat (1234): drop_caches: 3

These are informational only. They do not mean that anything is wrong
with your system. To disable them, echo 4 (bit 3) into drop_caches.

==============================================================

extfrag_threshold

This parameter affects whether the kernel will compact memory or direct
reclaim to satisfy a high-order allocation. The extfrag/extfrag_index file
in debugfs shows the fragmentation index for each order in each zone in the
system. Values tending towards 0 imply allocations would fail due to lack of
memory, values towards 1000 imply failures are due to fragmentation, and -1
implies that the allocation will succeed as long as watermarks are met.

The kernel will not compact memory in a zone if the fragmentation index is
<= extfrag_threshold. The default value is 500.
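
For example, to inspect the index and adjust the threshold (this assumes
debugfs is mounted at the conventional /sys/kernel/debug, and 600 is an
illustration only):

    cat /sys/kernel/debug/extfrag/extfrag_index
    echo 600 > /proc/sys/vm/extfrag_threshold

Zones/orders whose index is above the threshold are handled by compaction
rather than direct reclaim, so lowering the value makes the kernel favour
compaction more readily.
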
==============================================================

hugepages_treat_as_movable

This parameter controls whether hugepages can be allocated from ZONE_MOVABLE
or not. If set to non-zero, hugepages can be allocated from ZONE_MOVABLE.
ZONE_MOVABLE is created when the kernel boot parameter kernelcore= is
specified, so this parameter has no effect if used without kernelcore=.

Hugepage migration is now available in some situations, depending on the
architecture and/or the hugepage size. If a hugepage supports migration,
allocation from ZONE_MOVABLE is always enabled for that hugepage regardless
of the value of this parameter. In other words, this parameter affects only
non-migratable hugepages.

Assuming that hugepages are not migratable on your system, one use case of
this parameter is to make the hugepage pool more extensible by enabling
allocation from ZONE_MOVABLE. This is because page reclaim, migration and
compaction work harder on ZONE_MOVABLE, so contiguous memory is more likely
to be found there. Note that using ZONE_MOVABLE for non-migratable hugepages
can harm other features like memory hot-remove (because memory hot-remove
expects that memory blocks on ZONE_MOVABLE are always removable), so this is
a trade-off for which the user is responsible.

==============================================================

hugetlb_shm_group

hugetlb_shm_group contains the group id that is allowed to create SysV
shared memory segments using hugetlb pages.

==============================================================

laptop_mode

laptop_mode is a knob that controls "laptop mode". All the things that are
controlled by this knob are discussed in Documentation/laptops/laptop-mode.txt.

==============================================================

legacy_va_layout

If non-zero, this sysctl disables the new 32-bit mmap layout - the kernel
will use the legacy (2.4) layout for all processes.

==============================================================

lowmem_reserve_ratio

For some specialised workloads on highmem machines it is dangerous for
the kernel to allow process memory to be allocated from the "lowmem"
zone. This is because that memory could then be pinned via the mlock()
system call, or by unavailability of swapspace.

And on large highmem machines this lack of reclaimable lowmem memory
can be fatal.

So the Linux page allocator has a mechanism which prevents allocations
which _could_ use highmem from using too much lowmem. This means that
a certain amount of lowmem is defended from the possibility of being
captured into pinned user memory.

(The same argument applies to the old 16 megabyte ISA DMA region. This
mechanism will also defend that region from allocations which could use
highmem or lowmem.)

The 'lowmem_reserve_ratio' tunable determines how aggressive the kernel is
in defending these lower zones.

If you have a machine which uses highmem or ISA DMA and your applications
are using mlock(), or if you are running with no swap, then you probably
should change the lowmem_reserve_ratio setting.

lowmem_reserve_ratio is an array. You can see its values by reading this
file:
-
% cat /proc/sys/vm/lowmem_reserve_ratio
256     256     32
-
Note: the number of elements is one fewer than the number of zones, because
the highest zone's value is not needed for the calculation below.

These values are not used directly. The kernel calculates the number of
protection pages for each zone from them, and the results are shown as an
array of protection pages in /proc/zoneinfo, like the following (an example
from an x86-64 box). Each zone has an array of protection pages like this:

-
Node 0, zone   DMA
  pages free     1355
        min      3
        low      3
        high     4
        :
        :
    numa_other   0
        protection: (0, 2004, 2004, 2004)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  pagesets
    cpu: 0 pcp: 0
        :
-
These protections are added to the watermark when judging whether a zone
should be used for a page allocation or should be reclaimed instead.

In this example, if normal pages (index=2) are requested from this DMA zone
and watermark[WMARK_HIGH] is used as the watermark, the kernel judges that
the zone should not be used, because pages_free (1355) is smaller than
watermark + protection[2] (4 + 2004 = 2008). If this protection value were
0, the zone would be used for a normal page request. If the request is for
the DMA zone itself (index=0), protection[0] (= 0) is used.

zone[i]'s protection[j] is calculated by the following expression:

(i < j):
  zone[i]->protection[j]
  = (total sum of managed_pages from zone[i+1] to zone[j] on the node)
    / lowmem_reserve_ratio[i];
(i = j):
  (should not be protected. = 0)
(i > j):
  (not necessary, but reads as 0)

The default values of lowmem_reserve_ratio[i] are
    256 (if zone[i] means DMA or DMA32 zone)
    32  (others)
As the expression above shows, these are reciprocals of the ratio: 256 means
1/256, so the number of protection pages is about 0.39% of the total managed
pages of the higher zones on the node.

If you would like to protect more pages, smaller values are effective.
The minimum value is 1 (1/1 -> 100%).
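
As a worked example with the default ratio of 256: if the zones above
ZONE_DMA on the node manage 513024 pages in total, ZONE_DMA's protection
against allocations that could have used those zones is 513024 / 256 = 2004
pages, matching the protection array shown above. To defend the lower zones
twice as hard, halve the ratios (the exact values are an illustration only):

    cat /proc/sys/vm/lowmem_reserve_ratio      # e.g. 256   256   32
    echo "128 128 16" > /proc/sys/vm/lowmem_reserve_ratio
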
==============================================================

max_map_count

This file contains the maximum number of memory map areas a process may
have. Memory map areas are used as a side-effect of calling malloc,
directly by mmap and mprotect, and also when loading shared libraries.

While most applications need less than a thousand maps, certain programs,
particularly malloc debuggers, may consume lots of them, e.g., up to one or
two maps per allocation.

The default value is 65536.

==============================================================

memory_failure_early_kill

Controls how to kill processes when an uncorrected memory error (typically
a 2-bit error in a memory module) that cannot be handled by the kernel is
detected in the background by hardware. In some cases (like the page still
having a valid copy on disk) the kernel will handle the failure
transparently without affecting any applications. But if there is no other
up-to-date copy of the data, it will kill processes to prevent any data
corruption from propagating.

1: Kill all processes that have the corrupted and not reloadable page mapped
as soon as the corruption is detected. Note this is not supported for a few
types of pages, like kernel internally allocated data or the swap cache, but
works for the majority of user pages.

0: Only unmap the corrupted page from all processes and only kill a process
that tries to access it.

The kill is done using a catchable SIGBUS with BUS_MCEERR_AO, so processes
can handle this if they want to.

This is only active on architectures/platforms with advanced machine check
handling and depends on the hardware capabilities.

Applications can override this setting individually with the PR_MCE_KILL
prctl.

==============================================================

memory_failure_recovery

Enable memory failure recovery (when supported by the platform).

1: Attempt recovery.

0: Always panic on a memory failure.

==============================================================

min_free_kbytes

This is used to force the Linux VM to keep a minimum number of kilobytes
free. The VM uses this number to compute a watermark[WMARK_MIN] value for
each lowmem zone in the system. Each lowmem zone gets a number of reserved
free pages based proportionally on its size.

Some minimal amount of memory is needed to satisfy PF_MEMALLOC allocations;
if you set this to lower than 1024KB, your system will become subtly broken,
and prone to deadlock under high loads.

Setting this too high will OOM your machine instantly.
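
A cautious adjustment might look like this (64MB is an illustration only;
stay well above 1024KB and well below any sizeable fraction of RAM, and note
that /proc/zoneinfo layouts vary between kernel versions):

    cat /proc/sys/vm/min_free_kbytes               # autotuned default
    echo 65536 > /proc/sys/vm/min_free_kbytes      # reserve 64MB
    awk '$3 == "zone" || $1 == "min"' /proc/zoneinfo   # recomputed WMARK_MIN
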
==============================================================

min_slab_ratio

This is available only on NUMA kernels.

A percentage of the total pages in each zone. On zone reclaim (fallback
from the local zone occurs), slabs will be reclaimed if more than this
percentage of pages in a zone are reclaimable slab pages. This ensures
that the slab growth stays under control even in NUMA systems that rarely
perform global reclaim.

The default is 5 percent.

Note that slab reclaim is triggered in a per-zone / per-node fashion. The
process of reclaiming slab memory is currently not node specific and may
not be fast.

==============================================================

min_unmapped_ratio

This is available only on NUMA kernels.

This is a percentage of the total pages in each zone. Zone reclaim will
only occur if more than this percentage of pages are in a state that
zone_reclaim_mode allows to be reclaimed.

If zone_reclaim_mode has the value 4 OR'd in, then the percentage is
compared against all file-backed unmapped pages, including swapcache pages
and tmpfs files. Otherwise, only unmapped pages backed by normal files, but
not tmpfs files and similar, are considered.

The default is 1 percent.

==============================================================

mmap_min_addr

This file indicates the amount of address space which a user process will
be restricted from mmapping. Since kernel null dereference bugs could
accidentally operate on data in the first couple of pages of memory,
userspace processes should not be allowed to write to them. By default this
value is set to 0 and no protections will be enforced by the security
module. Setting this value to something like 64k will allow the vast
majority of applications to work correctly and provide defense in depth
against future potential kernel bugs.

==============================================================

mmap_rnd_bits

This value can be used to select the number of bits to use to determine the
random offset to the base address of vma regions resulting from mmap
allocations on architectures which support tuning address space
randomization. This value will be bounded by the architecture's minimum and
maximum supported values.

This value can be changed after boot using the /proc/sys/vm/mmap_rnd_bits
tunable.

==============================================================

mmap_rnd_compat_bits

This value can be used to select the number of bits to use to determine the
random offset to the base address of vma regions resulting from mmap
allocations for applications run in compatibility mode on architectures
which support tuning address space randomization. This value will be
bounded by the architecture's minimum and maximum supported values.

This value can be changed after boot using the
/proc/sys/vm/mmap_rnd_compat_bits tunable.

==============================================================

nr_hugepages

Change the minimum size of the hugepage pool.

See Documentation/vm/hugetlbpage.txt

==============================================================

nr_overcommit_hugepages

Change the maximum size of the hugepage pool. The maximum is
nr_hugepages + nr_overcommit_hugepages.

See Documentation/vm/hugetlbpage.txt
==============================================================

nr_trim_pages

This is available only on NOMMU kernels.

This value adjusts the excess page trimming behaviour of power-of-2 aligned
NOMMU mmap allocations.

A value of 0 disables trimming of allocations entirely, while a value of 1
trims excess pages aggressively. Any value >= 1 acts as the watermark where
trimming of allocations is initiated.

The default value is 1.

See Documentation/nommu-mmap.txt for more information.

==============================================================

numa_zonelist_order

This sysctl is only for NUMA. 'Where the memory is allocated from' is
controlled by zonelists. (This documentation ignores ZONE_HIGHMEM/ZONE_DMA32
for a simple explanation; you may be able to read ZONE_DMA as ZONE_DMA32.)

In the non-NUMA case, a zonelist for GFP_KERNEL is ordered as follows:
    ZONE_NORMAL -> ZONE_DMA
This means that a memory allocation request for GFP_KERNEL will get memory
from ZONE_DMA only when ZONE_NORMAL is not available.

In the NUMA case, you can think of the following two types of order. Assume
a 2-node NUMA system; below is the zonelist of Node(0)'s GFP_KERNEL:

(A) Node(0) ZONE_NORMAL -> Node(0) ZONE_DMA -> Node(1) ZONE_NORMAL
(B) Node(0) ZONE_NORMAL -> Node(1) ZONE_NORMAL -> Node(0) ZONE_DMA

Type (A) offers the best locality for processes on Node(0), but ZONE_DMA
will be used before ZONE_NORMAL is exhausted. This increases the possibility
of out-of-memory (OOM) in ZONE_DMA, because ZONE_DMA tends to be small.

Type (B) cannot offer the best locality but is more robust against OOM of
the DMA zone.

Type (A) is called "Node" order. Type (B) is "Zone" order.

"Node" order orders the zonelists by node, then by zone within each node.
Specify "[Nn]ode" for node order.

"Zone" order orders the zonelists by zone type, then by node within each
zone. Specify "[Zz]one" for zone order.

Specify "[Dd]efault" to request automatic configuration.

On 32-bit, the Normal zone needs to be preserved for allocations accessible
by the kernel, so "zone" order will be selected.

On 64-bit, devices that require DMA32/DMA are relatively rare, so "node"
order will be selected.

Default order is recommended unless this is causing problems for your
system/application.
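
For example, to switch a running system to zone order and then back to the
automatic choice:

    cat /proc/sys/vm/numa_zonelist_order       # current setting
    echo zone > /proc/sys/vm/numa_zonelist_order
    echo default > /proc/sys/vm/numa_zonelist_order
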
==============================================================

oom_dump_tasks

Enables a system-wide task dump (excluding kernel threads) to be produced
when the kernel performs an OOM-killing. It includes such information as
pid, uid, tgid, vm size, rss, nr_ptes, nr_pmds, swapents, oom_score_adj
score, and name. This is helpful to determine why the OOM killer was
invoked, to identify the rogue task that caused it, and to determine why
the OOM killer chose the task it did to kill.

If this is set to zero, this information is suppressed. On very large
systems with thousands of tasks it may not be feasible to dump the memory
state information for each one. Such systems should not be forced to incur
a performance penalty in OOM conditions when the information may not be
desired.

If this is set to non-zero, this information is shown whenever the OOM
killer actually kills a memory-hogging task.

The default value is 1 (enabled).

==============================================================

oom_kill_allocating_task

This enables or disables killing the OOM-triggering task in out-of-memory
situations.

If this is set to zero, the OOM killer will scan through the entire
tasklist and select a task based on heuristics to kill. This normally
selects a rogue memory-hogging task that frees up a large amount of memory
when killed.

If this is set to non-zero, the OOM killer simply kills the task that
triggered the out-of-memory condition. This avoids the expensive tasklist
scan.

If panic_on_oom is selected, it takes precedence over whatever value is
used in oom_kill_allocating_task.

The default value is 0.

==============================================================

overcommit_kbytes

When overcommit_memory is set to 2, the committed address space is not
permitted to exceed swap plus this amount of physical RAM. See below.

Note: overcommit_kbytes is the counterpart of overcommit_ratio. Only one of
them may be specified at a time. Setting one disables the other (which then
appears as 0 when read).

==============================================================

overcommit_memory

This value contains a flag that enables memory overcommitment.

When this flag is 0, the kernel attempts to estimate the amount of free
memory left when userspace requests more memory.

When this flag is 1, the kernel pretends there is always enough memory
until it actually runs out.

When this flag is 2, the kernel uses a "never overcommit" policy that
attempts to prevent any overcommit of memory. Note that user_reserve_kbytes
affects this policy.

This feature can be very useful because there are a lot of programs that
malloc() huge amounts of memory "just-in-case" and don't use much of it.

The default value is 0.

See Documentation/vm/overcommit-accounting and
mm/mmap.c::__vm_enough_memory() for more information.

==============================================================

overcommit_ratio

When overcommit_memory is set to 2, the committed address space is not
permitted to exceed swap plus this percentage of physical RAM. See above.
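
The effect of mode 2 is visible in /proc/meminfo: CommitLimit is the ceiling
computed from swap plus the configured RAM fraction (it is only enforced
when overcommit_memory is 2), and Committed_AS is the address space
currently committed. For example (the ratio of 80 is an illustration only):

    echo 2 > /proc/sys/vm/overcommit_memory
    echo 80 > /proc/sys/vm/overcommit_ratio
    grep -E 'CommitLimit|Committed_AS' /proc/meminfo
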
==============================================================

page-cluster

page-cluster controls the number of pages up to which consecutive pages are
read in from swap in a single attempt. This is the swap counterpart to page
cache readahead. The mentioned consecutivity is not in terms of
virtual/physical addresses, but consecutive on swap space - that means they
were swapped out together.

It is a logarithmic value - setting it to zero means "1 page", setting it
to 1 means "2 pages", setting it to 2 means "4 pages", etc. Zero disables
swap readahead completely.

The default value is three (eight pages at a time). There may be some small
benefits in tuning this to a different value if your workload is
swap-intensive.

Lower values mean lower latencies for initial faults, but extra faults and
I/O delays for subsequent faults that would otherwise have been satisfied
by pages the readahead would already have brought in.

==============================================================

panic_on_oom

This enables or disables the panic-on-out-of-memory feature.

If this is set to 0, the kernel's OOM killer will kill some rogue process.
Usually the OOM killer is able to kill a rogue process and the system will
survive.

If this is set to 1, the kernel panics when out-of-memory happens. However,
if a process limits its allocations to certain nodes using mempolicy or
cpusets, and those nodes become exhausted, one process may be killed by the
OOM killer without a panic occurring, because other nodes' memory may still
be free and the system as a whole may not yet be in a fatal state.

If this is set to 2, the kernel panics unconditionally even in the above
case, and even when OOM happens under a memory cgroup the whole system
panics.

The default value is 0.

Values 1 and 2 are for failover of clustering; select one according to your
failover policy. panic_on_oom=2 combined with kdump gives you a very strong
tool to investigate why OOM happens, since you can get a snapshot of memory
at that point.

==============================================================

percpu_pagelist_fraction

This is the fraction of pages, at most (high mark pcp->high), in each zone
that are allocated for each per-cpu page list. The minimum value for this
is 8, meaning that we do not allow more than 1/8th of the pages in each
zone to be allocated in any single per-cpu page list. This entry only
changes the value of hot per-cpu page lists. A user can specify a number
like 100 to allocate 1/100th of each zone to each per-cpu page list.

The batch value of each per-cpu page list is also updated as a result. It
is set to pcp->high/4. The upper limit of batch is (PAGE_SHIFT * 8).

The initial value is zero. The kernel does not use this value at boot time
to set the high water marks for each per-cpu page list. If the user writes
'0' to this sysctl, it will revert to this default behavior.

==============================================================

stat_interval

The time interval at which vm statistics are updated. The default is
1 second.

==============================================================

stat_refresh

Any read or write (by root only) flushes all the per-cpu vm statistics into
their global totals, for more accurate reports when testing, e.g.

    cat /proc/sys/vm/stat_refresh /proc/meminfo

As a side-effect, it also checks for negative totals (elsewhere reported as
0) and "fails" with EINVAL if any are found, with a warning in dmesg. (At
the time of writing, a few stats are known sometimes to be found negative,
with no ill effects: errors and warnings on these stats are suppressed.)

==============================================================

swappiness

This control is used to define how aggressively the kernel will swap memory
pages. Higher values increase aggressiveness; lower values decrease the
amount of swap. A value of 0 instructs the kernel not to initiate swap
until the amount of free and file-backed pages is less than the high
watermark in a zone.

The default value is 60.
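
For example, to bias reclaim towards dropping page cache rather than
swapping out anonymous memory (the value 10 is an illustration only):

    cat /proc/sys/vm/swappiness        # read the current value
    echo 10 > /proc/sys/vm/swappiness
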
==============================================================

user_reserve_kbytes

When overcommit_memory is set to 2, "never overcommit" mode, reserve
min(3% of current process size, user_reserve_kbytes) of free memory. This
is intended to prevent a user from starting a single memory-hogging process
such that they cannot recover (kill the hog).

user_reserve_kbytes defaults to min(3% of the current process size, 128MB).

If this is reduced to zero, then the user will be allowed to allocate all
free memory with a single process, minus admin_reserve_kbytes. Any
subsequent attempts to execute a command will result in
"fork: Cannot allocate memory".

Changing this takes effect whenever an application requests memory.

==============================================================

vfs_cache_pressure

This percentage value controls the tendency of the kernel to reclaim the
memory which is used for caching of directory and inode objects.

At the default value of vfs_cache_pressure=100 the kernel will attempt to
reclaim dentries and inodes at a "fair" rate with respect to pagecache and
swapcache reclaim. Decreasing vfs_cache_pressure causes the kernel to prefer
to retain dentry and inode caches. When vfs_cache_pressure=0, the kernel
will never reclaim dentries and inodes due to memory pressure and this can
easily lead to out-of-memory conditions. Increasing vfs_cache_pressure
beyond 100 causes the kernel to prefer to reclaim dentries and inodes.

Increasing vfs_cache_pressure significantly beyond 100 may have a negative
performance impact. Reclaim code needs to take various locks to find
freeable directory and inode objects. With vfs_cache_pressure=1000, it will
look for ten times more freeable objects than there are.

==============================================================

watermark_scale_factor

This factor controls the aggressiveness of kswapd. It defines the amount of
memory left in a node/system before kswapd is woken up and how much memory
needs to be free before kswapd goes back to sleep.

The unit is in fractions of 10,000. The default value of 10 means the
distances between watermarks are 0.1% of the available memory in the
node/system. The maximum value is 1000, or 10% of memory.

A high rate of threads entering direct reclaim (allocstall) or kswapd going
to sleep prematurely (kswapd_low_wmark_hit_quickly) can indicate that the
number of free pages kswapd maintains for latency reasons is too small for
the allocation bursts occurring in the system. This knob can then be used
to tune kswapd aggressiveness accordingly.
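
Both indicators are counters in /proc/vmstat, so a before/after check might
look like this (the factor of 100, i.e. 1% between watermarks, is an
illustration only):

    grep -E 'allocstall|kswapd_low_wmark_hit_quickly' /proc/vmstat
    echo 100 > /proc/sys/vm/watermark_scale_factor
    # ... run the bursty workload, then re-check whether the counters
    # are climbing more slowly:
    grep -E 'allocstall|kswapd_low_wmark_hit_quickly' /proc/vmstat
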
==============================================================

zone_reclaim_mode

zone_reclaim_mode allows someone to set more or less aggressive approaches
to reclaim memory when a zone runs out of memory. If it is set to zero,
then no zone reclaim occurs. Allocations will be satisfied from other
zones / nodes in the system.

This is a bitmask of the following values, OR'd together:

1 = Zone reclaim on
2 = Zone reclaim writes dirty pages out
4 = Zone reclaim swaps pages

zone_reclaim_mode is disabled by default. For file servers or workloads
that benefit from having their data cached, zone_reclaim_mode should be
left disabled, as the caching effect is likely to be more important than
data locality.

zone_reclaim may be enabled if it's known that the workload is partitioned
such that each partition fits within a NUMA node and that accessing remote
memory would cause a measurable performance reduction. The page allocator
will then reclaim easily reusable pages (those page cache pages that are
currently not used) before allocating off-node pages.

Allowing zone reclaim to write out pages stops processes that are writing
large amounts of data from dirtying pages on other nodes. Zone reclaim will
write out dirty pages if a zone fills up, which effectively throttles the
process. This may decrease the performance of a single process, since it
can no longer use all of system memory to buffer outgoing writes, but it
preserves the memory on other nodes so that the performance of other
processes running on other nodes will not be affected.

Allowing regular swap effectively restricts allocations to the local node
unless explicitly overridden by memory policies or cpuset configurations.

============ End of Document =================================