The intent of this file is to give a brief summary of hugetlbpage support in
the Linux kernel.  This support is built on top of the multiple page size
support provided by most modern architectures.  For example, x86 CPUs normally
support 4K and 2M (1G if architecturally supported) page sizes, the ia64
architecture supports multiple page sizes (4K, 8K, 64K, 256K, 1M, 4M, 16M,
256M), and ppc64 supports 4K and 16M.  A TLB is a cache of virtual-to-physical
translations.  Typically, this is a very scarce resource on a processor.
Operating systems try to make the best use of the limited number of TLB
resources.  This optimization is more critical now as bigger and bigger
physical memories (several GBs) are more readily available.

Users can use the huge page support in the Linux kernel by using either the
mmap system call or the standard SYSV shared memory system calls (shmget,
shmat).

First, the Linux kernel needs to be built with the CONFIG_HUGETLBFS
(present under "File systems") and CONFIG_HUGETLB_PAGE (selected
automatically when CONFIG_HUGETLBFS is selected) configuration
options.

The /proc/meminfo file provides information about the total number of
persistent hugetlb pages in the kernel's huge page pool.  It also displays
the default huge page size and information about the number of free, reserved
and surplus huge pages in the pool of huge pages of default size.
The huge page size is needed for generating the proper alignment and
size of the arguments to system calls that map huge page regions.

The output of "cat /proc/meminfo" will include lines like:

    .....
    HugePages_Total: uuu
    HugePages_Free:  vvv
    HugePages_Rsvd:  www
    HugePages_Surp:  xxx
    Hugepagesize:    yyy kB
    Hugetlb:         zzz kB

where:

HugePages_Total is the size of the pool of huge pages.
HugePages_Free  is the number of huge pages in the pool that are not yet
                allocated.
HugePages_Rsvd  is short for "reserved," and is the number of huge pages for
                which a commitment to allocate from the pool has been made,
                but no allocation has yet been made.  Reserved huge pages
                guarantee that an application will be able to allocate a
                huge page from the pool of huge pages at fault time.
HugePages_Surp  is short for "surplus," and is the number of huge pages in
                the pool above the value in /proc/sys/vm/nr_hugepages.  The
                maximum number of surplus huge pages is controlled by
                /proc/sys/vm/nr_overcommit_hugepages.
Hugepagesize    is the default huge page size (in kB).
Hugetlb         is the total amount of memory (in kB) consumed by huge
                pages of all sizes.
                If huge pages of different sizes are in use, this number
                will exceed HugePages_Total * Hugepagesize.  To get more
                detailed information, please refer to
                /sys/kernel/mm/hugepages (described below).

/proc/filesystems should also show a filesystem of type "hugetlbfs" configured
in the kernel.

/proc/sys/vm/nr_hugepages indicates the current number of "persistent" huge
pages in the kernel's huge page pool.  "Persistent" huge pages will be
returned to the huge page pool when freed by a task.  A user with root
privileges can dynamically allocate more or free some persistent huge pages
by increasing or decreasing the value of 'nr_hugepages'.

Pages that are used as huge pages are reserved inside the kernel and cannot
be used for other purposes.  Huge pages cannot be swapped out under
memory pressure.
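
As noted above, an application needs the default huge page size in order to
size and align its mapping requests.  A minimal sketch in C of reading that
size from /proc/meminfo (the helper name is illustrative):

    #include <stdio.h>

    /* Return the default huge page size in bytes, or 0 if the
     * "Hugepagesize:" line cannot be found in /proc/meminfo. */
    static long default_huge_page_size(void)
    {
        FILE *f = fopen("/proc/meminfo", "r");
        char line[128];
        long kb = 0;

        if (!f)
            return 0;
        while (fgets(line, sizeof(line), f))
            if (sscanf(line, "Hugepagesize: %ld kB", &kb) == 1)
                break;
        fclose(f);
        return kb * 1024;   /* /proc/meminfo reports the size in kB */
    }
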
Once a number of huge pages have been pre-allocated to the kernel huge page
pool, a user with appropriate privilege can use either the mmap system call
or shared memory system calls to use the huge pages.  See the discussion of
Using Huge Pages, below.

The administrator can allocate persistent huge pages on the kernel boot
command line by specifying the "hugepages=N" parameter, where 'N' = the
number of huge pages requested.  This is the most reliable method of
allocating huge pages as memory has not yet become fragmented.

Some platforms support multiple huge page sizes.  To allocate huge pages
of a specific size, one must precede the huge pages boot command parameters
with a huge page size selection parameter "hugepagesz=<size>".  <size> must
be specified in bytes with optional scale suffix [kKmMgG].  The default huge
page size may be selected with the "default_hugepagesz=<size>" boot parameter.

When multiple huge page sizes are supported, /proc/sys/vm/nr_hugepages
indicates the current number of pre-allocated huge pages of the default size.
Thus, one can use the following command to dynamically allocate/deallocate
default sized persistent huge pages:

    echo 20 > /proc/sys/vm/nr_hugepages

This command will try to adjust the number of default sized huge pages in the
huge page pool to 20, allocating or freeing huge pages, as required.

On a NUMA platform, the kernel will attempt to distribute the huge page pool
over the set of allowed nodes specified by the NUMA memory policy of the
task that modifies nr_hugepages.  The default for the allowed nodes--when the
task has default memory policy--is all on-line nodes with memory.  Allowed
nodes with insufficient available, contiguous memory for a huge page will be
silently skipped when allocating persistent huge pages.  See the discussion
below of the interaction of task memory policy, cpusets and per node attributes
with the allocation and freeing of persistent huge pages.

The success or failure of huge page allocation depends on the amount of
physically contiguous memory that is present in the system at the time of the
allocation attempt.  If the kernel is unable to allocate huge pages from
some nodes in a NUMA system, it will attempt to make up the difference by
allocating extra pages on other nodes with sufficient available contiguous
memory, if any.

System administrators may want to put this command in one of the local rc
init files.  This will enable the kernel to allocate huge pages early in
the boot process when the possibility of getting physically contiguous pages
is still very high.  Administrators can verify the number of huge pages
actually allocated by checking the sysctl or meminfo.  To check the per node
distribution of huge pages in a NUMA system, use:

    cat /sys/devices/system/node/node*/meminfo | fgrep Huge

/proc/sys/vm/nr_overcommit_hugepages specifies how large the pool of
huge pages can grow, if more huge pages than /proc/sys/vm/nr_hugepages are
requested by applications.  Writing any non-zero value into this file
indicates that the hugetlb subsystem is allowed to try to obtain that
number of "surplus" huge pages from the kernel's normal page pool, when the
persistent huge page pool is exhausted.  As these surplus huge pages become
unused, they are freed back to the kernel's normal page pool.
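
For example, to allow the pool to grow by up to 10 surplus huge pages on
demand (the value is illustrative):

    echo 10 > /proc/sys/vm/nr_overcommit_hugepages

Combined with the nr_hugepages example above, applications could then use up
to 30 huge pages in total: the 20 persistent pages, plus up to 10 surplus
pages allocated on demand.
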
When increasing the huge page pool size via nr_hugepages, any existing surplus
pages will first be promoted to persistent huge pages.  Then, additional
huge pages will be allocated, if necessary and if possible, to fulfill
the new persistent huge page pool size.

The administrator may shrink the pool of persistent huge pages for
the default huge page size by setting the nr_hugepages sysctl to a
smaller value.  The kernel will attempt to balance the freeing of huge pages
across all nodes in the memory policy of the task modifying nr_hugepages.
Any free huge pages on the selected nodes will be freed back to the kernel's
normal page pool.

Caveat: Shrinking the persistent huge page pool via nr_hugepages such that
it becomes less than the number of huge pages in use will convert the balance
of the in-use huge pages to surplus huge pages.  This will occur even if
the number of surplus pages would exceed the overcommit value.  As long as
this condition holds--that is, until nr_hugepages+nr_overcommit_hugepages is
increased sufficiently, or the surplus huge pages go out of use and are freed--
no more surplus huge pages will be allowed to be allocated.

With support for multiple huge page pools at run-time, much of the huge page
userspace interface in /proc/sys/vm has been duplicated in sysfs.
The /proc interfaces discussed above have been retained for backwards
compatibility.  The root huge page control directory in sysfs is:

    /sys/kernel/mm/hugepages

For each huge page size supported by the running kernel, a subdirectory
will exist, of the form:

    hugepages-${size}kB

Inside each of these directories, the same set of files will exist:

    nr_hugepages
    nr_hugepages_mempolicy
    nr_overcommit_hugepages
    free_hugepages
    resv_hugepages
    surplus_hugepages

which function as described above for the default huge page-sized case.


Interaction of Task Memory Policy with Huge Page Allocation/Freeing
===================================================================

Whether huge pages are allocated and freed via the /proc interface or
the sysfs interface using the nr_hugepages_mempolicy attribute, the NUMA
nodes from which huge pages are allocated or freed are controlled by the
NUMA memory policy of the task that modifies the nr_hugepages_mempolicy
sysctl or attribute.  When the nr_hugepages attribute is used, mempolicy
is ignored.

The recommended method to allocate or free huge pages to/from the kernel
huge page pool, using the nr_hugepages example above, is:

    numactl --interleave <node-list> echo 20 \
        >/proc/sys/vm/nr_hugepages_mempolicy

or, more succinctly:

    numactl -m <node-list> echo 20 >/proc/sys/vm/nr_hugepages_mempolicy

This will allocate or free abs(20 - nr_hugepages) huge pages to or from the
nodes specified in <node-list>, depending on whether the number of persistent
huge pages is initially less than or greater than 20, respectively.  No huge
pages will be allocated nor freed on any node not included in the specified
<node-list>.
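
The same effect can be achieved from within a program by installing a memory
policy before writing the sysctl.  A minimal sketch, assuming libnuma's
numaif.h wrapper is available, that nodes 0 and 1 exist, and that the caller
has root privileges (link with -lnuma):

    #include <numaif.h>     /* set_mempolicy(), MPOL_INTERLEAVE */
    #include <stdio.h>

    int main(void)
    {
        unsigned long nodemask = 0x3;   /* illustrative: nodes 0 and 1 */
        FILE *f;

        /* Interleave policy: the write below will then spread the
         * persistent huge page allocations across nodes 0 and 1. */
        if (set_mempolicy(MPOL_INTERLEAVE, &nodemask,
                          sizeof(nodemask) * 8))
            return 1;

        f = fopen("/proc/sys/vm/nr_hugepages_mempolicy", "w");
        if (!f)
            return 1;
        fprintf(f, "20\n");
        return fclose(f) ? 1 : 0;
    }
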
When adjusting the persistent hugepage count via nr_hugepages_mempolicy, any
memory policy mode--bind, preferred, local or interleave--may be used.  The
resulting effect on persistent huge page allocation is as follows:

1) Regardless of mempolicy mode [see Documentation/vm/numa_memory_policy.txt],
   persistent huge pages will be distributed across the node or nodes
   specified in the mempolicy as if "interleave" had been specified.
   However, if a node in the policy does not contain sufficient contiguous
   memory for a huge page, the allocation will not "fall back" to the nearest
   neighbor node with sufficient contiguous memory.  To do this would cause
   undesirable imbalance in the distribution of the huge page pool, or
   possibly, allocation of persistent huge pages on nodes not allowed by
   the task's memory policy.

2) One or more nodes may be specified with the bind or interleave policy.
   If more than one node is specified with the preferred policy, only the
   lowest numeric id will be used.  Local policy will select the node where
   the task is running at the time the nodes_allowed mask is constructed.
   For local policy to be deterministic, the task must be bound to a cpu or
   cpus in a single node.  Otherwise, the task could be migrated to some
   other node at any time after launch and the resulting node will be
   indeterminate.  Thus, local policy is not very useful for this purpose.
   Any of the other mempolicy modes may be used to specify a single node.

3) The nodes allowed mask will be derived from any non-default task mempolicy,
   whether this policy was set explicitly by the task itself or one of its
   ancestors, such as numactl.  This means that if the task is invoked from a
   shell with non-default policy, that policy will be used.  One can specify a
   node list of "all" with numactl --interleave or --membind [-m] to achieve
   interleaving over all nodes in the system or cpuset.

4) Any task mempolicy specified--e.g., using numactl--will be constrained by
   the resource limits of any cpuset in which the task runs.  Thus, there will
   be no way for a task with non-default policy running in a cpuset with a
   subset of the system nodes to allocate huge pages outside the cpuset
   without first moving to a cpuset that contains all of the desired nodes.

5) Boot-time huge page allocation attempts to distribute the requested number
   of huge pages over all on-line nodes with memory.

Per Node Hugepages Attributes
=============================

A subset of the contents of the root huge page control directory in sysfs,
described above, will be replicated under the system device of each
NUMA node with memory in:

    /sys/devices/system/node/node[0-9]*/hugepages/

Under this directory, the subdirectory for each supported huge page size
contains the following attribute files:

    nr_hugepages
    free_hugepages
    surplus_hugepages

The free_ and surplus_ attribute files are read-only.  They return the number
of free and surplus [overcommitted] huge pages, respectively, on the parent
node.

The nr_hugepages attribute returns the total number of huge pages on the
specified node.  When this attribute is written, the number of persistent huge
pages on the parent node will be adjusted to the specified value, if sufficient
resources exist, regardless of the task's mempolicy or cpuset constraints.
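
For example, to request 8 persistent huge pages of 2 MB size on node1 (both
the node id and the page size are illustrative):

    echo 8 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
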
Note that the number of overcommit and reserve pages remain global quantities,
as we don't know until fault time, when the faulting task's mempolicy is
applied, from which node the huge page allocation will be attempted.


Using Huge Pages
================

If user applications are going to request huge pages using the mmap system
call, then the system administrator must mount a file system of type
hugetlbfs:

    mount -t hugetlbfs \
        -o uid=<value>,gid=<value>,mode=<value>,pagesize=<value>,size=<value>,\
        min_size=<value>,nr_inodes=<value> none /mnt/huge

This command mounts a (pseudo) filesystem of type hugetlbfs on the directory
/mnt/huge.  Any file created on /mnt/huge uses huge pages.  The uid and gid
options set the owner and group of the root of the file system.  By default
the uid and gid of the current process are taken.  The mode option sets the
mode of the root of the file system to value & 01777.  This value is given in
octal.  By default the value 0755 is picked.

If the platform supports multiple huge page sizes, the pagesize option can be
used to specify the huge page size and associated pool.  pagesize is specified
in bytes.  If pagesize is not specified, the platform's default huge page size
and associated pool will be used.

The size option sets the maximum amount of memory (huge pages) allowed for
that filesystem (/mnt/huge).  The size option can be specified in bytes, or as
a percentage of the specified huge page pool (nr_hugepages).  The size is
rounded down to HPAGE_SIZE boundary.  The min_size option sets the minimum
amount of memory (huge pages) allowed for the filesystem.  min_size can be
specified in the same way as size, either bytes or a percentage of the
huge page pool.  At mount time, the number of huge pages specified by
min_size are reserved for use by the filesystem.  If there are not enough
free huge pages available, the mount will fail.  As huge pages are allocated
to the filesystem and freed, the reserve count is adjusted so that the sum
of allocated and reserved huge pages is always at least min_size.  The option
nr_inodes sets the maximum number of inodes that /mnt/huge can use.  If the
size, min_size or nr_inodes option is not provided on the command line then
no limits are set.  For the pagesize, size, min_size and nr_inodes options,
you can use [G|g]/[M|m]/[K|k] to represent giga/mega/kilo.  For example,
size=2K has the same meaning as size=2048.

While read system calls are supported on files that reside on hugetlb
file systems, write system calls are not.

Regular chown, chgrp, and chmod commands (with the right permissions) can be
used to change the file attributes on hugetlbfs.

Also, it is important to note that no such mount command is required if
applications are going to use only shmat/shmget system calls or mmap with
MAP_HUGETLB.  For an example of how to use mmap with MAP_HUGETLB, see the
map_hugetlb example referenced below.

Users who wish to use hugetlb memory via shared memory segments should be
members of a supplementary group, and the system administrator needs to
configure that gid into /proc/sys/vm/hugetlb_shm_group.  It is possible for
the same or different applications to use any combination of mmap and shm*
calls, though a hugetlbfs mount is required for mmap calls without
MAP_HUGETLB.
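
As a quick illustration of the MAP_HUGETLB case, which needs no hugetlbfs
mount, here is a minimal sketch; the 2 MB length assumes the default x86 huge
page size, and the mapping will fail if no free huge page is available (a
complete program is referenced under Examples, below):

    #define _GNU_SOURCE     /* MAP_HUGETLB, MAP_ANONYMOUS */
    #include <stdio.h>
    #include <sys/mman.h>

    #define LENGTH (2UL * 1024 * 1024)  /* one default-sized huge page */

    int main(void)
    {
        /* Anonymous private mapping backed by huge pages from the
         * default pool. */
        void *addr = mmap(NULL, LENGTH, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (addr == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        ((char *)addr)[0] = 1;      /* touch the page to fault it in */
        munmap(addr, LENGTH);       /* length covers the whole huge page */
        return 0;
    }
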
Syscalls that operate on memory backed by hugetlb pages only have their
lengths aligned to the native page size of the processor; they will normally
fail with errno set to EINVAL or exclude hugetlb pages that extend beyond the
length if not hugepage aligned.  For example, munmap(2) will fail if memory is
backed by a hugetlb page and the length is smaller than the hugepage size.


Examples
========

1) map_hugetlb:  see tools/testing/selftests/vm/map_hugetlb.c

2) hugepage-shm:  see tools/testing/selftests/vm/hugepage-shm.c

3) hugepage-mmap:  see tools/testing/selftests/vm/hugepage-mmap.c

4) The libhugetlbfs (https://github.com/libhugetlbfs/libhugetlbfs) library
   provides a wide range of userspace tools to help with huge page usability,
   environment setup, and control.

Kernel development regression testing
=====================================

The most complete set of hugetlb tests is in the libhugetlbfs repository.
If you modify any hugetlb related code, use the libhugetlbfs test suite
to check for regressions.  In addition, if you add any new hugetlb
functionality, please add appropriate tests to libhugetlbfs.