The intent of this file is to give a brief summary of hugetlbpage support in
the Linux kernel.  This support is built on top of multiple page size support
that is provided by most modern architectures.  For example, the i386
architecture supports 4K and 4M (2M in PAE mode) page sizes, the ia64
architecture supports multiple page sizes (4K, 8K, 64K, 256K, 1M, 4M, 16M,
256M) and ppc64 supports 4K and 16M.  A TLB is a cache of virtual-to-physical
translations and is typically a very scarce resource on a processor.
Operating systems try to make the best use of the limited number of TLB
entries.  This optimization is more critical now that bigger and bigger
physical memories (several GBs) are more readily available.

Users can make use of huge page support in the Linux kernel either via the
mmap system call or via the standard SysV shared memory system calls (shmget,
shmat).

First the Linux kernel needs to be built with the CONFIG_HUGETLBFS
(present under "File systems") and CONFIG_HUGETLB_PAGE (selected
automatically when CONFIG_HUGETLBFS is selected) configuration
options.

The /proc/meminfo file provides information about the total number of
persistent hugetlb pages in the kernel's huge page pool.  It also displays
information about the number of free, reserved and surplus huge pages and the
default huge page size.  The huge page size is needed for generating the
proper alignment and size of the arguments to system calls that map huge page
regions.

The output of "cat /proc/meminfo" will include lines like:

.....
HugePages_Total: vvv
HugePages_Free:  www
HugePages_Rsvd:  xxx
HugePages_Surp:  yyy
Hugepagesize:    zzz kB

where:
HugePages_Total is the size of the pool of huge pages.
HugePages_Free  is the number of huge pages in the pool that are not yet
                allocated.
HugePages_Rsvd  is short for "reserved," and is the number of huge pages for
                which a commitment to allocate from the pool has been made,
                but no allocation has yet been made.  Reserved huge pages
                guarantee that an application will be able to allocate a
                huge page from the pool of huge pages at fault time.
HugePages_Surp  is short for "surplus," and is the number of huge pages in
                the pool above the value in /proc/sys/vm/nr_hugepages.  The
                maximum number of surplus huge pages is controlled by
                /proc/sys/vm/nr_overcommit_hugepages.
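
The Hugepagesize value is what applications should use when sizing and
aligning their mappings.  As a minimal illustrative sketch in C (not one of
the kernel's example programs), the default huge page size can be read from
/proc/meminfo like this:

	#include <stdio.h>

	/* Return the default huge page size in bytes, or 0 on failure. */
	static unsigned long default_hugepage_size(void)
	{
		FILE *f = fopen("/proc/meminfo", "r");
		char line[128];
		unsigned long kb = 0;

		if (!f)
			return 0;
		while (fgets(line, sizeof(line), f)) {
			/* The field is reported in kB, e.g. "Hugepagesize: 2048 kB" */
			if (sscanf(line, "Hugepagesize: %lu kB", &kb) == 1)
				break;
		}
		fclose(f);
		return kb * 1024;
	}

Mapping lengths and offsets can then be rounded up to a multiple of the
returned size before being passed to mmap.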

/proc/filesystems should also show a filesystem of type "hugetlbfs" configured
in the kernel.

/proc/sys/vm/nr_hugepages indicates the current number of "persistent" huge
pages in the kernel's huge page pool.  "Persistent" huge pages will be
returned to the huge page pool when freed by a task.  A user with root
privileges can dynamically allocate more or free some persistent huge pages
by increasing or decreasing the value of 'nr_hugepages'.

Pages that are used as huge pages are reserved inside the kernel and cannot
be used for other purposes.  Huge pages cannot be swapped out under
memory pressure.

Once a number of huge pages have been pre-allocated to the kernel huge page
pool, a user with appropriate privilege can use either the mmap system call
or shared memory system calls to use the huge pages.  See the discussion of
Using Huge Pages, below.

The administrator can allocate persistent huge pages on the kernel boot
command line by specifying the "hugepages=N" parameter, where 'N' = the
number of huge pages requested.  This is the most reliable method of
allocating huge pages as memory has not yet become fragmented.

Some platforms support multiple huge page sizes.  To allocate huge pages
of a specific size, one must precede the huge pages boot command parameters
with a huge page size selection parameter "hugepagesz=<size>".  <size> must
be specified in bytes with optional scale suffix [kKmMgG].  The default huge
page size may be selected with the "default_hugepagesz=<size>" boot parameter.
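
For example, on an x86_64 system that supports both 2M and 1G pages, a boot
command line like the following (illustrative values) would reserve four 1G
pages and 512 2M pages, and make 1G the default size reported in
/proc/meminfo:

	default_hugepagesz=1G hugepagesz=1G hugepages=4 hugepagesz=2M hugepages=512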

When multiple huge page sizes are supported, /proc/sys/vm/nr_hugepages
indicates the current number of pre-allocated huge pages of the default size.
Thus, one can use the following command to dynamically allocate/deallocate
default sized persistent huge pages:

	echo 20 > /proc/sys/vm/nr_hugepages

This command will try to adjust the number of default sized huge pages in the
huge page pool to 20, allocating or freeing huge pages, as required.

On a NUMA platform, the kernel will attempt to distribute the huge page pool
over the set of all allowed nodes specified by the NUMA memory policy of the
task that modifies nr_hugepages.  The default for the allowed nodes--when the
task has default memory policy--is all on-line nodes with memory.  Allowed
nodes with insufficient available, contiguous memory for a huge page will be
silently skipped when allocating persistent huge pages.  See the discussion
below of the interaction of task memory policy, cpusets and per node attributes
with the allocation and freeing of persistent huge pages.

The success or failure of huge page allocation depends on the amount of
physically contiguous memory that is present in the system at the time of the
allocation attempt.  If the kernel is unable to allocate huge pages from
some nodes in a NUMA system, it will attempt to make up the difference by
allocating extra pages on other nodes with sufficient available contiguous
memory, if any.

System administrators may want to put this command in one of the local rc
init files.  This will enable the kernel to allocate huge pages early in
the boot process when the possibility of getting physically contiguous pages
is still very high.  Administrators can verify the number of huge pages
actually allocated by checking the sysctl or meminfo.  To check the per node
distribution of huge pages in a NUMA system, use:

	cat /sys/devices/system/node/node*/meminfo | fgrep Huge

/proc/sys/vm/nr_overcommit_hugepages specifies how large the pool of
huge pages can grow, if more huge pages than /proc/sys/vm/nr_hugepages are
requested by applications.  Writing any non-zero value into this file
indicates that the hugetlb subsystem is allowed to try to obtain that
number of "surplus" huge pages from the kernel's normal page pool, when the
persistent huge page pool is exhausted.  As these surplus huge pages become
unused, they are freed back to the kernel's normal page pool.
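
For example, to allow up to 20 surplus huge pages of the default size to be
allocated on demand once the persistent pool is exhausted:

	echo 20 > /proc/sys/vm/nr_overcommit_hugepages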

When increasing the huge page pool size via nr_hugepages, any existing surplus
pages will first be promoted to persistent huge pages.  Then, additional
huge pages will be allocated, if necessary and if possible, to fulfill
the new persistent huge page pool size.

The administrator may shrink the pool of persistent huge pages for
the default huge page size by setting the nr_hugepages sysctl to a
smaller value.  The kernel will attempt to balance the freeing of huge pages
across all nodes in the memory policy of the task modifying nr_hugepages.
Any free huge pages on the selected nodes will be freed back to the kernel's
normal page pool.

Caveat: Shrinking the persistent huge page pool via nr_hugepages such that
it becomes less than the number of huge pages in use will convert the balance
of the in-use huge pages to surplus huge pages.  This will occur even if
it causes the number of surplus pages to exceed the overcommit value.  As long
as this condition holds--that is, until nr_hugepages+nr_overcommit_hugepages
is increased sufficiently, or the surplus huge pages go out of use and are
freed--no more surplus huge pages will be allowed to be allocated.

With support for multiple huge page pools at run-time available, much of
the huge page userspace interface in /proc/sys/vm has been duplicated in sysfs.
The /proc interfaces discussed above have been retained for backwards
compatibility.  The root huge page control directory in sysfs is:

	/sys/kernel/mm/hugepages

For each huge page size supported by the running kernel, a subdirectory
will exist, of the form:

	hugepages-${size}kB

Inside each of these directories, the same set of files will exist:

	nr_hugepages
	nr_hugepages_mempolicy
	nr_overcommit_hugepages
	free_hugepages
	resv_hugepages
	surplus_hugepages

which function as described above for the default huge page-sized case.
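
For example, assuming 2MB is the default huge page size, the following two
commands are equivalent ways of setting the persistent pool of 2MB pages
to 256:

	echo 256 > /proc/sys/vm/nr_hugepages
	echo 256 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

The pool for any other supported size is reachable only through its own
hugepages-<size>kB subdirectory.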


Interaction of Task Memory Policy with Huge Page Allocation/Freeing
===================================================================

Whether huge pages are allocated and freed via the /proc interface or
the sysfs interface using the nr_hugepages_mempolicy attribute, the NUMA
nodes from which huge pages are allocated or freed are controlled by the
NUMA memory policy of the task that modifies the nr_hugepages_mempolicy
sysctl or attribute.  When the nr_hugepages attribute is used, mempolicy
is ignored.

The recommended method to allocate or free huge pages to/from the kernel
huge page pool, using the nr_hugepages example above, is:

    numactl --interleave <node-list> echo 20 \
				>/proc/sys/vm/nr_hugepages_mempolicy

or, more succinctly:

    numactl -m <node-list> echo 20 >/proc/sys/vm/nr_hugepages_mempolicy

This will allocate or free abs(20 - nr_hugepages) huge pages to or from the
nodes specified in <node-list>, depending on whether the number of persistent
huge pages is initially less than or greater than 20, respectively.  No huge
pages will be allocated or freed on any node not included in the specified
<node-list>.

When adjusting the persistent hugepage count via nr_hugepages_mempolicy, any
memory policy mode--bind, preferred, local or interleave--may be used.  The
resulting effect on persistent huge page allocation is as follows:

1) Regardless of mempolicy mode [see Documentation/vm/numa_memory_policy.txt],
   persistent huge pages will be distributed across the node or nodes
   specified in the mempolicy as if "interleave" had been specified.
   However, if a node in the policy does not contain sufficient contiguous
   memory for a huge page, the allocation will not "fallback" to the nearest
   neighbor node with sufficient contiguous memory.  To do this would cause
   undesirable imbalance in the distribution of the huge page pool, or
   possibly, allocation of persistent huge pages on nodes not allowed by
   the task's memory policy.

2) One or more nodes may be specified with the bind or interleave policy.
   If more than one node is specified with the preferred policy, only the
   lowest numeric id will be used.  Local policy will select the node where
   the task is running at the time the nodes_allowed mask is constructed.
   For local policy to be deterministic, the task must be bound to a cpu or
   cpus in a single node.  Otherwise, the task could be migrated to some
   other node at any time after launch and the resulting node will be
   indeterminate.  Thus, local policy is not very useful for this purpose.
   Any of the other mempolicy modes may be used to specify a single node.

3) The nodes allowed mask will be derived from any non-default task mempolicy,
   whether this policy was set explicitly by the task itself or one of its
   ancestors, such as numactl.  This means that if the task is invoked from a
   shell with non-default policy, that policy will be used.  One can specify a
   node list of "all" with numactl --interleave or --membind [-m] to achieve
   interleaving over all nodes in the system or cpuset.

4) Any task mempolicy specified--e.g., using numactl--will be constrained by
   the resource limits of any cpuset in which the task runs.  Thus, there will
   be no way for a task with non-default policy running in a cpuset with a
   subset of the system nodes to allocate huge pages outside the cpuset
   without first moving to a cpuset that contains all of the desired nodes.

5) Boot-time huge page allocation attempts to distribute the requested number
   of huge pages over all on-line nodes with memory.

Per Node Hugepages Attributes
=============================

A subset of the contents of the root huge page control directory in sysfs,
described above, will be replicated under the system device of each
NUMA node with memory in:

	/sys/devices/system/node/node[0-9]*/hugepages/

Under this directory, the subdirectory for each supported huge page size
contains the following attribute files:

	nr_hugepages
	free_hugepages
	surplus_hugepages

The free_ and surplus_ attribute files are read-only.  They return the number
of free and surplus [overcommitted] huge pages, respectively, on the parent
node.

The nr_hugepages attribute returns the total number of huge pages on the
specified node.  When this attribute is written, the number of persistent huge
pages on the parent node will be adjusted to the specified value, if sufficient
resources exist, regardless of the task's mempolicy or cpuset constraints.
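
For example, assuming a kernel with 2MB huge pages and a node1 with enough
contiguous memory, the following would set the persistent pool of 2MB pages
on that node to 8:

	echo 8 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages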

Note that the number of overcommit and reserve pages remain global quantities,
as we don't know until fault time, when the faulting task's mempolicy is
applied, from which node the huge page allocation will be attempted.


Using Huge Pages
================

If user applications are going to request huge pages using the mmap system
call, then the system administrator must mount a file system of type
hugetlbfs:

  mount -t hugetlbfs \
	-o uid=<value>,gid=<value>,mode=<value>,size=<value>,nr_inodes=<value> \
	none /mnt/huge

This command mounts a (pseudo) filesystem of type hugetlbfs on the directory
/mnt/huge.  Any file created on /mnt/huge uses huge pages.  The uid and gid
options set the owner and group of the root of the file system.  By default
the uid and gid of the current process are taken.  The mode option sets the
mode of the root of the file system to value & 0777.  This value is given in
octal.  By default the value 0755 is picked.  The size option sets the maximum
amount of memory (huge pages) allowed for that filesystem (/mnt/huge).  The
size is rounded down to HPAGE_SIZE.  The option nr_inodes sets the maximum
number of inodes that /mnt/huge can use.  If the size or nr_inodes option is
not provided on the command line then no limits are set.  For the size and
nr_inodes options, you can use [G|g]/[M|m]/[K|k] to represent giga/mega/kilo.
For example, size=2K has the same meaning as size=2048.
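
As a minimal sketch along the lines of the hugepage-mmap.c example referenced
below--assuming the /mnt/huge mount shown above, a 2MB huge page size and
enough free huge pages in the pool--a file created on the mount can be mapped
like this:

	#include <fcntl.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <unistd.h>

	#define LENGTH (4UL * 2 * 1024 * 1024)	/* four 2MB huge pages */

	int main(void)
	{
		void *addr;
		int fd = open("/mnt/huge/example", O_CREAT | O_RDWR, 0755);

		if (fd < 0) {
			perror("open");
			exit(1);
		}
		/* The mapping is backed entirely by huge pages; its length
		 * must be a multiple of the huge page size. */
		addr = mmap(NULL, LENGTH, PROT_READ | PROT_WRITE,
			    MAP_SHARED, fd, 0);
		if (addr == MAP_FAILED) {
			perror("mmap");
			unlink("/mnt/huge/example");
			exit(1);
		}
		memset(addr, 0, LENGTH);	/* touching the pages faults them in */
		munmap(addr, LENGTH);
		close(fd);
		unlink("/mnt/huge/example");
		return 0;
	}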

While read system calls are supported on files that reside on hugetlb
file systems, write system calls are not.

Regular chown, chgrp, and chmod commands (with the right permissions) can be
used to change the file attributes on hugetlbfs.

Also, it is important to note that no such mount command is required if the
applications are going to use only shmat/shmget system calls or mmap with
MAP_HUGETLB.  Users who wish to use hugetlb pages via shared memory segments
should be members of a supplementary group, and the system admin needs to
configure that gid into /proc/sys/vm/hugetlb_shm_group.  It is possible for
the same or different applications to use any combination of mmaps and shm*
calls, though the filesystem mount will be required for using mmap calls
without MAP_HUGETLB.  For an example of how to use mmap with MAP_HUGETLB see
map_hugetlb.c.
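
As a minimal sketch along the lines of map_hugetlb.c--assuming a kernel with
MAP_HUGETLB support and free huge pages of the default size in the pool--an
anonymous huge page mapping needs no hugetlbfs mount at all:

	#define _GNU_SOURCE
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>
	#include <sys/mman.h>

	#define LENGTH (2UL * 1024 * 1024)	/* one huge page, assuming 2MB default */

	int main(void)
	{
		/* MAP_HUGETLB requests huge pages of the default size
		 * directly from the pool; no hugetlbfs file is involved. */
		void *addr = mmap(NULL, LENGTH, PROT_READ | PROT_WRITE,
				  MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB,
				  -1, 0);

		if (addr == MAP_FAILED) {
			perror("mmap");	/* e.g. no free huge pages in the pool */
			exit(1);
		}
		memset(addr, 0, LENGTH);
		munmap(addr, LENGTH);
		return 0;
	}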

Examples
========

1) map_hugetlb:  see tools/testing/selftests/vm/map_hugetlb.c

2) hugepage-shm:  see tools/testing/selftests/vm/hugepage-shm.c

3) hugepage-mmap:  see tools/testing/selftests/vm/hugepage-mmap.c

4) The libhugetlbfs (http://libhugetlbfs.sourceforge.net) library provides a
   wide range of userspace tools to help with huge page usability, environment
   setup, and control.  Furthermore, it provides useful test cases that should
   be used when modifying code to ensure no regressions are introduced.