
Documentation / vm / numa_memory_policy.txt


Based on kernel version 3.13.

What is Linux Memory Policy?

In the Linux kernel, "memory policy" determines from which node the kernel will
allocate memory in a NUMA system or in an emulated NUMA system.  Linux has
supported platforms with Non-Uniform Memory Access architectures since 2.4.?.
The current memory policy support was added to Linux 2.6 around May 2004.  This
document attempts to describe the concepts and APIs of the 2.6 memory policy
support.

Memory policies should not be confused with cpusets
(Documentation/cgroups/cpusets.txt), which is an administrative mechanism for
restricting the nodes from which memory may be allocated by a set of processes.
Memory policies are a programming interface that a NUMA-aware application can
take advantage of.  When both cpusets and policies are applied to a task, the
restrictions of the cpuset take priority.  See "MEMORY POLICIES AND CPUSETS"
below for more details.

MEMORY POLICY CONCEPTS

Scope of Memory Policies

The Linux kernel supports _scopes_ of memory policy, described here from
most general to most specific:

    System Default Policy:  this policy is "hard coded" into the kernel.  It
    is the policy that governs all page allocations that aren't controlled
    by one of the more specific policy scopes discussed below.  When the
    system is "up and running", the system default policy will use "local
    allocation" described below.  However, during boot up, the system
    default policy will be set to interleave allocations across all nodes
    with "sufficient" memory, so as not to overload the initial boot node
    with boot-time allocations.

    Task/Process Policy:  this is an optional, per-task policy.  When defined
    for a specific task, this policy controls all page allocations made by or
    on behalf of the task that aren't controlled by a more specific scope.
    If a task does not define a task policy, then all page allocations that
    would have been controlled by the task policy "fall back" to the System
    Default Policy.

        The task policy applies to the entire address space of a task.  Thus,
        it is inheritable, and indeed is inherited, across both fork()
        [clone() w/o the CLONE_VM flag] and exec*().  This allows a parent task
        to establish the task policy for a child task exec()'d from an
        executable image that has no awareness of memory policy.  See the
        MEMORY POLICY APIs section, below, for an overview of the system call
        that a task may use to set/change its task/process policy.

        In a multi-threaded task, task policies apply only to the thread
        [Linux kernel task] that installs the policy and any threads
        subsequently created by that thread.  Any sibling threads existing
        at the time a new task policy is installed retain their current
        policy.

        A task policy applies only to pages allocated after the policy is
        installed.  Any pages already faulted in by the task when the task
        changes its task policy remain where they were allocated based on
        the policy at the time they were allocated.

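        Because the task policy is inherited across exec(), a NUMA-aware
        launcher can establish a policy for a program that knows nothing
        about memory policy.  The following minimal sketch is an
        illustration only: the node numbers are hypothetical, and the
        set_mempolicy() wrapper and MPOL_BIND constant come from the
        separate userspace package (numactl/libnuma) noted in the MEMORY
        POLICY APIs section below; build with -lnuma.

            /* Hypothetical launcher: bind an unaware child to nodes 0-1. */
            #include <numaif.h>        /* set_mempolicy(), MPOL_BIND */
            #include <stdio.h>
            #include <unistd.h>

            int main(int argc, char *argv[])
            {
                unsigned long nodemask = (1UL << 0) | (1UL << 1);

                if (argc < 2) {
                    fprintf(stderr, "usage: %s program [args...]\n", argv[0]);
                    return 1;
                }

                /* The task policy survives exec(), so install it here ... */
                if (set_mempolicy(MPOL_BIND, &nodemask,
                                  sizeof(nodemask) * 8) != 0) {
                    perror("set_mempolicy");
                    return 1;
                }

                /* ... and the exec'd image allocates under MPOL_BIND. */
                execvp(argv[1], &argv[1]);
                perror("execvp");
                return 1;
            }
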
    VMA Policy:  A "VMA" or "Virtual Memory Area" refers to a range of a task's
    virtual address space.  A task may define a specific policy for a range
    of its virtual address space.  See the MEMORY POLICY APIs section,
    below, for an overview of the mbind() system call used to set a VMA
    policy.

    A VMA policy will govern the allocation of pages that back this region of
    the address space.  Any regions of the task's address space that don't
    have an explicit VMA policy will fall back to the task policy, which may
    itself fall back to the System Default Policy.

    VMA policies have a few complicating details:

        VMA policy applies ONLY to anonymous pages.  These include pages
        allocated for anonymous segments, such as the task stack and heap, and
        any regions of the address space mmap()ed with the MAP_ANONYMOUS flag.
        If a VMA policy is applied to a file mapping, it will be ignored if
        the mapping used the MAP_SHARED flag.  If the file mapping used the
        MAP_PRIVATE flag, the VMA policy will only be applied when an
        anonymous page is allocated on an attempt to write to the mapping--
        i.e., at Copy-On-Write.

        VMA policies are shared between all tasks that share a virtual address
        space--a.k.a. threads--independent of when the policy is installed; and
        they are inherited across fork().  However, because VMA policies refer
        to a specific region of a task's address space, and because the address
        space is discarded and recreated on exec*(), VMA policies are NOT
        inheritable across exec().  Thus, only NUMA-aware applications may
        use VMA policies.

        A task may install a new VMA policy on a sub-range of a previously
        mmap()ed region.  When this happens, Linux splits the existing virtual
        memory area into 2 or 3 VMAs, each with its own policy.

        By default, VMA policy applies only to pages allocated after the policy
        is installed.  Any pages already faulted into the VMA range remain
        where they were allocated based on the policy at the time they were
        allocated.  However, since 2.6.16, Linux supports page migration via
        the mbind() system call, so that page contents can be moved to match
        a newly installed policy.

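        The following sketch illustrates the mbind() usage just described; it
        is not part of the kernel documentation proper.  The node numbers and
        region size are hypothetical, and the mbind() wrapper and constants
        come from the userspace numactl/libnuma package; build with -lnuma.

            /* Sketch: interleave an anonymous mapping over (hypothetical)
             * nodes 0 and 2, and migrate any pages already faulted in
             * (MPOL_MF_MOVE, available since 2.6.16).
             */
            #include <numaif.h>
            #include <sys/mman.h>
            #include <stdio.h>

            int main(void)
            {
                size_t len = 64UL * 1024 * 1024;
                unsigned long nodes = (1UL << 0) | (1UL << 2);
                void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

                if (buf == MAP_FAILED) {
                    perror("mmap");
                    return 1;
                }

                /* Pages faulted in after this call are interleaved over
                 * 'nodes'; MPOL_MF_MOVE also migrates pages already there. */
                if (mbind(buf, len, MPOL_INTERLEAVE, &nodes,
                          sizeof(nodes) * 8, MPOL_MF_MOVE) != 0) {
                    perror("mbind");
                    return 1;
                }
                return 0;
            }
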
    Shared Policy:  Conceptually, shared policies apply to "memory objects"
    mapped shared into one or more tasks' distinct address spaces.  An
    application installs shared policies the same way as VMA policies--using
    the mbind() system call specifying a range of virtual addresses that map
    the shared object.  However, unlike VMA policies, which can be considered
    to be an attribute of a range of a task's address space, shared policies
    apply directly to the shared object.  Thus, all tasks that attach to the
    object share the policy, and all pages allocated for the shared object,
    by any task, will obey the shared policy.

        As of 2.6.22, only shared memory segments, created by shmget() or
        mmap(MAP_ANONYMOUS|MAP_SHARED), support shared policy.  When shared
        policy support was added to Linux, the associated data structures were
        added to hugetlbfs shmem segments.  At the time, hugetlbfs did not
        support allocation at fault time--a.k.a. lazy allocation--so hugetlbfs
        shmem segments were never "hooked up" to the shared policy support.
        Although hugetlbfs segments now support lazy allocation, their support
        for shared policy has not been completed.

        As mentioned above [re: VMA policies], allocations of page cache
        pages for regular files mmap()ed with MAP_SHARED ignore any VMA
        policy installed on the virtual address range backed by the shared
        file mapping.  Rather, shared page cache pages, including pages backing
        private mappings that have not yet been written by the task, follow
        task policy, if any, else System Default Policy.

        The shared policy infrastructure supports different policies on subset
        ranges of the shared object.  However, Linux still splits the VMA of
        the task that installs the policy for each range of distinct policy.
        Thus, different tasks that attach to a shared memory segment can have
        different VMA configurations mapping that one shared object.  This
        can be seen by examining the /proc/<pid>/numa_maps of tasks sharing
        a shared memory region, when one task has installed shared policy on
        one or more ranges of the region.

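        As an illustration only (node number hypothetical; mbind() wrapper and
        constants from the userspace numactl/libnuma package, built with
        -lnuma), one task might install a shared policy on a shared anonymous
        segment so that every task mapping the segment allocates its pages on
        the same node:

            /* Sketch: attach a Bind-to-node-1 shared policy to a
             * MAP_SHARED|MAP_ANONYMOUS segment.  All tasks that map the
             * segment, e.g. children created by fork(), allocate its
             * pages under this policy.
             */
            #include <numaif.h>
            #include <sys/mman.h>
            #include <stdio.h>

            int main(void)
            {
                size_t len = 16UL * 1024 * 1024;
                unsigned long node1 = 1UL << 1;
                void *seg = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                 MAP_SHARED | MAP_ANONYMOUS, -1, 0);

                if (seg == MAP_FAILED) {
                    perror("mmap");
                    return 1;
                }

                /* Because the mapping is shared, the policy attaches to
                 * the shared object itself, not just this task's VMA.   */
                if (mbind(seg, len, MPOL_BIND, &node1,
                          sizeof(node1) * 8, 0) != 0) {
                    perror("mbind");
                    return 1;
                }
                /* ... fork() here; parent and child both obey the policy. */
                return 0;
            }
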
Components of Memory Policies

    A Linux memory policy consists of a "mode", optional mode flags, and an
    optional set of nodes.  The mode determines the behavior of the policy,
    the optional mode flags determine the behavior of the mode, and the
    optional set of nodes can be viewed as the arguments to the policy
    behavior.

    Internally, memory policies are implemented by a reference counted
    structure, struct mempolicy.  Details of this structure will be discussed
    in context, below, as required to explain the behavior.

    Linux memory policy supports the following 4 behavioral modes:

        Default Mode--MPOL_DEFAULT:  This mode is only used in the memory
        policy APIs.  Internally, MPOL_DEFAULT is converted to the NULL
        memory policy in all policy scopes.  Any existing non-default policy
        will simply be removed when MPOL_DEFAULT is specified.  As a result,
        MPOL_DEFAULT means "fall back to the next most specific policy scope."

            For example, a NULL or default task policy will fall back to the
            system default policy.  A NULL or default vma policy will fall
            back to the task policy.

            When specified in one of the memory policy APIs, the Default mode
            does not use the optional set of nodes.

            It is an error for the set of nodes specified for this policy to
            be non-empty.

        MPOL_BIND:  This mode specifies that memory must come from the
        set of nodes specified by the policy.  Memory will be allocated from
        the node in the set with sufficient free memory that is closest to
        the node where the allocation takes place.

        MPOL_PREFERRED:  This mode specifies that the allocation should be
        attempted from the single node specified in the policy.  If that
        allocation fails, the kernel will search other nodes, in order of
        increasing distance from the preferred node, based on information
        provided by the platform firmware.

            Internally, the Preferred policy uses a single node--the
            preferred_node member of struct mempolicy.  When the internal
            mode flag MPOL_F_LOCAL is set, the preferred_node is ignored and
            the policy is interpreted as local allocation.  "Local" allocation
            policy can be viewed as a Preferred policy that starts at the node
            containing the cpu where the allocation takes place.

            It is possible for the user to specify that local allocation is
            always preferred by passing an empty nodemask with this mode.
            If an empty nodemask is passed, the policy cannot use the
            MPOL_F_STATIC_NODES or MPOL_F_RELATIVE_NODES flags described
            below.

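        For example (a sketch only, using the set_mempolicy() wrapper from
        the userspace numactl/libnuma package, built with -lnuma), a task
        can request local allocation by passing a NULL/empty nodemask with
        MPOL_PREFERRED:

            /* Sketch: Preferred mode with an empty nodemask == local
             * allocation; internally the kernel sets MPOL_F_LOCAL.     */
            #include <numaif.h>
            #include <stdio.h>

            int main(void)
            {
                if (set_mempolicy(MPOL_PREFERRED, NULL, 0) != 0)
                    perror("set_mempolicy");
                return 0;
            }
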
        MPOL_INTERLEAVED:  This mode specifies that page allocations be
        interleaved, on a page granularity, across the nodes specified in
        the policy.  This mode also behaves slightly differently, based on
        the context where it is used:

            For allocation of anonymous pages and shared memory pages,
            Interleave mode indexes the set of nodes specified by the policy
            using the page offset of the faulting address into the segment
            [VMA] containing the address modulo the number of nodes specified
            by the policy.  It then attempts to allocate a page, starting at
            the selected node, as if the node had been specified by a Preferred
            policy or had been selected by a local allocation.  That is,
            allocation will follow the per node zonelist.

            For allocation of page cache pages, Interleave mode indexes the set
            of nodes specified by the policy using a node counter maintained
            per task.  This counter wraps around to the lowest specified node
            after it reaches the highest specified node.  This will tend to
            spread the pages out over the nodes specified by the policy based
            on the order in which they are allocated, rather than based on any
            page offset into an address range or file.  During system boot up,
            the temporary interleaved system default policy works in this
            mode.

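        The node-selection rule for anonymous and shared memory pages
        described above can be written out as a small sketch.  This is a
        simplification for illustration, not the kernel's actual code; it
        assumes 4 KiB pages and a contiguous list of policy nodes:

            #include <stddef.h>

            #define PAGE_SHIFT 12   /* assume 4 KiB pages */

            /* 'nodes' lists the nodes in the Interleave policy, 'nnodes'
             * is how many there are.  The page offset of the faulting
             * address within the VMA, modulo the node count, selects the
             * node at which allocation starts.                           */
            static int interleave_node(unsigned long fault_addr,
                                       unsigned long vma_start,
                                       const int *nodes, int nnodes)
            {
                unsigned long page_index =
                        (fault_addr - vma_start) >> PAGE_SHIFT;

                return nodes[page_index % nnodes];
            }
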
    Linux memory policy supports the following optional mode flags:

        MPOL_F_STATIC_NODES:  This flag specifies that the nodemask passed by
        the user should not be remapped if the task or VMA's set of allowed
        nodes changes after the memory policy has been defined.

            Without this flag, anytime a mempolicy is rebound because of a
            change in the set of allowed nodes, the node (Preferred) or
            nodemask (Bind, Interleave) is remapped to the new set of
            allowed nodes.  This may result in nodes being used that were
            previously undesired.

            With this flag, if the user-specified nodes overlap with the
            nodes allowed by the task's cpuset, then the memory policy is
            applied to their intersection.  If the two sets of nodes do not
            overlap, the Default policy is used.

            For example, consider a task that is attached to a cpuset with
            mems 1-3 that sets an Interleave policy over the same set.  If
            the cpuset's mems change to 3-5, the Interleave will now occur
            over nodes 3, 4, and 5.  With this flag, however, since only node
            3 is allowed from the user's nodemask, the "interleave" only
            occurs over that node.  If no nodes from the user's nodemask are
            now allowed, the Default behavior is used.

            MPOL_F_STATIC_NODES cannot be combined with the
            MPOL_F_RELATIVE_NODES flag.  It also cannot be used for
            MPOL_PREFERRED policies that were created with an empty nodemask
            (local allocation).

        MPOL_F_RELATIVE_NODES:  This flag specifies that the nodemask passed
        by the user will be mapped relative to the task's or VMA's set of
        allowed nodes.  The kernel stores the user-passed nodemask, and if
        the set of allowed nodes changes, then that original nodemask will
        be remapped relative to the new set of allowed nodes.

            Without this flag (and without MPOL_F_STATIC_NODES), anytime a
            mempolicy is rebound because of a change in the set of allowed
            nodes, the node (Preferred) or nodemask (Bind, Interleave) is
            remapped to the new set of allowed nodes.  That remap may not
            preserve the relative nature of the user's passed nodemask to its
            set of allowed nodes upon successive rebinds: a nodemask of
            1,3,5 may be remapped to 7-9 and then to 1-3 if the set of
            allowed nodes is restored to its original state.

            With this flag, the remap is done so that the node numbers from
            the user's passed nodemask are relative to the set of allowed
            nodes.  In other words, if nodes 0, 2, and 4 are set in the user's
            nodemask, the policy will be effected over the first (and in the
            Bind or Interleave case, the third and fifth) nodes in the set of
            allowed nodes.  The nodemask passed by the user represents nodes
            relative to the task's or VMA's set of allowed nodes.

            If the user's nodemask includes nodes that are outside the range
            of the new set of allowed nodes (for example, node 5 is set in
            the user's nodemask when the set of allowed nodes is only 0-3),
            then the remap wraps around to the beginning of the nodemask and,
            if not already set, sets the node in the mempolicy nodemask.

            For example, consider a task that is attached to a cpuset with
            mems 2-5 that sets an Interleave policy over the same set with
            MPOL_F_RELATIVE_NODES.  If the cpuset's mems change to 3-7, the
            interleave now occurs over nodes 3,5-6.  If the cpuset's mems
            then change to 0,2-3,5, then the interleave occurs over nodes
            0,3,5.

            Thanks to the consistent remapping, applications preparing
            nodemasks to specify memory policies using this flag should
            disregard their current, actual cpuset imposed memory placement
            and prepare the nodemask as if they were always located on
            memory nodes 0 to N-1, where N is the number of memory nodes the
            policy is intended to manage.  Let the kernel then remap to the
            set of memory nodes allowed by the task's cpuset, as that may
            change over time.

            MPOL_F_RELATIVE_NODES cannot be combined with the
            MPOL_F_STATIC_NODES flag.  It also cannot be used for
            MPOL_PREFERRED policies that were created with an empty nodemask
            (local allocation).

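        Following the advice above, an application that wants to interleave
        over its first four allowed nodes, wherever its cpuset places it now
        or later, could do something like the sketch below (illustration
        only; wrapper and constants from the userspace numactl/libnuma
        package, built with -lnuma):

            /* Sketch: prepare the nodemask as if on nodes 0-3 and let
             * MPOL_F_RELATIVE_NODES map it onto the allowed nodes.     */
            #include <numaif.h>
            #include <stdio.h>

            int main(void)
            {
                unsigned long relmask =
                        (1UL << 0) | (1UL << 1) | (1UL << 2) | (1UL << 3);

                if (set_mempolicy(MPOL_INTERLEAVE | MPOL_F_RELATIVE_NODES,
                                  &relmask, sizeof(relmask) * 8) != 0)
                    perror("set_mempolicy");
                return 0;
            }
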
MEMORY POLICY REFERENCE COUNTING

To resolve use/free races, struct mempolicy contains an atomic reference
count field.  Internal interfaces, mpol_get()/mpol_put(), increment and
decrement this reference count, respectively.  mpol_put() will only free
the structure back to the mempolicy kmem cache when the reference count
goes to zero.

When a new memory policy is allocated, its reference count is initialized
to '1', representing the reference held by the task that is installing the
new policy.  When a pointer to a memory policy structure is stored in another
structure, another reference is added, as the task's reference will be dropped
on completion of the policy installation.

During run-time "usage" of the policy, we attempt to minimize atomic operations
on the reference count, as this can lead to cache lines bouncing between cpus
and NUMA nodes.  "Usage" here means one of the following:

1) querying of the policy, either by the task itself [using the get_mempolicy()
   API discussed below] or by another task using the /proc/<pid>/numa_maps
   interface.

2) examination of the policy to determine the policy mode and associated node
   or node lists, if any, for page allocation.  This is considered a "hot
   path".  Note that for MPOL_BIND, the "usage" extends across the entire
   allocation process, which may sleep during page reclamation, because the
   BIND policy nodemask is used, by reference, to filter ineligible nodes.

We can avoid taking an extra reference during the usages listed above as
follows:

1) we never need to get/free the system default policy as this is never
   changed nor freed, once the system is up and running.

2) for querying the policy, we do not need to take an extra reference on the
   target task's task policy nor vma policies because we always acquire the
   task's mm's mmap_sem for read during the query.  The set_mempolicy() and
   mbind() APIs [see below] always acquire the mmap_sem for write when
   installing or replacing task or vma policies.  Thus, there is no possibility
   of a task or thread freeing a policy while another task or thread is
   querying it.

3) Page allocation usage of task or vma policy occurs in the fault path where
   we hold the mmap_sem for read.  Again, because replacing the task or vma
   policy requires that the mmap_sem be held for write, the policy can't be
   freed out from under us while we're using it for page allocation.

4) Shared policies require special consideration.  One task can replace a
   shared memory policy while another task, with a distinct mmap_sem, is
   querying or allocating a page based on the policy.  To resolve this
   potential race, the shared policy infrastructure adds an extra reference
   to the shared policy during lookup while holding a spin lock on the shared
   policy management structure.  This requires that we drop this extra
   reference when we're finished "using" the policy.  We must drop the
   extra reference on shared policies in the same query/allocation paths
   used for non-shared policies.  For this reason, shared policies are marked
   as such, and the extra reference is dropped "conditionally"--i.e., only
   for shared policies.

   Because of this extra reference counting, and because we must lookup
   shared policies in a tree structure under spinlock, shared policies are
   more expensive to use in the page allocation path.  This is especially
   true for shared policies on shared memory regions shared by tasks running
   on different NUMA nodes.  This extra overhead can be avoided by always
   falling back to task or system default policy for shared memory regions,
   or by prefaulting the entire shared memory region into memory and locking
   it down.  However, this might not be appropriate for all applications.

MEMORY POLICY APIs

Linux supports 3 system calls for controlling memory policy.  These APIs
always affect only the calling task, the calling task's address space, or
some shared object mapped into the calling task's address space.

        Note:  the headers that define these APIs and the parameter data types
        for user space applications reside in a package that is not part of
        the Linux kernel.  The kernel system call interfaces, with the 'sys_'
        prefix, are defined in <linux/syscalls.h>; the mode and flag
        definitions are defined in <linux/mempolicy.h>.

Set [Task] Memory Policy:

        long set_mempolicy(int mode, const unsigned long *nmask,
                           unsigned long maxnode);

        Sets the calling task's "task/process memory policy" to the mode
        specified by the 'mode' argument and the set of nodes defined by
        'nmask'.  'nmask' points to a bit mask of node ids containing at
        least 'maxnode' ids.  Optional mode flags may be passed by
        combining the 'mode' argument with the flag (for example:
        MPOL_INTERLEAVE | MPOL_F_STATIC_NODES).

        See the set_mempolicy(2) man page for more details.

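        A brief sketch of such a call (node numbers hypothetical; the wrapper
        and constants come from the userspace package mentioned in the note
        above, typically numactl/libnuma, built with -lnuma):

            /* Sketch: interleave this task's allocations over nodes 0-2 and
             * keep that nodemask fixed even if the cpuset later changes.  */
            #include <numaif.h>
            #include <stdio.h>

            int main(void)
            {
                unsigned long nodes = (1UL << 0) | (1UL << 1) | (1UL << 2);

                if (set_mempolicy(MPOL_INTERLEAVE | MPOL_F_STATIC_NODES,
                                  &nodes, sizeof(nodes) * 8) != 0)
                    perror("set_mempolicy");
                return 0;
            }
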
Get [Task] Memory Policy or Related Information

        long get_mempolicy(int *mode,
                           const unsigned long *nmask, unsigned long maxnode,
                           void *addr, int flags);

        Queries the "task/process memory policy" of the calling task, or
        the policy or location of a specified virtual address, depending
        on the 'flags' argument.

        See the get_mempolicy(2) man page for more details.

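        As a sketch (again relying on the userspace wrapper; the MPOL_F_ADDR
        flag is documented in get_mempolicy(2)), a task can ask which policy
        governs a particular address in its own address space:

            /* Sketch: query the policy in effect at one address. */
            #include <numaif.h>
            #include <stdio.h>

            int main(void)
            {
                int mode;
                unsigned long nodes[16] = { 0 };  /* room for 1024 node bits */
                int probe = 0;                    /* any mapped address works */

                if (get_mempolicy(&mode, nodes, sizeof(nodes) * 8,
                                  &probe, MPOL_F_ADDR) != 0) {
                    perror("get_mempolicy");
                    return 1;
                }
                printf("mode=%d first nodemask word=0x%lx\n", mode, nodes[0]);
                return 0;
            }
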
Install VMA/Shared Policy for a Range of Task's Address Space

        long mbind(void *start, unsigned long len, int mode,
                   const unsigned long *nmask, unsigned long maxnode,
                   unsigned flags);

        mbind() installs the policy specified by (mode, nmask, maxnode) as
        a VMA policy for the range of the calling task's address space
        specified by the 'start' and 'len' arguments.  Additional actions
        may be requested via the 'flags' argument.

        See the mbind(2) man page for more details.

MEMORY POLICY COMMAND LINE INTERFACE

Although not strictly part of the Linux implementation of memory policy,
a command line tool, numactl(8), exists that allows one to:

+ set the task policy for a specified program via set_mempolicy(2), fork(2) and
  exec(2)

+ set the shared policy for a shared memory segment via mbind(2)

The numactl(8) tool is packaged with the run-time version of the library
containing the memory policy system call wrappers.  Some distributions
package the headers and compile-time libraries in a separate development
package.

MEMORY POLICIES AND CPUSETS

Memory policies work within cpusets as described above.  For memory policies
that require a node or set of nodes, the nodes are restricted to the set of
nodes whose memories are allowed by the cpuset constraints.  If the nodemask
specified for the policy contains nodes that are not allowed by the cpuset and
MPOL_F_RELATIVE_NODES is not used, the intersection of the set of nodes
specified for the policy and the set of nodes with memory is used.  If the
result is the empty set, the policy is considered invalid and cannot be
installed.  If MPOL_F_RELATIVE_NODES is used, the policy's nodes are mapped
onto and folded into the task's set of allowed nodes as previously described.

The interaction of memory policies and cpusets can be problematic when tasks
in two cpusets share access to a memory region, such as shared memory segments
created by shmget() or mmap() with the MAP_ANONYMOUS and MAP_SHARED flags.  If
any of the tasks installs shared policy on the region, only nodes whose
memories are allowed in both cpusets may be used in the policies.  Obtaining
this information requires "stepping outside" the memory policy APIs to use the
cpuset information and requires that one know in what cpusets other tasks might
be attaching to the shared region.  Furthermore, if the cpusets' allowed
memory sets are disjoint, "local" allocation is the only valid policy.