1				 ============================
2				 LINUX KERNEL MEMORY BARRIERS
3				 ============================
4	
5	By: David Howells <dhowells@redhat.com>
6	    Paul E. McKenney <paulmck@linux.vnet.ibm.com>
7	
8	Contents:
9	
10	 (*) Abstract memory access model.
11	
12	     - Device operations.
13	     - Guarantees.
14	
15	 (*) What are memory barriers?
16	
17	     - Varieties of memory barrier.
18	     - What may not be assumed about memory barriers?
19	     - Data dependency barriers.
20	     - Control dependencies.
21	     - SMP barrier pairing.
22	     - Examples of memory barrier sequences.
23	     - Read memory barriers vs load speculation.
24	     - Transitivity.
25	
26	 (*) Explicit kernel barriers.
27	
28	     - Compiler barrier.
29	     - CPU memory barriers.
30	     - MMIO write barrier.
31	
32	 (*) Implicit kernel memory barriers.
33	
34	     - Locking functions.
35	     - Interrupt disabling functions.
36	     - Sleep and wake-up functions.
37	     - Miscellaneous functions.
38	
39	 (*) Inter-CPU locking barrier effects.
40	
41	     - Locks vs memory accesses.
42	     - Locks vs I/O accesses.
43	
44	 (*) Where are memory barriers needed?
45	
46	     - Interprocessor interaction.
47	     - Atomic operations.
48	     - Accessing devices.
49	     - Interrupts.
50	
51	 (*) Kernel I/O barrier effects.
52	
53	 (*) Assumed minimum execution ordering model.
54	
55	 (*) The effects of the cpu cache.
56	
57	     - Cache coherency.
58	     - Cache coherency vs DMA.
59	     - Cache coherency vs MMIO.
60	
61	 (*) The things CPUs get up to.
62	
63	     - And then there's the Alpha.
64	
65	 (*) Example uses.
66	
67	     - Circular buffers.
68	
69	 (*) References.
70	
71	
72	============================
73	ABSTRACT MEMORY ACCESS MODEL
74	============================
75	
76	Consider the following abstract model of the system:
77	
78			            :                :
79			            :                :
80			            :                :
81			+-------+   :   +--------+   :   +-------+
82			|       |   :   |        |   :   |       |
83			|       |   :   |        |   :   |       |
84			| CPU 1 |<----->| Memory |<----->| CPU 2 |
85			|       |   :   |        |   :   |       |
86			|       |   :   |        |   :   |       |
87			+-------+   :   +--------+   :   +-------+
88			    ^       :       ^        :       ^
89			    |       :       |        :       |
90			    |       :       |        :       |
91			    |       :       v        :       |
92			    |       :   +--------+   :       |
93			    |       :   |        |   :       |
94			    |       :   |        |   :       |
95			    +---------->| Device |<----------+
96			            :   |        |   :
97			            :   |        |   :
98			            :   +--------+   :
99			            :                :
100	
101	Each CPU executes a program that generates memory access operations.  In the
102	abstract CPU, memory operation ordering is very relaxed, and a CPU may actually
103	perform the memory operations in any order it likes, provided program causality
104	appears to be maintained.  Similarly, the compiler may also arrange the
105	instructions it emits in any order it likes, provided it doesn't affect the
106	apparent operation of the program.
107	
108	So in the above diagram, the effects of the memory operations performed by a
109	CPU are perceived by the rest of the system as the operations cross the
110	interface between the CPU and rest of the system (the dotted lines).
111	
112	
113	For example, consider the following sequence of events:
114	
115		CPU 1		CPU 2
116		===============	===============
117		{ A == 1; B == 2 }
118		A = 3;		x = B;
119		B = 4;		y = A;
120	
121	The set of accesses as seen by the memory system in the middle can be arranged
122	in 24 different combinations:
123	
124		STORE A=3,	STORE B=4,	y=LOAD A->3,	x=LOAD B->4
125		STORE A=3,	STORE B=4,	x=LOAD B->4,	y=LOAD A->3
126		STORE A=3,	y=LOAD A->3,	STORE B=4,	x=LOAD B->4
127		STORE A=3,	y=LOAD A->3,	x=LOAD B->2,	STORE B=4
128		STORE A=3,	x=LOAD B->2,	STORE B=4,	y=LOAD A->3
129		STORE A=3,	x=LOAD B->2,	y=LOAD A->3,	STORE B=4
130		STORE B=4,	STORE A=3,	y=LOAD A->3,	x=LOAD B->4
131		STORE B=4, ...
132		...
133	
134	and can thus result in four different combinations of values:
135	
136		x == 2, y == 1
137		x == 2, y == 3
138		x == 4, y == 1
139		x == 4, y == 3
140	
141	
142	Furthermore, the stores committed by a CPU to the memory system may not be
143	perceived by the loads made by another CPU in the same order as the stores were
144	committed.
145	
146	
147	As a further example, consider this sequence of events:
148	
149		CPU 1		CPU 2
150		===============	===============
151		{ A == 1, B == 2, C == 3, P == &A, Q == &C }
152		B = 4;		Q = P;
153		P = &B;		D = *Q;
154	
155	There is an obvious data dependency here, as the value loaded into D depends on
156	the address retrieved from P by CPU 2.  At the end of the sequence, any of the
157	following results are possible:
158	
159		(Q == &A) and (D == 1)
160		(Q == &B) and (D == 2)
161		(Q == &B) and (D == 4)
162	
163	Note that CPU 2 will never try to load C into D because the CPU will load P
164	into Q before issuing the load of *Q.
165	
166	
167	DEVICE OPERATIONS
168	-----------------
169	
170	Some devices present their control interfaces as collections of memory
171	locations, but the order in which the control registers are accessed is very
172	important.  For instance, imagine an ethernet card with a set of internal
173	registers that are accessed through an address port register (A) and a data
174	port register (D).  To read internal register 5, the following code might then
175	be used:
176	
177		*A = 5;
178		x = *D;
179	
180	but this might show up as either of the following two sequences:
181	
182		STORE *A = 5, x = LOAD *D
183		x = LOAD *D, STORE *A = 5
184	
185	the second of which will almost certainly result in a malfunction, since it sets
186	the address _after_ attempting to read the register.
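
In the kernel proper, such registers would be accessed through the MMIO
accessor functions rather than plain pointer dereferences.  A minimal sketch,
assuming hypothetical port offsets CARD_ADDR_PORT and CARD_DATA_PORT and an
I/O window through which writel() and readl() reach the device in program
order (see the "Kernel I/O barrier effects" section):

	static u32 card_read_reg(void __iomem *ioaddr, u32 reg)
	{
		writel(reg, ioaddr + CARD_ADDR_PORT);	/* select internal register */
		return readl(ioaddr + CARD_DATA_PORT);	/* then read its contents */
	}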
187	
188	
189	GUARANTEES
190	----------
191	
192	There are some minimal guarantees that may be expected of a CPU:
193	
194	 (*) On any given CPU, dependent memory accesses will be issued in order, with
195	     respect to itself.  This means that for:
196	
197		ACCESS_ONCE(Q) = P; smp_read_barrier_depends(); D = ACCESS_ONCE(*Q);
198	
199	     the CPU will issue the following memory operations:
200	
201		Q = LOAD P, D = LOAD *Q
202	
203	     and always in that order.  On most systems, smp_read_barrier_depends()
204	     does nothing, but it is required for DEC Alpha.  The ACCESS_ONCE()
205	     is required to prevent compiler mischief.  Please note that you
206	     should normally use something like rcu_dereference() instead of
207	     open-coding smp_read_barrier_depends().
208	
209	 (*) Overlapping loads and stores within a particular CPU will appear to be
210	     ordered within that CPU.  This means that for:
211	
212		a = ACCESS_ONCE(*X); ACCESS_ONCE(*X) = b;
213	
214	     the CPU will only issue the following sequence of memory operations:
215	
216		a = LOAD *X, STORE *X = b
217	
218	     And for:
219	
220		ACCESS_ONCE(*X) = c; d = ACCESS_ONCE(*X);
221	
222	     the CPU will only issue:
223	
224		STORE *X = c, d = LOAD *X
225	
226	     (Loads and stores overlap if they are targeted at overlapping pieces of
227	     memory).
228	
229	And there are a number of things that _must_ or _must_not_ be assumed:
230	
231	 (*) It _must_not_ be assumed that the compiler will do what you want with
232	     memory references that are not protected by ACCESS_ONCE().  Without
233	     ACCESS_ONCE(), the compiler is within its rights to do all sorts
234	     of "creative" transformations, which are covered in the Compiler
235	     Barrier section.
236	
237	 (*) It _must_not_ be assumed that independent loads and stores will be issued
238	     in the order given.  This means that for:
239	
240		X = *A; Y = *B; *D = Z;
241	
242	     we may get any of the following sequences:
243	
244		X = LOAD *A,  Y = LOAD *B,  STORE *D = Z
245		X = LOAD *A,  STORE *D = Z, Y = LOAD *B
246		Y = LOAD *B,  X = LOAD *A,  STORE *D = Z
247		Y = LOAD *B,  STORE *D = Z, X = LOAD *A
248		STORE *D = Z, X = LOAD *A,  Y = LOAD *B
249		STORE *D = Z, Y = LOAD *B,  X = LOAD *A
250	
251	 (*) It _must_ be assumed that overlapping memory accesses may be merged or
252	     discarded.  This means that for:
253	
254		X = *A; Y = *(A + 4);
255	
256	     we may get any one of the following sequences:
257	
258		X = LOAD *A; Y = LOAD *(A + 4);
259		Y = LOAD *(A + 4); X = LOAD *A;
260		{X, Y} = LOAD {*A, *(A + 4) };
261	
262	     And for:
263	
264		*A = X; *(A + 4) = Y;
265	
266	     we may get any of:
267	
268		STORE *A = X; STORE *(A + 4) = Y;
269		STORE *(A + 4) = Y; STORE *A = X;
270		STORE {*A, *(A + 4) } = {X, Y};
271	
272	And there are anti-guarantees:
273	
274	 (*) These guarantees do not apply to bitfields, because compilers often
275	     generate code to modify these using non-atomic read-modify-write
276	     sequences.  Do not attempt to use bitfields to synchronize parallel
277	     algorithms.  (See the sketch after this list.)
278	
279	 (*) Even in cases where bitfields are protected by locks, all fields
280	     in a given bitfield must be protected by one lock.  If two fields
281	     in a given bitfield are protected by different locks, the compiler's
282	     non-atomic read-modify-write sequences can cause an update to one
283	     field to corrupt the value of an adjacent field.
284	
285	 (*) These guarantees apply only to properly aligned and sized scalar
286	     variables.  "Properly sized" currently means variables that are
287	     the same size as "char", "short", "int" and "long".  "Properly
288	     aligned" means the natural alignment, thus no constraints for
289	     "char", two-byte alignment for "short", four-byte alignment for
290	     "int", and either four-byte or eight-byte alignment for "long",
291	     on 32-bit and 64-bit systems, respectively.  Note that these
292	     guarantees were introduced into the C11 standard, so beware when
293	     using older pre-C11 compilers (for example, gcc 4.6).  The portion
294	     of the standard containing this guarantee is Section 3.14, which
295	     defines "memory location" as follows:
296	
297	     	memory location
298			either an object of scalar type, or a maximal sequence
299			of adjacent bit-fields all having nonzero width
300	
301			NOTE 1: Two threads of execution can update and access
302			separate memory locations without interfering with
303			each other.
304	
305			NOTE 2: A bit-field and an adjacent non-bit-field member
306			are in separate memory locations. The same applies
307			to two bit-fields, if one is declared inside a nested
308			structure declaration and the other is not, or if the two
309			are separated by a zero-length bit-field declaration,
310			or if they are separated by a non-bit-field member
311			declaration. It is not safe to concurrently update two
312			bit-fields in the same structure if all members declared
313			between them are also bit-fields, no matter what the
314			sizes of those intervening bit-fields happen to be.
315	
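As a sketch of the bit-field anti-guarantee above, consider a hypothetical
structure in which two flags share a single memory location:

	struct hyp_flags {
		int a:4;	/* 'a' and 'b' form one memory location, so  */
		int b:4;	/* an update to either may be compiled as a  */
	};			/* read-modify-write that covers them both   */

An update to 'a' can therefore silently undo a concurrent update to 'b'
unless, as noted above, both fields are protected by the same lock.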
316	
317	=========================
318	WHAT ARE MEMORY BARRIERS?
319	=========================
320	
321	As can be seen above, independent memory operations are effectively performed
322	in random order, but this can be a problem for CPU-CPU interaction and for I/O.
323	What is required is some way of intervening to instruct the compiler and the
324	CPU to restrict the order.
325	
326	Memory barriers are such interventions.  They impose a perceived partial
327	ordering over the memory operations on either side of the barrier.
328	
329	Such enforcement is important because the CPUs and other devices in a system
330	can use a variety of tricks to improve performance, including reordering,
331	deferral and combination of memory operations; speculative loads; speculative
332	branch prediction and various types of caching.  Memory barriers are used to
333	override or suppress these tricks, allowing the code to sanely control the
334	interaction of multiple CPUs and/or devices.
335	
336	
337	VARIETIES OF MEMORY BARRIER
338	---------------------------
339	
340	Memory barriers come in four basic varieties:
341	
342	 (1) Write (or store) memory barriers.
343	
344	     A write memory barrier gives a guarantee that all the STORE operations
345	     specified before the barrier will appear to happen before all the STORE
346	     operations specified after the barrier with respect to the other
347	     components of the system.
348	
349	     A write barrier is a partial ordering on stores only; it is not required
350	     to have any effect on loads.
351	
352	     A CPU can be viewed as committing a sequence of store operations to the
353	     memory system as time progresses.  All stores before a write barrier will
354	     occur in the sequence _before_ all the stores after the write barrier.
355	
356	     [!] Note that write barriers should normally be paired with read or data
357	     dependency barriers; see the "SMP barrier pairing" subsection.
358	
359	
360	 (2) Data dependency barriers.
361	
362	     A data dependency barrier is a weaker form of read barrier.  In the case
363	     where two loads are performed such that the second depends on the result
364	     of the first (eg: the first load retrieves the address to which the second
365	     load will be directed), a data dependency barrier would be required to
366	     make sure that the target of the second load is updated before the address
367	     obtained by the first load is accessed.
368	
369	     A data dependency barrier is a partial ordering on interdependent loads
370	     only; it is not required to have any effect on stores, independent loads
371	     or overlapping loads.
372	
373	     As mentioned in (1), the other CPUs in the system can be viewed as
374	     committing sequences of stores to the memory system that the CPU being
375	     considered can then perceive.  A data dependency barrier issued by the CPU
376	     under consideration guarantees that for any load preceding it, if that
377	     load touches one of a sequence of stores from another CPU, then by the
378	     time the barrier completes, the effects of all the stores prior to that
379	     touched by the load will be perceptible to any loads issued after the data
380	     dependency barrier.
381	
382	     See the "Examples of memory barrier sequences" subsection for diagrams
383	     showing the ordering constraints.
384	
385	     [!] Note that the first load really has to have a _data_ dependency and
386	     not a control dependency.  If the address for the second load is dependent
387	     on the first load, but the dependency is through a conditional rather than
388	     actually loading the address itself, then it's a _control_ dependency and
389	     a full read barrier or better is required.  See the "Control dependencies"
390	     subsection for more information.
391	
392	     [!] Note that data dependency barriers should normally be paired with
393	     write barriers; see the "SMP barrier pairing" subsection.
394	
395	
396	 (3) Read (or load) memory barriers.
397	
398	     A read barrier is a data dependency barrier plus a guarantee that all the
399	     LOAD operations specified before the barrier will appear to happen before
400	     all the LOAD operations specified after the barrier with respect to the
401	     other components of the system.
402	
403	     A read barrier is a partial ordering on loads only; it is not required to
404	     have any effect on stores.
405	
406	     Read memory barriers imply data dependency barriers, and so can substitute
407	     for them.
408	
409	     [!] Note that read barriers should normally be paired with write barriers;
410	     see the "SMP barrier pairing" subsection.
411	
412	
413	 (4) General memory barriers.
414	
415	     A general memory barrier gives a guarantee that all the LOAD and STORE
416	     operations specified before the barrier will appear to happen before all
417	     the LOAD and STORE operations specified after the barrier with respect to
418	     the other components of the system.
419	
420	     A general memory barrier is a partial ordering over both loads and stores.
421	
422	     General memory barriers imply both read and write memory barriers, and so
423	     can substitute for either.
424	
425	
426	And a couple of implicit varieties:
427	
428	 (5) ACQUIRE operations.
429	
430	     This acts as a one-way permeable barrier.  It guarantees that all memory
431	     operations after the ACQUIRE operation will appear to happen after the
432	     ACQUIRE operation with respect to the other components of the system.
433	     ACQUIRE operations include LOCK operations and smp_load_acquire()
434	     operations.
435	
436	     Memory operations that occur before an ACQUIRE operation may appear to
437	     happen after it completes.
438	
439	     An ACQUIRE operation should almost always be paired with a RELEASE
440	     operation.
441	
442	
443	 (6) RELEASE operations.
444	
445	     This also acts as a one-way permeable barrier.  It guarantees that all
446	     memory operations before the RELEASE operation will appear to happen
447	     before the RELEASE operation with respect to the other components of the
448	     system. RELEASE operations include UNLOCK operations and
449	     smp_store_release() operations.
450	
451	     Memory operations that occur after a RELEASE operation may appear to
452	     happen before it completes.
453	
454	     The use of ACQUIRE and RELEASE operations generally precludes the need
455	     for other sorts of memory barrier (but note the exceptions mentioned in
456	     the subsection "MMIO write barrier").  In addition, a RELEASE+ACQUIRE
457	     pair is -not- guaranteed to act as a full memory barrier.  However, after
458	     an ACQUIRE on a given variable, all memory accesses preceding any prior
459	     RELEASE on that same variable are guaranteed to be visible.  In other
460	     words, within a given variable's critical section, all accesses of all
461	     previous critical sections for that variable are guaranteed to have
462	     completed.
463	
464	     This means that ACQUIRE acts as a minimal "acquire" operation and
465	     RELEASE acts as a minimal "release" operation.
466	
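For example, the ACQUIRE and RELEASE operations described in (5) and (6) can
be used to pass a message between CPUs.  A minimal sketch, assuming
hypothetical shared variables 'msg' and 'ready', both initially zero:

	CPU 1				CPU 2
	===============================	===============================
	msg = 42;
	smp_store_release(&ready, 1);
					while (!smp_load_acquire(&ready))
						;  /* wait for the flag */
					r1 = msg;  /* guaranteed to be 42 */

If CPU 2's acquire load reads the value stored by CPU 1's release store, then
CPU 2 is also guaranteed to see all of CPU 1's memory accesses preceding that
store.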
467	
468	Memory barriers are only required where there's a possibility of interaction
469	between two CPUs or between a CPU and a device.  If it can be guaranteed that
470	there won't be any such interaction in any particular piece of code, then
471	memory barriers are unnecessary in that piece of code.
472	
473	
474	Note that these are the _minimum_ guarantees.  Different architectures may give
475	more substantial guarantees, but they may _not_ be relied upon outside of arch
476	specific code.
477	
478	
479	WHAT MAY NOT BE ASSUMED ABOUT MEMORY BARRIERS?
480	----------------------------------------------
481	
482	There are certain things that the Linux kernel memory barriers do not guarantee:
483	
484	 (*) There is no guarantee that any of the memory accesses specified before a
485	     memory barrier will be _complete_ by the completion of a memory barrier
486	     instruction; the barrier can be considered to draw a line in that CPU's
487	     access queue that accesses of the appropriate type may not cross.
488	
489	 (*) There is no guarantee that issuing a memory barrier on one CPU will have
490	     any direct effect on another CPU or any other hardware in the system.  The
491	     indirect effect will be the order in which the second CPU sees the effects
492	     of the first CPU's accesses occur, but see the next point:
493	
494	 (*) There is no guarantee that a CPU will see the correct order of effects
495	     from a second CPU's accesses, even _if_ the second CPU uses a memory
496	     barrier, unless the first CPU _also_ uses a matching memory barrier (see
497	     the subsection on "SMP Barrier Pairing").
498	
499	 (*) There is no guarantee that some intervening piece of off-the-CPU
500	     hardware[*] will not reorder the memory accesses.  CPU cache coherency
501	     mechanisms should propagate the indirect effects of a memory barrier
502	     between CPUs, but might not do so in order.
503	
504		[*] For information on bus mastering DMA and coherency please read:
505	
506		    Documentation/PCI/pci.txt
507		    Documentation/DMA-API-HOWTO.txt
508		    Documentation/DMA-API.txt
509	
510	
511	DATA DEPENDENCY BARRIERS
512	------------------------
513	
514	The usage requirements of data dependency barriers are a little subtle, and
515	it's not always obvious that they're needed.  To illustrate, consider the
516	following sequence of events:
517	
518		CPU 1		      CPU 2
519		===============	      ===============
520		{ A == 1, B == 2, C == 3, P == &A, Q == &C }
521		B = 4;
522		<write barrier>
523		ACCESS_ONCE(P) = &B;
524				      Q = ACCESS_ONCE(P);
525				      D = *Q;
526	
527	There's a clear data dependency here, and it would seem that by the end of the
528	sequence, Q must be either &A or &B, and that:
529	
530		(Q == &A) implies (D == 1)
531		(Q == &B) implies (D == 4)
532	
533	But!  CPU 2's perception of P may be updated _before_ its perception of B, thus
534	leading to the following situation:
535	
536		(Q == &B) and (D == 2) ????
537	
538	Whilst this may seem like a failure of coherency or causality maintenance, it
539	isn't, and this behaviour can be observed on certain real CPUs (such as the DEC
540	Alpha).
541	
542	To deal with this, a data dependency barrier or better must be inserted
543	between the address load and the data load:
544	
545		CPU 1		      CPU 2
546		===============	      ===============
547		{ A == 1, B == 2, C == 3, P == &A, Q == &C }
548		B = 4;
549		<write barrier>
550		ACCESS_ONCE(P) = &B;
551				      Q = ACCESS_ONCE(P);
552				      <data dependency barrier>
553				      D = *Q;
554	
555	This enforces the occurrence of one of the two implications, and prevents the
556	third possibility from arising.
557	
558	[!] Note that this extremely counterintuitive situation arises most easily on
559	machines with split caches, so that, for example, one cache bank processes
560	even-numbered cache lines and the other bank processes odd-numbered cache
561	lines.  The pointer P might be stored in an odd-numbered cache line, and the
562	variable B might be stored in an even-numbered cache line.  Then, if the
563	even-numbered bank of the reading CPU's cache is extremely busy while the
564	odd-numbered bank is idle, one can see the new value of the pointer P (&B),
565	but the old value of the variable B (2).
566	
567	
568	Another example of where data dependency barriers might be required is where a
569	number is read from memory and then used to calculate the index for an array
570	access:
571	
572		CPU 1		      CPU 2
573		===============	      ===============
574		{ M[0] == 1, M[1] == 2, M[3] == 3, P == 0, Q == 3 }
575		M[1] = 4;
576		<write barrier>
577		ACCESS_ONCE(P) = 1;
578				      Q = ACCESS_ONCE(P);
579				      <data dependency barrier>
580				      D = M[Q];
581	
582	
583	The data dependency barrier is very important to the RCU system,
584	for example.  See rcu_assign_pointer() and rcu_dereference() in
585	include/linux/rcupdate.h.  This permits the current target of an RCU'd
586	pointer to be replaced with a new modified target, without the replacement
587	target appearing to be incompletely initialised.
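
A minimal sketch of that idiom, assuming a hypothetical RCU-protected global
pointer 'gp' whose target contains a field 'a' (with do_something_with()
standing in for the reader's real work):

	/* Updater */
	p = kmalloc(sizeof(*p), GFP_KERNEL);
	p->a = 1;
	rcu_assign_pointer(gp, p);	/* supplies the write barrier */

	/* Reader */
	rcu_read_lock();
	q = rcu_dereference(gp);	/* supplies the data dependency barrier */
	if (q)
		do_something_with(q->a);	/* never sees a half-initialised target */
	rcu_read_unlock();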
588	
589	See also the subsection on "Cache Coherency" for a more thorough example.
590	
591	
592	CONTROL DEPENDENCIES
593	--------------------
594	
595	A control dependency requires a full read memory barrier, not simply a data
596	dependency barrier to make it work correctly.  Consider the following bit of
597	code:
598	
599		q = ACCESS_ONCE(a);
600		if (q) {
601			<data dependency barrier>  /* BUG: No data dependency!!! */
602			p = ACCESS_ONCE(b);
603		}
604	
605	This will not have the desired effect because there is no actual data
606	dependency, but rather a control dependency that the CPU may short-circuit
607	by attempting to predict the outcome in advance, so that other CPUs see
608	the load from b as having happened before the load from a.  In such a
609	case what's actually required is:
610	
611		q = ACCESS_ONCE(a);
612		if (q) {
613			<read barrier>
614			p = ACCESS_ONCE(b);
615		}
616	
617	However, stores are not speculated.  This means that ordering -is- provided
618	in the following example:
619	
620		q = ACCESS_ONCE(a);
621		if (q) {
622			ACCESS_ONCE(b) = p;
623		}
624	
625	Please note that ACCESS_ONCE() is not optional!  Without the
626	ACCESS_ONCE(), the compiler might combine the load from 'a' with other loads from
627	'a', and the store to 'b' with other stores to 'b', with possible highly
628	counterintuitive effects on ordering.
629	
630	Worse yet, if the compiler is able to prove (say) that the value of
631	variable 'a' is always non-zero, it would be well within its rights
632	to optimize the original example by eliminating the "if" statement
633	as follows:
634	
635		q = a;
636		b = p;  /* BUG: Compiler and CPU can both reorder!!! */
637	
638	So don't leave out the ACCESS_ONCE().
639	
640	It is tempting to try to enforce ordering on identical stores on both
641	branches of the "if" statement as follows:
642	
643		q = ACCESS_ONCE(a);
644		if (q) {
645			barrier();
646			ACCESS_ONCE(b) = p;
647			do_something();
648		} else {
649			barrier();
650			ACCESS_ONCE(b) = p;
651			do_something_else();
652		}
653	
654	Unfortunately, current compilers will transform this as follows at high
655	optimization levels:
656	
657		q = ACCESS_ONCE(a);
658		barrier();
659		ACCESS_ONCE(b) = p;  /* BUG: No ordering vs. load from a!!! */
660		if (q) {
661			/* ACCESS_ONCE(b) = p; -- moved up, BUG!!! */
662			do_something();
663		} else {
664			/* ACCESS_ONCE(b) = p; -- moved up, BUG!!! */
665			do_something_else();
666		}
667	
668	Now there is no conditional between the load from 'a' and the store to
669	'b', which means that the CPU is within its rights to reorder them:
670	The conditional is absolutely required, and must be present in the
671	assembly code even after all compiler optimizations have been applied.
672	Therefore, if you need ordering in this example, you need explicit
673	memory barriers, for example, smp_store_release():
674	
675		q = ACCESS_ONCE(a);
676		if (q) {
677			smp_store_release(&b, p);
678			do_something();
679		} else {
680			smp_store_release(&b, p);
681			do_something_else();
682		}
683	
684	In contrast, without explicit memory barriers, two-legged-if control
685	ordering is guaranteed only when the stores differ, for example:
686	
687		q = ACCESS_ONCE(a);
688		if (q) {
689			ACCESS_ONCE(b) = p;
690			do_something();
691		} else {
692			ACCESS_ONCE(b) = r;
693			do_something_else();
694		}
695	
696	The initial ACCESS_ONCE() is still required to prevent the compiler from
697	proving the value of 'a'.
698	
699	In addition, you need to be careful what you do with the local variable 'q',
700	otherwise the compiler might be able to guess the value and again remove
701	the needed conditional.  For example:
702	
703		q = ACCESS_ONCE(a);
704		if (q % MAX) {
705			ACCESS_ONCE(b) = p;
706			do_something();
707		} else {
708			ACCESS_ONCE(b) = r;
709			do_something_else();
710		}
711	
712	If MAX is defined to be 1, then the compiler knows that (q % MAX) is
713	equal to zero, in which case the compiler is within its rights to
714	transform the above code into the following:
715	
716		q = ACCESS_ONCE(a);
717		ACCESS_ONCE(b) = p;
718		do_something_else();
719	
720	Given this transformation, the CPU is not required to respect the ordering
721	between the load from variable 'a' and the store to variable 'b'.  It is
722	tempting to add a barrier(), but this does not help.  The conditional
723	is gone, and the barrier won't bring it back.  Therefore, if you are
724	relying on this ordering, you should make sure that MAX is greater than
725	one, perhaps as follows:
726	
727		q = ACCESS_ONCE(a);
728		BUILD_BUG_ON(MAX <= 1); /* Order load from a with store to b. */
729		if (q % MAX) {
730			ACCESS_ONCE(b) = p;
731			do_something();
732		} else {
733			ACCESS_ONCE(b) = r;
734			do_something_else();
735		}
736	
737	Please note once again that the stores to 'b' differ.  If they were
738	identical, as noted earlier, the compiler could pull this store outside
739	of the 'if' statement.
740	
741	You must also be careful not to rely too much on boolean short-circuit
742	evaluation.  Consider this example:
743	
744		q = ACCESS_ONCE(a);
745		if (q || 1 > 0)
746			ACCESS_ONCE(b) = 1;
747	
748	Because the second condition is always true, the compiler can transform
749	this example as follows, defeating the control dependency:
750	
751		q = ACCESS_ONCE(a);
752		ACCESS_ONCE(b) = 1;
753	
754	This example underscores the need to ensure that the compiler cannot
755	out-guess your code.  More generally, although ACCESS_ONCE() does force
756	the compiler to actually emit code for a given load, it does not force
757	the compiler to use the results.
758	
759	Finally, control dependencies do -not- provide transitivity.  This is
760	demonstrated by two related examples, with the initial values of
761	x and y both being zero:
762	
763		CPU 0                     CPU 1
764		=====================     =====================
765		r1 = ACCESS_ONCE(x);      r2 = ACCESS_ONCE(y);
766		if (r1 > 0)               if (r2 > 0)
767		  ACCESS_ONCE(y) = 1;       ACCESS_ONCE(x) = 1;
768	
769		assert(!(r1 == 1 && r2 == 1));
770	
771	The above two-CPU example will never trigger the assert().  However,
772	if control dependencies guaranteed transitivity (which they do not),
773	then adding the following CPU would guarantee a related assertion:
774	
775		CPU 2
776		=====================
777		ACCESS_ONCE(x) = 2;
778	
779		assert(!(r1 == 2 && r2 == 1 && x == 2)); /* FAILS!!! */
780	
781	But because control dependencies do -not- provide transitivity, the above
782	assertion can fail after the combined three-CPU example completes.  If you
783	need the three-CPU example to provide ordering, you will need smp_mb()
784	between the loads and stores in the CPU 0 and CPU 1 code fragments,
785	that is, just before or just after the "if" statements.
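
That is, the ordering-providing variants of the two fragments would look
something like this:

	CPU 0                     CPU 1
	=====================     =====================
	r1 = ACCESS_ONCE(x);      r2 = ACCESS_ONCE(y);
	smp_mb();                 smp_mb();
	if (r1 > 0)               if (r2 > 0)
	  ACCESS_ONCE(y) = 1;       ACCESS_ONCE(x) = 1;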
786	
787	These two examples are the LB and WWC litmus tests from this paper:
788	http://www.cl.cam.ac.uk/users/pes20/ppc-supplemental/test6.pdf and this
789	site: https://www.cl.cam.ac.uk/~pes20/ppcmem/index.html.
790	
791	In summary:
792	
793	  (*) Control dependencies can order prior loads against later stores.
794	      However, they do -not- guarantee any other sort of ordering:
795	      Not prior loads against later loads, nor prior stores against
796	      later anything.  If you need these other forms of ordering,
797	      use smp_rmb(), smp_wmb(), or, in the case of prior stores and
798	      later loads, smp_mb().
799	
800	  (*) If both legs of the "if" statement begin with identical stores to
801	      the same variable, then those stores must be ordered, either by
802	      preceding both of them with smp_mb() or by using smp_store_release().
803	
804	  (*) Control dependencies require at least one run-time conditional
805	      between the prior load and the subsequent store, and this
806	      conditional must involve the prior load.  If the compiler
807	      is able to optimize the conditional away, it will have also
808	      optimized away the ordering.  Careful use of ACCESS_ONCE() can
809	      help to preserve the needed conditional.
810	
811	  (*) Control dependencies require that the compiler avoid reordering the
812	      dependency into nonexistence.  Careful use of ACCESS_ONCE() or
813	      barrier() can help to preserve your control dependency.  Please
814	      see the Compiler Barrier section for more information.
815	
816	  (*) Control dependencies do -not- provide transitivity.  If you
817	      need transitivity, use smp_mb().
818	
819	
820	SMP BARRIER PAIRING
821	-------------------
822	
823	When dealing with CPU-CPU interactions, certain types of memory barrier should
824	always be paired.  A lack of appropriate pairing is almost certainly an error.
825	
826	General barriers pair with each other, though they also pair with
827	most other types of barriers, albeit without transitivity.  An acquire
828	barrier pairs with a release barrier, but both may also pair with other
829	barriers, including of course general barriers.  A write barrier pairs
830	with a data dependency barrier, an acquire barrier, a release barrier,
831	a read barrier, or a general barrier.  Similarly a read barrier or a
832	data dependency barrier pairs with a write barrier, an acquire barrier,
833	a release barrier, or a general barrier:
834	
835		CPU 1		      CPU 2
836		===============	      ===============
837		ACCESS_ONCE(a) = 1;
838		<write barrier>
839		ACCESS_ONCE(b) = 2;   x = ACCESS_ONCE(b);
840				      <read barrier>
841				      y = ACCESS_ONCE(a);
842	
843	Or:
844	
845		CPU 1		      CPU 2
846		===============	      ===============================
847		a = 1;
848		<write barrier>
849		ACCESS_ONCE(b) = &a;  x = ACCESS_ONCE(b);
850				      <data dependency barrier>
851				      y = *x;
852	
853	Basically, the read barrier always has to be there, even though it can be of
854	the "weaker" type.
855	
856	[!] Note that the stores before the write barrier would normally be expected to
857	match the loads after the read barrier or the data dependency barrier, and vice
858	versa:
859	
860		CPU 1                               CPU 2
861		===================                 ===================
862		ACCESS_ONCE(a) = 1;  }----   --->{  v = ACCESS_ONCE(c);
863		ACCESS_ONCE(b) = 2;  }    \ /    {  w = ACCESS_ONCE(d);
864		<write barrier>            \        <read barrier>
865		ACCESS_ONCE(c) = 3;  }    / \    {  x = ACCESS_ONCE(a);
866		ACCESS_ONCE(d) = 4;  }----   --->{  y = ACCESS_ONCE(b);
867	
868	
869	EXAMPLES OF MEMORY BARRIER SEQUENCES
870	------------------------------------
871	
872	Firstly, write barriers act as partial orderings on store operations.
873	Consider the following sequence of events:
874	
875		CPU 1
876		=======================
877		STORE A = 1
878		STORE B = 2
879		STORE C = 3
880		<write barrier>
881		STORE D = 4
882		STORE E = 5
883	
884	This sequence of events is committed to the memory coherence system in an order
885	that the rest of the system might perceive as the unordered set of { STORE A,
886	STORE B, STORE C } all occurring before the unordered set of { STORE D, STORE E
887	}:
888	
889		+-------+       :      :
890		|       |       +------+
891		|       |------>| C=3  |     }     /\
892		|       |  :    +------+     }-----  \  -----> Events perceptible to
893		|       |  :    | A=1  |     }        \/       the rest of the system
894		|       |  :    +------+     }
895		| CPU 1 |  :    | B=2  |     }
896		|       |       +------+     }
897		|       |   wwwwwwwwwwwwwwww }   <--- At this point the write barrier
898		|       |       +------+     }        requires all stores prior to the
899		|       |  :    | E=5  |     }        barrier to be committed before
900		|       |  :    +------+     }        further stores may take place
901		|       |------>| D=4  |     }
902		|       |       +------+
903		+-------+       :      :
904		                   |
905		                   | Sequence in which stores are committed to the
906		                   | memory system by CPU 1
907		                   V
908	
909	
910	Secondly, data dependency barriers act as partial orderings on data-dependent
911	loads.  Consider the following sequence of events:
912	
913		CPU 1			CPU 2
914		=======================	=======================
915			{ B = 7; X = 9; Y = 8; C = &Y }
916		STORE A = 1
917		STORE B = 2
918		<write barrier>
919		STORE C = &B		LOAD X
920		STORE D = 4		LOAD C (gets &B)
921					LOAD *C (reads B)
922	
923	Without intervention, CPU 2 may perceive the events on CPU 1 in some
924	effectively random order, despite the write barrier issued by CPU 1:
925	
926		+-------+       :      :                :       :
927		|       |       +------+                +-------+  | Sequence of update
928		|       |------>| B=2  |-----       --->| Y->8  |  | of perception on
929		|       |  :    +------+     \          +-------+  | CPU 2
930		| CPU 1 |  :    | A=1  |      \     --->| C->&Y |  V
931		|       |       +------+       |        +-------+
932		|       |   wwwwwwwwwwwwwwww   |        :       :
933		|       |       +------+       |        :       :
934		|       |  :    | C=&B |---    |        :       :       +-------+
935		|       |  :    +------+   \   |        +-------+       |       |
936		|       |------>| D=4  |    ----------->| C->&B |------>|       |
937		|       |       +------+       |        +-------+       |       |
938		+-------+       :      :       |        :       :       |       |
939		                               |        :       :       |       |
940		                               |        :       :       | CPU 2 |
941		                               |        +-------+       |       |
942		    Apparently incorrect --->  |        | B->7  |------>|       |
943		    perception of B (!)        |        +-------+       |       |
944		                               |        :       :       |       |
945		                               |        +-------+       |       |
946		    The load of X holds --->    \       | X->9  |------>|       |
947		    up the maintenance           \      +-------+       |       |
948		    of coherence of B             ----->| B->2  |       +-------+
949		                                        +-------+
950		                                        :       :
951	
952	
953	In the above example, CPU 2 perceives that B is 7, despite the load of *C
954	(which would be B) coming after the LOAD of C.
955	
956	If, however, a data dependency barrier were to be placed between the load of C
957	and the load of *C (ie: B) on CPU 2:
958	
959		CPU 1			CPU 2
960		=======================	=======================
961			{ B = 7; X = 9; Y = 8; C = &Y }
962		STORE A = 1
963		STORE B = 2
964		<write barrier>
965		STORE C = &B		LOAD X
966		STORE D = 4		LOAD C (gets &B)
967					<data dependency barrier>
968					LOAD *C (reads B)
969	
970	then the following will occur:
971	
972		+-------+       :      :                :       :
973		|       |       +------+                +-------+
974		|       |------>| B=2  |-----       --->| Y->8  |
975		|       |  :    +------+     \          +-------+
976		| CPU 1 |  :    | A=1  |      \     --->| C->&Y |
977		|       |       +------+       |        +-------+
978		|       |   wwwwwwwwwwwwwwww   |        :       :
979		|       |       +------+       |        :       :
980		|       |  :    | C=&B |---    |        :       :       +-------+
981		|       |  :    +------+   \   |        +-------+       |       |
982		|       |------>| D=4  |    ----------->| C->&B |------>|       |
983		|       |       +------+       |        +-------+       |       |
984		+-------+       :      :       |        :       :       |       |
985		                               |        :       :       |       |
986		                               |        :       :       | CPU 2 |
987		                               |        +-------+       |       |
988		                               |        | X->9  |------>|       |
989		                               |        +-------+       |       |
990		  Makes sure all effects --->   \   ddddddddddddddddd   |       |
991		  prior to the store of C        \      +-------+       |       |
992		  are perceptible to              ----->| B->2  |------>|       |
993		  subsequent loads                      +-------+       |       |
994		                                        :       :       +-------+
995	
996	
997	And thirdly, a read barrier acts as a partial order on loads.  Consider the
998	following sequence of events:
999	
1000		CPU 1			CPU 2
1001		=======================	=======================
1002			{ A = 0, B = 9 }
1003		STORE A=1
1004		<write barrier>
1005		STORE B=2
1006					LOAD B
1007					LOAD A
1008	
1009	Without intervention, CPU 2 may then choose to perceive the events on CPU 1 in
1010	some effectively random order, despite the write barrier issued by CPU 1:
1011	
1012		+-------+       :      :                :       :
1013		|       |       +------+                +-------+
1014		|       |------>| A=1  |------      --->| A->0  |
1015		|       |       +------+      \         +-------+
1016		| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
1017		|       |       +------+        |       +-------+
1018		|       |------>| B=2  |---     |       :       :
1019		|       |       +------+   \    |       :       :       +-------+
1020		+-------+       :      :    \   |       +-------+       |       |
1021		                             ---------->| B->2  |------>|       |
1022		                                |       +-------+       | CPU 2 |
1023		                                |       | A->0  |------>|       |
1024		                                |       +-------+       |       |
1025		                                |       :       :       +-------+
1026		                                 \      :       :
1027		                                  \     +-------+
1028		                                   ---->| A->1  |
1029		                                        +-------+
1030		                                        :       :
1031	
1032	
1033	If, however, a read barrier were to be placed between the load of B and the
1034	load of A on CPU 2:
1035	
1036		CPU 1			CPU 2
1037		=======================	=======================
1038			{ A = 0, B = 9 }
1039		STORE A=1
1040		<write barrier>
1041		STORE B=2
1042					LOAD B
1043					<read barrier>
1044					LOAD A
1045	
1046	then the partial ordering imposed by CPU 1 will be perceived correctly by CPU
1047	2:
1048	
1049		+-------+       :      :                :       :
1050		|       |       +------+                +-------+
1051		|       |------>| A=1  |------      --->| A->0  |
1052		|       |       +------+      \         +-------+
1053		| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
1054		|       |       +------+        |       +-------+
1055		|       |------>| B=2  |---     |       :       :
1056		|       |       +------+   \    |       :       :       +-------+
1057		+-------+       :      :    \   |       +-------+       |       |
1058		                             ---------->| B->2  |------>|       |
1059		                                |       +-------+       | CPU 2 |
1060		                                |       :       :       |       |
1061		                                |       :       :       |       |
1062		  At this point the read ---->   \  rrrrrrrrrrrrrrrrr   |       |
1063		  barrier causes all effects      \     +-------+       |       |
1064		  prior to the storage of B        ---->| A->1  |------>|       |
1065		  to be perceptible to CPU 2            +-------+       |       |
1066		                                        :       :       +-------+
1067	
1068	
1069	To illustrate this more completely, consider what could happen if the code
1070	contained a load of A either side of the read barrier:
1071	
1072		CPU 1			CPU 2
1073		=======================	=======================
1074			{ A = 0, B = 9 }
1075		STORE A=1
1076		<write barrier>
1077		STORE B=2
1078					LOAD B
1079					LOAD A [first load of A]
1080					<read barrier>
1081					LOAD A [second load of A]
1082	
1083	Even though the two loads of A both occur after the load of B, they may both
1084	come up with different values:
1085	
1086		+-------+       :      :                :       :
1087		|       |       +------+                +-------+
1088		|       |------>| A=1  |------      --->| A->0  |
1089		|       |       +------+      \         +-------+
1090		| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
1091		|       |       +------+        |       +-------+
1092		|       |------>| B=2  |---     |       :       :
1093		|       |       +------+   \    |       :       :       +-------+
1094		+-------+       :      :    \   |       +-------+       |       |
1095		                             ---------->| B->2  |------>|       |
1096		                                |       +-------+       | CPU 2 |
1097		                                |       :       :       |       |
1098		                                |       :       :       |       |
1099		                                |       +-------+       |       |
1100		                                |       | A->0  |------>| 1st   |
1101		                                |       +-------+       |       |
1102		  At this point the read ---->   \  rrrrrrrrrrrrrrrrr   |       |
1103		  barrier causes all effects      \     +-------+       |       |
1104		  prior to the storage of B        ---->| A->1  |------>| 2nd   |
1105		  to be perceptible to CPU 2            +-------+       |       |
1106		                                        :       :       +-------+
1107	
1108	
1109	But it may be that the update to A from CPU 1 becomes perceptible to CPU 2
1110	before the read barrier completes anyway:
1111	
1112		+-------+       :      :                :       :
1113		|       |       +------+                +-------+
1114		|       |------>| A=1  |------      --->| A->0  |
1115		|       |       +------+      \         +-------+
1116		| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
1117		|       |       +------+        |       +-------+
1118		|       |------>| B=2  |---     |       :       :
1119		|       |       +------+   \    |       :       :       +-------+
1120		+-------+       :      :    \   |       +-------+       |       |
1121		                             ---------->| B->2  |------>|       |
1122		                                |       +-------+       | CPU 2 |
1123		                                |       :       :       |       |
1124		                                 \      :       :       |       |
1125		                                  \     +-------+       |       |
1126		                                   ---->| A->1  |------>| 1st   |
1127		                                        +-------+       |       |
1128		                                    rrrrrrrrrrrrrrrrr   |       |
1129		                                        +-------+       |       |
1130		                                        | A->1  |------>| 2nd   |
1131		                                        +-------+       |       |
1132		                                        :       :       +-------+
1133	
1134	
1135	The guarantee is that the second load will always come up with A == 1 if the
1136	load of B came up with B == 2.  No such guarantee exists for the first load of
1137	A; that may come up with either A == 0 or A == 1.
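
Expressed with the kernel's CPU barriers (described in the "CPU memory
barriers" section below), this write-barrier/read-barrier pattern would be
written along these lines:

	CPU 1			CPU 2
	===============		===============
	ACCESS_ONCE(a) = 1;
	smp_wmb();
	ACCESS_ONCE(b) = 2;	x = ACCESS_ONCE(b);
				smp_rmb();
				y = ACCESS_ONCE(a);	/* x == 2 implies y == 1 */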
1138	
1139	
1140	READ MEMORY BARRIERS VS LOAD SPECULATION
1141	----------------------------------------
1142	
1143	Many CPUs speculate with loads: that is, they see that they will need to load
1144	an item from memory, find a time when the bus is not in use for any other
1145	loads, and so do the load in advance - even though they haven't actually
1146	got to that point in the instruction execution flow yet.  This permits the
1147	actual load instruction to potentially complete immediately because the CPU
1148	already has the value to hand.
1149	
1150	It may turn out that the CPU didn't actually need the value - perhaps because a
1151	branch circumvented the load - in which case it can discard the value or just
1152	cache it for later use.
1153	
1154	Consider:
1155	
1156		CPU 1			CPU 2
1157		=======================	=======================
1158					LOAD B
1159					DIVIDE		} Divide instructions generally
1160					DIVIDE		} take a long time to perform
1161					LOAD A
1162	
1163	Which might appear as this:
1164	
1165		                                        :       :       +-------+
1166		                                        +-------+       |       |
1167		                                    --->| B->2  |------>|       |
1168		                                        +-------+       | CPU 2 |
1169		                                        :       :DIVIDE |       |
1170		                                        +-------+       |       |
1171		The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
1172		division speculates on the              +-------+   ~   |       |
1173		LOAD of A                               :       :   ~   |       |
1174		                                        :       :DIVIDE |       |
1175		                                        :       :   ~   |       |
1176		Once the divisions are complete -->     :       :   ~-->|       |
1177		the CPU can then perform the            :       :       |       |
1178		LOAD with immediate effect              :       :       +-------+
1179	
1180	
1181	Placing a read barrier or a data dependency barrier just before the second
1182	load:
1183	
1184		CPU 1			CPU 2
1185		=======================	=======================
1186					LOAD B
1187					DIVIDE
1188					DIVIDE
1189					<read barrier>
1190					LOAD A
1191	
1192	will force any value speculatively obtained to be reconsidered to an extent
1193	dependent on the type of barrier used.  If there was no change made to the
1194	speculated memory location, then the speculated value will just be used:
1195	
1196		                                        :       :       +-------+
1197		                                        +-------+       |       |
1198		                                    --->| B->2  |------>|       |
1199		                                        +-------+       | CPU 2 |
1200		                                        :       :DIVIDE |       |
1201		                                        +-------+       |       |
1202		The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
1203		division speculates on the              +-------+   ~   |       |
1204		LOAD of A                               :       :   ~   |       |
1205		                                        :       :DIVIDE |       |
1206		                                        :       :   ~   |       |
1207		                                        :       :   ~   |       |
1208		                                    rrrrrrrrrrrrrrrr~   |       |
1209		                                        :       :   ~   |       |
1210		                                        :       :   ~-->|       |
1211		                                        :       :       |       |
1212		                                        :       :       +-------+
1213	
1214	
1215	but if there was an update or an invalidation from another CPU pending, then
1216	the speculation will be cancelled and the value reloaded:
1217	
1218		                                        :       :       +-------+
1219		                                        +-------+       |       |
1220		                                    --->| B->2  |------>|       |
1221		                                        +-------+       | CPU 2 |
1222		                                        :       :DIVIDE |       |
1223		                                        +-------+       |       |
1224		The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
1225		division speculates on the              +-------+   ~   |       |
1226		LOAD of A                               :       :   ~   |       |
1227		                                        :       :DIVIDE |       |
1228		                                        :       :   ~   |       |
1229		                                        :       :   ~   |       |
1230		                                    rrrrrrrrrrrrrrrrr   |       |
1231		                                        +-------+       |       |
1232		The speculation is discarded --->   --->| A->1  |------>|       |
1233		and an updated value is                 +-------+       |       |
1234		retrieved                               :       :       +-------+
1235	
1236	
1237	TRANSITIVITY
1238	------------
1239	
1240	Transitivity is a deeply intuitive notion about ordering that is not
1241	always provided by real computer systems.  The following example
1242	demonstrates transitivity (also called "cumulativity"):
1243	
1244		CPU 1			CPU 2			CPU 3
1245		=======================	=======================	=======================
1246			{ X = 0, Y = 0 }
1247		STORE X=1		LOAD X			STORE Y=1
1248					<general barrier>	<general barrier>
1249					LOAD Y			LOAD X
1250	
1251	Suppose that CPU 2's load from X returns 1 and its load from Y returns 0.
1252	This indicates that CPU 2's load from X in some sense follows CPU 1's
1253	store to X and that CPU 2's load from Y in some sense preceded CPU 3's
1254	store to Y.  The question is then "Can CPU 3's load from X return 0?"
1255	
1256	Because CPU 2's load from X in some sense came after CPU 1's store, it
1257	is natural to expect that CPU 3's load from X must therefore return 1.
1258	This expectation is an example of transitivity: if a load executing on
1259	CPU A follows a load from the same variable executing on CPU B, then
1260	CPU A's load must either return the same value that CPU B's load did,
1261	or must return some later value.
1262	
1263	In the Linux kernel, use of general memory barriers guarantees
1264	transitivity.  Therefore, in the above example, if CPU 2's load from X
1265	returns 1 and its load from Y returns 0, then CPU 3's load from X must
1266	also return 1.
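
In C form, using smp_mb() for the general barriers, the same test reads:

	CPU 1			CPU 2			CPU 3
	=======================	=======================	=======================
		{ x = 0, y = 0 }
	ACCESS_ONCE(x) = 1;	r1 = ACCESS_ONCE(x);	ACCESS_ONCE(y) = 1;
				smp_mb();		smp_mb();
				r2 = ACCESS_ONCE(y);	r3 = ACCESS_ONCE(x);

Here, r1 == 1 and r2 == 0 together guarantee that r3 == 1.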
1267	
1268	However, transitivity is -not- guaranteed for read or write barriers.
1269	For example, suppose that CPU 2's general barrier in the above example
1270	is changed to a read barrier as shown below:
1271	
1272		CPU 1			CPU 2			CPU 3
1273		=======================	=======================	=======================
1274			{ X = 0, Y = 0 }
1275		STORE X=1		LOAD X			STORE Y=1
1276					<read barrier>		<general barrier>
1277					LOAD Y			LOAD X
1278	
1279	This substitution destroys transitivity: in this example, it is perfectly
1280	legal for CPU 2's load from X to return 1, its load from Y to return 0,
1281	and CPU 3's load from X to return 0.
1282	
1283	The key point is that although CPU 2's read barrier orders its pair
1284	of loads, it does not guarantee to order CPU 1's store.  Therefore, if
1285	this example runs on a system where CPUs 1 and 2 share a store buffer
1286	or a level of cache, CPU 2 might have early access to CPU 1's writes.
1287	General barriers are therefore required to ensure that all CPUs agree
1288	on the combined order of CPU 1's and CPU 2's accesses.
1289	
1290	To reiterate, if your code requires transitivity, use general barriers
1291	throughout.
1292	
1293	
1294	========================
1295	EXPLICIT KERNEL BARRIERS
1296	========================
1297	
1298	The Linux kernel has a variety of different barriers that act at different
1299	levels:
1300	
1301	  (*) Compiler barrier.
1302	
1303	  (*) CPU memory barriers.
1304	
1305	  (*) MMIO write barrier.
1306	
1307	
1308	COMPILER BARRIER
1309	----------------
1310	
1311	The Linux kernel has an explicit compiler barrier function that prevents the
1312	compiler from moving the memory accesses either side of it to the other side:
1313	
1314		barrier();
1315	
1316	This is a general barrier -- there are no read-read or write-write variants
1317	of barrier().  However, ACCESS_ONCE() can be thought of as a weak form
1318	of barrier() that affects only the specific accesses flagged by the
1319	ACCESS_ONCE().
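
For reference, ACCESS_ONCE() amounts to a volatile access; in kernels of this
vintage it is defined in include/linux/compiler.h essentially as:

	#define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))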
1320	
1321	The barrier() function has the following effects:
1322	
1323	 (*) Prevents the compiler from reordering accesses following the
1324	     barrier() to precede any accesses preceding the barrier().
1325	     One example use for this property is to ease communication between
1326	     interrupt-handler code and the interrupted code; see the sketch below.
1327	
1328	 (*) Within a loop, forces the compiler to load the variables used
1329	     in that loop's conditional on each pass through that loop.
1330	
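As a sketch of the first property, assume hypothetical variables 'data' and
'ready' shared between mainline code and an interrupt handler running on the
same CPU, with compute_data() standing in for whatever produces the payload:

	/* Mainline code */
	data = compute_data();
	barrier();		/* the compiler may not hoist the flag store */
	ACCESS_ONCE(ready) = 1;	/* ...so the handler sees 'data' when set */

Because the handler runs on the same CPU, which observes its own accesses in
program order, suppressing the compiler's reordering is sufficient here; no
CPU memory barrier is needed.
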
1331	The ACCESS_ONCE() function can prevent any number of optimizations that,
1332	while perfectly safe in single-threaded code, can be fatal in concurrent
1333	code.  Here are some examples of these sorts of optimizations:
1334	
1335	 (*) The compiler is within its rights to reorder loads and stores
1336	     to the same variable, and in some cases, the CPU is within its
1337	     rights to reorder loads to the same variable.  This means that
1338	     the following code:
1339	
1340		a[0] = x;
1341		a[1] = x;
1342	
1343	     Might result in an older value of x stored in a[1] than in a[0].
1344	     Prevent both the compiler and the CPU from doing this as follows:
1345	
1346		a[0] = ACCESS_ONCE(x);
1347		a[1] = ACCESS_ONCE(x);
1348	
1349	     In short, ACCESS_ONCE() provides cache coherence for accesses from
1350	     multiple CPUs to a single variable.
1351	
1352	 (*) The compiler is within its rights to merge successive loads from
1353	     the same variable.  Such merging can cause the compiler to "optimize"
1354	     the following code:
1355	
1356		while (tmp = a)
1357			do_something_with(tmp);
1358	
1359	     into the following code, which, although in some sense legitimate
1360	     for single-threaded code, is almost certainly not what the developer
1361	     intended:
1362	
1363		if (tmp = a)
1364			for (;;)
1365				do_something_with(tmp);
1366	
1367	     Use ACCESS_ONCE() to prevent the compiler from doing this to you:
1368	
1369		while (tmp = ACCESS_ONCE(a))
1370			do_something_with(tmp);
1371	
1372	 (*) The compiler is within its rights to reload a variable, for example,
1373	     in cases where high register pressure prevents the compiler from
1374	     keeping all data of interest in registers.  The compiler might
1375	     therefore optimize the variable 'tmp' out of our previous example:
1376	
1377		while (tmp = a)
1378			do_something_with(tmp);
1379	
1380	     This could result in the following code, which is perfectly safe in
1381	     single-threaded code, but can be fatal in concurrent code:
1382	
1383		while (a)
1384			do_something_with(a);
1385	
1386	     For example, the optimized version of this code could result in
1387	     passing a zero to do_something_with() in the case where the variable
1388	     a was modified by some other CPU between the "while" statement and
1389	     the call to do_something_with().
1390	
1391	     Again, use ACCESS_ONCE() to prevent the compiler from doing this:
1392	
1393		while (tmp = ACCESS_ONCE(a))
1394			do_something_with(tmp);
1395	
1396	     Note that if the compiler runs short of registers, it might save
1397	     tmp onto the stack.  The overhead of this saving and later restoring
1398	     is why compilers reload variables.  Doing so is perfectly safe for
1399	     single-threaded code, so you need to tell the compiler about cases
1400	     where it is not safe.
1401	
1402	 (*) The compiler is within its rights to omit a load entirely if it knows
1403	     what the value will be.  For example, if the compiler can prove that
1404	     the value of variable 'a' is always zero, it can optimize this code:
1405	
1406		while (tmp = a)
1407			do_something_with(tmp);
1408	
1409	     Into this:
1410	
1411		do { } while (0);
1412	
1413	     This transformation is a win for single-threaded code because it gets
1414	     rid of a load and a branch.  The problem is that the compiler will
1415	     carry out its proof assuming that the current CPU is the only one
1416	     updating variable 'a'.  If variable 'a' is shared, then the compiler's
1417	     proof will be erroneous.  Use ACCESS_ONCE() to tell the compiler
1418	     that it doesn't know as much as it thinks it does:
1419	
1420		while (tmp = ACCESS_ONCE(a))
1421			do_something_with(tmp);
1422	
1423	     But please note that the compiler is also closely watching what you
1424	     do with the value after the ACCESS_ONCE().  For example, suppose you
1425	     do the following and MAX is a preprocessor macro with the value 1:
1426	
1427		while ((tmp = ACCESS_ONCE(a)) % MAX)
1428			do_something_with(tmp);
1429	
1430	     Then the compiler knows that the result of the "%" operator applied
1431	     to MAX will always be zero, again allowing the compiler to optimize
1432	     the code into near-nonexistence.  (It will still load from the
1433	     variable 'a'.)
1434	
1435	 (*) Similarly, the compiler is within its rights to omit a store entirely
1436	     if it knows that the variable already has the value being stored.
1437	     Again, the compiler assumes that the current CPU is the only one
1438	     storing into the variable, which can cause the compiler to do the
1439	     wrong thing for shared variables.  For example, suppose you have
1440	     the following:
1441	
1442		a = 0;
1443		/* Code that does not store to variable a. */
1444		a = 0;
1445	
1446	     The compiler sees that the value of variable 'a' is already zero, so
1447	     it might well omit the second store.  This would come as a fatal
     surprise if some other CPU had stored to variable 'a' in the
1449	     meantime.
1450	
1451	     Use ACCESS_ONCE() to prevent the compiler from making this sort of
1452	     wrong guess:
1453	
1454		ACCESS_ONCE(a) = 0;
1455		/* Code that does not store to variable a. */
1456		ACCESS_ONCE(a) = 0;
1457	
1458	 (*) The compiler is within its rights to reorder memory accesses unless
1459	     you tell it not to.  For example, consider the following interaction
1460	     between process-level code and an interrupt handler:
1461	
1462		void process_level(void)
1463		{
1464			msg = get_message();
1465			flag = true;
1466		}
1467	
1468		void interrupt_handler(void)
1469		{
1470			if (flag)
1471				process_message(msg);
1472		}
1473	
1474	     There is nothing to prevent the compiler from transforming
process_level() to the following; in fact, this might well be a
1476	     win for single-threaded code:
1477	
1478		void process_level(void)
1479		{
1480			flag = true;
1481			msg = get_message();
1482		}
1483	
     If the interrupt occurs between these two statements, then
1485	     interrupt_handler() might be passed a garbled msg.  Use ACCESS_ONCE()
1486	     to prevent this as follows:
1487	
1488		void process_level(void)
1489		{
1490			ACCESS_ONCE(msg) = get_message();
1491			ACCESS_ONCE(flag) = true;
1492		}
1493	
1494		void interrupt_handler(void)
1495		{
1496			if (ACCESS_ONCE(flag))
1497				process_message(ACCESS_ONCE(msg));
1498		}
1499	
1500	     Note that the ACCESS_ONCE() wrappers in interrupt_handler()
1501	     are needed if this interrupt handler can itself be interrupted
1502	     by something that also accesses 'flag' and 'msg', for example,
1503	     a nested interrupt or an NMI.  Otherwise, ACCESS_ONCE() is not
1504	     needed in interrupt_handler() other than for documentation purposes.
1505	     (Note also that nested interrupts do not typically occur in modern
     Linux kernels; in fact, if an interrupt handler returns with
1507	     interrupts enabled, you will get a WARN_ONCE() splat.)
1508	
1509	     You should assume that the compiler can move ACCESS_ONCE() past
1510	     code not containing ACCESS_ONCE(), barrier(), or similar primitives.
1511	
1512	     This effect could also be achieved using barrier(), but ACCESS_ONCE()
1513	     is more selective:  With ACCESS_ONCE(), the compiler need only forget
1514	     the contents of the indicated memory locations, while with barrier()
1515	     the compiler must discard the value of all memory locations that
     it has currently cached in any machine registers.  Of course,
1517	     the compiler must also respect the order in which the ACCESS_ONCE()s
1518	     occur, though the CPU of course need not do so.
1519	
1520	 (*) The compiler is within its rights to invent stores to a variable,
1521	     as in the following example:
1522	
1523		if (a)
1524			b = a;
1525		else
1526			b = 42;
1527	
1528	     The compiler might save a branch by optimizing this as follows:
1529	
1530		b = 42;
1531		if (a)
1532			b = a;
1533	
1534	     In single-threaded code, this is not only safe, but also saves
1535	     a branch.  Unfortunately, in concurrent code, this optimization
1536	     could cause some other CPU to see a spurious value of 42 -- even
1537	     if variable 'a' was never zero -- when loading variable 'b'.
1538	     Use ACCESS_ONCE() to prevent this as follows:
1539	
1540		if (a)
1541			ACCESS_ONCE(b) = a;
1542		else
1543			ACCESS_ONCE(b) = 42;
1544	
1545	     The compiler can also invent loads.  These are usually less
1546	     damaging, but they can result in cache-line bouncing and thus in
1547	     poor performance and scalability.  Use ACCESS_ONCE() to prevent
1548	     invented loads.
1549	
 (*) For aligned memory locations whose size allows them to be accessed
     with a single memory-reference instruction, ACCESS_ONCE() prevents
     "load tearing"
1552	     and "store tearing," in which a single large access is replaced by
1553	     multiple smaller accesses.  For example, given an architecture having
1554	     16-bit store instructions with 7-bit immediate fields, the compiler
1555	     might be tempted to use two 16-bit store-immediate instructions to
1556	     implement the following 32-bit store:
1557	
1558		p = 0x00010002;
1559	
1560	     Please note that GCC really does use this sort of optimization,
1561	     which is not surprising given that it would likely take more
1562	     than two instructions to build the constant and then store it.
1563	     This optimization can therefore be a win in single-threaded code.
1564	     In fact, a recent bug (since fixed) caused GCC to incorrectly use
1565	     this optimization in a volatile store.  In the absence of such bugs,
1566	     use of ACCESS_ONCE() prevents store tearing in the following example:
1567	
1568		ACCESS_ONCE(p) = 0x00010002;
1569	
1570	     Use of packed structures can also result in load and store tearing,
1571	     as in this example:
1572	
1573		struct __attribute__((__packed__)) foo {
1574			short a;
1575			int b;
1576			short c;
1577		};
1578		struct foo foo1, foo2;
1579		...
1580	
1581		foo2.a = foo1.a;
1582		foo2.b = foo1.b;
1583		foo2.c = foo1.c;
1584	
1585	     Because there are no ACCESS_ONCE() wrappers and no volatile markings,
1586	     the compiler would be well within its rights to implement these three
1587	     assignment statements as a pair of 32-bit loads followed by a pair
1588	     of 32-bit stores.  This would result in load tearing on 'foo1.b'
1589	     and store tearing on 'foo2.b'.  ACCESS_ONCE() again prevents tearing
1590	     in this example:
1591	
1592		foo2.a = foo1.a;
1593		ACCESS_ONCE(foo2.b) = ACCESS_ONCE(foo1.b);
1594		foo2.c = foo1.c;
1595	
1596	All that aside, it is never necessary to use ACCESS_ONCE() on a variable
1597	that has been marked volatile.  For example, because 'jiffies' is marked
1598	volatile, it is never necessary to say ACCESS_ONCE(jiffies).  The reason
1599	for this is that ACCESS_ONCE() is implemented as a volatile cast, which
1600	has no effect when its argument is already marked volatile.
1601	
1602	Please note that these compiler barriers have no direct effect on the CPU,
1603	which may then reorder things however it wishes.
1604	
1605	
1606	CPU MEMORY BARRIERS
1607	-------------------
1608	
1609	The Linux kernel has eight basic CPU memory barriers:
1610	
1611		TYPE		MANDATORY		SMP CONDITIONAL
1612		===============	=======================	===========================
1613		GENERAL		mb()			smp_mb()
1614		WRITE		wmb()			smp_wmb()
1615		READ		rmb()			smp_rmb()
1616		DATA DEPENDENCY	read_barrier_depends()	smp_read_barrier_depends()
1617	
1618	
1619	All memory barriers except the data dependency barriers imply a compiler
barrier.  Data dependency barriers impose no additional compiler ordering.
1621	
Aside: In the case of data dependencies, the compiler would be expected to
issue the loads in the correct order (eg. `a[b]` would have to load the
value of b before loading a[b]).  However, there is no guarantee in the C
specification that the compiler will not speculate the value of b (eg.
guess that it is equal to 1) and load a[b] before b (eg.
tmp = a[1]; if (b != 1) tmp = a[b]; ).  There is also the problem of the
compiler reloading b after having loaded a[b], thus ending up with a newer
copy of b than of a[b].  A consensus has not yet been reached about these
problems; however, the ACCESS_ONCE() macro is a good place to start
looking.
1630	
1631	SMP memory barriers are reduced to compiler barriers on uniprocessor compiled
1632	systems because it is assumed that a CPU will appear to be self-consistent,
1633	and will order overlapping accesses correctly with respect to itself.
1634	
1635	[!] Note that SMP memory barriers _must_ be used to control the ordering of
1636	references to shared memory on SMP systems, though the use of locking instead
1637	is sufficient.
1638	
1639	Mandatory barriers should not be used to control SMP effects, since mandatory
1640	barriers unnecessarily impose overhead on UP systems. They may, however, be
1641	used to control MMIO effects on accesses through relaxed memory I/O windows.
1642	These are required even on non-SMP systems as they affect the order in which
1643	memory operations appear to a device by prohibiting both the compiler and the
1644	CPU from reordering them.
1645	
1646	
1647	There are some more advanced barrier functions:
1648	
1649	 (*) set_mb(var, value)
1650	
     This assigns the value to the variable and then inserts a full memory
     barrier after it.  It isn't guaranteed to insert anything more than a
     compiler barrier in a UP compilation.
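
     On an SMP build the net effect is roughly the following (a sketch
     only; the exact expansion is architecture-specific):

	set_mb(var, value);

	/* ...is approximately equivalent to: */

	var = value;
	smp_mb();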
1654	
1655	
1656	 (*) smp_mb__before_atomic();
1657	 (*) smp_mb__after_atomic();
1658	
1659	     These are for use with atomic (such as add, subtract, increment and
1660	     decrement) functions that don't return a value, especially when used for
1661	     reference counting.  These functions do not imply memory barriers.
1662	
1663	     These are also used for atomic bitop functions that do not return a
1664	     value (such as set_bit and clear_bit).
1665	
1666	     As an example, consider a piece of code that marks an object as being dead
1667	     and then decrements the object's reference count:
1668	
1669		obj->dead = 1;
1670		smp_mb__before_atomic();
1671		atomic_dec(&obj->ref_count);
1672	
1673	     This makes sure that the death mark on the object is perceived to be set
1674	     *before* the reference counter is decremented.
1675	
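     Conversely, smp_mb__after_atomic() orders an atomic operation before
     subsequent accesses.  A hedged sketch, assuming a hypothetical object
     with an atomic reference count and a flag read by other CPUs:

	atomic_inc(&obj->ref_count);
	smp_mb__after_atomic();
	obj->published = 1;	/* seen only after the increment */
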
1676	     See Documentation/atomic_ops.txt for more information.  See the "Atomic
1677	     operations" subsection for information on where to use these.
1678	
1679	
1680	 (*) dma_wmb();
1681	 (*) dma_rmb();
1682	
1683	     These are for use with consistent memory to guarantee the ordering
1684	     of writes or reads of shared memory accessible to both the CPU and a
1685	     DMA capable device.
1686	
1687	     For example, consider a device driver that shares memory with a device
1688	     and uses a descriptor status value to indicate if the descriptor belongs
1689	     to the device or the CPU, and a doorbell to notify it when new
1690	     descriptors are available:
1691	
1692		if (desc->status != DEVICE_OWN) {
1693			/* do not read data until we own descriptor */
1694			dma_rmb();
1695	
1696			/* read/modify data */
1697			read_data = desc->data;
1698			desc->data = write_data;
1699	
1700			/* flush modifications before status update */
1701			dma_wmb();
1702	
1703			/* assign ownership */
1704			desc->status = DEVICE_OWN;
1705	
1706			/* force memory to sync before notifying device via MMIO */
1707			wmb();
1708	
1709			/* notify device of new descriptors */
1710			writel(DESC_NOTIFY, doorbell);
1711		}
1712	
     The dma_rmb() allows us to guarantee that the device has released
     ownership before we read the data from the descriptor, and the
     dma_wmb() allows us to guarantee the data is written to the descriptor
     before the device
1716	     can see it now has ownership.  The wmb() is needed to guarantee that the
1717	     cache coherent memory writes have completed before attempting a write to
1718	     the cache incoherent MMIO region.
1719	
1720	     See Documentation/DMA-API.txt for more information on consistent memory.
1721	
1722	MMIO WRITE BARRIER
1723	------------------
1724	
1725	The Linux kernel also has a special barrier for use with memory-mapped I/O
1726	writes:
1727	
1728		mmiowb();
1729	
1730	This is a variation on the mandatory write barrier that causes writes to weakly
1731	ordered I/O regions to be partially ordered.  Its effects may go beyond the
1732	CPU->Hardware interface and actually affect the hardware at some level.
1733	
1734	See the subsection "Locks vs I/O accesses" for more information.
1735	
1736	
1737	===============================
1738	IMPLICIT KERNEL MEMORY BARRIERS
1739	===============================
1740	
Some of the other functions in the Linux kernel imply memory barriers, amongst
1742	which are locking and scheduling functions.
1743	
1744	This specification is a _minimum_ guarantee; any particular architecture may
1745	provide more substantial guarantees, but these may not be relied upon outside
1746	of arch specific code.
1747	
1748	
1749	ACQUIRING FUNCTIONS
1750	-------------------
1751	
1752	The Linux kernel has a number of locking constructs:
1753	
1754	 (*) spin locks
1755	 (*) R/W spin locks
1756	 (*) mutexes
1757	 (*) semaphores
1758	 (*) R/W semaphores
1759	 (*) RCU
1760	
1761	In all cases there are variants on "ACQUIRE" operations and "RELEASE" operations
1762	for each construct.  These operations all imply certain barriers:
1763	
1764	 (1) ACQUIRE operation implication:
1765	
1766	     Memory operations issued after the ACQUIRE will be completed after the
1767	     ACQUIRE operation has completed.
1768	
1769	     Memory operations issued before the ACQUIRE may be completed after
1770	     the ACQUIRE operation has completed.  An smp_mb__before_spinlock(),
1771	     combined with a following ACQUIRE, orders prior loads against
1772	     subsequent loads and stores and also orders prior stores against
1773	     subsequent stores.  Note that this is weaker than smp_mb()!  The
     smp_mb__before_spinlock() primitive is free on many architectures
     (see the sketch following this list).
1775	
1776	 (2) RELEASE operation implication:
1777	
1778	     Memory operations issued before the RELEASE will be completed before the
1779	     RELEASE operation has completed.
1780	
1781	     Memory operations issued after the RELEASE may be completed before the
1782	     RELEASE operation has completed.
1783	
1784	 (3) ACQUIRE vs ACQUIRE implication:
1785	
1786	     All ACQUIRE operations issued before another ACQUIRE operation will be
1787	     completed before that ACQUIRE operation.
1788	
1789	 (4) ACQUIRE vs RELEASE implication:
1790	
1791	     All ACQUIRE operations issued before a RELEASE operation will be
1792	     completed before the RELEASE operation.
1793	
1794	 (5) Failed conditional ACQUIRE implication:
1795	
1796	     Certain locking variants of the ACQUIRE operation may fail, either due to
1797	     being unable to get the lock immediately, or due to receiving an unblocked
1798	     signal whilst asleep waiting for the lock to become available.  Failed
1799	     locks do not imply any sort of barrier.
1800	
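As a sketch of the smp_mb__before_spinlock() usage mentioned in (1) above
(the lock and the variables are hypothetical):

	spinlock_t mylock;
	int x, y;

	void publish(void)
	{
		ACCESS_ONCE(x) = 1;	/* prior store... */
		smp_mb__before_spinlock();
		spin_lock(&mylock);
		ACCESS_ONCE(y) = 1;	/* ...ordered against this later store */
		spin_unlock(&mylock);
	}
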
1801	[!] Note: one of the consequences of lock ACQUIREs and RELEASEs being only
1802	one-way barriers is that the effects of instructions outside of a critical
1803	section may seep into the inside of the critical section.
1804	
An ACQUIRE followed by a RELEASE may not be assumed to be a full memory barrier
1806	because it is possible for an access preceding the ACQUIRE to happen after the
1807	ACQUIRE, and an access following the RELEASE to happen before the RELEASE, and
1808	the two accesses can themselves then cross:
1809	
1810		*A = a;
1811		ACQUIRE M
1812		RELEASE M
1813		*B = b;
1814	
1815	may occur as:
1816	
1817		ACQUIRE M, STORE *B, STORE *A, RELEASE M
1818	
1819	When the ACQUIRE and RELEASE are a lock acquisition and release,
1820	respectively, this same reordering can occur if the lock's ACQUIRE and
1821	RELEASE are to the same lock variable, but only from the perspective of
another CPU not holding that lock.  In short, an ACQUIRE followed by a
RELEASE may -not- be assumed to be a full memory barrier.
1824	
1825	Similarly, the reverse case of a RELEASE followed by an ACQUIRE does not
1826	imply a full memory barrier.  If it is necessary for a RELEASE-ACQUIRE
1827	pair to produce a full barrier, the ACQUIRE can be followed by an
1828	smp_mb__after_unlock_lock() invocation.  This will produce a full barrier
1829	if either (a) the RELEASE and the ACQUIRE are executed by the same
1830	CPU or task, or (b) the RELEASE and ACQUIRE act on the same variable.
1831	The smp_mb__after_unlock_lock() primitive is free on many architectures.
1832	Without smp_mb__after_unlock_lock(), the CPU's execution of the critical
1833	sections corresponding to the RELEASE and the ACQUIRE can cross, so that:
1834	
1835		*A = a;
1836		RELEASE M
1837		ACQUIRE N
1838		*B = b;
1839	
1840	could occur as:
1841	
1842		ACQUIRE N, STORE *B, STORE *A, RELEASE M
1843	
1844	It might appear that this reordering could introduce a deadlock.
1845	However, this cannot happen because if such a deadlock threatened,
1846	the RELEASE would simply complete, thereby avoiding the deadlock.
1847	
1848		Why does this work?
1849	
1850		One key point is that we are only talking about the CPU doing
1851		the reordering, not the compiler.  If the compiler (or, for
1852		that matter, the developer) switched the operations, deadlock
1853		-could- occur.
1854	
1855		But suppose the CPU reordered the operations.  In this case,
1856		the unlock precedes the lock in the assembly code.  The CPU
1857		simply elected to try executing the later lock operation first.
1858		If there is a deadlock, this lock operation will simply spin (or
1859		try to sleep, but more on that later).	The CPU will eventually
1860		execute the unlock operation (which preceded the lock operation
1861		in the assembly code), which will unravel the potential deadlock,
1862		allowing the lock operation to succeed.
1863	
1864		But what if the lock is a sleeplock?  In that case, the code will
1865		try to enter the scheduler, where it will eventually encounter
1866		a memory barrier, which will force the earlier unlock operation
1867		to complete, again unraveling the deadlock.  There might be
1868		a sleep-unlock race, but the locking primitive needs to resolve
1869		such races properly in any case.
1870	
1871	With smp_mb__after_unlock_lock(), the two critical sections cannot overlap.
1872	For example, with the following code, the store to *A will always be
1873	seen by other CPUs before the store to *B:
1874	
1875		*A = a;
1876		RELEASE M
1877		ACQUIRE N
1878		smp_mb__after_unlock_lock();
1879		*B = b;
1880	
1881	The operations will always occur in one of the following orders:
1882	
1883		STORE *A, RELEASE, ACQUIRE, smp_mb__after_unlock_lock(), STORE *B
1884		STORE *A, ACQUIRE, RELEASE, smp_mb__after_unlock_lock(), STORE *B
1885		ACQUIRE, STORE *A, RELEASE, smp_mb__after_unlock_lock(), STORE *B
1886	
1887	If the RELEASE and ACQUIRE were instead both operating on the same lock
1888	variable, only the first of these alternatives can occur.  In addition,
1889	the more strongly ordered systems may rule out some of the above orders.
1890	But in any case, as noted earlier, the smp_mb__after_unlock_lock()
1891	ensures that the store to *A will always be seen as happening before
1892	the store to *B.
1893	
1894	Locks and semaphores may not provide any guarantee of ordering on UP compiled
1895	systems, and so cannot be counted on in such a situation to actually achieve
1896	anything at all - especially with respect to I/O accesses - unless combined
1897	with interrupt disabling operations.
1898	
1899	See also the section on "Inter-CPU locking barrier effects".
1900	
1901	
1902	As an example, consider the following:
1903	
1904		*A = a;
1905		*B = b;
1906		ACQUIRE
1907		*C = c;
1908		*D = d;
1909		RELEASE
1910		*E = e;
1911		*F = f;
1912	
1913	The following sequence of events is acceptable:
1914	
1915		ACQUIRE, {*F,*A}, *E, {*C,*D}, *B, RELEASE
1916	
1917		[+] Note that {*F,*A} indicates a combined access.
1918	
1919	But none of the following are:
1920	
1921		{*F,*A}, *B,	ACQUIRE, *C, *D,	RELEASE, *E
1922		*A, *B, *C,	ACQUIRE, *D,		RELEASE, *E, *F
1923		*A, *B,		ACQUIRE, *C,		RELEASE, *D, *E, *F
1924		*B,		ACQUIRE, *C, *D,	RELEASE, {*F,*A}, *E
1925	
1926	
1927	
1928	INTERRUPT DISABLING FUNCTIONS
1929	-----------------------------
1930	
1931	Functions that disable interrupts (ACQUIRE equivalent) and enable interrupts
1932	(RELEASE equivalent) will act as compiler barriers only.  So if memory or I/O
barriers are required in such a situation, they must be provided by some
1934	other means.
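
For example, in the following hedged sketch (shared_a and shared_b are
hypothetical shared variables), it is the explicit smp_wmb() that orders
the two stores for other CPUs, not the interrupt disabling:

	unsigned long flags;

	local_irq_save(flags);
	ACCESS_ONCE(shared_a) = 1;
	smp_wmb();		/* irq disabling is only a compiler barrier */
	ACCESS_ONCE(shared_b) = 1;
	local_irq_restore(flags);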
1935	
1936	
1937	SLEEP AND WAKE-UP FUNCTIONS
1938	---------------------------
1939	
1940	Sleeping and waking on an event flagged in global data can be viewed as an
1941	interaction between two pieces of data: the task state of the task waiting for
1942	the event and the global data used to indicate the event.  To make sure that
1943	these appear to happen in the right order, the primitives to begin the process
1944	of going to sleep, and the primitives to initiate a wake up imply certain
1945	barriers.
1946	
1947	Firstly, the sleeper normally follows something like this sequence of events:
1948	
1949		for (;;) {
1950			set_current_state(TASK_UNINTERRUPTIBLE);
1951			if (event_indicated)
1952				break;
1953			schedule();
1954		}
1955	
1956	A general memory barrier is interpolated automatically by set_current_state()
1957	after it has altered the task state:
1958	
1959		CPU 1
1960		===============================
1961		set_current_state();
1962		  set_mb();
1963		    STORE current->state
1964		    <general barrier>
1965		LOAD event_indicated
1966	
1967	set_current_state() may be wrapped by:
1968	
1969		prepare_to_wait();
1970		prepare_to_wait_exclusive();
1971	
1972	which therefore also imply a general memory barrier after setting the state.
1973	The whole sequence above is available in various canned forms, all of which
1974	interpolate the memory barrier in the right place:
1975	
1976		wait_event();
1977		wait_event_interruptible();
1978		wait_event_interruptible_exclusive();
1979		wait_event_interruptible_timeout();
1980		wait_event_killable();
1981		wait_event_timeout();
1982		wait_on_bit();
1983		wait_on_bit_lock();
1984	
1985	
1986	Secondly, code that performs a wake up normally follows something like this:
1987	
1988		event_indicated = 1;
1989		wake_up(&event_wait_queue);
1990	
1991	or:
1992	
1993		event_indicated = 1;
1994		wake_up_process(event_daemon);
1995	
1996	A write memory barrier is implied by wake_up() and co. if and only if they wake
1997	something up.  The barrier occurs before the task state is cleared, and so sits
1998	between the STORE to indicate the event and the STORE to set TASK_RUNNING:
1999	
2000		CPU 1				CPU 2
2001		===============================	===============================
2002		set_current_state();		STORE event_indicated
2003		  set_mb();			wake_up();
2004		    STORE current->state	  <write barrier>
2005		    <general barrier>		  STORE current->state
2006		LOAD event_indicated
2007	
2008	To repeat, this write memory barrier is present if and only if something
2009	is actually awakened.  To see this, consider the following sequence of
2010	events, where X and Y are both initially zero:
2011	
2012		CPU 1				CPU 2
2013		===============================	===============================
2014		X = 1;				STORE event_indicated
2015		smp_mb();			wake_up();
2016		Y = 1;				wait_event(wq, Y == 1);
2017		wake_up();			  load from Y sees 1, no memory barrier
2018						load from X might see 0
2019	
2020	In contrast, if a wakeup does occur, CPU 2's load from X would be guaranteed
2021	to see 1.
2022	
2023	The available waker functions include:
2024	
2025		complete();
2026		wake_up();
2027		wake_up_all();
2028		wake_up_bit();
2029		wake_up_interruptible();
2030		wake_up_interruptible_all();
2031		wake_up_interruptible_nr();
2032		wake_up_interruptible_poll();
2033		wake_up_interruptible_sync();
2034		wake_up_interruptible_sync_poll();
2035		wake_up_locked();
2036		wake_up_locked_poll();
2037		wake_up_nr();
2038		wake_up_poll();
2039		wake_up_process();
2040	
2041	
2042	[!] Note that the memory barriers implied by the sleeper and the waker do _not_
2043	order multiple stores before the wake-up with respect to loads of those stored
2044	values after the sleeper has called set_current_state().  For instance, if the
2045	sleeper does:
2046	
2047		set_current_state(TASK_INTERRUPTIBLE);
2048		if (event_indicated)
2049			break;
2050		__set_current_state(TASK_RUNNING);
2051		do_something(my_data);
2052	
2053	and the waker does:
2054	
2055		my_data = value;
2056		event_indicated = 1;
2057		wake_up(&event_wait_queue);
2058	
2059	there's no guarantee that the change to event_indicated will be perceived by
2060	the sleeper as coming after the change to my_data.  In such a circumstance, the
2061	code on both sides must interpolate its own memory barriers between the
2062	separate data accesses.  Thus the above sleeper ought to do:
2063	
2064		set_current_state(TASK_INTERRUPTIBLE);
2065		if (event_indicated) {
2066			smp_rmb();
2067			do_something(my_data);
2068		}
2069	
2070	and the waker should do:
2071	
2072		my_data = value;
2073		smp_wmb();
2074		event_indicated = 1;
2075		wake_up(&event_wait_queue);
2076	
2077	
2078	MISCELLANEOUS FUNCTIONS
2079	-----------------------
2080	
2081	Other functions that imply barriers:
2082	
2083	 (*) schedule() and similar imply full memory barriers.
2084	
2085	
2086	===================================
2087	INTER-CPU ACQUIRING BARRIER EFFECTS
2088	===================================
2089	
2090	On SMP systems locking primitives give a more substantial form of barrier: one
2091	that does affect memory access ordering on other CPUs, within the context of
2092	conflict on any particular lock.
2093	
2094	
2095	ACQUIRES VS MEMORY ACCESSES
2096	---------------------------
2097	
2098	Consider the following: the system has a pair of spinlocks (M) and (Q), and
2099	three CPUs; then should the following sequence of events occur:
2100	
2101		CPU 1				CPU 2
2102		===============================	===============================
2103		ACCESS_ONCE(*A) = a;		ACCESS_ONCE(*E) = e;
2104		ACQUIRE M			ACQUIRE Q
2105		ACCESS_ONCE(*B) = b;		ACCESS_ONCE(*F) = f;
2106		ACCESS_ONCE(*C) = c;		ACCESS_ONCE(*G) = g;
2107		RELEASE M			RELEASE Q
2108		ACCESS_ONCE(*D) = d;		ACCESS_ONCE(*H) = h;
2109	
2110	Then there is no guarantee as to what order CPU 3 will see the accesses to *A
2111	through *H occur in, other than the constraints imposed by the separate locks
2112	on the separate CPUs. It might, for example, see:
2113	
2114		*E, ACQUIRE M, ACQUIRE Q, *G, *C, *F, *A, *B, RELEASE Q, *D, *H, RELEASE M
2115	
2116	But it won't see any of:
2117	
2118		*B, *C or *D preceding ACQUIRE M
2119		*A, *B or *C following RELEASE M
2120		*F, *G or *H preceding ACQUIRE Q
2121		*E, *F or *G following RELEASE Q
2122	
2123	
2124	However, if the following occurs:
2125	
2126		CPU 1				CPU 2
2127		===============================	===============================
2128		ACCESS_ONCE(*A) = a;
2129		ACQUIRE M		     [1]
2130		ACCESS_ONCE(*B) = b;
2131		ACCESS_ONCE(*C) = c;
2132		RELEASE M	     [1]
2133		ACCESS_ONCE(*D) = d;		ACCESS_ONCE(*E) = e;
2134						ACQUIRE M		     [2]
2135						smp_mb__after_unlock_lock();
2136						ACCESS_ONCE(*F) = f;
2137						ACCESS_ONCE(*G) = g;
2138						RELEASE M	     [2]
2139						ACCESS_ONCE(*H) = h;
2140	
2141	CPU 3 might see:
2142	
2143		*E, ACQUIRE M [1], *C, *B, *A, RELEASE M [1],
2144			ACQUIRE M [2], *H, *F, *G, RELEASE M [2], *D
2145	
2146	But assuming CPU 1 gets the lock first, CPU 3 won't see any of:
2147	
2148		*B, *C, *D, *F, *G or *H preceding ACQUIRE M [1]
2149		*A, *B or *C following RELEASE M [1]
2150		*F, *G or *H preceding ACQUIRE M [2]
2151		*A, *B, *C, *E, *F or *G following RELEASE M [2]
2152	
2153	Note that the smp_mb__after_unlock_lock() is critically important
2154	here: Without it CPU 3 might see some of the above orderings.
2155	Without smp_mb__after_unlock_lock(), the accesses are not guaranteed
2156	to be seen in order unless CPU 3 holds lock M.
2157	
2158	
2159	ACQUIRES VS I/O ACCESSES
2160	------------------------
2161	
2162	Under certain circumstances (especially involving NUMA), I/O accesses within
2163	two spinlocked sections on two different CPUs may be seen as interleaved by the
2164	PCI bridge, because the PCI bridge does not necessarily participate in the
2165	cache-coherence protocol, and is therefore incapable of issuing the required
2166	read memory barriers.
2167	
2168	For example:
2169	
2170		CPU 1				CPU 2
2171		===============================	===============================
	spin_lock(Q);
	writel(0, ADDR);
2174		writel(1, DATA);
2175		spin_unlock(Q);
2176						spin_lock(Q);
2177						writel(4, ADDR);
2178						writel(5, DATA);
2179						spin_unlock(Q);
2180	
2181	may be seen by the PCI bridge as follows:
2182	
2183		STORE *ADDR = 0, STORE *ADDR = 4, STORE *DATA = 1, STORE *DATA = 5
2184	
2185	which would probably cause the hardware to malfunction.
2186	
2187	
2188	What is necessary here is to intervene with an mmiowb() before dropping the
2189	spinlock, for example:
2190	
2191		CPU 1				CPU 2
2192		===============================	===============================
	spin_lock(Q);
	writel(0, ADDR);
2195		writel(1, DATA);
2196		mmiowb();
2197		spin_unlock(Q);
2198						spin_lock(Q);
2199						writel(4, ADDR);
2200						writel(5, DATA);
2201						mmiowb();
2202						spin_unlock(Q);
2203	
This will ensure that the two stores issued on CPU 1 appear at the PCI bridge
2205	before either of the stores issued on CPU 2.
2206	
2207	
2208	Furthermore, following a store by a load from the same device obviates the need
2209	for the mmiowb(), because the load forces the store to complete before the load
2210	is performed:
2211	
2212		CPU 1				CPU 2
2213		===============================	===============================
	spin_lock(Q);
	writel(0, ADDR);
2216		a = readl(DATA);
2217		spin_unlock(Q);
2218						spin_lock(Q);
2219						writel(4, ADDR);
2220						b = readl(DATA);
2221						spin_unlock(Q);
2222	
2223	
2224	See Documentation/DocBook/deviceiobook.tmpl for more information.
2225	
2226	
2227	=================================
2228	WHERE ARE MEMORY BARRIERS NEEDED?
2229	=================================
2230	
2231	Under normal operation, memory operation reordering is generally not going to
2232	be a problem as a single-threaded linear piece of code will still appear to
2233	work correctly, even if it's in an SMP kernel.  There are, however, four
2234	circumstances in which reordering definitely _could_ be a problem:
2235	
2236	 (*) Interprocessor interaction.
2237	
2238	 (*) Atomic operations.
2239	
2240	 (*) Accessing devices.
2241	
2242	 (*) Interrupts.
2243	
2244	
2245	INTERPROCESSOR INTERACTION
2246	--------------------------
2247	
2248	When there's a system with more than one processor, more than one CPU in the
2249	system may be working on the same data set at the same time.  This can cause
2250	synchronisation problems, and the usual way of dealing with them is to use
2251	locks.  Locks, however, are quite expensive, and so it may be preferable to
2252	operate without the use of a lock if at all possible.  In such a case
2253	operations that affect both CPUs may have to be carefully ordered to prevent
2254	a malfunction.
2255	
2256	Consider, for example, the R/W semaphore slow path.  Here a waiting process is
2257	queued on the semaphore, by virtue of it having a piece of its stack linked to
2258	the semaphore's list of waiting processes:
2259	
2260		struct rw_semaphore {
2261			...
2262			spinlock_t lock;
2263			struct list_head waiters;
2264		};
2265	
2266		struct rwsem_waiter {
2267			struct list_head list;
2268			struct task_struct *task;
2269		};
2270	
2271	To wake up a particular waiter, the up_read() or up_write() functions have to:
2272	
 (1) read the next pointer from this waiter's record to know where the
     next waiter record is;
2275	
2276	 (2) read the pointer to the waiter's task structure;
2277	
2278	 (3) clear the task pointer to tell the waiter it has been given the semaphore;
2279	
2280	 (4) call wake_up_process() on the task; and
2281	
2282	 (5) release the reference held on the waiter's task struct.
2283	
2284	In other words, it has to perform this sequence of events:
2285	
2286		LOAD waiter->list.next;
2287		LOAD waiter->task;
2288		STORE waiter->task;
2289		CALL wakeup
2290		RELEASE task
2291	
2292	and if any of these steps occur out of order, then the whole thing may
2293	malfunction.
2294	
2295	Once it has queued itself and dropped the semaphore lock, the waiter does not
2296	get the lock again; it instead just waits for its task pointer to be cleared
2297	before proceeding.  Since the record is on the waiter's stack, this means that
2298	if the task pointer is cleared _before_ the next pointer in the list is read,
2299	another CPU might start processing the waiter and might clobber the waiter's
2300	stack before the up*() function has a chance to read the next pointer.
2301	
2302	Consider then what might happen to the above sequence of events:
2303	
2304		CPU 1				CPU 2
2305		===============================	===============================
2306						down_xxx()
2307						Queue waiter
2308						Sleep
2309		up_yyy()
2310		LOAD waiter->task;
2311		STORE waiter->task;
2312						Woken up by other event
2313		<preempt>
2314						Resume processing
2315						down_xxx() returns
2316						call foo()
2317						foo() clobbers *waiter
2318		</preempt>
2319		LOAD waiter->list.next;
2320		--- OOPS ---
2321	
2322	This could be dealt with using the semaphore lock, but then the down_xxx()
2323	function has to needlessly get the spinlock again after being woken up.
2324	
2325	The way to deal with this is to insert a general SMP memory barrier:
2326	
2327		LOAD waiter->list.next;
2328		LOAD waiter->task;
2329		smp_mb();
2330		STORE waiter->task;
2331		CALL wakeup
2332		RELEASE task
2333	
2334	In this case, the barrier makes a guarantee that all memory accesses before the
2335	barrier will appear to happen before all the memory accesses after the barrier
2336	with respect to the other CPUs on the system.  It does _not_ guarantee that all
2337	the memory accesses before the barrier will be complete by the time the barrier
2338	instruction itself is complete.
2339	
2340	On a UP system - where this wouldn't be a problem - the smp_mb() is just a
2341	compiler barrier, thus making sure the compiler emits the instructions in the
2342	right order without actually intervening in the CPU.  Since there's only one
2343	CPU, that CPU's dependency ordering logic will take care of everything else.
2344	
2345	
2346	ATOMIC OPERATIONS
2347	-----------------
2348	
2349	Whilst they are technically interprocessor interaction considerations, atomic
2350	operations are noted specially as some of them imply full memory barriers and
2351	some don't, but they're very heavily relied on as a group throughout the
2352	kernel.
2353	
2354	Any atomic operation that modifies some state in memory and returns information
2355	about the state (old or new) implies an SMP-conditional general memory barrier
2356	(smp_mb()) on each side of the actual operation (with the exception of
2357	explicit lock operations, described later).  These include:
2358	
2359		xchg();
2360		cmpxchg();
2361		atomic_xchg();			atomic_long_xchg();
2362		atomic_cmpxchg();		atomic_long_cmpxchg();
2363		atomic_inc_return();		atomic_long_inc_return();
2364		atomic_dec_return();		atomic_long_dec_return();
2365		atomic_add_return();		atomic_long_add_return();
2366		atomic_sub_return();		atomic_long_sub_return();
2367		atomic_inc_and_test();		atomic_long_inc_and_test();
2368		atomic_dec_and_test();		atomic_long_dec_and_test();
2369		atomic_sub_and_test();		atomic_long_sub_and_test();
2370		atomic_add_negative();		atomic_long_add_negative();
2371		test_and_set_bit();
2372		test_and_clear_bit();
2373		test_and_change_bit();
2374	
2375		/* when succeeds (returns 1) */
2376		atomic_add_unless();		atomic_long_add_unless();
2377	
2378	These are used for such things as implementing ACQUIRE-class and RELEASE-class
2379	operations and adjusting reference counters towards object destruction, and as
2380	such the implicit memory barrier effects are necessary.
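
For example, the implied barriers are what make the classic
reference-release pattern safe (a sketch; 'obj' and its fields are
hypothetical):

	obj->dead = 1;			/* ordered before the decrement */
	if (atomic_dec_and_test(&obj->ref_count))
		kfree(obj);		/* last reference gone: safe to free */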
2381	
2382	
2383	The following operations are potential problems as they do _not_ imply memory
2384	barriers, but might be used for implementing such things as RELEASE-class
2385	operations:
2386	
2387		atomic_set();
2388		set_bit();
2389		clear_bit();
2390		change_bit();
2391	
2392	With these the appropriate explicit memory barrier should be used if necessary
2393	(smp_mb__before_atomic() for instance).
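
For example, a RELEASE-class "unlock" built from clear_bit() needs the
explicit barrier (a hedged sketch; 'obj', its fields, calculate() and the
IN_USE bit are hypothetical):

	obj->result = calculate(obj);	/* critical-section stores... */
	smp_mb__before_atomic();
	clear_bit(IN_USE, &obj->flags);	/* ...visible before the release */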
2394	
2395	
2396	The following also do _not_ imply memory barriers, and so may require explicit
2397	memory barriers under some circumstances (smp_mb__before_atomic() for
2398	instance):
2399	
2400		atomic_add();
2401		atomic_sub();
2402		atomic_inc();
2403		atomic_dec();
2404	
2405	If they're used for statistics generation, then they probably don't need memory
2406	barriers, unless there's a coupling between statistical data.
2407	
2408	If they're used for reference counting on an object to control its lifetime,
2409	they probably don't need memory barriers because either the reference count
2410	will be adjusted inside a locked section, or the caller will already hold
sufficient references to make the lock, and thus a memory barrier, unnecessary.
2412	
2413	If they're used for constructing a lock of some description, then they probably
2414	do need memory barriers as a lock primitive generally has to do things in a
2415	specific order.
2416	
2417	Basically, each usage case has to be carefully considered as to whether memory
2418	barriers are needed or not.
2419	
2420	The following operations are special locking primitives:
2421	
2422		test_and_set_bit_lock();
2423		clear_bit_unlock();
2424		__clear_bit_unlock();
2425	
These implement ACQUIRE-class and RELEASE-class operations.  These should be
used in preference to other operations when implementing locking primitives,
because their implementations can be optimised on many architectures.
2429	
2430	[!] Note that special memory barrier primitives are available for these
2431	situations because on some CPUs the atomic instructions used imply full memory
2432	barriers, and so barrier instructions are superfluous in conjunction with them,
2433	and in such cases the special barrier primitives will be no-ops.
2434	
2435	See Documentation/atomic_ops.txt for more information.
2436	
2437	
2438	ACCESSING DEVICES
2439	-----------------
2440	
2441	Many devices can be memory mapped, and so appear to the CPU as if they're just
2442	a set of memory locations.  To control such a device, the driver usually has to
2443	make the right memory accesses in exactly the right order.
2444	
2445	However, having a clever CPU or a clever compiler creates a potential problem
2446	in that the carefully sequenced accesses in the driver code won't reach the
2447	device in the requisite order if the CPU or the compiler thinks it is more
2448	efficient to reorder, combine or merge accesses - something that would cause
2449	the device to malfunction.
2450	
2451	Inside of the Linux kernel, I/O should be done through the appropriate accessor
2452	routines - such as inb() or writel() - which know how to make such accesses
2453	appropriately sequential.  Whilst this, for the most part, renders the explicit
2454	use of memory barriers unnecessary, there are a couple of situations where they
2455	might be needed:
2456	
2457	 (1) On some systems, I/O stores are not strongly ordered across all CPUs, and
2458	     so for _all_ general drivers locks should be used and mmiowb() must be
2459	     issued prior to unlocking the critical section.
2460	
2461	 (2) If the accessor functions are used to refer to an I/O memory window with
2462	     relaxed memory access properties, then _mandatory_ memory barriers are
2463	     required to enforce ordering.
2464	
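For the second case, a hedged sketch using the relaxed accessors (the
register names and offsets are hypothetical):

	writel_relaxed(RESET, base + REG_CTRL);
	wmb();		/* mandatory barrier: keep the MMIO writes ordered */
	writel_relaxed(GO, base + REG_CTRL2);
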
2465	See Documentation/DocBook/deviceiobook.tmpl for more information.
2466	
2467	
2468	INTERRUPTS
2469	----------
2470	
2471	A driver may be interrupted by its own interrupt service routine, and thus the
2472	two parts of the driver may interfere with each other's attempts to control or
2473	access the device.
2474	
2475	This may be alleviated - at least in part - by disabling local interrupts (a
2476	form of locking), such that the critical operations are all contained within
2477	the interrupt-disabled section in the driver.  Whilst the driver's interrupt
2478	routine is executing, the driver's core may not run on the same CPU, and its
2479	interrupt is not permitted to happen again until the current interrupt has been
2480	handled, thus the interrupt handler does not need to lock against that.
2481	
2482	However, consider a driver that was talking to an ethernet card that sports an
2483	address register and a data register.  If that driver's core talks to the card
2484	under interrupt-disablement and then the driver's interrupt handler is invoked:
2485	
2486		LOCAL IRQ DISABLE
2487		writew(ADDR, 3);
2488		writew(DATA, y);
2489		LOCAL IRQ ENABLE
2490		<interrupt>
2491		writew(ADDR, 4);
2492		q = readw(DATA);
2493		</interrupt>
2494	
2495	The store to the data register might happen after the second store to the
2496	address register if ordering rules are sufficiently relaxed:
2497	
2498		STORE *ADDR = 3, STORE *ADDR = 4, STORE *DATA = y, q = LOAD *DATA
2499	
2500	
2501	If ordering rules are relaxed, it must be assumed that accesses done inside an
2502	interrupt disabled section may leak outside of it and may interleave with
2503	accesses performed in an interrupt - and vice versa - unless implicit or
2504	explicit barriers are used.
2505	
2506	Normally this won't be a problem because the I/O accesses done inside such
2507	sections will include synchronous load operations on strictly ordered I/O
2508	registers that form implicit I/O barriers. If this isn't sufficient then an
2509	mmiowb() may need to be used explicitly.
2510	
2511	
2512	A similar situation may occur between an interrupt routine and two routines
2513	running on separate CPUs that communicate with each other. If such a case is
2514	likely, then interrupt-disabling locks should be used to guarantee ordering.
2515	
2516	
2517	==========================
2518	KERNEL I/O BARRIER EFFECTS
2519	==========================
2520	
2521	When accessing I/O memory, drivers should use the appropriate accessor
2522	functions:
2523	
2524	 (*) inX(), outX():
2525	
2526	     These are intended to talk to I/O space rather than memory space, but
2527	     that's primarily a CPU-specific concept. The i386 and x86_64 processors do
2528	     indeed have special I/O space access cycles and instructions, but many
2529	     CPUs don't have such a concept.
2530	
2531	     The PCI bus, amongst others, defines an I/O space concept which - on such
2532	     CPUs as i386 and x86_64 - readily maps to the CPU's concept of I/O
2533	     space.  However, it may also be mapped as a virtual I/O space in the CPU's
2534	     memory map, particularly on those CPUs that don't support alternate I/O
2535	     spaces.
2536	
2537	     Accesses to this space may be fully synchronous (as on i386), but
2538	     intermediary bridges (such as the PCI host bridge) may not fully honour
2539	     that.
2540	
2541	     They are guaranteed to be fully ordered with respect to each other.
2542	
2543	     They are not guaranteed to be fully ordered with respect to other types of
2544	     memory and I/O operation.
2545	
2546	 (*) readX(), writeX():
2547	
2548	     Whether these are guaranteed to be fully ordered and uncombined with
2549	     respect to each other on the issuing CPU depends on the characteristics
2550	     defined for the memory window through which they're accessing. On later
2551	     i386 architecture machines, for example, this is controlled by way of the
2552	     MTRR registers.
2553	
2554	     Ordinarily, these will be guaranteed to be fully ordered and uncombined,
2555	     provided they're not accessing a prefetchable device.
2556	
2557	     However, intermediary hardware (such as a PCI bridge) may indulge in
2558	     deferral if it so wishes; to flush a store, a load from the same location
     is preferred[*], but a load from the same device or from configuration
     space should suffice for PCI (see the sketch following this list).
2561	
2562	     [*] NOTE! attempting to load from the same location as was written to may
2563		 cause a malfunction - consider the 16550 Rx/Tx serial registers for
2564		 example.
2565	
2566	     Used with prefetchable I/O memory, an mmiowb() barrier may be required to
2567	     force stores to be ordered.
2568	
2569	     Please refer to the PCI specification for more information on interactions
2570	     between PCI transactions.
2571	
2572	 (*) readX_relaxed(), writeX_relaxed()
2573	
2574	     These are similar to readX() and writeX(), but provide weaker memory
2575	     ordering guarantees. Specifically, they do not guarantee ordering with
2576	     respect to normal memory accesses (e.g. DMA buffers) nor do they guarantee
2577	     ordering with respect to LOCK or UNLOCK operations. If the latter is
2578	     required, an mmiowb() barrier can be used. Note that relaxed accesses to
2579	     the same peripheral are guaranteed to be ordered with respect to each
2580	     other.
2581	
2582	 (*) ioreadX(), iowriteX()
2583	
2584	     These will perform appropriately for the type of access they're actually
2585	     doing, be it inX()/outX() or readX()/writeX().
2586	
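As a hedged illustration of the posted-write flush mentioned under
readX()/writeX() above (the device registers are hypothetical):

	writel(val, dev->regs + REG_CTRL);
	(void)readl(dev->regs + REG_STATUS);	/* a read from the same
						 * device flushes the
						 * posted write */
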
2587	
2588	========================================
2589	ASSUMED MINIMUM EXECUTION ORDERING MODEL
2590	========================================
2591	
2592	It has to be assumed that the conceptual CPU is weakly-ordered but that it will
2593	maintain the appearance of program causality with respect to itself.  Some CPUs
2594	(such as i386 or x86_64) are more constrained than others (such as powerpc or
2595	frv), and so the most relaxed case (namely DEC Alpha) must be assumed outside
2596	of arch-specific code.
2597	
2598	This means that it must be considered that the CPU will execute its instruction
2599	stream in any order it feels like - or even in parallel - provided that if an
2600	instruction in the stream depends on an earlier instruction, then that
2601	earlier instruction must be sufficiently complete[*] before the later
2602	instruction may proceed; in other words: provided that the appearance of
2603	causality is maintained.
2604	
2605	 [*] Some instructions have more than one effect - such as changing the
2606	     condition codes, changing registers or changing memory - and different
2607	     instructions may depend on different effects.
2608	
2609	A CPU may also discard any instruction sequence that winds up having no
2610	ultimate effect.  For example, if two adjacent instructions both load an
2611	immediate value into the same register, the first may be discarded.
2612	
2613	
Similarly, it has to be assumed that the compiler might reorder the
instruction stream in any way it sees fit, again provided the appearance of
causality is maintained.
2617	
2618	
2619	============================
2620	THE EFFECTS OF THE CPU CACHE
2621	============================
2622	
2623	The way cached memory operations are perceived across the system is affected to
2624	a certain extent by the caches that lie between CPUs and memory, and by the
2625	memory coherence system that maintains the consistency of state in the system.
2626	
2627	As far as the way a CPU interacts with another part of the system through the
2628	caches goes, the memory system has to include the CPU's caches, and memory
2629	barriers for the most part act at the interface between the CPU and its cache
2630	(memory barriers logically act on the dotted line in the following diagram):
2631	
2632		    <--- CPU --->         :       <----------- Memory ----------->
2633		                          :
2634		+--------+    +--------+  :   +--------+    +-----------+
2635		|        |    |        |  :   |        |    |           |    +--------+
2636		|  CPU   |    | Memory |  :   | CPU    |    |           |    |        |
2637		|  Core  |--->| Access |----->| Cache  |<-->|           |    |        |
2638		|        |    | Queue  |  :   |        |    |           |--->| Memory |
2639		|        |    |        |  :   |        |    |           |    |        |
2640		+--------+    +--------+  :   +--------+    |           |    |        |
2641		                          :                 | Cache     |    +--------+
2642		                          :                 | Coherency |
2643		                          :                 | Mechanism |    +--------+
	+--------+    +--------+  :   +--------+    |           |    |        |
2645		|        |    |        |  :   |        |    |           |    |        |
2646		|  CPU   |    | Memory |  :   | CPU    |    |           |--->| Device |
2647		|  Core  |--->| Access |----->| Cache  |<-->|           |    |        |
2648		|        |    | Queue  |  :   |        |    |           |    |        |
2649		|        |    |        |  :   |        |    |           |    +--------+
2650		+--------+    +--------+  :   +--------+    +-----------+
2651		                          :
2652		                          :
2653	
2654	Although any particular load or store may not actually appear outside of the
2655	CPU that issued it since it may have been satisfied within the CPU's own cache,
2656	it will still appear as if the full memory access had taken place as far as the
2657	other CPUs are concerned since the cache coherency mechanisms will migrate the
2658	cacheline over to the accessing CPU and propagate the effects upon conflict.
2659	
2660	The CPU core may execute instructions in any order it deems fit, provided the
2661	expected program causality appears to be maintained.  Some of the instructions
2662	generate load and store operations which then go into the queue of memory
2663	accesses to be performed.  The core may place these in the queue in any order
2664	it wishes, and continue execution until it is forced to wait for an instruction
2665	to complete.
2666	
2667	What memory barriers are concerned with is controlling the order in which
2668	accesses cross from the CPU side of things to the memory side of things, and
2669	the order in which the effects are perceived to happen by the other observers
2670	in the system.
2671	
2672	[!] Memory barriers are _not_ needed within a given CPU, as CPUs always see
2673	their own loads and stores as if they had happened in program order.
2674	
2675	[!] MMIO or other device accesses may bypass the cache system.  This depends on
2676	the properties of the memory window through which devices are accessed and/or
2677	the use of any special device communication instructions the CPU may have.
2678	
2679	
2680	CACHE COHERENCY
2681	---------------
2682	
2683	Life isn't quite as simple as it may appear above, however: for while the
2684	caches are expected to be coherent, there's no guarantee that that coherency
2685	will be ordered.  This means that whilst changes made on one CPU will
2686	eventually become visible on all CPUs, there's no guarantee that they will
2687	become apparent in the same order on those other CPUs.
2688	
2689	
2690	Consider dealing with a system that has a pair of CPUs (1 & 2), each of which
2691	has a pair of parallel data caches (CPU 1 has A/B, and CPU 2 has C/D):
2692	
2693		            :
2694		            :                          +--------+
2695		            :      +---------+         |        |
2696		+--------+  : +--->| Cache A |<------->|        |
2697		|        |  : |    +---------+         |        |
2698		|  CPU 1 |<---+                        |        |
2699		|        |  : |    +---------+         |        |
2700		+--------+  : +--->| Cache B |<------->|        |
2701		            :      +---------+         |        |
2702		            :                          | Memory |
2703		            :      +---------+         | System |
2704		+--------+  : +--->| Cache C |<------->|        |
2705		|        |  : |    +---------+         |        |
2706		|  CPU 2 |<---+                        |        |
2707		|        |  : |    +---------+         |        |
2708		+--------+  : +--->| Cache D |<------->|        |
2709		            :      +---------+         |        |
2710		            :                          +--------+
2711		            :
2712	
2713	Imagine the system has the following properties:
2714	
2715	 (*) an odd-numbered cache line may be in cache A, cache C or it may still be
2716	     resident in memory;
2717	
2718	 (*) an even-numbered cache line may be in cache B, cache D or it may still be
2719	     resident in memory;
2720	
2721	 (*) whilst the CPU core is interrogating one cache, the other cache may be
2722	     making use of the bus to access the rest of the system - perhaps to
2723	     displace a dirty cacheline or to do a speculative load;
2724	
2725	 (*) each cache has a queue of operations that need to be applied to that cache
2726	     to maintain coherency with the rest of the system;
2727	
2728	 (*) the coherency queue is not flushed by normal loads to lines already
2729	     present in the cache, even though the contents of the queue may
2730	     potentially affect those loads.
2731	
2732	Imagine, then, that two writes are made on the first CPU, with a write barrier
2733	between them to guarantee that they will appear to reach that CPU's caches in
2734	the requisite order:
2735	
2736		CPU 1		CPU 2		COMMENT
2737		===============	===============	=======================================
2738						u == 0, v == 1 and p == &u, q == &u
2739		v = 2;
2740		smp_wmb();			Make sure change to v is visible before
2741						 change to p
2742		<A:modify v=2>			v is now in cache A exclusively
2743		p = &v;
2744		<B:modify p=&v>			p is now in cache B exclusively
2745	
2746	The write memory barrier ensures that the other CPUs in the system perceive the
2747	local CPU's cache updates as having happened in the correct order.  But
2748	now imagine that the second CPU wants to read those values:
2749	
2750		CPU 1		CPU 2		COMMENT
2751		===============	===============	=======================================
2752		...
2753				q = p;
2754				x = *q;
2755	
2756	The above pair of reads may then fail to happen in the expected order, as the
2757	cacheline holding p may get updated in one of the second CPU's caches whilst
2758	the update to the cacheline holding v is delayed in the other of the second
2759	CPU's caches by some other cache event:
2760	
2761		CPU 1		CPU 2		COMMENT
2762		===============	===============	=======================================
2763						u == 0, v == 1 and p == &u, q == &u
2764		v = 2;
2765		smp_wmb();
2766		<A:modify v=2>	<C:busy>
2767				<C:queue v=2>
2768		p = &v;		q = p;
2769				<D:request p>
2770		<B:modify p=&v>	<D:commit p=&v>
2771				<D:read p>
2772				x = *q;
2773				<C:read *q>	Reads from v before v updated in cache
2774				<C:unbusy>
2775				<C:commit v=2>
2776	
2777	Basically, whilst both cachelines will be updated on CPU 2 eventually, there's
2778	no guarantee that, without intervention, the order of update will be the same
2779	as that committed on CPU 1.
2780	
2781	
2782	To intervene, we need to interpolate a data dependency barrier or a read
2783	barrier between the loads.  This will force the cache to commit its coherency
2784	queue before processing any further requests:
2785	
2786		CPU 1		CPU 2		COMMENT
2787		===============	===============	=======================================
2788						u == 0, v == 1 and p == &u, q == &u
2789		v = 2;
2790		smp_wmb();
2791		<A:modify v=2>	<C:busy>
2792				<C:queue v=2>
2793		p = &v;		q = p;
2794				<D:request p>
2795		<B:modify p=&v>	<D:commit p=&v>
2796				<D:read p>
2797				smp_read_barrier_depends()
2798				<C:unbusy>
2799				<C:commit v=2>
2800				x = *q;
2801				<C:read *q>	Reads from v after v updated in cache
2802	
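Expressed as kernel C, the sequence above is the usual pointer-publication
pattern.  The following is a minimal sketch, assuming hypothetical globals u,
v and p matching the tables above:

	int u = 0;
	int v = 1;
	int *p = &u;

	void cpu1(void)				/* the producer */
	{
		v = 2;
		smp_wmb();		/* commit v before publishing p */
		p = &v;
	}

	void cpu2(void)				/* the consumer */
	{
		int *q, x;

		q = ACCESS_ONCE(p);
		smp_read_barrier_depends();	/* commit the coherency queue */
		x = *q;			/* sees v == 2 whenever q == &v */
	}

On most architectures smp_read_barrier_depends() compiles to nothing; on the
Alpha it expands to a full memory barrier, which is what drains the coherency
queue shown above.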
2803	
2804	This sort of problem can be encountered on DEC Alpha processors as they have a
2805	split cache that improves performance by making better use of the data bus.
2806	Whilst most CPUs do imply a data dependency barrier on a load when a memory
2807	access depends on that load, not all do, so this may not be relied upon.
2808	
2809	Other CPUs may also have split caches, but must coordinate between the various
2810	cachelets for normal memory accesses.  The semantics of the Alpha remove the
2811	need for such coordination in the absence of memory barriers.
2812	
2813	
2814	CACHE COHERENCY VS DMA
2815	----------------------
2816	
2817	Not all systems maintain cache coherency with respect to devices doing DMA.  In
2818	such cases, a device attempting DMA may obtain stale data from RAM because
2819	dirty cache lines may be resident in the caches of various CPUs, and may not
2820	have been written back to RAM yet.  To deal with this, the appropriate part of
2821	the kernel must flush the overlapping bits of cache on each CPU (and maybe
2822	invalidate them as well).
2823	
2824	In addition, the data DMA'd to RAM by a device may be overwritten by dirty
2825	cache lines being written back to RAM from a CPU's cache after the device has
2826	installed its own data, or cache lines present in the CPU's cache may simply
2827	obscure the fact that RAM has been updated, until such time as the cacheline
2828	is discarded from the CPU's cache and reloaded.  To deal with this, the
2829	appropriate part of the kernel must invalidate the overlapping bits of the
2830	cache on each CPU.
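In practice a driver rarely flushes or invalidates the cache by hand; the
streaming DMA mapping API performs whichever operation the architecture
requires.  A minimal sketch, in which dev, buf and len are hypothetical:

	static int start_tx(struct device *dev, void *buf, size_t len)
	{
		dma_addr_t handle;

		/* For DMA_TO_DEVICE, the map operation writes back any
		 * dirty cachelines covering buf on non-coherent
		 * architectures, so the device won't read stale RAM. */
		handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
		if (dma_mapping_error(dev, handle))
			return -ENOMEM;

		/* ... hand 'handle' to the device and run the transfer ... */

		dma_unmap_single(dev, handle, len, DMA_TO_DEVICE);
		return 0;
	}

A DMA_FROM_DEVICE mapping instead invalidates the overlapping cachelines so
that stale cached data cannot obscure what the device wrote; see
Documentation/DMA-API.txt.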
2831	
2832	See Documentation/cachetlb.txt for more information on cache management.
2833	
2834	
2835	CACHE COHERENCY VS MMIO
2836	-----------------------
2837	
2838	Memory mapped I/O usually takes place through memory locations that are part of
2839	a window in the CPU's memory space that has different properties assigned from
2840	those of the usual RAM-directed window.
2841	
2842	Amongst these properties is usually the fact that such accesses bypass the
2843	caching entirely and go directly to the device buses.  This means MMIO accesses
2844	may, in effect, overtake accesses to cached memory that were emitted earlier.
2845	A memory barrier isn't sufficient in such a case, but rather the cache must be
2846	flushed between the cached memory write and the MMIO access if the two are in
2847	any way dependent.
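For example, a driver that builds a descriptor in streaming-mapped RAM and
then kicks the device with an MMIO doorbell must make sure the descriptor has
left the CPU caches first.  A minimal sketch in which the descriptor layout
and the register names are hypothetical:

	static void kick_device(struct device *dev, struct my_desc *desc,
				dma_addr_t desc_dma, void __iomem *ioaddr)
	{
		desc->status = DESC_READY;	/* ordinary cached store */

		/* Flush the descriptor out of the CPU caches... */
		dma_sync_single_for_device(dev, desc_dma, sizeof(*desc),
					   DMA_TO_DEVICE);

		/* ...before the doorbell write, which bypasses them. */
		writel(DOORBELL_GO, ioaddr + DOORBELL_REG);
	}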
2848	
2849	
2850	=========================
2851	THE THINGS CPUS GET UP TO
2852	=========================
2853	
2854	A programmer might take it for granted that the CPU will perform memory
2855	operations in exactly the order specified, so that if the CPU is, for example,
2856	given the following piece of code to execute:
2857	
2858		a = ACCESS_ONCE(*A);
2859		ACCESS_ONCE(*B) = b;
2860		c = ACCESS_ONCE(*C);
2861		d = ACCESS_ONCE(*D);
2862		ACCESS_ONCE(*E) = e;
2863	
2864	they would then expect that the CPU will complete the memory operation for each
2865	instruction before moving on to the next one, leading to a definite sequence of
2866	operations as seen by external observers in the system:
2867	
2868		LOAD *A, STORE *B, LOAD *C, LOAD *D, STORE *E.
2869	
2870	
2871	Reality is, of course, much messier.  With many CPUs and compilers, the above
2872	assumption doesn't hold because:
2873	
2874	 (*) loads are more likely to need to be completed immediately to permit
2875	     execution progress, whereas stores can often be deferred without a
2876	     problem;
2877	
2878	 (*) loads may be done speculatively, and the result discarded should it prove
2879	     to have been unnecessary;
2880	
2881	 (*) loads may be done speculatively, leading to the result having been fetched
2882	     at the wrong time in the expected sequence of events;
2883	
2884	 (*) the order of the memory accesses may be rearranged to promote better use
2885	     of the CPU buses and caches;
2886	
2887	 (*) loads and stores may be combined to improve performance when talking to
2888	     memory or I/O hardware that can do batched accesses of adjacent locations,
2889	     thus cutting down on transaction setup costs (memory and PCI devices may
2890	     both be able to do this); and
2891	
2892	 (*) the CPU's data cache may affect the ordering, and whilst cache-coherency
2893	     mechanisms may alleviate this - once the store has actually hit the cache
2894	     - there's no guarantee that the coherency management will be propagated in
2895	     order to other CPUs.
2896	
2897	So what another CPU, say, might actually observe from the above piece of code
2898	is:
2899	
2900		LOAD *A, ..., LOAD {*C,*D}, STORE *E, STORE *B
2901	
2902		(Where "LOAD {*C,*D}" is a combined load)
2903	
2904	
2905	However, it is guaranteed that a CPU will be self-consistent: it will see its
2906	_own_ accesses appear to be correctly ordered, without the need for a memory
2907	barrier.  For instance with the following code:
2908	
2909		U = ACCESS_ONCE(*A);
2910		ACCESS_ONCE(*A) = V;
2911		ACCESS_ONCE(*A) = W;
2912		X = ACCESS_ONCE(*A);
2913		ACCESS_ONCE(*A) = Y;
2914		Z = ACCESS_ONCE(*A);
2915	
2916	and assuming no intervention by an external influence, the final result will
2917	appear to be:
2918	
2919		U == the original value of *A
2920		X == W
2921		Z == Y
2922		*A == Y
2923	
2924	The code above may cause the CPU to generate the full sequence of memory
2925	accesses:
2926	
2927		U=LOAD *A, STORE *A=V, STORE *A=W, X=LOAD *A, STORE *A=Y, Z=LOAD *A
2928	
2929	in that order, but, without intervention, the sequence may have almost any
2930	combination of elements combined or discarded, provided the program's view of
2931	the world remains consistent.  Note that ACCESS_ONCE() is -not- optional
2932	in the above example, as there are architectures where a given CPU might
2933	reorder successive loads to the same location.  On such architectures,
2934	ACCESS_ONCE() does whatever is necessary to prevent this, for example, on
2935	Itanium the volatile casts used by ACCESS_ONCE() cause GCC to emit the
2936	special ld.acq and st.rel instructions that prevent such reordering.
2937	
2938	The compiler may also combine, discard or defer elements of the sequence before
2939	the CPU even sees them.
2940	
2941	For instance:
2942	
2943		*A = V;
2944		*A = W;
2945	
2946	may be reduced to:
2947	
2948		*A = W;
2949	
2950	since, without either a write barrier or an ACCESS_ONCE(), the compiler may
2951	assume that the effect of storing V to *A is lost.  Similarly:
2952	
2953		*A = Y;
2954		Z = *A;
2955	
2956	may, without a memory barrier or an ACCESS_ONCE(), be reduced to:
2957	
2958		*A = Y;
2959		Z = Y;
2960	
2961	in which case the LOAD operation never appears outside of the CPU.
2962	
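Where both stores, or the actual load, must survive, ACCESS_ONCE() is enough
to stop the compiler from merging or eliding the accesses:

	/* Volatile accesses may not be merged: both stores are emitted. */
	ACCESS_ONCE(*A) = V;
	ACCESS_ONCE(*A) = W;

	/* A real LOAD from *A is emitted rather than reusing Y. */
	ACCESS_ONCE(*A) = Y;
	Z = ACCESS_ONCE(*A);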
2963	
2964	AND THEN THERE'S THE ALPHA
2965	--------------------------
2966	
2967	The DEC Alpha CPU is one of the most relaxed CPUs there is.  Not only that,
2968	some versions of the Alpha CPU have a split data cache, permitting them to have
2969	two semantically-related cache lines updated at separate times.  This is where
2970	the data dependency barrier really becomes necessary as this synchronises both
2971	caches with the memory coherence system, thus making it seem that a pointer
2972	change and the new data it points to become visible in the correct order.
2973	
2974	Being the most relaxed, the Alpha defines the Linux kernel's memory barrier model.
2975	
2976	See the subsection on "Cache Coherency" above.
2977	
2978	
2979	============
2980	EXAMPLE USES
2981	============
2982	
2983	CIRCULAR BUFFERS
2984	----------------
2985	
2986	Memory barriers can be used to implement circular buffering without the need
2987	for a lock to serialise the producer with the consumer.  See:
2988	
2989		Documentation/circular-buffers.txt
2990	
2991	for details.
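As a taste of what that document covers, here is a condensed single-producer,
single-consumer sketch.  CIRC_SPACE() and CIRC_CNT() come from
linux/circ_buf.h; the ring structure itself is hypothetical and RING_SIZE is
assumed to be a power of two:

	#include <linux/circ_buf.h>

	struct ring {
		unsigned long head;		/* written by the producer */
		unsigned long tail;		/* written by the consumer */
		void *buf[RING_SIZE];
	};

	static bool produce(struct ring *r, void *item)
	{
		unsigned long head = r->head;
		unsigned long tail = ACCESS_ONCE(r->tail);

		if (CIRC_SPACE(head, tail, RING_SIZE) < 1)
			return false;

		r->buf[head] = item;
		smp_wmb();	/* commit the item before moving head */
		r->head = (head + 1) & (RING_SIZE - 1);
		return true;
	}

	static void *consume(struct ring *r)
	{
		unsigned long head = ACCESS_ONCE(r->head);
		unsigned long tail = r->tail;
		void *item;

		if (CIRC_CNT(head, tail, RING_SIZE) < 1)
			return NULL;

		smp_rmb();	/* read head before reading the item */
		item = r->buf[tail];
		smp_mb();	/* finish with the item before moving tail */
		r->tail = (tail + 1) & (RING_SIZE - 1);
		return item;
	}

The file above gives the full treatment (recent versions use
smp_load_acquire() and smp_store_release()).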
2992	
2993	
2994	==========
2995	REFERENCES
2996	==========
2997	
2998	Alpha AXP Architecture Reference Manual, Second Edition (Sites & Witek,
2999	Digital Press)
3000		Chapter 5.2: Physical Address Space Characteristics
3001		Chapter 5.4: Caches and Write Buffers
3002		Chapter 5.5: Data Sharing
3003		Chapter 5.6: Read/Write Ordering
3004	
3005	AMD64 Architecture Programmer's Manual Volume 2: System Programming
3006		Chapter 7.1: Memory-Access Ordering
3007		Chapter 7.4: Buffering and Combining Memory Writes
3008	
3009	IA-32 Intel Architecture Software Developer's Manual, Volume 3:
3010	System Programming Guide
3011		Chapter 7.1: Locked Atomic Operations
3012		Chapter 7.2: Memory Ordering
3013		Chapter 7.4: Serializing Instructions
3014	
3015	The SPARC Architecture Manual, Version 9
3016		Chapter 8: Memory Models
3017		Appendix D: Formal Specification of the Memory Models
3018		Appendix J: Programming with the Memory Models
3019	
3020	UltraSPARC Programmer Reference Manual
3021		Chapter 5: Memory Accesses and Cacheability
3022		Chapter 15: Sparc-V9 Memory Models
3023	
3024	UltraSPARC III Cu User's Manual
3025		Chapter 9: Memory Models
3026	
3027	UltraSPARC IIIi Processor User's Manual
3028		Chapter 8: Memory Models
3029	
3030	UltraSPARC Architecture 2005
3031		Chapter 9: Memory
3032		Appendix D: Formal Specifications of the Memory Models
3033	
3034	UltraSPARC T1 Supplement to the UltraSPARC Architecture 2005
3035		Chapter 8: Memory Models
3036		Appendix F: Caches and Cache Coherency
3037	
3038	Solaris Internals, Core Kernel Architecture, p63-68:
3039		Chapter 3.3: Hardware Considerations for Locks and
3040				Synchronization
3041	
3042	Unix Systems for Modern Architectures, Symmetric Multiprocessing and Caching
3043	for Kernel Programmers:
3044		Chapter 13: Other Memory Models
3045	
3046	Intel Itanium Architecture Software Developer's Manual: Volume 1:
3047		Section 2.6: Speculation
3048		Section 4.4: Memory Access