#
# Copyright (c) 2006 Steven Rostedt
# Licensed under the GNU Free Documentation License, Version 1.2
#

RT-mutex implementation design
------------------------------

This document tries to describe the design of the rtmutex.c implementation.
It doesn't describe the reasons why rtmutex.c exists; for that, please see
Documentation/locking/rt-mutex.txt.  Although this document does explain some
of the problems that occur without this code, that background is only there
to help the reader understand what the code actually does.

The goal of this document is to help others understand the priority
inheritance (PI) algorithm that is used, as well as reasons for the
decisions that were made to implement PI in the manner that was done.


Unbounded Priority Inversion
----------------------------

Priority inversion is when a lower priority process executes while a higher
priority process wants to run.  This happens for several reasons, and
most of the time it can't be helped.  Anytime a high priority process wants
to use a resource that a lower priority process has (a mutex for example),
the high priority process must wait until the lower priority process is done
with the resource.  This is a priority inversion.  What we want to prevent
is something called unbounded priority inversion.  That is when the high
priority process is prevented from running by a lower priority process for
an undetermined amount of time.

The classic example of unbounded priority inversion is where you have three
processes, let's call them processes A, B, and C, where A is the highest
priority process, C is the lowest, and B is in between.  A tries to grab a lock
that C owns, so A must wait, letting C run to release the lock.  But in the
meantime, B executes, and since B is of a higher priority than C, it preempts C.
By doing so, it is in fact preempting A, which is a higher priority process.
Now there's no way of knowing how long A will be sleeping waiting for C
to release the lock, because for all we know, B is a CPU hog and will
never give C a chance to release the lock.  This is called unbounded priority
inversion.

Here's a little ASCII art to show the problem.

   grab lock L1 (owned by C)
     |
A ---+
        C preempted by B
          |
C    +----+

B         +-------->
                B now keeps A from running.


Priority Inheritance (PI)
-------------------------

There are several ways to solve this issue, but the others are out of scope
for this document.  Here we only discuss PI.

PI is where a process inherits the priority of another process if the other
process blocks on a lock owned by the current process.  To make this easier
to understand, let's use the previous example, with processes A, B, and C again.

This time, when A blocks on the lock owned by C, C would inherit the priority
of A.  So now if B becomes runnable, it would not preempt C, since C now has
the high priority of A.  As soon as C releases the lock, it loses its
inherited priority, and A then can continue with the resource that C had.

Terminology
-----------

Here I explain some terminology that is used in this document to help describe
the design that is used to implement PI.

PI chain - The PI chain is an ordered series of locks and processes that cause
           processes to inherit priorities from a previous process that is
           blocked on one of its locks.  This is described in more detail
           later in this document.

mutex    - In this document, to differentiate the locks that implement PI
           from the spin locks used in the PI code, the PI locks will be
           called mutexes from now on.

lock     - In this document from now on, I will use the term lock when
           referring to spin locks that are used to protect parts of the PI
           algorithm.  These locks disable preemption on UP (when
           CONFIG_PREEMPT is enabled) and on SMP prevent multiple CPUs from
           entering critical sections simultaneously.

spin lock - Same as lock above.

waiter   - A waiter is a struct that is stored on the stack of a blocked
           process.  Since the scope of the waiter is within the code for
           a process being blocked on the mutex, it is fine to allocate
           the waiter on the process's stack (local variable).  This
           structure holds a pointer to the task, as well as the mutex that
           the task is blocked on.  It also has rbtree node structures to
           place the task in the waiters rbtree of a mutex as well as the
           pi_waiters rbtree of a mutex owner task (described below).  A
           simplified sketch of this structure appears after this list.

           waiter is sometimes used in reference to the task that is waiting
           on a mutex.  This is the same as waiter->task.

waiters  - A list of processes that are blocked on a mutex.

top waiter - The highest priority process waiting on a specific mutex.

top pi waiter - The highest priority process waiting on one of the mutexes
                that a specific process owns.

Note:  task and process are used interchangeably in this document, mostly to
       differentiate between two processes that are being described together.

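As a rough illustration, the waiter structure looks something like this
(a simplified sketch; the real struct rt_mutex_waiter lives in
kernel/locking/rtmutex_common.h and carries extra debug fields):

struct rt_mutex_waiter {
	struct rb_node		tree_entry;	/* node in the mutex's waiters tree */
	struct rb_node		pi_tree_entry;	/* node in the owner's pi_waiters tree */
	struct task_struct	*task;		/* the blocked task */
	struct rt_mutex		*lock;		/* the mutex the task is blocked on */
	int			prio;		/* priority, used to order the trees */
	u64			deadline;	/* used for SCHED_DEADLINE ordering */
};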

PI chain
--------

The PI chain is a list of processes and mutexes that may cause priority
inheritance to take place.  Multiple chains may converge, but a chain
would never diverge, since a process can't be blocked on more than one
mutex at a time.

Example:

   Process:  A, B, C, D, E
   Mutexes:  L1, L2, L3, L4

   A owns: L1
           B blocked on L1
           B owns L2
                  C blocked on L2
                  C owns L3
                         D blocked on L3
                         D owns L4
                                E blocked on L4

The chain would be:

   E->L4->D->L3->C->L2->B->L1->A

To show where two chains merge, we could add another process F and
another mutex L5 where B owns L5 and F is blocked on mutex L5.

The chain for F would be:

   F->L5->B->L1->A

Since a process may own more than one mutex, but never be blocked on more than
one, the chains merge.

Here we show both chains:

   E->L4->D->L3->C->L2-+
                       |
                       +->B->L1->A
                       |
                 F->L5-+

For PI to work, the processes at the right end of these chains (or we may
also call it the top of the chain) must be equal to or higher in priority
than the processes to the left or below in the chain.

Also, since a mutex may have more than one process blocked on it, we can
have multiple chains merge at mutexes.  If we add another process G that is
blocked on mutex L2:

  G->L2->B->L1->A

And once again, to show how this can grow, I will show the merging chains
again.

   E->L4->D->L3->C-+
                   +->L2-+
                   |     |
                 G-+     +->B->L1->A
                         |
                   F->L5-+

If process G has the highest priority in the chain, then all the tasks up
the chain (A and B in this example) must have their priorities increased
to that of G.

Mutex Waiters Tree
------------------

Every mutex keeps track of all the waiters that are blocked on itself.  The
mutex has an rbtree to store these waiters by priority.  This tree is protected
by a spin lock that is located in the struct of the mutex.  This lock is called
wait_lock.

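A rough sketch of the mutex structure itself (simplified from
include/linux/rtmutex.h; debug fields omitted):

struct rt_mutex {
	raw_spinlock_t		wait_lock;	/* protects the waiters tree */
	struct rb_root_cached	waiters;	/* waiters, ordered by priority */
	struct task_struct	*owner;		/* owner; bit 0 is "Has Waiters" */
};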

Task PI Tree
------------

To keep track of the PI chains, each process has its own PI rbtree.  This is
a tree of all top waiters of the mutexes that are owned by the process.
Note that this tree only holds the top waiters and not all waiters that are
blocked on mutexes owned by the process.

The top of the task's PI tree is always the highest priority task that
is waiting on a mutex that is owned by the task.  So if the task has
inherited a priority, it will always be the priority of the task that is
at the top of this tree.

This tree is stored in the task structure of a process as an rbtree called
pi_waiters.  It is protected by a spin lock also in the task structure,
called pi_lock.  This lock may also be taken in interrupt context, so when
locking the pi_lock, interrupts must be disabled.

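In code, that means any access to the pi_waiters tree follows the usual
irqsave pattern (a minimal sketch):

unsigned long flags;

raw_spin_lock_irqsave(&task->pi_lock, flags);
/* ... examine or modify task->pi_waiters here ... */
raw_spin_unlock_irqrestore(&task->pi_lock, flags);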

Depth of the PI Chain
---------------------

The maximum depth of the PI chain is not dynamic, and could actually be
defined.  But it is very complex to figure out, since it depends on all
the nesting of mutexes.  Let's look at the example where we have 3 mutexes,
L1, L2, and L3, and four separate functions func1, func2, func3 and func4.
The following shows a locking order of L1->L2->L3, but may not actually
be directly nested that way.

void func1(void)
{
	mutex_lock(L1);

	/* do anything */

	mutex_unlock(L1);
}

void func2(void)
{
	mutex_lock(L1);
	mutex_lock(L2);

	/* do something */

	mutex_unlock(L2);
	mutex_unlock(L1);
}

void func3(void)
{
	mutex_lock(L2);
	mutex_lock(L3);

	/* do something else */

	mutex_unlock(L3);
	mutex_unlock(L2);
}

void func4(void)
{
	mutex_lock(L3);

	/* do something again */

	mutex_unlock(L3);
}

Now we add 4 processes that run each of these functions separately.
Processes A, B, C, and D run functions func1, func2, func3 and func4
respectively, with D running first and A last.  With D being preempted
in func4 in the "do something again" area, we get the following locking
situation:

D owns L3
       C blocked on L3
       C owns L2
              B blocked on L2
              B owns L1
                     A blocked on L1

And thus we have the chain A->L1->B->L2->C->L3->D.

This gives us a PI depth of 4 (four processes), but looking at any of the
functions individually, it seems as though they only have at most a locking
depth of two.  So, although the locking depth is defined at compile time,
it still is very difficult to find all the possibilities of that depth.

Now, since mutexes can be defined by user-land applications, we don't want a
DoS type of application that nests a large number of mutexes to create a large
PI chain, and have the code holding spin locks while looking at a large
amount of data.  So to prevent this, the implementation not only implements
a maximum lock depth, but also only holds at most two different locks at a
time, as it walks the PI chain.  More about this below.


Mutex owner and flags
---------------------

The mutex structure contains a pointer to the owner of the mutex.  If the
mutex is not owned, this owner is set to NULL.  Since all architectures
have the task structure on at least a two byte alignment (and if this is
not true, the rtmutex.c code will be broken!), this allows for the least
significant bit to be used as a flag.  Bit 0 is used as the "Has Waiters"
flag.  It's set whenever there are waiters on a mutex.
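
In code form, recovering the owner pointer and the flag from the same word
looks roughly like this (an illustrative sketch; the real helpers live in
kernel/locking/rtmutex_common.h):

#define RT_MUTEX_HAS_WAITERS	1UL

static inline struct task_struct *rt_mutex_owner(struct rt_mutex *lock)
{
	unsigned long owner = (unsigned long)READ_ONCE(lock->owner);

	/* Mask off bit 0 to recover the actual task pointer. */
	return (struct task_struct *)(owner & ~RT_MUTEX_HAS_WAITERS);
}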

See Documentation/locking/rt-mutex.txt for further details.

cmpxchg Tricks
--------------

Some architectures implement an atomic cmpxchg (Compare and Exchange).  This
is used (when applicable) to keep the fast path of grabbing and releasing
mutexes short.

cmpxchg is basically the following function performed atomically:

unsigned long _cmpxchg(unsigned long *A, unsigned long B, unsigned long C)
{
	unsigned long T = *A;

	if (*A == B)
		*A = C;
	return T;
}
#define cmpxchg(a, b, c) _cmpxchg(&a, b, c)

This is really nice to have, since it allows you to update a variable only
if the variable is what you expect it to be.  You know it succeeded if the
return value (the old value of A) is equal to B.
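
A tiny usage sketch (hypothetical values):

unsigned long x = 5;

if (cmpxchg(x, 5, 10) == 5) {
	/* x was 5 and has been atomically set to 10 */
}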

The macro rt_mutex_cmpxchg is used to try to lock and unlock mutexes.  If
the architecture does not support CMPXCHG, then this macro is simply set
to fail every time.  But if CMPXCHG is supported, then this helps
enormously in keeping the fast path short.

The use of rt_mutex_cmpxchg with the flags in the owner field helps optimize
the system for architectures that support it.  This will also be explained
later in this document.


Priority adjustments
--------------------

The implementation of the PI code in rtmutex.c has several places where a
process must adjust its priority.  With the help of the pi_waiters tree of
a process, it is rather easy to know what needs to be adjusted.

The functions implementing the task adjustments are rt_mutex_adjust_prio
and rt_mutex_setprio.  rt_mutex_setprio is only used in rt_mutex_adjust_prio.

rt_mutex_adjust_prio examines the priority of the task, and the highest
priority process that is waiting on any of the mutexes owned by the task.
Since the pi_waiters tree of a task holds all the top waiters of the mutexes
that the task owns, ordered by priority, we simply need to compare the top
pi waiter to the task's own normal/deadline priority and take the higher one.
Then rt_mutex_setprio is called to adjust the priority of the task to the
new priority.  Note that rt_mutex_setprio is defined in kernel/sched/core.c
to implement the actual change in priority.

(Note:  For the "prio" field in task_struct, the lower the number, the
	higher the priority.  A "prio" of 5 is of higher priority than a
	"prio" of 10.)
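
Putting this together, the adjustment can be sketched as follows (a
simplified sketch; the helper names mirror kernel/locking/rtmutex_common.h,
but the real rt_mutex_adjust_prio hands the top waiter task itself to
rt_mutex_setprio rather than a numeric priority):

static void rt_mutex_adjust_prio_sketch(struct task_struct *task)
{
	int prio = task->normal_prio;	/* priority without any boosting */

	/* Lower number means higher priority, so take the minimum. */
	if (task_has_pi_waiters(task))
		prio = min(task_top_pi_waiter(task)->prio, prio);

	if (task->prio != prio)
		rt_mutex_setprio(task, prio);
}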

It is interesting to note that rt_mutex_adjust_prio can either increase
or decrease the priority of the task.  In the case that a higher priority
process has just blocked on a mutex owned by the task, rt_mutex_adjust_prio
would increase/boost the task's priority.  But if a higher priority task
were for some reason to leave the mutex (timeout or signal), this same function
would decrease/unboost the priority of the task.  That is because the pi_waiters
tree always contains the highest priority task that is waiting on a mutex owned
by the task, so we only need to compare the priority of that top pi waiter
to the normal priority of the given task.

High level overview of the PI chain walk
----------------------------------------

The PI chain walk is implemented by the function rt_mutex_adjust_prio_chain.

The implementation has gone through several iterations, and has ended up
with what we believe is the best.  It walks the PI chain by only grabbing
at most two locks at a time, and is very efficient.

rt_mutex_adjust_prio_chain can be used either to boost or lower process
priorities.

rt_mutex_adjust_prio_chain is called with a task to be checked for PI
(de)boosting (the owner of a mutex that a process is blocking on), a flag to
check for deadlocking, the mutex that the task owns, a pointer to a waiter
that is the process's waiter struct that is blocked on the mutex (although this
parameter may be NULL for deboosting), a pointer to the mutex on which the task
is blocked, and a top_task as the top waiter of the mutex.

For this explanation, I will not mention deadlock detection.  This explanation
will try to stay at a high level.

When this function is called, there are no locks held.  That also means
that the state of the owner and lock can change while inside this function.

Before this function is called, the task has already had rt_mutex_adjust_prio
performed on it.  This means that the task is set to the priority that it
should be at, but the rbtree nodes of the task's waiter have not been updated
with the new priorities, and this task may not be in the proper locations
in the pi_waiters and waiters trees that the task is blocked on.  This function
solves all that.

The main operation of this function is summarized by Thomas Gleixner in
rtmutex.c.  See the 'Chain walk basics and protection scope' comment for
further details.

Taking of a mutex (The walk through)
------------------------------------

OK, now let's take a detailed walk through of what happens when taking
a mutex.

The first thing that is tried is the fast taking of the mutex.  This is
done only when we have CMPXCHG enabled (otherwise the fast taking
automatically fails).  Only when the owner field of the mutex is NULL can
the lock be taken with the CMPXCHG, and nothing else needs to be done.
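
As a sketch (illustrative, not the kernel's exact code), the fast path
amounts to:

/* Succeeds only if the mutex was unowned and had no waiters, i.e. the
 * owner field was NULL with no flag bits set. */
if (cmpxchg(lock->owner, NULL, current) == NULL) {
	/* fast path: mutex acquired without taking any spin locks */
} else {
	/* contention: fall back to the slow path (rt_mutex_slowlock) */
}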

If there is contention on the lock, we go about the slow path
(rt_mutex_slowlock).

The slow path function is where the task's waiter structure is created on
the stack.  This is because the waiter structure is only needed for the
scope of this function.  The waiter structure holds the nodes to store
the task on the waiters tree of the mutex, and if need be, the pi_waiters
tree of the owner.

The wait_lock of the mutex is taken since the slow path of unlocking the
mutex also takes this lock.

We then call try_to_take_rt_mutex.  This is where architectures that do
not implement CMPXCHG would always grab the lock (if there's no
contention).

try_to_take_rt_mutex is used every time the task tries to grab a mutex in the
slow path.  The first thing that is done here is an atomic setting of
the "Has Waiters" flag of the mutex's owner field.  By setting this flag
now, the current owner of the mutex being contended for can't release the mutex
without going into the slow unlock path, and it would then need to grab the
wait_lock, which this code currently holds.  So setting the "Has Waiters" flag
forces the current owner to synchronize with this code.
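
That atomic flag setting can be sketched like this (simplified from
mark_rt_mutex_waiters in kernel/locking/rtmutex.c, using the kernel's real
cmpxchg(ptr, old, new) rather than the pseudo-version shown earlier):

static void mark_rt_mutex_waiters(struct rt_mutex *lock)
{
	unsigned long owner, *p = (unsigned long *)&lock->owner;

	/* Retry until the "Has Waiters" bit sticks, even if the owner
	 * field changes underneath us. */
	do {
		owner = *p;
	} while (cmpxchg(p, owner, owner | RT_MUTEX_HAS_WAITERS) != owner);
}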

The lock is taken if the following are true:
   1) The lock has no owner
   2) The current task is the highest priority against all other
      waiters of the lock

If the task succeeds in acquiring the lock, then the task is set as the
owner of the lock, and if the lock still has waiters, the top_waiter
(highest priority task waiting on the lock) is added to this task's
pi_waiters tree.

If the lock is not taken by try_to_take_rt_mutex(), then the
task_blocks_on_rt_mutex() function is called.  This will add the task to
the waiters tree of the lock and propagate the pi chain of the lock as well
as the lock's owner's pi_waiters tree.  This is described in the next
section.

Task blocks on mutex
--------------------

The accounting of a mutex and process is done with the waiter structure of
the process.  The "task" field is set to the process, and the "lock" field
to the mutex.  The rbtree nodes of the waiter are initialized to the process's
current priority.

Since the wait_lock was taken at the entry of the slow lock, we can safely
add the waiter to the waiters tree of the mutex.  If the current process is
the highest priority process currently waiting on this mutex, then we remove
the previous top waiter process (if it exists) from the pi_waiters of the
owner, and add the current process to that tree.  Since the pi_waiters of the
owner has changed, we call rt_mutex_adjust_prio on the owner to see if the
owner should adjust its priority accordingly.

If the owner is also blocked on a lock, and had its pi_waiters changed
(or deadlock checking is on), we unlock the wait_lock of the mutex and go ahead
and run rt_mutex_adjust_prio_chain on the owner, as described earlier.

Now all locks are released, and if the current process is still blocked on a
mutex (the waiter "task" field is not NULL), then we go to sleep (call
schedule).

Waking up in the loop
---------------------

The task can then wake up for a couple of reasons:
  1) The previous lock owner released the lock, and the task is now the
     top_waiter
  2) We received a signal or a timeout

In both cases, the task will try again to acquire the lock.  If it
does, then it will take itself off the waiters tree and set itself back
to the TASK_RUNNING state.

In the first case, if the lock was acquired by another task before this task
could get the lock, then it will go back to sleep and wait to be woken again.

The second case is only applicable for tasks that are grabbing a mutex
that can wake up before getting the lock, either due to a signal or
a timeout (i.e. rt_mutex_timed_futex_lock()).  When woken, it will try to
take the lock again.  If it succeeds, then the task will return with the
lock held; otherwise it will return with -EINTR if the task was woken
by a signal, or -ETIMEDOUT if it timed out.
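
The heart of this can be sketched as follows (a simplified sketch of the
blocking loop in the slow path; the real __rt_mutex_slowlock also handles
the timeout bookkeeping and debug hooks):

for (;;) {
	/* Try to acquire the lock: succeeds if the lock was released
	 * and we are allowed to take it. */
	if (try_to_take_rt_mutex(lock, current, &waiter))
		break;

	if (signal_pending(current)) {
		ret = -EINTR;			/* woken by a signal */
		break;
	}
	if (timeout && !timeout->task) {
		ret = -ETIMEDOUT;		/* the hrtimer expired */
		break;
	}

	raw_spin_unlock_irq(&lock->wait_lock);
	schedule();				/* sleep until woken */
	raw_spin_lock_irq(&lock->wait_lock);
}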


Unlocking the Mutex
-------------------

The unlocking of a mutex also has a fast path for those architectures with
CMPXCHG.  Since the taking of a mutex on contention always sets the
"Has Waiters" flag of the mutex's owner, we use this to know if we need to
take the slow path when unlocking the mutex.  If the mutex doesn't have any
waiters, the owner field of the mutex would equal the current process and
the mutex can be unlocked by just replacing the owner field with NULL.
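
In sketch form (illustrative, not the kernel's exact code):

/* Fast-path unlock: succeeds only if we own the mutex and the
 * "Has Waiters" bit is clear (the owner field is exactly current). */
if (cmpxchg(lock->owner, current, NULL) == current) {
	/* unlocked; there were no waiters to wake */
} else {
	/* "Has Waiters" is set: take the slow unlock path */
}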

If the owner field has the "Has Waiters" bit set (or CMPXCHG is not available),
the slow unlock path is taken.

The first thing done in the slow unlock path is to take the wait_lock of the
mutex.  This synchronizes the locking and unlocking of the mutex.

A check is made to see if the mutex has waiters or not.  On architectures that
do not have CMPXCHG, this is where the owner of the mutex determines whether a
waiter needs to be woken or not.  On architectures that do have CMPXCHG, that
check is done in the fast path, but it is still needed in the slow path too.
If a waiter of a mutex woke up because of a signal or timeout between the time
the owner failed the fast path CMPXCHG check and the grabbing of the wait_lock,
the mutex may not have any waiters, thus the owner still needs to make this
check.  If there are no waiters, then the mutex owner field is set to NULL,
the wait_lock is released, and nothing more is needed.

If there are waiters, then we need to wake one up.

In the wake up code, the pi_lock of the current owner is taken.  The top
waiter of the lock is found and removed from the waiters tree of the mutex
as well as the pi_waiters tree of the current owner.  The "Has Waiters" bit is
kept set to prevent lower priority tasks from stealing the lock.

Finally we release the pi_lock of the current owner and wake up the top waiter.


Contact
-------

For updates on this document, please email Steven Rostedt <rostedt@goodmis.org>


Credits
-------

Author:  Steven Rostedt <rostedt@goodmis.org>
Updated: Alex Shi <alex.shi@linaro.org>	- 7/6/2017

Original Reviewers:  Ingo Molnar, Thomas Gleixner, Thomas Duetsch, and
		     Randy Dunlap
Update (7/6/2017) Reviewers: Steven Rostedt and Sebastian Siewior

Updates
-------

This document was originally written for 2.6.17-rc3-mm1 and was updated
for 4.12.