Review Checklist for RCU Patches


This document contains a checklist for producing and reviewing patches
that make use of RCU.  Violating any of the rules listed below will
result in the same sorts of problems that leaving out a locking primitive
would cause.  This list is based on experiences reviewing such patches
over a rather long period of time, but improvements are always welcome!

0.      Is RCU being applied to a read-mostly situation?  If the data
        structure is updated more than about 10% of the time, then you
        should strongly consider some other approach, unless detailed
        performance measurements show that RCU is nonetheless the right
        tool for the job.  Yes, RCU does reduce read-side overhead by
        increasing write-side overhead, which is exactly why normal uses
        of RCU will do much more reading than updating.

        Another exception is where performance is not an issue, and RCU
        provides a simpler implementation.  An example of this situation
        is the dynamic NMI code in the Linux 2.6 kernel, at least on
        architectures where NMIs are rare.

        Yet another exception is where the low real-time latency of RCU's
        read-side primitives is critically important.

1.      Does the update code have proper mutual exclusion?

        RCU does allow -readers- to run (almost) naked, but -writers- must
        still use some sort of mutual exclusion, such as:

        a.      locking,
        b.      atomic operations, or
        c.      restricting updates to a single task.

        If you choose #b, be prepared to describe how you have handled
        memory barriers on weakly ordered machines (pretty much all of
        them -- even x86 allows later loads to be reordered to precede
        earlier stores), and be prepared to explain why this added
        complexity is worthwhile.  If you choose #c, be prepared to
        explain how this single task does not become a major bottleneck
        on big multiprocessor machines (for example, if the task is
        updating information relating to itself that other tasks can
        read, there by definition can be no bottleneck).

2.      Do the RCU read-side critical sections make proper use of
        rcu_read_lock() and friends?  These primitives are needed
        to prevent grace periods from ending prematurely, which
        could result in data being unceremoniously freed out from
        under your read-side code, which can greatly increase the
        actuarial risk of your kernel.

        As a rough rule of thumb, any dereference of an RCU-protected
        pointer must be covered by rcu_read_lock(), rcu_read_lock_bh(),
        rcu_read_lock_sched(), or by the appropriate update-side lock.
        Disabling of preemption can serve as rcu_read_lock_sched(), but
        is less readable.
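
        For example, here is a minimal read-side sketch.  The structure
        "struct foo" and the global pointer "gp" are hypothetical
        placeholders, not part of any kernel API:

                struct foo {
                        int a;
                };
                struct foo __rcu *gp;   /* Updated under some update-side lock. */

                int read_a(void)
                {
                        struct foo *p;
                        int ret = -1;

                        rcu_read_lock();                /* Hold off grace periods. */
                        p = rcu_dereference(gp);        /* Ordered pointer fetch. */
                        if (p)
                                ret = p->a;
                        rcu_read_unlock();      /* End of read-side critical section. */
                        return ret;             /* p must not be used past this point! */
                }
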
3.      Does the update code tolerate concurrent accesses?

        The whole point of RCU is to permit readers to run without
        any locks or atomic operations.  This means that readers will
        be running while updates are in progress.  There are a number
        of ways to handle this concurrency, depending on the situation:

        a.      Use the RCU variants of the list and hlist update
                primitives to add, remove, and replace elements on
                an RCU-protected list.  Alternatively, use the other
                RCU-protected data structures that have been added to
                the Linux kernel.

                This is almost always the best approach.

        b.      Proceed as in (a) above, but also maintain per-element
                locks (that are acquired by both readers and writers)
                that guard per-element state.  Of course, fields that
                the readers refrain from accessing can be guarded by
                some other lock acquired only by updaters, if desired.

                This works quite well, also.

        c.      Make updates appear atomic to readers.  For example,
                pointer updates to properly aligned fields will
                appear atomic, as will individual atomic primitives.
                Sequences of operations performed under a lock will -not-
                appear to be atomic to RCU readers, nor will sequences
                of multiple atomic primitives.

                This can work, but is starting to get a bit tricky.

        d.      Carefully order the updates and the reads so that
                readers see valid data at all phases of the update.
                This is often more difficult than it sounds, especially
                given modern CPUs' tendency to reorder memory references.
                One must usually liberally sprinkle memory barriers
                (smp_wmb(), smp_rmb(), smp_mb()) through the code,
                making it difficult to understand and to test.

                It is usually better to group the changing data into
                a separate structure, so that the change may be made
                to appear atomic by updating a pointer to reference
                a new structure containing updated values.

4.      Weakly ordered CPUs pose special challenges.  Almost all CPUs
        are weakly ordered -- even x86 CPUs allow later loads to be
        reordered to precede earlier stores.  RCU code must take all of
        the following measures to prevent memory-corruption problems:

        a.      Readers must maintain proper ordering of their memory
                accesses.  The rcu_dereference() primitive ensures that
                the CPU picks up the pointer before it picks up the data
                that the pointer points to.  This really is necessary
                on Alpha CPUs.  If you don't believe me, see:

                        http://www.openvms.compaq.com/wizard/wiz_2637.html

                The rcu_dereference() primitive is also an excellent
                documentation aid, letting the person reading the code
                know exactly which pointers are protected by RCU.
                Please note that compilers can also reorder code, and
                they are becoming increasingly aggressive about doing
                just that.  The rcu_dereference() primitive therefore
                also prevents destructive compiler optimizations.

                The rcu_dereference() primitive is used by the
                various "_rcu()" list-traversal primitives, such
                as list_for_each_entry_rcu().  Note that it is
                perfectly legal (if redundant) for update-side code to
                use rcu_dereference() and the "_rcu()" list-traversal
                primitives.  This is particularly useful in code that
                is common to readers and updaters.  However, lockdep
                will complain if you invoke rcu_dereference() outside
                of an RCU read-side critical section.  See lockdep.txt
                to learn what to do about this.

                Of course, neither rcu_dereference() nor the "_rcu()"
                list-traversal primitives can substitute for a good
                concurrency design coordinating among multiple updaters.

        b.      If the list macros are being used, the list_add_tail_rcu()
                and list_add_rcu() primitives must be used in order
                to prevent weakly ordered machines from misordering
                structure initialization and pointer planting.
                Similarly, if the hlist macros are being used, the
                hlist_add_head_rcu() primitive is required.

        c.      If the list macros are being used, the list_del_rcu()
                primitive must be used to keep list_del()'s pointer
                poisoning from inflicting toxic effects on concurrent
                readers.  Similarly, if the hlist macros are being used,
                the hlist_del_rcu() primitive is required.

                The list_replace_rcu() and hlist_replace_rcu() primitives
                may be used to replace an old structure with a new one
                in their respective types of RCU-protected lists.

        d.      Rules similar to (4b) and (4c) apply to the "hlist_nulls"
                type of RCU-protected linked lists.

        e.      Updates must ensure that initialization of a given
                structure happens before pointers to that structure are
                publicized.  Use the rcu_assign_pointer() primitive
                when publicizing a pointer to a structure that can
                be traversed by an RCU read-side critical section.
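
        To illustrate (3a), (4b), and (4c), here is a hedged sketch of
        an updater inserting into and removing from an RCU-protected
        list.  The structure "struct foo", the list head "foo_head",
        and the lock "foo_lock" are hypothetical:

                struct foo {
                        struct list_head list;
                        int key;
                        int data;
                };
                static LIST_HEAD(foo_head);
                static DEFINE_SPINLOCK(foo_lock);  /* Update-side mutual exclusion. */

                void foo_insert(struct foo *p)  /* p fully initialized by caller. */
                {
                        spin_lock(&foo_lock);
                        /* list_add_rcu() orders initialization before pointer planting. */
                        list_add_rcu(&p->list, &foo_head);
                        spin_unlock(&foo_lock);
                }

                void foo_remove(struct foo *p)
                {
                        spin_lock(&foo_lock);
                        list_del_rcu(&p->list); /* No poisoning visible to readers. */
                        spin_unlock(&foo_lock);
                        synchronize_rcu();      /* Wait for pre-existing readers. */
                        kfree(p);               /* Now safe to free. */
                }
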
5.      If call_rcu(), or a related primitive such as call_rcu_bh(),
        call_rcu_sched(), or call_srcu() is used, the callback function
        must be written to be called from softirq context.  In particular,
        it cannot block.
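
        For example, a typical callback uses container_of() to map from
        the rcu_head back to the enclosing structure, then frees it
        without blocking.  A hypothetical sketch, with struct foo here
        containing an rcu_head:

                struct foo {
                        struct rcu_head rcu;
                        int data;
                };

                static void foo_reclaim(struct rcu_head *head)
                {
                        struct foo *p = container_of(head, struct foo, rcu);

                        kfree(p);       /* Runs from softirq context: must not block. */
                }

                void foo_defer_free(struct foo *p)
                {
                        /* p must already be unreachable by new readers. */
                        call_rcu(&p->rcu, foo_reclaim);
                }
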
6.      Since synchronize_rcu() can block, it cannot be called from
        any sort of irq context.  The same rule applies for
        synchronize_rcu_bh(), synchronize_sched(), synchronize_srcu(),
        synchronize_rcu_expedited(), synchronize_rcu_bh_expedited(),
        synchronize_sched_expedited(), and synchronize_srcu_expedited().

        The expedited forms of these primitives have the same semantics
        as the non-expedited forms, but expediting is both expensive
        and unfriendly to real-time workloads.  Use of the expedited
        primitives should be restricted to rare configuration-change
        operations that would not normally be undertaken while a real-time
        workload is running.

        In particular, if you find yourself invoking one of the expedited
        primitives repeatedly in a loop, please do everyone a favor:
        Restructure your code so that it batches the updates, allowing
        a single non-expedited primitive to cover the entire batch.
        This will very likely be faster than the loop containing the
        expedited primitive, and will be much, much easier on the rest
        of the system, especially on any real-time workloads running
        there.

        In addition, it is illegal to call the expedited forms from
        a CPU-hotplug notifier, or while holding a lock that is acquired
        by a CPU-hotplug notifier.  Failing to observe this restriction
        will result in deadlock.

7.      If the updater uses call_rcu() or synchronize_rcu(), then the
        corresponding readers must use rcu_read_lock() and
        rcu_read_unlock().  If the updater uses call_rcu_bh() or
        synchronize_rcu_bh(), then the corresponding readers must
        use rcu_read_lock_bh() and rcu_read_unlock_bh().  If the
        updater uses call_rcu_sched() or synchronize_sched(), then
        the corresponding readers must disable preemption, possibly
        by calling rcu_read_lock_sched() and rcu_read_unlock_sched().
        If the updater uses synchronize_srcu() or call_srcu(), then
        the corresponding readers must use srcu_read_lock() and
        srcu_read_unlock(), and with the same srcu_struct.  The rules
        for the expedited primitives are the same as for their
        non-expedited counterparts.  Mixing things up will result in
        confusion and broken kernels.

        One exception to this rule: rcu_read_lock() and rcu_read_unlock()
        may be substituted for rcu_read_lock_bh() and rcu_read_unlock_bh()
        in cases where local bottom halves are already known to be
        disabled, for example, in irq or softirq context.  Commenting
        such cases is a must, of course!  And the jury is still out on
        whether the increased speed is worth it.

8.      Although synchronize_rcu() is slower than is call_rcu(), it
        usually results in simpler code.  So, unless update performance
        is critically important, the updaters cannot block, or the
        latency of synchronize_rcu() is visible from userspace,
        synchronize_rcu() should be used in preference to call_rcu().
        Furthermore, kfree_rcu() usually results in even simpler code
        than does synchronize_rcu() without synchronize_rcu()'s
        multi-millisecond latency.  So please take advantage of
        kfree_rcu()'s "fire and forget" memory-freeing capabilities
        where it applies.

        An especially important property of the synchronize_rcu()
        primitive is that it automatically self-limits: if grace periods
        are delayed for whatever reason, then the synchronize_rcu()
        primitive will correspondingly delay updates.  In contrast,
        code using call_rcu() should explicitly limit update rate in
        cases where grace periods are delayed, as failing to do so can
        result in excessive realtime latencies or even OOM conditions.

        Ways of gaining this self-limiting property when using call_rcu()
        include:

        a.      Keeping a count of the number of data-structure elements
                used by the RCU-protected data structure, including
                those waiting for a grace period to elapse.  Enforce a
                limit on this number, stalling updates as needed to allow
                previously deferred frees to complete.  Alternatively,
                limit only the number awaiting deferred free rather than
                the total number of elements.

                One way to stall the updates is to acquire the update-side
                mutex.  (Don't try this with a spinlock -- other CPUs
                spinning on the lock could prevent the grace period
                from ever ending.)  Another way to stall the updates
                is for the updates to use a wrapper function around
                the memory allocator, so that this wrapper function
                simulates OOM when there is too much memory awaiting an
                RCU grace period.  There are of course many other
                variations on this theme.

        b.      Limiting update rate.  For example, if updates occur only
                once per hour, then no explicit rate limiting is
                required, unless your system is already badly broken.
                Older versions of the dcache subsystem take this approach,
                guarding updates with a global lock, limiting their rate.

        c.      Trusted update -- if updates can only be done manually by
                superuser or some other trusted user, then it might not
                be necessary to automatically limit them.  The theory
                here is that superuser already has lots of ways to crash
                the machine.

        d.      Use call_rcu_bh() rather than call_rcu(), in order to take
                advantage of call_rcu_bh()'s faster grace periods.  (This
                is only a partial solution, though.)

        e.      Periodically invoke synchronize_rcu(), permitting a limited
                number of updates per grace period.

        The same cautions apply to call_rcu_bh(), call_rcu_sched(),
        call_srcu(), and kfree_rcu().

        Note that although these primitives do take action to avoid memory
        exhaustion when any given CPU has too many callbacks, a determined
        user could still exhaust memory.  This is especially the case
        if a system with a large number of CPUs has been configured to
        offload all of its RCU callbacks onto a single CPU, or if the
        system has relatively little free memory.
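
        To illustrate item 8's kfree_rcu() advice: when the only cleanup
        needed is kfree() of the enclosing structure, kfree_rcu()
        eliminates the callback entirely.  A minimal sketch, replacing
        the call_rcu()-based foo_defer_free() shown under item 5:

                struct foo {
                        struct rcu_head rcu;
                        int data;
                };

                void foo_defer_free(struct foo *p)
                {
                        /* Equivalent to call_rcu() with a kfree()-only callback. */
                        kfree_rcu(p, rcu);      /* "rcu" names the rcu_head field. */
                }
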
9.      All RCU list-traversal primitives, which include
        rcu_dereference(), list_for_each_entry_rcu(), and
        list_for_each_safe_rcu(), must either be within an RCU read-side
        critical section or be protected by appropriate update-side
        locks.  RCU read-side critical sections are delimited by
        rcu_read_lock() and rcu_read_unlock(), or by similar primitives
        such as rcu_read_lock_bh() and rcu_read_unlock_bh(), in which
        case the matching rcu_dereference() variant must be used to
        keep lockdep happy -- in this case, rcu_dereference_bh().

        The reason that it is permissible to use RCU list-traversal
        primitives when the update-side lock is held is that doing so
        can be quite helpful in reducing code bloat when common code is
        shared between readers and updaters.  Additional primitives
        are provided for this case, as discussed in lockdep.txt.

10.     Conversely, if you are in an RCU read-side critical section,
        and you don't hold the appropriate update-side lock, you -must-
        use the "_rcu()" variants of the list macros.  Failing to do so
        will break Alpha, cause aggressive compilers to generate bad
        code, and confuse people trying to read your code.

11.     Note that synchronize_rcu() -only- guarantees to wait until
        all currently executing rcu_read_lock()-protected RCU read-side
        critical sections complete.  It does -not- necessarily guarantee
        that all currently running interrupts, NMIs, preempt_disable()
        code, or idle loops will complete.  Therefore, if your
        read-side critical sections are protected by something other
        than rcu_read_lock(), do -not- use synchronize_rcu().

        Similarly, disabling preemption is not an acceptable substitute
        for rcu_read_lock().  Code that attempts to use preemption
        disabling where it should be using rcu_read_lock() will break
        in real-time kernel builds.

        If you want to wait for interrupt handlers, NMI handlers, and
        code under the influence of preempt_disable(), you instead
        need to use synchronize_irq() or synchronize_sched().

        This same limitation also applies to synchronize_rcu_bh()
        and synchronize_srcu(), as well as to the asynchronous and
        expedited forms of the three primitives, namely call_rcu(),
        call_rcu_bh(), call_srcu(), synchronize_rcu_expedited(),
        synchronize_rcu_bh_expedited(), and synchronize_srcu_expedited().

12.     Any lock acquired by an RCU callback must be acquired elsewhere
        with softirq disabled, e.g., via spin_lock_irqsave(),
        spin_lock_bh(), etc.  Failing to disable softirq on a given
        acquisition of that lock will result in deadlock as soon as
        the RCU softirq handler happens to run your RCU callback while
        interrupting that acquisition's critical section.  (See the
        sketch following item 13.)

13.     RCU callbacks can be and are executed in parallel.  In many
        cases, the callback code simply wraps kfree(), so that this
        is not an issue (or, more accurately, to the extent that it is
        an issue, the memory-allocator locking handles it).  However,
        if the callbacks do manipulate a shared data structure, they
        must use whatever locking or other synchronization is required
        to safely access and/or modify that data structure.

        RCU callbacks are -usually- executed on the same CPU that
        executed the corresponding call_rcu(), call_rcu_bh(), or
        call_rcu_sched(), but are by -no- means guaranteed to be.
        For example, if a given CPU goes offline while having an RCU
        callback pending, then that RCU callback will execute on some
        surviving CPU.  (If this was not the case, a self-spawning RCU
        callback would prevent the victim CPU from ever going offline.)
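
        To illustrate items 12 and 13, here is a hypothetical sketch of
        a lock shared between process context and an RCU callback:

                static DEFINE_SPINLOCK(foo_lock);

                static void foo_cb(struct rcu_head *head)
                {
                        spin_lock(&foo_lock);   /* Runs in softirq context. */
                        /*
                         * ... callbacks run in parallel, so the shared
                         * state they manipulate needs this lock ...
                         */
                        spin_unlock(&foo_lock);
                }

                void foo_update(void)
                {
                        spin_lock_bh(&foo_lock);
                        /*
                         * A plain spin_lock() above could deadlock:
                         * foo_cb() might run in a softirq interrupting
                         * this critical section.
                         */
                        /* ... update shared state ... */
                        spin_unlock_bh(&foo_lock);
                }
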
14.     SRCU (srcu_read_lock(), srcu_read_unlock(), srcu_dereference(),
        synchronize_srcu(), synchronize_srcu_expedited(), and call_srcu())
        may only be invoked from process context.  Unlike other forms of
        RCU, it -is- permissible to block in an SRCU read-side critical
        section (demarked by srcu_read_lock() and srcu_read_unlock()),
        hence the "SRCU": "sleepable RCU".  Please note that if you
        don't need to sleep in read-side critical sections, you should
        be using RCU rather than SRCU, because RCU is almost always
        faster and easier to use than is SRCU.

        Also unlike other forms of RCU, explicit initialization
        and cleanup is required via init_srcu_struct() and
        cleanup_srcu_struct().  These are passed a "struct srcu_struct"
        that defines the scope of a given SRCU domain.  Once initialized,
        the srcu_struct is passed to srcu_read_lock(), srcu_read_unlock(),
        synchronize_srcu(), synchronize_srcu_expedited(), and call_srcu().
        A given synchronize_srcu() waits only for SRCU read-side critical
        sections governed by srcu_read_lock() and srcu_read_unlock()
        calls that have been passed the same srcu_struct.  This property
        is what makes sleeping read-side critical sections tolerable --
        a given subsystem delays only its own updates, not those of other
        subsystems using SRCU.  Therefore, SRCU is less prone to OOM the
        system than RCU would be if RCU's read-side critical sections
        were permitted to sleep.

        The ability to sleep in read-side critical sections does not
        come for free.  First, corresponding srcu_read_lock() and
        srcu_read_unlock() calls must be passed the same srcu_struct.
        Second, grace-period-detection overhead is amortized only
        over those updates sharing a given srcu_struct, rather than
        being globally amortized as it is for other forms of RCU.
        Therefore, SRCU should be used in preference to rw_semaphore
        only in extremely read-intensive situations, or in situations
        requiring SRCU's read-side deadlock immunity or low read-side
        realtime latency.

        Note that rcu_assign_pointer() relates to SRCU just as it does
        to other forms of RCU.
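
        A minimal SRCU sketch (the srcu_struct "foo_srcu" and the
        functions around it are hypothetical):

                static struct srcu_struct foo_srcu;
                /*
                 * init_srcu_struct(&foo_srcu) at subsystem init,
                 * cleanup_srcu_struct(&foo_srcu) at subsystem exit.
                 */

                void foo_read(void)
                {
                        int idx;

                        idx = srcu_read_lock(&foo_srcu);
                        /* ... may block here, unlike under rcu_read_lock() ... */
                        srcu_read_unlock(&foo_srcu, idx);
                }

                void foo_update(void)
                {
                        /* ... make old data unreachable ... */
                        synchronize_srcu(&foo_srcu);    /* Waits only for foo_srcu readers. */
                        /* ... free old data ... */
                }
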
15.     The whole point of call_rcu(), synchronize_rcu(), and friends
        is to wait until all pre-existing readers have finished before
        carrying out some otherwise-destructive operation.  It is
        therefore critically important to -first- remove any path
        that readers can follow that could be affected by the
        destructive operation, and -only- -then- invoke call_rcu(),
        synchronize_rcu(), or friends.

        Because these primitives only wait for pre-existing readers, it
        is the caller's responsibility to guarantee that any subsequent
        readers will execute safely.

16.     The various RCU read-side primitives do -not- necessarily contain
        memory barriers.  You should therefore plan for the CPU
        and the compiler to freely reorder code into and out of RCU
        read-side critical sections.  It is the responsibility of the
        RCU update-side primitives to deal with this.

17.     Use CONFIG_PROVE_RCU, CONFIG_DEBUG_OBJECTS_RCU_HEAD, and the
        __rcu sparse checks (enabled by CONFIG_SPARSE_RCU_POINTER) to
        validate your RCU code.  These can help find problems as follows:

        CONFIG_PROVE_RCU: check that accesses to RCU-protected data
                structures are carried out under the proper RCU
                read-side critical section, while holding the right
                combination of locks, or whatever other conditions
                are appropriate.

        CONFIG_DEBUG_OBJECTS_RCU_HEAD: check that you don't pass the
                same object to call_rcu() (or friends) before an RCU
                grace period has elapsed since the last time that you
                passed that same object to call_rcu() (or friends).

        __rcu sparse checks: tag the pointer to the RCU-protected data
                structure with __rcu, and sparse will warn you if you
                access that pointer without the services of one of the
                variants of rcu_dereference().

        These debugging aids can help you find problems that are
        otherwise extremely difficult to spot.
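
        For example, tagging a pointer with __rcu lets sparse flag
        unprotected accesses.  A hypothetical sketch, reusing the
        struct foo and foo_lock placeholders from earlier examples:

                struct foo __rcu *gp;   /* The __rcu tag enables sparse checking. */

                void foo_reader(void)
                {
                        struct foo *p;

                        rcu_read_lock();
                        p = rcu_dereference(gp);        /* OK: read-side access. */
                        /* ... use p ... */
                        rcu_read_unlock();
                }

                void foo_updater(void)
                {
                        struct foo *p;

                        spin_lock(&foo_lock);
                        p = rcu_dereference_protected(gp,
                                        lockdep_is_held(&foo_lock));
                                        /* OK: update-side access. */
                        /* ... modify, replace, or free p ... */
                        spin_unlock(&foo_lock);
                }

        A plain "p = gp;" in either function would draw a sparse warning.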