Based on kernel version 4.2.

Cgroup unified hierarchy

April, 2014		Tejun Heo <email@example.com>

This document describes the changes made by unified hierarchy and
their rationales.  It will eventually be merged into the main cgroup
documentation.

CONTENTS

1. Background
2. Basic Operation
  2-1. Mounting
  2-2. cgroup.subtree_control
  2-3. cgroup.controllers
3. Structural Constraints
  3-1. Top-down
  3-2. No internal tasks
4. Delegation
  4-1. Model of delegation
  4-2. Common ancestor rule
5. Other Changes
  5-1. [Un]populated Notification
  5-2. Other Core Changes
  5-3. Per-Controller Changes
    5-3-1. blkio
    5-3-2. cpuset
    5-3-3. memory
6. Planned Changes
  6-1. CAP for resource control


1. Background

cgroup allows an arbitrary number of hierarchies and each hierarchy
can host any number of controllers.  While this seems to provide a
high level of flexibility, it isn't quite useful in practice.

For example, as there is only one instance of each controller, utility
type controllers such as freezer, which can be useful in all
hierarchies, can only be used in one.  The issue is exacerbated by the
fact that controllers can't be moved around once hierarchies are
populated.  Another issue is that all controllers bound to a hierarchy
are forced to have exactly the same view of the hierarchy.  It isn't
possible to vary the granularity depending on the specific controller.

In practice, these issues heavily limit which controllers can be put
on the same hierarchy, and most configurations resort to putting each
controller on its own hierarchy.  Only closely related ones, such as
the cpu and cpuacct controllers, make sense to put on the same
hierarchy.  This often means that userland ends up managing multiple
similar hierarchies, repeating the same steps on each hierarchy
whenever a hierarchy management operation is necessary.

Unfortunately, support for multiple hierarchies comes at a steep cost.
The internal implementation in cgroup core proper is dazzlingly
complicated, but more importantly the support for multiple hierarchies
restricts how cgroup can be used in general and what controllers can
do.

There's no limit on how many hierarchies there may be, which means
that a task's cgroup membership can't be described in finite length.
The key may contain an arbitrary number of entries and is unlimited in
length, which makes it highly awkward to handle and leads to the
addition of controllers which exist only to identify membership, which
in turn exacerbates the original problem.

Also, as a controller can't have any expectation regarding what shape
of hierarchies other controllers would be on, each controller has to
assume that all other controllers are operating on completely
orthogonal hierarchies.  This makes it impossible, or at least very
cumbersome, for controllers to cooperate with each other.

In most use cases, putting controllers on hierarchies which are
completely orthogonal to each other isn't necessary.  What usually is
called for is the ability to have differing levels of granularity
depending on the specific controller.  In other words, the hierarchy
may be collapsed from leaf towards root when viewed from specific
controllers.  For example, a given configuration might not care about
how memory is distributed beyond a certain level while still wanting
to control how CPU cycles are distributed.

Unified hierarchy is the next version of the cgroup interface.  It
aims to address the aforementioned issues by having more structure
while retaining enough flexibility for most use cases.  Various other
general and controller-specific interface issues are also addressed in
the process.


2. Basic Operation

2-1. Mounting

Currently, unified hierarchy can be mounted with the following mount
command.  Note that this is still under development and scheduled to
change soon.

  mount -t cgroup -o __DEVEL__sane_behavior cgroup $MOUNT_POINT

All controllers which support the unified hierarchy and are not bound
to other hierarchies are automatically bound to the unified hierarchy
and show up at the root of it.  Controllers which are enabled only in
the root of the unified hierarchy can be bound to other hierarchies.
This allows mixing unified hierarchy with the traditional multiple
hierarchies in a fully backward compatible way.
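
As an illustration, the following sketch keeps the cpu controller on a
traditional hierarchy of its own while the remaining supported
controllers are picked up by the unified hierarchy.  The mount points
are arbitrary and the controller split is hypothetical.

  # bind cpu to a traditional hierarchy of its own
  mount -t cgroup -o cpu cgroup /sys/fs/cgroup/cpu

  # the unified hierarchy automatically picks up the supported
  # controllers which are still unbound
  mount -t cgroup -o __DEVEL__sane_behavior cgroup /sys/fs/cgroup/unified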

For development purposes, the following boot parameter makes all
controllers appear on the unified hierarchy whether supported or not.

  cgroup__DEVEL__legacy_files_on_dfl

A controller can be moved across hierarchies only after the controller
is no longer referenced in its current hierarchy.  Because per-cgroup
controller states are destroyed asynchronously and controllers may
have lingering references, a controller may not show up immediately on
the unified hierarchy after the final umount of the previous
hierarchy.  Similarly, a controller should be fully disabled to be
moved out of the unified hierarchy and it may take some time for the
disabled controller to become available for other hierarchies;
furthermore, due to dependencies among controllers, other controllers
may need to be disabled too.

While useful for development and manual configurations, dynamically
moving controllers between the unified and other hierarchies is
strongly discouraged for production use.  It is recommended to decide
the hierarchies and controller associations before starting to use the
controllers.


2-2. cgroup.subtree_control

All cgroups on the unified hierarchy have a "cgroup.subtree_control"
file which governs which controllers are enabled on the children of
the cgroup.  Let's assume a hierarchy like the following.

  root - A - B - C
               \ D

root's "cgroup.subtree_control" file determines which controllers are
enabled on A.  A's on B.  B's on C and D.  This coincides with the
fact that controllers on the immediate sub-level are used to
distribute the resources of the parent.  In fact, it's natural to
assume that resource control knobs of a child belong to its parent.
Enabling a controller in a "cgroup.subtree_control" file declares that
distribution of the respective resources of the cgroup will be
controlled.  Note that this means that controller enable states are
shared among siblings.

When read, the file contains a space-separated list of currently
enabled controllers.  A write to the file should contain a
space-separated list of controllers with '+' or '-' prefixed (without
the quotes).  Controllers prefixed with '+' are enabled and ones
prefixed with '-' are disabled.  If a controller is listed multiple
times, the last entry wins.  The specified operations are executed
atomically - either all succeed or all fail.
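
For example, assuming a cgroup at $CGRP on the unified hierarchy (the
path and the controller choices are illustrative), the following
enables memory and disables cpu for its children and then reads back
the resulting state.

  # enable memory and disable cpu for the children of $CGRP;
  # the two operations either both succeed or both fail
  echo "+memory -cpu" > $CGRP/cgroup.subtree_control

  # list the controllers currently enabled on the children
  cat $CGRP/cgroup.subtree_control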


2-3. cgroup.controllers

The read-only "cgroup.controllers" file contains a space-separated
list of controllers which can be enabled in the cgroup's
"cgroup.subtree_control" file.

In the root cgroup, this lists controllers which are not bound to
other hierarchies and the content changes as controllers are bound to
and unbound from other hierarchies.

In non-root cgroups, the content of this file equals that of the
parent's "cgroup.subtree_control" file as only controllers enabled
from the parent can be used in its children.


3. Structural Constraints

3-1. Top-down

As it doesn't make sense to nest control of an uncontrolled resource,
all non-root "cgroup.subtree_control" files can only contain
controllers which are enabled in the parent's "cgroup.subtree_control"
file.  A controller can be enabled only if the parent has the
controller enabled and a controller can't be disabled if one or more
children have it enabled.


3-2. No internal tasks

One long-standing issue that cgroup faces is the competition between
tasks belonging to the parent cgroup and its children cgroups.  This
is inherently nasty as two different types of entities compete and
there is no agreed-upon obvious way to handle it.  Different
controllers are doing different things.

The cpu controller considers tasks and cgroups as equivalents and maps
nice levels to cgroup weights.  This works for some cases but falls
flat when children should be allocated specific ratios of CPU cycles
and the number of internal tasks fluctuates - the ratios constantly
change as the number of competing entities fluctuates.  There also are
other issues.  The mapping from nice level to weight isn't obvious or
universal, and there are various other knobs which simply aren't
available for tasks.

The blkio controller implicitly creates a hidden leaf node for each
cgroup to host the tasks.  The hidden leaf has its own copies of all
the knobs with "leaf_" prefixed.  While this allows equivalent control
over internal tasks, it comes with serious drawbacks.  It always adds
an extra layer of nesting which may not be necessary, makes the
interface messy and significantly complicates the implementation.

The memory controller currently doesn't have a way to control what
happens between internal tasks and child cgroups and the behavior is
not clearly defined.  There have been attempts to add ad-hoc behaviors
and knobs to tailor the behavior to specific workloads.  Continuing
this direction will lead to problems which will be extremely difficult
to resolve in the long term.

Multiple controllers struggle with internal tasks and have come up
with different ways to deal with them; unfortunately, all the
approaches in use now are severely flawed and, furthermore, the widely
different behaviors make cgroup as a whole highly inconsistent.

It is clear that this is something which needs to be addressed from
cgroup core proper in a uniform way so that controllers don't need to
worry about it and cgroup as a whole shows a consistent and logical
behavior.  To achieve that, unified hierarchy enforces the following
structural constraint:

  Except for the root, only cgroups which don't contain any task may
  have controllers enabled in their "cgroup.subtree_control" files.

Combined with other properties, this guarantees that, when a
controller is looking at the part of the hierarchy which has it
enabled, tasks are always only on the leaves.  This rules out
situations where child cgroups compete against internal tasks of the
parent.

There are two things to note.  Firstly, the root cgroup is exempt from
the restriction.  Root contains tasks and anonymous resource
consumption which can't be associated with any other cgroup and
requires special treatment from most controllers.  How resource
consumption in the root cgroup is governed is up to each controller.

Secondly, the restriction doesn't take effect if there is no enabled
controller in the cgroup's "cgroup.subtree_control" file.  This is
important as otherwise it wouldn't be possible to create children of a
populated cgroup.  To control resource distribution of a cgroup, the
cgroup must create children and transfer all its tasks to the children
before enabling controllers in its "cgroup.subtree_control" file, as
in the sketch below.
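
The following is a minimal sketch of that workflow; the path, the
child name and the controller choice are illustrative, and it assumes
no new processes are spawned into $CGRP concurrently.

  # $CGRP is populated; create a leaf child and move every process
  # into it so that $CGRP itself no longer hosts any task
  mkdir $CGRP/leaf
  while read pid; do
          echo $pid > $CGRP/leaf/cgroup.procs
  done < $CGRP/cgroup.procs

  # with no internal processes left, controllers can be enabled
  echo "+memory" > $CGRP/cgroup.subtree_control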


4. Delegation

4-1. Model of delegation

A cgroup can be delegated to a less privileged user by granting write
access to the directory and its "cgroup.procs" file to the user.  Note
that the resource control knobs in a given directory concern the
resources of the parent and thus must not be delegated along with the
directory.

Once delegated, the user can build a sub-hierarchy under the
directory, organize processes as it sees fit and further distribute
the resources it got from the parent.  The limits and other settings
of all resource controllers are hierarchical and, regardless of what
happens in the delegated sub-hierarchy, nothing can escape the
resource restrictions imposed by the parent.

Currently, cgroup doesn't impose any restrictions on the number of
cgroups in or nesting depth of a delegated sub-hierarchy; however,
this may be limited explicitly in the future.
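
As a sketch, delegating $CGRP to user u0 (the path and user name are
hypothetical, and chown is just one way of granting the required write
access) involves handing over only the directory and its
"cgroup.procs" file; the resource control knobs in $CGRP belong to the
parent and stay under the control of the parent's owner.

  # grant u0 the directory and "cgroup.procs" so that it can create
  # sub-cgroups and organize processes under $CGRP
  chown u0 $CGRP $CGRP/cgroup.procs

  # note: knobs like $CGRP/memory.* are intentionally NOT handed
  # over - they control how the parent's resources reach $CGRP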


4-2. Common ancestor rule

On the unified hierarchy, to write to a "cgroup.procs" file, in
addition to the usual write permission to the file and uid match, the
writer must also have write access to the "cgroup.procs" file of the
common ancestor of the source and destination cgroups.  This prevents
delegatees from smuggling processes across disjoint sub-hierarchies.

Let's say cgroups C0 and C1 have been delegated to user U0 who created
C00, C01 under C0 and C10 under C1 as follows.

  ~~~~~~~~~~~~~ - C0 - C00
  ~ cgroup    ~      \ C01
  ~ hierarchy ~
  ~~~~~~~~~~~~~ - C1 - C10

C0 and C1 are separate entities in terms of resource distribution
regardless of their relative positions in the hierarchy.  The
resources the processes under C0 are entitled to are controlled by
C0's ancestors and may be completely different from those of C1.  It's
clear that the intention of delegating C0 to U0 is to allow U0 to
organize the processes under C0 and further control the distribution
of C0's resources.

On traditional hierarchies, if a task has write access to the "tasks"
or "cgroup.procs" file of a cgroup and its uid agrees with the
target's, it can move the target to the cgroup.  In the above example,
U0 would not only be able to move processes in each sub-hierarchy but
also across the two sub-hierarchies, effectively allowing it to
violate the organizational and resource restrictions implied by the
hierarchical structure above C0 and C1.

On the unified hierarchy, let's say U0 wants to write the pid of a
process which has a matching uid and is currently in C10 into
"C00/cgroup.procs".  U0 obviously has write access to the file and
migration permission on the process; however, the common ancestor of
the source cgroup C10 and the destination cgroup C00 is above the
points of delegation, so U0 would not have write access to its
"cgroup.procs" file and the write will be denied with -EACCES.
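
Expressed as a command run by U0, the denied migration from the
example above would look like the following (the pid is illustrative).

  # as U0: try to move a process with a matching uid from C10 to C00
  echo $PID > C00/cgroup.procs
  # fails with EACCES - U0 lacks write access to the "cgroup.procs"
  # of the common ancestor of C10 and C00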


5. Other Changes

5-1. [Un]populated Notification

cgroup users often need a way to determine when a cgroup's
subhierarchy becomes empty so that it can be cleaned up.  cgroup
currently provides release_agent for it; unfortunately, this mechanism
is riddled with issues.

- It delivers events by forking and execing a userland binary
  specified as the release_agent.  This is a long deprecated method of
  notification delivery.  It's extremely heavy, slow and cumbersome to
  integrate with larger infrastructure.

- There is a single monitoring point at the root.  There's no way to
  delegate management of a subtree.

- The event isn't recursive.  It triggers when a cgroup doesn't have
  any tasks or child cgroups.  Events for internal nodes trigger only
  after all children are removed.  This again makes it impossible to
  delegate management of a subtree.

- Events are filtered from the kernel side.  A "notify_on_release"
  file is used to subscribe to or suppress release events.  This is
  unnecessarily complicated and probably done this way because event
  delivery itself was expensive.

Unified hierarchy implements an interface file "cgroup.populated"
which can be used to monitor whether the cgroup's subhierarchy has
tasks in it or not.  Its value is 0 if there is no task in the cgroup
and its descendants; otherwise, 1.  poll and [id]notify events are
triggered when the value changes.

This is significantly lighter and simpler and trivially allows
delegating management of a subhierarchy - a subhierarchy monitor can
block further propagation simply by putting itself or another process
in the subhierarchy and monitoring the events that it's interested in
from there, without interfering with monitoring higher in the tree.
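
As a sketch, a monitor delegated a subhierarchy could watch its
"cgroup.populated" file with the inotify userland tools as follows;
inotifywait comes from the inotify-tools package and the path is
illustrative.

  # wake up whenever "cgroup.populated" changes and report when the
  # subhierarchy has become empty
  while inotifywait -qq -e modify $CGRP/cgroup.populated; do
          if [ "$(cat $CGRP/cgroup.populated)" = "0" ]; then
                  echo "$CGRP subhierarchy is now empty"
          fi
  done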

In the unified hierarchy, the release_agent mechanism is no longer
supported and the interface files "release_agent" and
"notify_on_release" do not exist.


5-2. Other Core Changes

- None of the mount options is allowed.

- remount is disallowed.

- rename(2) is disallowed.

- The "tasks" file is removed.  Everything should be at process
  granularity.  Use the "cgroup.procs" file instead.

- The "cgroup.procs" file is not sorted.  pids will be unique unless
  they are recycled between reads.

- The "cgroup.clone_children" file is removed.


5-3. Per-Controller Changes

5-3-1. blkio

- blk-throttle becomes properly hierarchical.


5-3-2. cpuset

- Tasks are kept in empty cpusets after hotplug and take on the masks
  of the nearest non-empty ancestor, instead of being moved to it.

- A task can be moved into an empty cpuset, and again it takes on the
  masks of the nearest non-empty ancestor.


5-3-3. memory

- use_hierarchy is on by default and the cgroup file for the flag is
  not created.

- The original lower boundary, the soft limit, is defined as a limit
  that is unset by default.  As a result, the set of cgroups that
  global reclaim prefers is opt-in, rather than opt-out.  The costs
  for optimizing these mostly negative lookups are so high that the
  implementation, despite its enormous size, does not even provide the
  basic desirable behavior.  First off, the soft limit has no
  hierarchical meaning.  All configured groups are organized in a
  global rbtree and treated like equal peers, regardless of where they
  are located in the hierarchy.  This makes subtree delegation
  impossible.  Second, the soft limit reclaim pass is so aggressive
  that it not only introduces high allocation latencies into the
  system, but also impacts system performance due to overreclaim, to
  the point where the feature becomes self-defeating.

  The memory.low boundary on the other hand is a top-down allocated
  reserve.  A cgroup enjoys reclaim protection when it and all its
  ancestors are below their low boundaries, which makes delegation of
  subtrees possible.  Secondly, new cgroups have no reserve by default
  and in the common case most cgroups are eligible for the preferred
  reclaim pass.  This allows the new low boundary to be efficiently
  implemented with just a minor addition to the generic reclaim code,
  without the need for out-of-band data structures and reclaim passes.
  Because the generic reclaim code considers all cgroups except for
  the ones running low in the preferred first reclaim pass,
  overreclaim of individual groups is eliminated as well, resulting in
  much better overall workload performance.

- The original high boundary, the hard limit, is defined as a strict
  limit that cannot budge, even if the OOM killer has to be called.
  But this generally goes against the goal of making the most out of
  the available memory.  The memory consumption of workloads varies
  during runtime, and that requires users to overcommit.  But doing
  that with a strict upper limit requires either a fairly accurate
  prediction of the working set size or adding slack to the limit.
  Since working set size estimation is hard and error prone, and
  getting it wrong results in OOM kills, most users tend to err on the
  side of a looser limit and end up wasting precious resources.

  The memory.high boundary on the other hand can be set much more
  conservatively.  When hit, it throttles allocations by forcing them
  into direct reclaim to work off the excess, but it never invokes the
  OOM killer.  As a result, a high boundary that is chosen too
  aggressively will not terminate the processes, but instead it will
  lead to gradual performance degradation.  The user can monitor this
  and make corrections until the minimal memory footprint that still
  gives acceptable performance is found.

  In extreme cases, with many concurrent allocations and a complete
  breakdown of reclaim progress within the group, the high boundary
  can be exceeded.  But even then it's mostly better to satisfy the
  allocation from the slack available in other groups or the rest of
  the system than to kill the group.  Otherwise, memory.max is there
  to limit this type of spillover and ultimately contain buggy or even
  malicious applications.

- The original control file names are unwieldy and inconsistent in
  many different ways.  For example, the upper boundary hit count is
  exported in the memory.failcnt file, but an OOM event count has to
  be manually counted by listening to memory.oom_control events, and
  lower boundary / soft limit events have to be counted by first
  setting a threshold for that value and then counting those events.
  Also, usage and limit files encode their units in the filename.
  That makes the filenames very long, even though this is not
  information that a user needs to be reminded of every time they type
  out those names.

  To address these naming issues, as well as to signal clearly that
  the new interface carries a new configuration model, the naming
  conventions in it necessarily differ from the old interface.

- The original limit files indicate the state of an unset limit with a
  Very High Number, and a configured limit can be unset by echoing -1
  into those files.  But that very high number is implementation and
  architecture dependent and not very descriptive.  And while -1 can
  be understood as an underflow into the highest possible value, -2 or
  -10M etc. do not work, so it's not consistent.

  memory.low, memory.high, and memory.max will use the string "max"
  to indicate and set the highest possible value, as sketched below.
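
As an illustration of the new conventions, configuring the three new
boundaries could look like the following; the path and values are
hypothetical and assume the usual k/M/G suffix parsing.

  # give the group a protected reserve and a conservative high
  # boundary which throttles rather than OOM-kills when exceeded
  echo 512M > $CGRP/memory.low
  echo 4G > $CGRP/memory.high

  # "max" sets the highest possible value, i.e. no hard limit
  echo max > $CGRP/memory.max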


6. Planned Changes

6-1. CAP for resource control

Unified hierarchy will require one of the capabilities(7), which is
yet to be decided, for all resource control related knobs.  Process
organization operations - creation of sub-cgroups and migration of
processes in sub-hierarchies - may be delegated by changing the
ownership and/or permissions on the cgroup directory and
"cgroup.procs" interface file; however, all operations which affect
resource control - writes to a "cgroup.subtree_control" file or any
controller-specific knobs - will require an explicit CAP privilege.

This, in part, is to prevent the cgroup interface from being
inadvertently promoted to a programmable API used by non-privileged
binaries.  cgroup exposes various aspects of the system in ways which
aren't properly abstracted for direct consumption by regular programs.
This is an administration interface much closer to sysctl knobs than
system calls.  Even the basic access model, being filesystem path
based, isn't suitable for direct consumption.  There's no way to
access "my cgroup" in a race-free way or make multiple operations
atomic against migration to another cgroup.

Another aspect is that, for better or for worse, the cgroup interface
goes through far less scrutiny than regular interfaces for
unprivileged userland.  The upside is that cgroup is able to expose
useful features which may not be suitable for general consumption in a
reasonable time frame.  It provides a relatively short path between
internal details and the userland-visible interface.  Of course, this
shortcut comes with high risk.  We go through what we go through for
general kernel APIs for good reasons.  It may end up leaking internal
details in a way which can exert significant pain by locking the
kernel into a contract that can't be maintained in a reasonable
manner.

Also, due to its specific nature, cgroup and its controllers don't
tend to attract attention from a wide scope of developers.  cgroup's
short history is already fraught with severely mis-designed
interfaces, unnecessary commitments to and exposure of internal
details, and broken and dangerous implementations of various features.

Keeping cgroup as an administration interface is both advantageous for
its role and imperative given its nature.  Some of the cgroup features
may make sense for unprivileged access.  If deemed justified, those
must be further abstracted and implemented as a different interface,
be it a system call or process-private filesystem, and survive through
the scrutiny that any interface for general consumption is required to
go through.

Requiring CAP is not a complete solution but should serve as a
significant deterrent against spraying cgroup usages in non-privileged
programs.