			  Deadline Task Scheduling
			  ------------------------

CONTENTS
========

 0. WARNING
 1. Overview
 2. Scheduling algorithm
 3. Scheduling Real-Time Tasks
   3.1 Definitions
   3.2 Schedulability Analysis for Uniprocessor Systems
   3.3 Schedulability Analysis for Multiprocessor Systems
   3.4 Relationship with SCHED_DEADLINE Parameters
 4. Bandwidth management
   4.1 System-wide settings
   4.2 Task interface
   4.3 Default behavior
   4.4 Behavior of sched_yield()
 5. Tasks CPU affinity
   5.1 SCHED_DEADLINE and cpusets HOWTO
 6. Future plans
 A. Test suite
 B. Minimal main()


0. WARNING
==========

 Fiddling with these settings can result in unpredictable or even unstable
 system behavior. As for -rt (group) scheduling, it is assumed that root
 users know what they're doing.


1. Overview
===========

 The SCHED_DEADLINE policy contained inside the sched_dl scheduling class is
 basically an implementation of the Earliest Deadline First (EDF) scheduling
 algorithm, augmented with a mechanism (called Constant Bandwidth Server, CBS)
 that makes it possible to isolate the behavior of tasks from each other.


2. Scheduling algorithm
=======================

 SCHED_DEADLINE uses three parameters, named "runtime", "period", and
 "deadline", to schedule tasks. A SCHED_DEADLINE task should receive
 "runtime" microseconds of execution time every "period" microseconds, and
 these "runtime" microseconds are available within "deadline" microseconds
 from the beginning of the period.  In order to implement this behavior,
 every time the task wakes up, the scheduler computes a "scheduling deadline"
 consistent with the guarantee (using the CBS[2,3] algorithm). Tasks are then
 scheduled using EDF[1] on these scheduling deadlines (the task with the
 earliest scheduling deadline is selected for execution). Notice that the
 task actually receives "runtime" time units within "deadline" only if a
 proper "admission control" strategy (see Section "4. Bandwidth management")
 is used (clearly, if the system is overloaded this guarantee cannot be
 respected).

 Summing up, the CBS[2,3] algorithm assigns scheduling deadlines to tasks so
 that each task runs for at most its runtime every period, avoiding any
 interference between different tasks (bandwidth isolation), while the EDF[1]
 algorithm selects the task with the earliest scheduling deadline as the one
 to be executed next. Thanks to this feature, tasks that do not strictly comply
 with the "traditional" real-time task model (see Section 3) can effectively
 use the new policy.

 In more detail, the CBS algorithm assigns scheduling deadlines to
 tasks in the following way:

  - Each SCHED_DEADLINE task is characterized by the "runtime",
    "deadline", and "period" parameters;

  - The state of the task is described by a "scheduling deadline" and
    a "remaining runtime". These two parameters are initially set to 0;

  - When a SCHED_DEADLINE task wakes up (becomes ready for execution),
    the scheduler checks if

                 remaining runtime                  runtime
        ----------------------------------    >    ---------
        scheduling deadline - current time           period

    then, if the scheduling deadline is smaller than the current time, or
    this condition holds, the scheduling deadline and the remaining
    runtime are re-initialized as

         scheduling deadline = current time + deadline
         remaining runtime = runtime

    otherwise, the scheduling deadline and the remaining runtime are
    left unchanged;

  - When a SCHED_DEADLINE task executes for an amount of time t, its
    remaining runtime is decreased as

         remaining runtime = remaining runtime - t

    (technically, the runtime is decreased at every tick, or when the
    task is descheduled / preempted);

  - When the remaining runtime becomes less than or equal to 0, the task is
    said to be "throttled" (also known as "depleted" in real-time literature)
    and cannot be scheduled until its scheduling deadline. The "replenishment
    time" for this task (see next item) is set to be equal to the current
    value of the scheduling deadline;

  - When the current time is equal to the replenishment time of a
    throttled task, the scheduling deadline and the remaining runtime are
    updated as

         scheduling deadline = scheduling deadline + period
         remaining runtime = remaining runtime + runtime

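
 The rules above map almost directly onto code. The following fragment is an
 illustrative sketch, not the kernel implementation: the structure and
 function names are invented for this example, and times are assumed to be
 expressed in nanoseconds.

 struct cbs_task {
	long long runtime;		/* reserved runtime per period */
	long long deadline;		/* relative deadline */
	long long period;		/* reservation period */
	long long sched_deadline;	/* current scheduling deadline */
	long long remaining;		/* remaining runtime */
 };

 /* Rule applied when the task wakes up. */
 void cbs_wakeup(struct cbs_task *t, long long now)
 {
	/*
	 * "remaining / (sched_deadline - now) > runtime / period",
	 * rewritten without divisions; the first test also covers a
	 * scheduling deadline already in the past.
	 */
	if (t->sched_deadline <= now ||
	    t->remaining * t->period > t->runtime * (t->sched_deadline - now)) {
		t->sched_deadline = now + t->deadline;
		t->remaining = t->runtime;
	}
	/* otherwise both values are left unchanged */
 }

 /* Rule applied when the replenishment time (the old value of the
  * scheduling deadline) of a throttled task is reached. */
 void cbs_replenish(struct cbs_task *t)
 {
	t->sched_deadline += t->period;
	t->remaining += t->runtime;
 }
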


3. Scheduling Real-Time Tasks
=============================

 * BIG FAT WARNING ******************************************************
 *
 * This section contains a (not thorough) summary of classical deadline
 * scheduling theory and of how it applies to SCHED_DEADLINE.
 * The reader can "safely" skip to Section 4 if only interested in seeing
 * how the scheduling policy can be used. Anyway, we strongly recommend
 * coming back here and continuing to read (once the urge for testing is
 * satisfied :P) to be sure of fully understanding all technical details.
 ************************************************************************

 There are no limitations on what kind of task can exploit this new
 scheduling discipline, even if it must be said that it is particularly
 suited for periodic or sporadic real-time tasks that need guarantees on
 their timing behavior, e.g., multimedia, streaming and control applications.

3.1 Definitions
------------------------

 A typical real-time task is composed of a repetition of computation phases
 (task instances, or jobs) which are activated in a periodic or sporadic
 fashion.
 Each job J_j (where J_j is the j^th job of the task) is characterized by an
 arrival time r_j (the time when the job becomes ready for execution), an
 amount of computation time c_j needed to finish the job, and a job absolute
 deadline d_j, which is the time within which the job should be finished.
 The maximum execution time max{c_j} is called "Worst Case Execution Time"
 (WCET) for the task.
 A real-time task can be periodic with period P if r_{j+1} = r_j + P, or
 sporadic with minimum inter-arrival time P if r_{j+1} >= r_j + P. Finally,
 d_j = r_j + D, where D is the task's relative deadline.
 Summing up, a real-time task can be described as
	Task = (WCET, D, P)

 The utilization of a real-time task is defined as the ratio between its
 WCET and its period (or minimum inter-arrival time), and represents
 the fraction of CPU time needed to execute the task. When considering
 multiple real-time tasks, the parameters of the i-th task are indicated
 with the "_i" suffix, and the total utilization is defined as the sum of
 the utilizations WCET_i/P_i over all the real-time tasks in the system.

 If the total utilization U = sum(WCET_i/P_i) is larger than M (with M equal
 to the number of CPUs), then the scheduler is unable to respect all the
 deadlines; moreover, real-time tasks risk starving non-real-time tasks.
 If, instead, the total utilization is smaller than M, then non-real-time
 tasks will not be starved and the system might be able to respect all the
 deadlines.
 As a matter of fact, in this case it is possible to provide an upper bound
 for tardiness (defined as the maximum between 0 and the difference
 between the finishing time of a job and its absolute deadline).
 More precisely, it can be proven that using a global EDF scheduler the
 maximum tardiness of each task is smaller than or equal to
	((M - 1) · WCET_max - WCET_min)/(M - (M - 2) · U_max) + WCET_max
 where WCET_max = max{WCET_i} is the maximum WCET, WCET_min = min{WCET_i}
 is the minimum WCET, and U_max = max{WCET_i/P_i} is the maximum
 utilization[12].

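
 As an illustration of these definitions, the following helpers compute the
 total utilization and the tardiness bound of [12] for an array of tasks.
 The struct and function names are invented for this sketch; they are not
 part of any kernel or library API.

 struct rt_task {
	double wcet;	/* Worst Case Execution Time */
	double d;	/* relative deadline D */
	double p;	/* period (or minimum inter-arrival time) P */
 };

 /* Total utilization U = sum(WCET_i / P_i). */
 double total_utilization(const struct rt_task *ts, int n)
 {
	double u = 0.0;
	int i;

	for (i = 0; i < n; i++)
		u += ts[i].wcet / ts[i].p;
	return u;
 }

 /*
  * Global EDF tardiness bound of [12], meaningful when U <= M:
  * ((M - 1) * WCET_max - WCET_min) / (M - (M - 2) * U_max) + WCET_max
  */
 double tardiness_bound(const struct rt_task *ts, int n, int m)
 {
	double wcet_max = 0.0, wcet_min = ts[0].wcet, u_max = 0.0;
	int i;

	for (i = 0; i < n; i++) {
		double u = ts[i].wcet / ts[i].p;

		if (ts[i].wcet > wcet_max)
			wcet_max = ts[i].wcet;
		if (ts[i].wcet < wcet_min)
			wcet_min = ts[i].wcet;
		if (u > u_max)
			u_max = u;
	}
	return ((m - 1) * wcet_max - wcet_min) / (m - (m - 2) * u_max) + wcet_max;
 }
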
3.2 Schedulability Analysis for Uniprocessor Systems
------------------------

 If M=1 (uniprocessor system), or in case of partitioned scheduling (each
 real-time task is statically assigned to one and only one CPU), it is
 possible to formally check whether all the deadlines are respected.
 If D_i = P_i for all tasks, then EDF is able to respect all the deadlines
 of all the tasks executing on a CPU if and only if the total utilization
 of the tasks running on such a CPU is less than or equal to 1.
 If D_i != P_i for some task, then it is possible to define the density of
 a task as WCET_i/min{D_i,P_i}, and EDF is able to respect all the deadlines
 of all the tasks running on a CPU if the sum of the densities of the tasks
 running on such a CPU is less than or equal to 1:
	sum(WCET_i / min{D_i, P_i}) <= 1
 It is important to notice that this condition is only sufficient, and not
 necessary: there are task sets that are schedulable, but do not respect the
 condition. For example, consider the task set {Task_1,Task_2} composed of
 Task_1=(50ms,50ms,100ms) and Task_2=(10ms,100ms,100ms).
 EDF is clearly able to schedule the two tasks without missing any deadline
 (Task_1 is scheduled as soon as it is released, and finishes just in time
 to respect its deadline; Task_2 is scheduled immediately after Task_1, hence
 its response time cannot be larger than 50ms + 10ms = 60ms) even though
	50 / min{50,100} + 10 / min{100, 100} = 50 / 50 + 10 / 100 = 1.1
 Of course it is possible to test the exact schedulability of tasks with
 D_i != P_i (checking a condition that is both sufficient and necessary),
 but this cannot be done by comparing the total utilization or density with
 a constant. Instead, the so-called "processor demand" approach can be used,
 computing the total amount of CPU time h(t) needed by all the tasks to
 respect all of their deadlines in a time interval of size t, and comparing
 such a time with the interval size t. If h(t) is smaller than t (that is,
 the amount of time needed by the tasks in a time interval of size t is
 smaller than the size of the interval) for all the possible values of t, then
 EDF is able to schedule the tasks respecting all of their deadlines. Since
 performing this check for all possible values of t is impossible, it has been
 proven[4,5,6] that it is sufficient to perform the test for values of t
 between 0 and a maximum value L. The cited papers contain all of the
 mathematical details and explain how to compute h(t) and L.
 In any case, this kind of analysis is too complex as well as too
 time-consuming to be performed on-line. Hence, as explained in Section 4,
 Linux uses an admission test based on the tasks' utilizations.

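
 The density-based test and the example above can be reproduced with a few
 lines of code. The following sketch (names are invented for the example)
 prints the density sum 1.10 for the task set {Task_1, Task_2}, even though
 EDF can schedule it:

 #include <stdio.h>

 struct rt_task { double wcet, d, p; };	/* Task = (WCET, D, P), in ms */

 /* Sufficient (not necessary) EDF test on one CPU when D_i != P_i:
  * sum(WCET_i / min{D_i, P_i}) <= 1. */
 double density_sum(const struct rt_task *ts, int n)
 {
	double sum = 0.0;
	int i;

	for (i = 0; i < n; i++) {
		double m = ts[i].d < ts[i].p ? ts[i].d : ts[i].p;

		sum += ts[i].wcet / m;
	}
	return sum;
 }

 int main(void)
 {
	/* The example from the text: schedulable by EDF, yet the
	 * density-based test fails because the sum is 1.1 > 1. */
	struct rt_task set[] = { { 50, 50, 100 }, { 10, 100, 100 } };

	printf("density sum = %.2f\n", density_sum(set, 2));
	return 0;
 }
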
3.3 Schedulability Analysis for Multiprocessor Systems
------------------------

 On multiprocessor systems with global EDF scheduling (non-partitioned
 systems), a sufficient test for schedulability cannot be based on the
 utilizations or densities: it can be shown that, even if D_i = P_i, task
 sets with total utilization only slightly larger than 1 can miss deadlines
 regardless of the number of CPUs.

 Consider a set {Task_1,...Task_{M+1}} of M+1 tasks on a system with M
 CPUs, with the first task Task_1=(P,P,P) having period, relative deadline
 and WCET equal to P. The remaining M tasks Task_i=(e,P-1,P-1) have an
 arbitrarily small worst case execution time (indicated as "e" here) and a
 period smaller than the one of the first task. Hence, if all the tasks
 activate at the same time t, global EDF schedules these M tasks first
 (because their absolute deadlines are equal to t + P - 1, hence they are
 smaller than the absolute deadline of Task_1, which is t + P). As a
 result, Task_1 can be scheduled only at time t + e, and will finish at
 time t + e + P, after its absolute deadline. The total utilization of the
 task set is U = M · e / (P - 1) + P / P = M · e / (P - 1) + 1, and for small
 values of e this can become very close to 1. This is known as "Dhall's
 effect"[7]. Note: the example in the original paper by Dhall has been
 slightly simplified here (for example, Dhall more correctly computed
 lim_{e->0}U).

 More complex schedulability tests for global EDF have been developed in
 the real-time literature[8,9], but they are not based on a simple comparison
 between total utilization (or density) and a fixed constant. If all tasks
 have D_i = P_i, a sufficient schedulability condition can be expressed in
 a simple way:
	sum(WCET_i / P_i) <= M - (M - 1) · U_max
 where U_max = max{WCET_i / P_i}[10]. Notice that for U_max = 1,
 M - (M - 1) · U_max becomes M - M + 1 = 1 and this schedulability condition
 just confirms Dhall's effect. A more complete survey of the literature
 about schedulability tests for multi-processor real-time scheduling can be
 found in [11].

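
 This condition translates into a few lines of code. The following sketch
 (with invented names, for D_i = P_i task sets) returns non-zero when the
 sufficient test of [10] is passed:

 struct rt_task { double wcet, p; };	/* Task = (WCET, P), with D = P */

 /* Sufficient global EDF test from [10]:
  * sum(WCET_i / P_i) <= M - (M - 1) * U_max. */
 int gfb_admissible(const struct rt_task *ts, int n, int m)
 {
	double u_sum = 0.0, u_max = 0.0;
	int i;

	for (i = 0; i < n; i++) {
		double u = ts[i].wcet / ts[i].p;

		u_sum += u;
		if (u > u_max)
			u_max = u;
	}
	return u_sum <= m - (m - 1) * u_max;
 }
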
 As seen, enforcing that the total utilization is smaller than M does not
 guarantee that global EDF schedules the tasks without missing any deadline
 (in other words, global EDF is not an optimal scheduling algorithm). However,
 a total utilization smaller than M is enough to guarantee that non-real-time
 tasks are not starved and that the tardiness of real-time tasks has an upper
 bound[12] (as previously noted). Different bounds on the maximum tardiness
 experienced by real-time tasks have been developed in various papers[13,14],
 but the theoretical result that is important for SCHED_DEADLINE is that if
 the total utilization is less than or equal to M then the response times of
 the tasks are limited.

3.4 Relationship with SCHED_DEADLINE Parameters
------------------------

 Finally, it is important to understand the relationship between the
 SCHED_DEADLINE scheduling parameters described in Section 2 (runtime,
 deadline and period) and the real-time task parameters (WCET, D, P)
 described in this section. Note that a task's temporal constraints are
 represented by its absolute deadlines d_j = r_j + D described above, while
 SCHED_DEADLINE schedules the tasks according to their scheduling deadlines
 (see Section 2).
 If an admission test is used to guarantee that the scheduling deadlines
 are respected, then SCHED_DEADLINE can be used to schedule real-time tasks
 guaranteeing that all the jobs' deadlines of a task are respected.
 In order to do this, a task must be scheduled by setting:

  - runtime >= WCET
  - deadline = D
  - period <= P

 IOW, if runtime >= WCET and period <= P, then the scheduling deadlines
 and the absolute deadlines (d_j) coincide, so a proper admission control
 allows the jobs' absolute deadlines to be respected for this task (this is
 what is called the "hard schedulability property" and is an extension of
 Lemma 1 of [2]).
 Notice that if runtime > deadline the admission control will surely reject
 this task, as it is not possible to respect its temporal constraints.

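
 For example, given a task model (WCET, D, P) expressed in nanoseconds, the
 mapping above could be applied as in the following sketch.
 attach_deadline_task() is an invented helper; it relies on the struct
 sched_attr definition and the sched_setattr() wrapper shown in Appendix B.

 int attach_deadline_task(pid_t pid, __u64 wcet_ns, __u64 d_ns, __u64 p_ns)
 {
	struct sched_attr attr = {
		.size		= sizeof(attr),
		.sched_policy	= SCHED_DEADLINE,
		.sched_runtime	= wcet_ns,	/* runtime >= WCET */
		.sched_deadline	= d_ns,		/* deadline = D */
		.sched_period	= p_ns,		/* period <= P */
	};

	return sched_setattr(pid, &attr, 0);
 }
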
 References:
  1 - C. L. Liu and J. W. Layland. Scheduling Algorithms for Multiprogramming
      in a Hard-Real-Time Environment. Journal of the Association for
      Computing Machinery, 20(1), 1973.
  2 - L. Abeni, G. Buttazzo. Integrating Multimedia Applications in Hard
      Real-Time Systems. Proceedings of the 19th IEEE Real-Time Systems
      Symposium, 1998. http://retis.sssup.it/~giorgio/paps/1998/rtss98-cbs.pdf
  3 - L. Abeni. Server Mechanisms for Multimedia Applications. ReTiS Lab
      Technical Report. http://disi.unitn.it/~abeni/tr-98-01.pdf
  4 - J. Y. Leung and M. L. Merril. A Note on Preemptive Scheduling of
      Periodic, Real-Time Tasks. Information Processing Letters, vol. 11,
      no. 3, pp. 115-118, 1980.
  5 - S. K. Baruah, A. K. Mok and L. E. Rosier. Preemptively Scheduling
      Hard-Real-Time Sporadic Tasks on One Processor. Proceedings of the
      11th IEEE Real-Time Systems Symposium, 1990.
  6 - S. K. Baruah, L. E. Rosier and R. R. Howell. Algorithms and Complexity
      Concerning the Preemptive Scheduling of Periodic Real-Time Tasks on
      One Processor. Real-Time Systems Journal, vol. 4, no. 2, pp. 301-324,
      1990.
  7 - S. J. Dhall and C. L. Liu. On a Real-Time Scheduling Problem. Operations
      Research, vol. 26, no. 1, pp. 127-140, 1978.
  8 - T. Baker. Multiprocessor EDF and Deadline Monotonic Schedulability
      Analysis. Proceedings of the 24th IEEE Real-Time Systems Symposium, 2003.
  9 - T. Baker. An Analysis of EDF Schedulability on a Multiprocessor.
      IEEE Transactions on Parallel and Distributed Systems, vol. 16, no. 8,
      pp. 760-768, 2005.
  10 - J. Goossens, S. Funk and S. Baruah. Priority-Driven Scheduling of
       Periodic Task Systems on Multiprocessors. Real-Time Systems Journal,
       vol. 25, no. 2-3, pp. 187-205, 2003.
  11 - R. Davis and A. Burns. A Survey of Hard Real-Time Scheduling for
       Multiprocessor Systems. ACM Computing Surveys, vol. 43, no. 4, 2011.
       http://www-users.cs.york.ac.uk/~robdavis/papers/MPSurveyv5.0.pdf
  12 - U. C. Devi and J. H. Anderson. Tardiness Bounds under Global EDF
       Scheduling on a Multiprocessor. Real-Time Systems Journal, vol. 32,
       no. 2, pp. 133-189, 2008.
  13 - P. Valente and G. Lipari. An Upper Bound to the Lateness of Soft
       Real-Time Tasks Scheduled by EDF on Multiprocessors. Proceedings of
       the 26th IEEE Real-Time Systems Symposium, 2005.
  14 - J. Erickson, U. Devi and S. Baruah. Improved Tardiness Bounds for
       Global EDF. Proceedings of the 22nd Euromicro Conference on
       Real-Time Systems, 2010.


4. Bandwidth management
=======================

 As previously mentioned, in order for -deadline scheduling to be
 effective and useful (that is, to be able to provide "runtime" time units
 within "deadline"), it is important to have some method to keep the allocation
 of the available fractions of CPU time to the various tasks under control.
 This is usually called "admission control" and, if it is not performed, then
 no guarantee can be given on the actual scheduling of the -deadline tasks.

 As already stated in Section 3, a necessary condition for correctly
 scheduling a set of real-time tasks is that the total utilization is smaller
 than M. When talking about -deadline tasks, this requires that the sum of
 the ratio between runtime and period for all tasks is smaller than M. Notice
 that the ratio runtime/period is equivalent to the utilization of a
 "traditional" real-time task, and is also often referred to as "bandwidth".
 The interface used to control the CPU bandwidth that can be allocated
 to -deadline tasks is similar to the one already used for -rt
 tasks with real-time group scheduling (a.k.a. RT-throttling - see
 Documentation/scheduler/sched-rt-group.txt), and is based on readable/
 writable control files located in procfs (for system wide settings).
 Notice that per-group settings (controlled through cgroupfs) are not yet
 defined for -deadline tasks, because more discussion is needed in order to
 figure out how we want to manage SCHED_DEADLINE bandwidth at the task group
 level.

 A main difference between deadline bandwidth management and RT-throttling
 is that -deadline tasks have bandwidth of their own (while -rt ones don't!),
 and thus we don't need a higher level throttling mechanism to enforce the
 desired bandwidth. In other words, this means that interface parameters are
 only used at admission control time (i.e., when the user calls
 sched_setattr()). Scheduling is then performed considering actual tasks'
 parameters, so that CPU bandwidth is allocated to SCHED_DEADLINE tasks
 respecting their needs in terms of granularity. Therefore, using this simple
 interface we can put a cap on the total utilization of -deadline tasks (i.e.,
 \Sum (runtime_i / period_i) < global_dl_utilization_cap).

4.1 System-wide settings
------------------------

 The system-wide settings are configured under the /proc virtual file system.

 For now the -rt knobs are used for -deadline admission control and the
 -deadline runtime is accounted against the -rt runtime. We realize that this
 isn't entirely desirable; however, it is better to have a small interface for
 now, and be able to change it easily later. The ideal situation (see Section
 6) is to run -rt tasks from a -deadline server, in which case the -rt
 bandwidth is a direct subset of dl_bw.

 This means that, for a root_domain comprising M CPUs, -deadline tasks
 can be created while the sum of their bandwidths stays below:

   M * (sched_rt_runtime_us / sched_rt_period_us)

 It is also possible to disable this bandwidth management logic, and
 thus be free to oversubscribe the system up to any arbitrary level.
 This is done by writing -1 in /proc/sys/kernel/sched_rt_runtime_us.

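
 For illustration, the cap can be computed from user space as in the sketch
 below. read_knob() is an invented helper, and the number of online CPUs is
 used only as an approximation of the size of the root_domain.

 #include <stdio.h>
 #include <unistd.h>

 long read_knob(const char *path)
 {
	long val = -1;
	FILE *f = fopen(path, "r");

	if (f) {
		fscanf(f, "%ld", &val);
		fclose(f);
	}
	return val;
 }

 int main(void)
 {
	long runtime = read_knob("/proc/sys/kernel/sched_rt_runtime_us");
	long period  = read_knob("/proc/sys/kernel/sched_rt_period_us");
	long cpus    = sysconf(_SC_NPROCESSORS_ONLN);

	if (runtime < 0)	/* -1 means bandwidth management is disabled */
		printf("-deadline bandwidth control disabled\n");
	else
		printf("max -deadline utilization: %ld * %ld/%ld = %.2f\n",
		       cpus, runtime, period,
		       cpus * (double)runtime / period);
	return 0;
 }
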

4.2 Task interface
------------------

 Specifying a periodic/sporadic task that executes for a given amount of
 runtime at each instance, and that is scheduled according to the urgency of
 its own timing constraints, needs, in general, a way of declaring:
  - a (maximum/typical) instance execution time,
  - a minimum interval between consecutive instances,
  - a time constraint by which each instance must be completed.

 Therefore:
  * a new struct sched_attr, containing all the necessary fields, is
    provided;
  * the new scheduling related syscalls that manipulate it, i.e.,
    sched_setattr() and sched_getattr(), are implemented.

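
 As a small illustration, the parameters of the calling thread can be read
 back as follows (this sketch reuses the struct sched_attr definition and
 the sched_getattr() syscall wrapper from Appendix B):

 void print_dl_params(void)
 {
	struct sched_attr attr;

	if (sched_getattr(0, &attr, sizeof(attr), 0) == 0 &&
	    attr.sched_policy == SCHED_DEADLINE)
		printf("runtime=%llu deadline=%llu period=%llu (nsec)\n",
		       (unsigned long long)attr.sched_runtime,
		       (unsigned long long)attr.sched_deadline,
		       (unsigned long long)attr.sched_period);
 }
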

4.3 Default behavior
--------------------

 The default value for SCHED_DEADLINE bandwidth is to have rt_runtime equal to
 950000. With rt_period equal to 1000000, by default, this means that -deadline
 tasks can use at most 95%, multiplied by the number of CPUs that compose the
 root_domain, for each root_domain.
 This means that non-deadline tasks will receive at least 5% of the CPU time,
 and that -deadline tasks will receive their runtime with a guaranteed
 worst-case delay with respect to the "deadline" parameter. If "deadline" =
 "period" and the cpuset mechanism is used to implement partitioned scheduling
 (see Section 5), then this simple setting of the bandwidth management is able
 to deterministically guarantee that -deadline tasks will receive their runtime
 in each period.

 Finally, notice that in order not to jeopardize the admission control a
 -deadline task cannot fork.

4.4 Behavior of sched_yield()
-----------------------------

 When a SCHED_DEADLINE task calls sched_yield(), it gives up its
 remaining runtime and is immediately throttled, until the next
 period, when its runtime will be replenished (a special flag
 dl_yielded is set and used to correctly handle throttling and runtime
 replenishment after a call to sched_yield()).

 This behavior of sched_yield() allows the task to wake up exactly at
 the beginning of the next period. Also, this may be useful in the
 future with bandwidth reclaiming mechanisms, where sched_yield() will
 make the leftover runtime available for reclamation by other
 SCHED_DEADLINE tasks.

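
 For example, a task can implement a classic periodic loop by doing its
 per-instance work and then calling sched_yield(), as in the following
 sketch (do_one_instance() is a placeholder for the application's work, not
 an existing API; sched_yield() is declared in <sched.h>):

 void periodic_loop(volatile int *stop)
 {
	while (!*stop) {
		do_one_instance();	/* one job of the task */
		sched_yield();		/* give up the residual runtime and
					 * sleep until the next period */
	}
 }
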

5. Tasks CPU affinity
=====================

 -deadline tasks cannot have an affinity mask smaller than the entire
 root_domain they are created on. However, affinities can be specified
 through the cpuset facility (Documentation/cgroup-v1/cpusets.txt).

5.1 SCHED_DEADLINE and cpusets HOWTO
------------------------------------

 An example of a simple configuration (pin a -deadline task to CPU0)
 follows (rt-app is used to create a -deadline task):

 mkdir /dev/cpuset
 mount -t cgroup -o cpuset cpuset /dev/cpuset
 cd /dev/cpuset
 mkdir cpu0
 echo 0 > cpu0/cpuset.cpus
 echo 0 > cpu0/cpuset.mems
 echo 1 > cpuset.cpu_exclusive
 echo 0 > cpuset.sched_load_balance
 echo 1 > cpu0/cpuset.cpu_exclusive
 echo 1 > cpu0/cpuset.mem_exclusive
 echo $$ > cpu0/tasks
 rt-app -t 100000:10000:d:0 -D5 (it is now actually superfluous to
 specify task affinity)

6. Future plans
===============

 Still missing:

  - refinements to deadline inheritance, especially regarding the possibility
    of retaining bandwidth isolation among non-interacting tasks. This is
    being studied from both theoretical and practical points of view, and
    hopefully we should be able to produce some demonstrative code soon;
  - (c)group based bandwidth management, and maybe scheduling;
  - access control for non-root users (and related security concerns to
    address): what is the best way to allow unprivileged use of the
    mechanisms, and how can non-root users be prevented from "cheating"
    the system?

 As already discussed, we also plan to merge this work with the EDF
 throttling patches [https://lkml.org/lkml/2010/2/23/239], but we are still
 in the preliminary phases of the merge and we really seek feedback that
 will help us decide on the direction it should take.

Appendix A. Test suite
======================

 The SCHED_DEADLINE policy can be easily tested using two applications that
 are part of a wider Linux Scheduler validation suite. The suite is
 available as a GitHub repository: https://github.com/scheduler-tools.

 The first testing application is called rt-app and can be used to
 start multiple threads with specific parameters. rt-app supports
 SCHED_{OTHER,FIFO,RR,DEADLINE} scheduling policies and their related
 parameters (e.g., niceness, priority, runtime/deadline/period). rt-app
 is a valuable tool, as it can be used to synthetically recreate certain
 workloads (maybe mimicking real use-cases) and evaluate how the scheduler
 behaves under such workloads. In this way, results are easily reproducible.
 rt-app is available at: https://github.com/scheduler-tools/rt-app.

 Thread parameters can be specified from the command line, with something like
 this:

  # rt-app -t 100000:10000:d -t 150000:20000:f:10 -D5

 The above creates 2 threads. The first one, scheduled by SCHED_DEADLINE,
 executes for 10ms every 100ms. The second one, scheduled at SCHED_FIFO
 priority 10, executes for 20ms every 150ms. The test will run for a total
 of 5 seconds.

 More interestingly, configurations can be described with a json file that
 can be passed as input to rt-app with something like this:

  # rt-app my_config.json

 The parameters that can be specified with the second method are a superset
 of the command line options. Please refer to the rt-app documentation for
 more details (<rt-app-sources>/doc/*.json).

 The second testing application is a modification of schedtool, called
 schedtool-dl, which can be used to set up SCHED_DEADLINE parameters for a
 certain pid/application. schedtool-dl is available at:
 https://github.com/scheduler-tools/schedtool-dl.git.

 The usage is straightforward:

  # schedtool -E -t 10000000:100000000 -e ./my_cpuhog_app

 With this, my_cpuhog_app is put to run inside a SCHED_DEADLINE reservation
 of 10ms every 100ms (note that the parameters are expressed in nanoseconds).
 You can also use schedtool to create a reservation for an already running
 application, given that you know its pid:

  # schedtool -E -t 10000000:100000000 my_app_pid

Appendix B. Minimal main()
==========================

 We provide in what follows a simple (ugly) self-contained code snippet
 showing how SCHED_DEADLINE reservations can be created by a real-time
 application developer.

 #define _GNU_SOURCE
 #include <unistd.h>
 #include <stdio.h>
 #include <stdlib.h>
 #include <string.h>
 #include <time.h>
 #include <linux/unistd.h>
 #include <linux/kernel.h>
 #include <linux/types.h>
 #include <sys/syscall.h>
 #include <pthread.h>

 #define gettid() syscall(__NR_gettid)

 #define SCHED_DEADLINE	6

 /* XXX use the proper syscall numbers */
 #ifdef __x86_64__
 #define __NR_sched_setattr		314
 #define __NR_sched_getattr		315
 #endif

 #ifdef __i386__
 #define __NR_sched_setattr		351
 #define __NR_sched_getattr		352
 #endif

 #ifdef __arm__
 #define __NR_sched_setattr		380
 #define __NR_sched_getattr		381
 #endif

 static volatile int done;

 struct sched_attr {
	__u32 size;

	__u32 sched_policy;
	__u64 sched_flags;

	/* SCHED_NORMAL, SCHED_BATCH */
	__s32 sched_nice;

	/* SCHED_FIFO, SCHED_RR */
	__u32 sched_priority;

	/* SCHED_DEADLINE (nsec) */
	__u64 sched_runtime;
	__u64 sched_deadline;
	__u64 sched_period;
 };

 int sched_setattr(pid_t pid,
		  const struct sched_attr *attr,
		  unsigned int flags)
 {
	return syscall(__NR_sched_setattr, pid, attr, flags);
 }

 int sched_getattr(pid_t pid,
		  struct sched_attr *attr,
		  unsigned int size,
		  unsigned int flags)
 {
	return syscall(__NR_sched_getattr, pid, attr, size, flags);
 }

 void *run_deadline(void *data)
 {
	struct sched_attr attr;
	int x = 0;
	int ret;
	unsigned int flags = 0;

	printf("deadline thread started [%ld]\n", gettid());

	attr.size = sizeof(attr);
	attr.sched_flags = 0;
	attr.sched_nice = 0;
	attr.sched_priority = 0;

	/* This creates a 10ms/30ms reservation */
	attr.sched_policy = SCHED_DEADLINE;
	attr.sched_runtime = 10 * 1000 * 1000;
	attr.sched_period = attr.sched_deadline = 30 * 1000 * 1000;

	ret = sched_setattr(0, &attr, flags);
	if (ret < 0) {
		done = 0;
		perror("sched_setattr");
		exit(-1);
	}

	while (!done) {
		x++;
	}

	printf("deadline thread dies [%ld]\n", gettid());
	return NULL;
 }

 int main (int argc, char **argv)
 {
	pthread_t thread;

	printf("main thread [%ld]\n", gettid());

	pthread_create(&thread, NULL, run_deadline, NULL);

	sleep(10);

	done = 1;
	pthread_join(thread, NULL);

	printf("main dies [%ld]\n", gettid());
	return 0;
 }