			  Deadline Task Scheduling
			  ------------------------

CONTENTS
========

 0. WARNING
 1. Overview
 2. Scheduling algorithm
 3. Scheduling Real-Time Tasks
 4. Bandwidth management
   4.1 System-wide settings
   4.2 Task interface
   4.3 Default behavior
 5. Tasks CPU affinity
   5.1 SCHED_DEADLINE and cpusets HOWTO
 6. Future plans
 A. Test suite
 B. Minimal main()


0. WARNING
==========

 Fiddling with these settings can result in unpredictable or even unstable
 system behavior. As for -rt (group) scheduling, it is assumed that root users
 know what they're doing.


1. Overview
===========

 The SCHED_DEADLINE policy contained inside the sched_dl scheduling class is
 basically an implementation of the Earliest Deadline First (EDF) scheduling
 algorithm, augmented with a mechanism (called Constant Bandwidth Server, CBS)
 that makes it possible to isolate the behavior of tasks from each other.


2. Scheduling algorithm
=======================

 SCHED_DEADLINE uses three parameters, named "runtime", "period", and
 "deadline", to schedule tasks. A SCHED_DEADLINE task should receive
 "runtime" microseconds of execution time every "period" microseconds, and
 these "runtime" microseconds are available within "deadline" microseconds
 from the beginning of the period.  In order to implement this behaviour,
 every time the task wakes up, the scheduler computes a "scheduling deadline"
 consistent with the guarantee (using the CBS[2,3] algorithm). Tasks are then
 scheduled using EDF[1] on these scheduling deadlines (the task with the
 earliest scheduling deadline is selected for execution). Notice that the
 task actually receives "runtime" time units within "deadline" only if a
 proper "admission control" strategy (see Section "4. Bandwidth management")
 is used (clearly, if the system is overloaded this guarantee cannot be
 respected).

 Summing up, the CBS[2,3] algorithm assigns scheduling deadlines to tasks so
 that each task runs for at most its runtime every period, avoiding any
 interference between different tasks (bandwidth isolation), while the EDF[1]
 algorithm selects the task with the earliest scheduling deadline as the one
 to be executed next. Thanks to this feature, tasks that do not strictly comply
 with the "traditional" real-time task model (see Section 3) can effectively
 use the new policy.

 In more detail, the CBS algorithm assigns scheduling deadlines to
 tasks in the following way (a schematic code sketch follows the list):

  - Each SCHED_DEADLINE task is characterised by the "runtime",
    "deadline", and "period" parameters;

  - The state of the task is described by a "scheduling deadline", and
    a "remaining runtime". These two parameters are initially set to 0;

  - When a SCHED_DEADLINE task wakes up (becomes ready for execution),
    the scheduler checks if

                 remaining runtime                  runtime
        ----------------------------------    >    ---------
        scheduling deadline - current time           period

    then, if the scheduling deadline is smaller than the current time, or
    if this condition is verified, the scheduling deadline and the
    remaining runtime are re-initialised as

         scheduling deadline = current time + deadline
         remaining runtime = runtime

    otherwise, the scheduling deadline and the remaining runtime are
    left unchanged;

  - When a SCHED_DEADLINE task executes for an amount of time t, its
    remaining runtime is decreased as

         remaining runtime = remaining runtime - t

    (technically, the runtime is decreased at every tick, or when the
    task is descheduled / preempted);

  - When the remaining runtime becomes less than or equal to 0, the task is
    said to be "throttled" (also known as "depleted" in real-time literature)
    and cannot be scheduled until its scheduling deadline. The "replenishment
    time" for this task (see next item) is set to be equal to the current
    value of the scheduling deadline;

  - When the current time is equal to the replenishment time of a
    throttled task, the scheduling deadline and the remaining runtime are
    updated as

         scheduling deadline = scheduling deadline + period
         remaining runtime = remaining runtime + runtime

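 To make these rules concrete, here is a minimal, purely illustrative C
 sketch of the CBS state machine described above. All names are hypothetical;
 the real implementation lives in kernel/sched/deadline.c and differs in many
 details.

 #include <stdint.h>

 struct dl_task {
	/* static parameters, all in the same time unit (e.g. nanoseconds) */
	uint64_t runtime, deadline, period;
	/* dynamic state, initially 0 */
	int64_t remaining_runtime;
	uint64_t sched_deadline;
 };

 /* Rule applied when the task wakes up; "now" is the current time. */
 void cbs_wakeup(struct dl_task *t, uint64_t now)
 {
	/*
	 * Reset if the scheduling deadline is in the past, or if
	 * remaining_runtime / (sched_deadline - now) > runtime / period
	 * (written below in cross-multiplied form to avoid divisions).
	 */
	if (t->sched_deadline < now ||
	    t->remaining_runtime * (int64_t)t->period >
	    (int64_t)(t->sched_deadline - now) * (int64_t)t->runtime) {
		t->sched_deadline = now + t->deadline;
		t->remaining_runtime = t->runtime;
	}
	/* otherwise both values are left unchanged */
 }

 /* Rule applied while the task executes, for "ran" time units. */
 void cbs_account(struct dl_task *t, uint64_t ran)
 {
	t->remaining_runtime -= (int64_t)ran;
	/*
	 * If remaining_runtime <= 0 the task is "throttled" and cannot
	 * run again before its replenishment time (== sched_deadline).
	 */
 }

 /* Rule applied when the current time reaches the replenishment time. */
 void cbs_replenish(struct dl_task *t)
 {
	t->sched_deadline += t->period;
	t->remaining_runtime += t->runtime;
 }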

3. Scheduling Real-Time Tasks
=============================

 * BIG FAT WARNING ******************************************************
 *
 * This section contains a (not-thorough) summary on classical deadline
 * scheduling theory, and how it applies to SCHED_DEADLINE.
 * The reader can "safely" skip to Section 4 if only interested in seeing
 * how the scheduling policy can be used. Anyway, we strongly recommend
 * coming back here and continuing reading (once the urge for testing is
 * satisfied :P) to be sure of fully understanding all technical details.
 ************************************************************************

 There are no limitations on what kind of task can exploit this new
 scheduling discipline, even if it must be said that it is particularly
 suited for periodic or sporadic real-time tasks that need guarantees on
 their timing behavior, e.g., multimedia, streaming, and control
 applications.
 A typical real-time task is composed of a repetition of computation phases
 (task instances, or jobs) which are activated in a periodic or sporadic
 fashion.
 Each job J_j (where J_j is the j^th job of the task) is characterised by an
 arrival time r_j (the time when the job starts), an amount of computation
 time c_j needed to finish the job, and a job absolute deadline d_j, which
 is the time within which the job should be finished. The maximum execution
 time max_j{c_j} is called "Worst Case Execution Time" (WCET) for the task.
 A real-time task can be periodic with period P if r_{j+1} = r_j + P, or
 sporadic with minimum inter-arrival time P if r_{j+1} >= r_j + P. Finally,
 d_j = r_j + D, where D is the task's relative deadline.
 The utilisation of a real-time task is defined as the ratio between its
 WCET and its period (or minimum inter-arrival time), and represents
 the fraction of CPU time needed to execute the task.

 If the total utilisation sum_i(WCET_i/P_i) is larger than M (with M equal
 to the number of CPUs), then the scheduler is unable to respect all the
 deadlines.
 Note that total utilisation is defined as the sum of the utilisations
 WCET_i/P_i over all the real-time tasks in the system. When considering
 multiple real-time tasks, the parameters of the i-th task are indicated
 with the "_i" suffix.
 Moreover, if the total utilisation is larger than M, then real-time tasks
 risk starving non-real-time tasks.
 If, instead, the total utilisation is smaller than M, then non-real-time
 tasks will not be starved and the system might be able to respect all the
 deadlines.
 As a matter of fact, in this case it is possible to provide an upper bound
 for tardiness (defined as the maximum between 0 and the difference
 between the finishing time of a job and its absolute deadline).
 More precisely, it can be proven that using a global EDF scheduler the
 maximum tardiness of each task is smaller than or equal to
	((M - 1) * WCET_max - WCET_min) / (M - (M - 2) * U_max) + WCET_max
 where WCET_max = max_i{WCET_i} is the maximum WCET, WCET_min = min_i{WCET_i}
 is the minimum WCET, and U_max = max_i{WCET_i/P_i} is the maximum utilisation.
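
 As a purely illustrative example (made-up numbers), consider M = 2 CPUs and
 a task set with WCET_max = 20ms, WCET_min = 10ms and U_max = 0.5: the bound
 above evaluates to
	((2 - 1) * 20 - 10) / (2 - (2 - 2) * 0.5) + 20 = 10/2 + 20 = 25ms
 of worst-case tardiness.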

 If M=1 (uniprocessor system), or in case of partitioned scheduling (each
 real-time task is statically assigned to one and only one CPU), it is
 possible to formally check if all the deadlines are respected.
 If D_i = P_i for all tasks, then EDF is able to respect all the deadlines
 of all the tasks executing on a CPU if and only if the total utilisation
 of the tasks running on such a CPU is smaller than or equal to 1.
 If D_i != P_i for some task, then it is possible to define the density of
 a task as WCET_i/min{D_i,P_i}, and EDF is able to respect all the deadlines
 of all the tasks running on a CPU if the sum of the densities of the tasks
 running on such a CPU is smaller than or equal to 1
 (notice that this condition is only sufficient, and not necessary).
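
 A minimal user-space sketch of this density test follows (the struct and
 function names are hypothetical, for illustration only; times can be in
 any consistent unit):

 #include <stdint.h>

 struct rt_task {
	uint64_t wcet;		/* worst-case execution time */
	uint64_t period;	/* period / minimum inter-arrival time P */
	uint64_t deadline;	/* relative deadline D */
 };

 /*
  * Density test for one CPU: sufficient in general, and necessary
  * and sufficient when deadline == period for every task.
  * Returns 1 if the task set is schedulable by EDF, 0 if unknown.
  */
 int edf_density_test(const struct rt_task *tasks, int n)
 {
	double sum = 0.0;
	int i;

	for (i = 0; i < n; i++) {
		uint64_t d = tasks[i].deadline < tasks[i].period ?
			     tasks[i].deadline : tasks[i].period;
		sum += (double)tasks[i].wcet / (double)d;
	}
	return sum <= 1.0;
 }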

 On multiprocessor systems with global EDF scheduling (non-partitioned
 systems), a sufficient schedulability test cannot be based on the
 utilisations (it can be shown that task sets with utilisations slightly
 larger than 1 can miss deadlines regardless of the number of CPUs M).
 However, as previously stated, enforcing that the total utilisation is smaller
 than M is enough to guarantee that non-real-time tasks are not starved and
 that the tardiness of real-time tasks has an upper bound.

 SCHED_DEADLINE can be used to schedule real-time tasks guaranteeing that
 the jobs' deadlines of a task are respected. In order to do this, a task
 must be scheduled by setting:

  - runtime >= WCET
  - deadline = D
  - period <= P

 IOW, if runtime >= WCET and if period is <= P, then the scheduling deadlines
 and the absolute deadlines (d_j) coincide, so a proper admission control
 makes it possible to respect the jobs' absolute deadlines for this task (this
 is what is called "hard schedulability property" and is an extension of Lemma
 1 of [2]). Notice that if runtime > deadline the admission control will
 surely reject this task, as it is not possible to respect its temporal
 constraints.
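
 For example (made-up numbers), a periodic task with WCET = 5ms, relative
 deadline D = 10ms and period P = 20ms can be admitted with runtime = 5ms,
 deadline = 10ms and period = 20ms; a larger runtime (up to the deadline)
 is also valid, at the cost of reserving more bandwidth than strictly
 needed.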

 References:
  1 - C. L. Liu and J. W. Layland. Scheduling algorithms for
      multiprogramming in a hard-real-time environment. Journal of the
      Association for Computing Machinery, 20(1), 1973.
  2 - L. Abeni, G. Buttazzo. Integrating Multimedia Applications in Hard
      Real-Time Systems. Proceedings of the 19th IEEE Real-Time Systems
      Symposium, 1998. http://retis.sssup.it/~giorgio/paps/1998/rtss98-cbs.pdf
  3 - L. Abeni. Server Mechanisms for Multimedia Applications. ReTiS Lab
      Technical Report. http://disi.unitn.it/~abeni/tr-98-01.pdf

4. Bandwidth management
=======================

 As previously mentioned, in order for -deadline scheduling to be
 effective and useful (that is, to be able to provide "runtime" time units
 within "deadline"), it is important to have some method to keep the allocation
 of the available fractions of CPU time to the various tasks under control.
 This is usually called "admission control" and if it is not performed, then
 no guarantee can be given on the actual scheduling of the -deadline tasks.

 As already stated in Section 3, a necessary condition to be respected to
 correctly schedule a set of real-time tasks is that the total utilisation
 is smaller than M. When talking about -deadline tasks, this requires that
 the sum of the ratio between runtime and period for all tasks is smaller
 than M. Notice that the ratio runtime/period is equivalent to the utilisation
 of a "traditional" real-time task, and is also often referred to as
 "bandwidth".
 The interface used to control the CPU bandwidth that can be allocated
 to -deadline tasks is similar to the one already used for -rt
 tasks with real-time group scheduling (a.k.a. RT-throttling - see
 Documentation/scheduler/sched-rt-group.txt), and is based on
 readable/writable control files located in procfs (for system-wide
 settings). Notice that per-group settings (controlled through cgroupfs)
 are not yet defined for -deadline tasks, because more discussion is needed
 in order to figure out how we want to manage SCHED_DEADLINE bandwidth at
 the task group level.

 The main difference between deadline bandwidth management and RT-throttling
 is that -deadline tasks have bandwidth on their own (while -rt ones don't!),
 and thus we don't need a higher level throttling mechanism to enforce the
 desired bandwidth. In other words, this means that interface parameters are
 only used at admission control time (i.e., when the user calls
 sched_setattr()). Scheduling is then performed considering actual tasks'
 parameters, so that CPU bandwidth is allocated to SCHED_DEADLINE tasks
 respecting their needs in terms of granularity. Therefore, using this simple
 interface we can put a cap on the total utilisation of -deadline tasks, i.e.,

   sum_i (runtime_i / period_i) < global_dl_utilization_cap
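
 For instance (made-up numbers), two -deadline tasks with runtime/period
 equal to 30ms/100ms and 20ms/50ms respectively consume a total bandwidth
 of 0.3 + 0.4 = 0.7, i.e., 70% of one CPU.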

4.1 System-wide settings
------------------------

 The system-wide settings are configured under the /proc virtual file system.

 For now the -rt knobs are used for -deadline admission control and the
 -deadline runtime is accounted against the -rt runtime. We realise that this
 isn't entirely desirable; however, it is better to have a small interface for
 now, and be able to change it easily later. The ideal situation (see Section
 6) is to run -rt tasks from a -deadline server; in which case the -rt
 bandwidth is a direct subset of dl_bw.

 This means that, for a root_domain comprising M CPUs, -deadline tasks
 can be created while the sum of their bandwidths stays below:

   M * (sched_rt_runtime_us / sched_rt_period_us)

 It is also possible to disable this bandwidth management logic, and
 thus be free to oversubscribe the system up to any arbitrary level.
 This is done by writing -1 in /proc/sys/kernel/sched_rt_runtime_us.
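
 The current values can be inspected, and the admission control disabled,
 from a root shell (the values shown are the defaults, see Section 4.3):

  # cat /proc/sys/kernel/sched_rt_period_us
  1000000
  # cat /proc/sys/kernel/sched_rt_runtime_us
  950000
  # echo -1 > /proc/sys/kernel/sched_rt_runtime_us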


4.2 Task interface
------------------

 Specifying a periodic/sporadic task that executes for a given amount of
 runtime at each instance, and that is scheduled according to the urgency of
 its own timing constraints, needs in general a way of declaring:

  - a (maximum/typical) instance execution time,
  - a minimum interval between consecutive instances,
  - a time constraint by which each instance must be completed.

 Therefore:
  * a new struct sched_attr, containing all the necessary fields, is
    provided;
  * the new scheduling related syscalls that manipulate it, i.e.,
    sched_setattr() and sched_getattr(), are implemented (a minimal usage
    sketch follows this list).
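
 The fragment below shows how the three declarations above map onto the
 struct sched_attr fields (the values are illustrative; see Appendix B for
 a complete, self-contained program, including the struct definition and
 the syscall wrappers):

	struct sched_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.sched_policy = SCHED_DEADLINE;
	/* (maximum/typical) instance execution time: 10ms */
	attr.sched_runtime = 10 * 1000 * 1000;
	/* minimum interval between consecutive instances: 100ms */
	attr.sched_period = 100 * 1000 * 1000;
	/* time constraint for completing each instance: 100ms */
	attr.sched_deadline = 100 * 1000 * 1000;

	/* pid 0 means the calling thread */
	if (sched_setattr(0, &attr, 0))
		perror("sched_setattr");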


4.3 Default behavior
--------------------

 The default value for SCHED_DEADLINE bandwidth is to have rt_runtime equal to
 950000. With rt_period equal to 1000000, by default, this means that
 -deadline tasks can use at most 95%, multiplied by the number of CPUs that
 compose the root_domain, for each root_domain.
 This means that non -deadline tasks will receive at least 5% of the CPU time,
 and that -deadline tasks will receive their runtime with a guaranteed
 worst-case delay with respect to the "deadline" parameter. If "deadline" =
 "period" and the cpuset mechanism is used to implement partitioned scheduling
 (see Section 5), then this simple setting of the bandwidth management is able
 to deterministically guarantee that -deadline tasks will receive their
 runtime in a period.
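
 For example, on a root_domain comprising 4 CPUs the default settings admit
 -deadline tasks up to a total bandwidth of 4 * (950000 / 1000000) = 3.8,
 leaving the equivalent of 0.2 CPUs (5% of each CPU) for everything else.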

 Finally, notice that in order not to jeopardize the admission control a
 -deadline task cannot fork.

5. Tasks CPU affinity
=====================

 -deadline tasks cannot have an affinity mask smaller than the entire
 root_domain they are created on. However, affinities can be specified
 through the cpuset facility (Documentation/cgroups/cpusets.txt).

5.1 SCHED_DEADLINE and cpusets HOWTO
------------------------------------

 An example of a simple configuration (pin a -deadline task to CPU0)
 follows (rt-app is used to create a -deadline task):

 mkdir /dev/cpuset
 mount -t cgroup -o cpuset cpuset /dev/cpuset
 cd /dev/cpuset
 mkdir cpu0
 echo 0 > cpu0/cpuset.cpus
 echo 0 > cpu0/cpuset.mems
 echo 1 > cpuset.cpu_exclusive
 echo 0 > cpuset.sched_load_balance
 echo 1 > cpu0/cpuset.cpu_exclusive
 echo 1 > cpu0/cpuset.mem_exclusive
 echo $$ > cpu0/tasks
 rt-app -t 100000:10000:d:0 -D5

 (it is now actually superfluous to specify task affinity on the rt-app
 command line)

6. Future plans
===============

 Still missing:

  - refinements to deadline inheritance, especially regarding the possibility
    of retaining bandwidth isolation among non-interacting tasks. This is
    being studied from both theoretical and practical points of view, and
    hopefully we should be able to produce some demonstrative code soon;
  - (c)group based bandwidth management, and maybe scheduling;
  - access control for non-root users (and related security concerns to
    address): what is the best way to allow unprivileged use of the
    mechanisms, and how can non-root users be prevented from "cheating"
    the system?

 As already discussed, we are also planning to merge this work with the EDF
 throttling patches [https://lkml.org/lkml/2010/2/23/239], but we are still in
 the preliminary phases of the merge and we really seek feedback that would
 help us decide on the direction it should take.

Appendix A. Test suite
======================

 The SCHED_DEADLINE policy can be easily tested using two applications that
 are part of a wider Linux Scheduler validation suite. The suite is
 available as a GitHub repository: https://github.com/scheduler-tools.

 The first testing application is called rt-app and can be used to
 start multiple threads with specific parameters. rt-app supports
 SCHED_{OTHER,FIFO,RR,DEADLINE} scheduling policies and their related
 parameters (e.g., niceness, priority, runtime/deadline/period). rt-app
 is a valuable tool, as it can be used to synthetically recreate certain
 workloads (maybe mimicking real use-cases) and evaluate how the scheduler
 behaves under such workloads. In this way, results are easily reproducible.
 rt-app is available at: https://github.com/scheduler-tools/rt-app.

 Thread parameters can be specified from the command line, with something like
 this:

  # rt-app -t 100000:10000:d -t 150000:20000:f:10 -D5

 The above creates 2 threads. The first one, scheduled by SCHED_DEADLINE,
 executes for 10ms every 100ms. The second one, scheduled at SCHED_FIFO
 priority 10, executes for 20ms every 150ms. The test will run for a total
 of 5 seconds.

 More interestingly, configurations can be described with a json file that
 can be passed as input to rt-app with something like this:

  # rt-app my_config.json

 The parameters that can be specified with the second method are a superset
 of the command line options. Please refer to the rt-app documentation for
 more details (<rt-app-sources>/doc/*.json).
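
 As a purely hypothetical illustration (the key names and structure may
 differ between rt-app versions; the bundled documentation and example
 files are authoritative), such a json file could look like:

  {
      "tasks" : {
          "thread0" : {
              "policy" : "SCHED_DEADLINE",
              "runtime" : 10000,
              "period" : 100000,
              "deadline" : 100000
          }
      },
      "global" : {
          "duration" : 5
      }
  }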

 The second testing application is a modification of schedtool, called
 schedtool-dl, which can be used to set up SCHED_DEADLINE parameters for a
 certain pid/application. schedtool-dl is available at:
 https://github.com/scheduler-tools/schedtool-dl.git.

 The usage is straightforward:

  # schedtool -E -t 10000000:100000000 -e ./my_cpuhog_app

 With this, my_cpuhog_app is put to run inside a SCHED_DEADLINE reservation
 of 10ms every 100ms (note that parameters are expressed in nanoseconds,
 matching the sched_attr fields shown in Appendix B).
 You can also use schedtool to create a reservation for an already running
 application, given that you know its pid:

  # schedtool -E -t 10000000:100000000 my_app_pid

Appendix B. Minimal main()
==========================

 We provide in what follows a simple (ugly) self-contained code snippet
 showing how SCHED_DEADLINE reservations can be created by a real-time
 application developer.

 #define _GNU_SOURCE
 #include <unistd.h>
 #include <stdio.h>
 #include <stdlib.h>
 #include <string.h>
 #include <time.h>
 #include <linux/unistd.h>
 #include <linux/kernel.h>
 #include <linux/types.h>
 #include <sys/syscall.h>
 #include <pthread.h>

 #define gettid() syscall(__NR_gettid)

 #define SCHED_DEADLINE	6

 /* XXX use the proper syscall numbers */
 #ifdef __x86_64__
 #define __NR_sched_setattr		314
 #define __NR_sched_getattr		315
 #endif

 #ifdef __i386__
 #define __NR_sched_setattr		351
 #define __NR_sched_getattr		352
 #endif

 #ifdef __arm__
 #define __NR_sched_setattr		380
 #define __NR_sched_getattr		381
 #endif

 static volatile int done;

 struct sched_attr {
	__u32 size;

	__u32 sched_policy;
	__u64 sched_flags;

	/* SCHED_NORMAL, SCHED_BATCH */
	__s32 sched_nice;

	/* SCHED_FIFO, SCHED_RR */
	__u32 sched_priority;

	/* SCHED_DEADLINE (nsec) */
	__u64 sched_runtime;
	__u64 sched_deadline;
	__u64 sched_period;
 };

 int sched_setattr(pid_t pid,
		  const struct sched_attr *attr,
		  unsigned int flags)
 {
	return syscall(__NR_sched_setattr, pid, attr, flags);
 }

 int sched_getattr(pid_t pid,
		  struct sched_attr *attr,
		  unsigned int size,
		  unsigned int flags)
 {
	return syscall(__NR_sched_getattr, pid, attr, size, flags);
 }

 void *run_deadline(void *data)
 {
	struct sched_attr attr;
	int x = 0;
	int ret;
	unsigned int flags = 0;

	printf("deadline thread started [%ld]\n", gettid());

	attr.size = sizeof(attr);
	attr.sched_flags = 0;
	attr.sched_nice = 0;
	attr.sched_priority = 0;

	/* This creates a 10ms/30ms reservation */
	attr.sched_policy = SCHED_DEADLINE;
	attr.sched_runtime = 10 * 1000 * 1000;
	attr.sched_period = attr.sched_deadline = 30 * 1000 * 1000;

	ret = sched_setattr(0, &attr, flags);
	if (ret < 0) {
		done = 0;
		perror("sched_setattr");
		exit(-1);
	}

	while (!done) {
		x++;
	}

	printf("deadline thread dies [%ld]\n", gettid());
	return NULL;
 }

 int main (int argc, char **argv)
 {
	pthread_t thread;

	printf("main thread [%ld]\n", gettid());

	pthread_create(&thread, NULL, run_deadline, NULL);

	sleep(10);

	done = 1;
	pthread_join(thread, NULL);

	printf("main dies [%ld]\n", gettid());
	return 0;
 }