Tuesday, January 8, 2008

An approach to handling and implementing scheduling policies in FreeRTOS

Introduction

The FreeRTOS scheduler is something slightly different from the common concept of "scheduler" people are used to when talking about Operating Systems.
FreeRTOS does indeed provide a scheduling mechanism, not a scheduling policy (come on... round-robin over fixed-priority tasks can hardly be considered a scheduling policy, can it?).

1. Possible solutions

Thus FreeRTOS lacks a convenient scheduling policy, and a convenient way to provide one to task developers without effort.
Essentially there are three possible solutions:
  • Hard-coding policies within the FreeRTOS scheduling mechanism
  • Collaborative scheduling
  • Enhanced collaborative scheduling

1.1 Hard-coding policies within the FreeRTOS scheduling mechanism

It essentially means modifying the code that manages context switches, implementing the desired scheduling policy directly into that very code.
It is a very comfortable solution, simple and effective, but it violates a good software design principle: a neat separation should be kept between mechanisms and policies. Moreover, it implies modifying critical portions of the system, potentially introducing very annoying bugs.
These considerations probably make it the worst of the solutions proposed here.

1.2 Collaborative scheduling

Relying on collaborative scheduling (that is, letting tasks administer themselves) is another possible choice: all tasks share the same priority and mutually leave the CPU to one another by calling taskYIELD(). This way tasks are scheduled in a round-robin fashion, and each one runs for an amount of time decided by the running task itself. It is a rather elegant and simple solution, but it lacks flexibility and, above all, in a real-time environment preemptive scheduling is close to being unavoidable. Last but not least, it often implies that each task is aware of the other tasks' needs, "intentions" and timings: such a behavior breaks another very desirable principle: concurrent processes should be kept isolated.
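The cooperative scheme can be simulated in plain C: each "task" below is just a function that performs one short step and then returns, returning playing the role of the taskYIELD() call. This is a sketch for illustration only, not actual FreeRTOS code; all names are made up.

```c
/* Plain-C simulation of collaborative round-robin scheduling.
 * Returning from a task step stands in for taskYIELD(). */
#include <stddef.h>

typedef void (*task_step_t)(void *ctx);

/* Run every task once per round, in round-robin order; how much work a
 * "slice" does is decided by each task itself, as described in the text. */
static void run_round_robin(task_step_t *tasks, void **ctxs,
                            size_t n, int rounds)
{
    for (int r = 0; r < rounds; r++)
        for (size_t i = 0; i < n; i++)
            tasks[i](ctxs[i]);
}

/* Example task: just counts how many slices it received. */
static void counting_task(void *ctx)
{
    (*(int *)ctx)++;
}
```

Note how the fairness of the scheme depends entirely on every task keeping its slice short: a single misbehaving task stalls the whole system, which is exactly why preemption is preferable in a real-time setting.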

1.3 Enhanced collaborative scheduling

Probably, the best possible solution is to find a way to build a "Scheduler" task upon the primitives already provided by FreeRTOS, while relying on the preemptive scheduling mechanism it already provides. This approach is the most attractive and elegant one, but it's not simple to think of a way to implement it.

Let's start by looking at the primitives and mechanisms FreeRTOS provides.
FreeRTOS has a built-in mechanism for controlling tasks (i.e. they can be suspended, delayed and resumed from an Interrupt Service Routine or from an event), and it also provides primitives such as vTaskPrioritySet() and uxTaskPriorityGet() for managing the priority of a task.
The fact is, we get caught in a sort of deadlock when using the latter two to achieve our objective, as shown by the following example.

Suppose we have three tasks: aTask, myTask and scheduler. The latter is obviously the scheduler of the system (and is expected to schedule the other two tasks).

A first attempt would be to give the scheduler the highest priority (say 4) and the other tasks a lower one (say 2); but by doing so we prevent aTask and myTask from running at all, because at every context switch the FreeRTOS scheduling mechanism always chooses the highest-priority ready task.

Well, let's try to play with vTaskPrioritySet() and uxTaskPriorityGet(): the first time the scheduler runs it holds the highest priority; to let e.g. aTask run, it downgrades its own priority and myTask's.
But then, unless aTask calls vTaskPrioritySet() to boost the scheduler's priority before a context switch occurs, neither the scheduler nor myTask will ever get a chance to run again, because of their lower priority.

So a real solution is to use, in addition to vTaskPrioritySet(), another primitive: vTaskDelay() (or, even better, vTaskDelayUntil()).
The algorithm then changes as follows: the highest-priority task is our scheduler again. When it runs, it enforces a certain policy (e.g. EDF) by adjusting task priorities. Once it has finished its job, it simply delays its own execution for a certain amount of time by calling vTaskDelayUntil().
Being the highest-priority task, the scheduler is scheduled immediately once its delay period expires (to be precise, the scheduler must also wait until a context switch happens, so things should be arranged to avoid losing the predictability that hard real-time behavior requires).
While the scheduler is sleeping, the other, lower-priority tasks get executed according to their priorities.
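The policy step itself can be sketched in plain C. The edf_assign() helper below is a hypothetical illustration (the struct and function names are invented, not FreeRTOS APIs): it ranks tasks by absolute deadline and gives the earliest deadline the highest priority, keeping every assignment below the priority reserved for the scheduler task. In real FreeRTOS code each assignment would become a vTaskPrioritySet() call, after which the scheduler task would sleep via vTaskDelayUntil().

```c
/* Illustrative EDF priority assignment for the "scheduler task" idea.
 * Not FreeRTOS code: sched_task_t and edf_assign are made-up names. */
#include <stddef.h>

typedef struct {
    unsigned deadline;  /* absolute deadline, in ticks */
    unsigned priority;  /* priority assigned by the policy */
} sched_task_t;

/* EDF: the task with the earliest deadline gets the highest priority.
 * All assigned priorities stay strictly below sched_prio, the priority
 * reserved for the scheduler task itself (sched_prio must exceed n). */
static void edf_assign(sched_task_t *t, size_t n, unsigned sched_prio)
{
    for (size_t i = 0; i < n; i++) {
        unsigned rank = 0;   /* number of tasks with an earlier deadline */
        for (size_t j = 0; j < n; j++)
            if (t[j].deadline < t[i].deadline ||
                (t[j].deadline == t[i].deadline && j < i))
                rank++;
        t[i].priority = sched_prio - 1 - rank;
        /* In FreeRTOS this assignment would become:
         *   vTaskPrioritySet(handle[i], t[i].priority);
         * followed, after the loop, by vTaskDelayUntil() to sleep. */
    }
}
```

The O(n^2) ranking is deliberate here for clarity; since the scheduler's own running time eats into the tasks' budget, a real implementation would keep this step as cheap as possible.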

Obviously, with this approach an important effort must also be spent in keeping the scheduler's processing time as short as possible.

Although this approach can be considered a particular kind of collaborative scheduling, it leaves tasks completely unaware of the policy being applied and, best of all, a task is not required to know anything about the other tasks or about the scheduler (which is, in fact, a task itself).
From a certain point of view, the most attractive feature of this algorithm is that tasks need to embed no code at all to support it.

Notice that no particular constraints are placed on the policy being chosen, leaving a high degree of freedom to implement in the scheduler whatever we need, without the tasks becoming aware of it.
This also means the algorithm provides great flexibility, as explained in the following chapter.

2. Advanced aspects

Three advanced behaviors could be added to our scheme:
  • Dynamic scheduler delay period estimation
  • Multiple-level schedulers
  • Multiple-policy schedulers

2.1 Dynamic scheduler delay period estimation

With a little effort, a powerful feature can be added to our algorithm: every time the scheduler runs, it can collect some information about the tasks and how they are behaving, and consequently decide a different length for its next delay period. The result is a very flexible policy that tries to reserve as much CPU time as possible for the tasks without losing the advantages of our algorithm.
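One hypothetical way to estimate the delay: instead of a fixed period, the scheduler sleeps until the next instant at which its decision can actually matter, e.g. the earliest task deadline after the current tick. A sketch under that assumption (next_wakeup() is an invented name, not a FreeRTOS primitive):

```c
/* Illustrative dynamic delay estimation for the scheduler task.
 * Returns the absolute tick at which the scheduler should wake next:
 * the earliest deadline after `now`, capped at now + max_sleep. */
#include <stddef.h>

static unsigned next_wakeup(const unsigned *deadlines, size_t n,
                            unsigned now, unsigned max_sleep)
{
    unsigned wake = now + max_sleep;   /* upper bound on the sleep */
    for (size_t i = 0; i < n; i++)
        if (deadlines[i] > now && deadlines[i] < wake)
            wake = deadlines[i];
    return wake;   /* would be passed to vTaskDelayUntil() as the wake time */
}
```

The max_sleep cap matters: it guarantees the scheduler still wakes periodically to observe task behavior even when no deadline is near.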

2.2 Multiple-level schedulers

Another way of taking advantage of the algorithm's flexibility is to create a system of multiple schedulers, each acting at a different priority level and managing a group of tasks plus the scheduler of the next lower level, in a pyramidal partitioning of the tasks.
With this partitioning the system could manage e.g. two different groups of tasks, each performing a different kind of job: one could be dedicated to Digital Signal Processing (DSP) and the other could manage all the activities involved in a cell-phone call (which include DSP operations).
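As a minimal sketch of the priority layout (the band arithmetic is an illustrative assumption, nothing FreeRTOS prescribes): if each level owns a fixed band of priorities, the level-k scheduler can sit at the top of its own band, above every task it manages and above the level-(k+1) scheduler.

```c
/* Illustrative priority layout for pyramidal multi-level scheduling.
 * With `band` priorities per level and `top` as the highest priority in
 * the system, the level-k scheduler runs at the top of band k; the tasks
 * it manages (and the next scheduler down) use the priorities below it. */
static unsigned level_sched_priority(unsigned top, unsigned band, unsigned k)
{
    return top - k * band;
}
```

This way each scheduler can adjust its group's priorities freely within its band without ever preempting, or being preempted by, a policy decision of the level above it.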

2.3 Multiple-policy schedulers

Multiple-level schedulers perform a vertical partitioning (1) of the scheduling operations, but a "horizontal" partitioning could be adopted as well.

This is the case of a system in which more than one scheduler exists, each enforcing a different policy semi-independently from the others.

Semi-independence is due to the fact that the schedulers should avoid interfering with each other as much as possible.

This scheme is suitable, for example, for a multi-CPU environment where task migration among the CPUs can occur; in such a situation two schedulers can be realised: one managing tasks in the local context, and the other interacting with the other systems in the environment to establish which tasks should be migrated and when.


NOTES:
  1. The statement "multiple-level schedulers perform a vertical partitioning" isn't completely true. In paragraph 2.2, DSP is not "under" the other general activities performed during a cell-phone call, but is instead part of them. So the partitioning should be considered both horizontal and vertical, not only vertical.
