Lottery Scheduler Design Document

Goal
The goal of this assignment is to modify the kernel scheduler so that it implements a lottery system. Processes within the kernel should be chosen at “random”, weighted by the number of tickets they hold, rather than by queue position or priority.





Assumptions
We assume that the code for the current scheduler and its dependent files exists and is available for us to modify. We are implementing this scheduler only for the kernel’s timeshare processes. This implementation only works on the FreeBSD version set up for this class.





Design
The lottery scheduler chooses threads to execute based on a randomly generated ticket value. Each thread in the queue has a nice value, and the number of tickets that thread owns is calculated from that value. The scheduler iterates through the queue, adding up tickets until the running total meets or exceeds the randomly generated ticket value. The thread at which the count stops is chosen for execution and removed from the queue.
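
Before touching the kernel, the selection logic can be sanity-checked in user space. Below is a minimal sketch of the two-pass lottery under that assumption; the names (lottery_pick and so on) are ours for illustration and don’t come from FreeBSD.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Pick a winner among n threads represented by their nice values.
 * Tickets per thread = 21 - nice, as defined in the Tickets section. */
static int
lottery_pick(const int *nice, int n)
{
    int i, total = 0, count = 0, winner;

    /* First pass: total up the tickets. */
    for (i = 0; i < n; i++)
        total += 21 - nice[i];

    /* Draw a winning ticket in [1, total]. */
    winner = (rand() % total) + 1;

    /* Second pass: the first thread whose running count
     * reaches the winning ticket is scheduled. */
    for (i = 0; i < n; i++) {
        count += 21 - nice[i];
        if (count >= winner)
            return (i);
    }
    return (-1); /* unreachable while total > 0 */
}

int
main(void)
{
    int nice[5] = { 20, 16, 11, 6, 1 }; /* tickets: 1, 5, 10, 15, 20 */
    int wins[5] = { 0 };

    srand(time(NULL));
    for (int i = 0; i < 400; i++)
        wins[lottery_pick(nice, 5)]++;
    for (int i = 0; i < 5; i++)
        printf("thread %d: %d wins\n", i, wins[i]);
    return (0);
}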

Tickets

We define the number of tickets for a thread to be 21 - nice_value, where the nice value is accessed as “td->td_proc->p_nice”. This keeps the normal kernel convention of a higher nice value equating to a lower priority, i.e. less runtime on the CPU.
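
In kernel terms this is a one-line computation. A minimal sketch, assuming FreeBSD’s struct thread layout; the helper name sched_tickets() is ours, not FreeBSD’s:

/* Hypothetical helper. FreeBSD nice values run from -20 to 20,
 * so every thread owns between 1 and 41 tickets and none owns zero,
 * which keeps the lottery draw well defined. */
static inline int
sched_tickets(struct thread *td)
{
    return (21 - td->td_proc->p_nice);
}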





Queues

Rather than deal with the mess of creating and maintaining our own queue, we decided to reuse FreeBSD’s. After careful inspection, we found that the local variable “pri” in tdq_runq_add() determines which runq a thread is added to (nice descriptive name there, FreeBSD). So we assign it 0, and all timeshare threads are automatically placed in the very first queue. And since this index is stored in the thread, we didn’t need to modify any removal code either: it works exactly as it should, no further changes needed. Perfect! (A sketch of this change appears under Modified Functions below.)

Scheduling

So all that leaves is the actual scheduling itself. Fortunately, we discovered that the function runq_choose_from() is called for timeshare threads only. This allowed us to completely rewrite it using the lottery algorithm described above, without worrying about possible side effects in the other queues. Because no ticket totals are stored anywhere, we’re required to iterate through the queue to count the total tickets before we can pick a winner.

To generate the random number we used the random() function from one of the kernel headers. While the standard libraries aren’t available in the kernel, FreeBSD has gone ahead and implemented its own versions of many of those functions, and this was one of them. (memset and memcpy are in there, too.) After we have our number and our ticket total, we iterate through the queue once more until we find our winning process, and return it.
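
Mapping the raw random value onto a valid ticket is a single line. A sketch, assuming the kernel’s own random(); total_tickets is our name for the first-pass sum:

/* Draw a ticket number in [1, total_tickets]; random() here is the
 * kernel's version, not libc's. */
winner = (random() % total_tickets) + 1;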

And once the process is returned, we’re done. Executing it and removing it from the queues are handled perfectly by the existing mechanisms.





Our implementation is not efficient, but it is surgical: we changed as little code as possible, reducing our odds of crashing the system, and we started only after a thorough investigation of all functions involved. All in all, it worked out quite well.





Benchmarks




To prove that processes with more tickets are scheduled more often, we spin up 5 processes, each with a different nice value, and time them doing the same amount of work. Each process calls the Ackermann function (with parameters 3, 5) roughly 15000 times, and we output the time it takes each process to complete. Sample results below:

Process 4 completed: (pid 5116). Tickets: 20. Time elapsed: 12 seconds

Process 3 completed: (pid 5115). Tickets: 15. Time elapsed: 15 seconds

Process 2 completed: (pid 5114). Tickets: 10. Time elapsed: 17 seconds

Process 1 completed: (pid 5113). Tickets: 5. Time elapsed: 20 seconds

Process 0 completed: (pid 5112). Tickets: 1. Time elapsed: 24 seconds
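
The exact harness isn’t reproduced here, but a minimal sketch of its shape follows. The Ackermann parameters, the iteration count, and the ticket counts come from the text (nice values of 20, 16, 11, 6, and 1 yield 1, 5, 10, 15, and 20 tickets); the process-management details are our assumption.

#include <stdio.h>
#include <sys/resource.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

/* Classic two-argument Ackermann function; A(3, 5) = 253. */
static unsigned long
ackermann(unsigned long m, unsigned long n)
{
    if (m == 0)
        return (n + 1);
    if (n == 0)
        return (ackermann(m - 1, 1));
    return (ackermann(m - 1, ackermann(m, n - 1)));
}

int
main(void)
{
    int nices[5] = { 20, 16, 11, 6, 1 }; /* tickets: 1, 5, 10, 15, 20 */
    volatile unsigned long sink;         /* keeps the work from being optimized out */
    int i;

    for (i = 0; i < 5; i++) {
        if (fork() == 0) {
            setpriority(PRIO_PROCESS, 0, nices[i]);
            time_t start = time(NULL);
            for (int j = 0; j < 15000; j++)
                sink = ackermann(3, 5);
            printf("Process %d completed: (pid %d). Tickets: %d. "
                "Time elapsed: %ld seconds\n", i, (int)getpid(),
                21 - nices[i], (long)(time(NULL) - start));
            _exit(0);
        }
    }
    for (i = 0; i < 5; i++)
        wait(NULL);
    return (0);
}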





But this is only one kind of benchmark: one with a constant workload, so we can compare completion times. To prove that each process gets its expected share of CPU time, we take a sample of the kernel log while this benchmark is running and count the number of times each process is scheduled (i.e., removed from the queue). We then calculate how many times we expect it to be removed: each process holds some share of the 51 total tickets (1 + 5 + 10 + 15 + 20), so over 400 scheduling events a process with t tickets should be chosen about 400 × t / 51 times. The two counts are compared below:

Process 0 (PID: 5112, tickets: 1): scheduled 7/400. Expected results: 7/400

Process 1 (PID: 5113, tickets: 5): scheduled 43/400. Expected results: 39/400

Process 2 (PID: 5114, tickets: 10): scheduled 76/400. Expected results: 78/400

Process 3 (PID: 5115, tickets: 15): scheduled 119/400. Expected results: 117/400

Process 4 (PID: 5116, tickets: 20): scheduled 155/400. Expected results: 156/400
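
The expected column is just that proportion with integer division. A quick sketch of the arithmetic (the 400 is simply the number of scheduling events in our log sample):

#include <stdio.h>

int
main(void)
{
    int tickets[5] = { 1, 5, 10, 15, 20 };
    int total = 51, samples = 400;

    /* 400 * t / 51 reproduces the expected counts above:
     * 7, 39, 78, 117, and 156 out of 400. */
    for (int i = 0; i < 5; i++)
        printf("process %d: expected %d/%d\n",
            i, samples * tickets[i] / total, samples);
    return (0);
}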





We’ve seen individual counts vary from the expected results by as much as 10-15, but overall they’re quite close, which indicates that our lottery queue is working as intended.





Modified Functions








In sched_ule.c

tdq_runq_add()

Two lines were added to this function: one assigns the variable “pri” to 0, ensuring that all timeshare threads are inserted into the FIRST run queue. The second is a call to a logging function that outputs the scheduler info to the log file.
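
A sketch of the change, with FreeBSD’s surrounding code elided. Placing the assignment inside the timeshare branch is our reading of the function, and log_sched_event() is a stand-in name for our logging call, not a real kernel symbol:

static void
tdq_runq_add(struct tdq *tdq, struct thread *td, int flags)
{
    u_char pri;

    /* ... FreeBSD's existing setup and priority computation ... */

    if (td->td_pri_class == PRI_TIMESHARE) {
        pri = 0;                    /* added: force run queue 0 */
        log_sched_event("add", td); /* added: log for the benchmarks */
    }

    /* ... FreeBSD's existing insertion into the chosen runq ... */
}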





tdq_runq_rem()

We only added one call to a logging function here, to output threads removed from the queue. After all, it already removes threads correctly. Why mess with a good system?





In kern_switch.c

Only one function was changed, and the pseudocode for it is below. This is where almost the entire scheduler is handled. By line count, this is an easy project; it’s finding where to make changes that’s hard.

struct thread * runq_choose_from(struct runq *rq, u_char idx)
{
    Create a pointer to the run queue head
    Create a variable for the priority

    If the priority found from the index of the run queue is not -1
        Set the run queue head to the queue at that priority index
        Create variables for the total number of tickets (initialized to 0) and the winning ticket
        Create variables for the queue size, smallest ticket count, and largest ticket count (all initialized to 0)
        Create a variable for the thread iterator

        Iterate through the tail queue, calculating each thread’s ticket count from its nice value and adding it to the total number of tickets. Increment the queue size and compare the ticket count against the largest and smallest ticket counters.

        Set the winning ticket to a random value from 1 to the total number of tickets

        Set the ticket counter variable to 0

        Iterate through the tail queue again, adding each thread’s ticket count to the ticket counter; when the ticket counter is greater than or equal to the winning ticket, return the thread iterator

    Return null
}
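
For concreteness, here is our reading of the rewritten function in C, following the pseudocode above. The runq structures, TAILQ_FOREACH, the td_runq linkage, and runq_findbit_from() are FreeBSD’s as we understand them; treat this as a sketch rather than the exact patch (the queue-size and min/max ticket bookkeeping, which we only use for logging, is omitted):

struct thread *
runq_choose_from(struct runq *rq, u_char idx)
{
    struct rqhead *rqh;
    struct thread *td;
    int pri, total, count, winner;

    if ((pri = runq_findbit_from(rq, idx)) != -1) {
        rqh = &rq->rq_queues[pri];

        /* First pass: total up the tickets owned by every
         * thread waiting in this queue. */
        total = 0;
        TAILQ_FOREACH(td, rqh, td_runq)
            total += 21 - td->td_proc->p_nice;

        /* Draw a winning ticket in [1, total]. */
        winner = (random() % total) + 1;

        /* Second pass: the first thread whose running count
         * reaches the winning ticket gets the CPU. */
        count = 0;
        TAILQ_FOREACH(td, rqh, td_runq) {
            count += 21 - td->td_proc->p_nice;
            if (count >= winner)
                return (td);
        }
    }
    return (NULL);
}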
