TNKernel

Showing page 3 of 3
dimonomid
Super Member
  • Total Posts : 217
  • Reward points : 0
  • Joined: 2011/02/24 02:46:17
  • Location: 0
  • Status: offline
Re: TNKernel 2014/08/28 02:39:44 (permalink)
andersm, a suggestion for a little optimization: right now in your port, tn_start_exe restores the context of tn_curr_run_task:
tn_start_exe:

    li $t0, 1 /* 1 -> TN_SYS_STATE_RUNNING */
    la $t1, tn_system_state
    sb $t0, 0($t1)

    lw $k1, tn_curr_run_task /* = tn_next_task_to_run */
    j tn_restore_context
    lw $sp, 0($k1) /* load new task SP */

    .end tn_start_exe

but it's actually better to use tn_next_task_to_run instead:
tn_start_exe:

    li $t0, 1 /* 1 -> TN_SYS_STATE_RUNNING */
    la $t1, tn_system_state
    sb $t0, 0($t1)

    la $t1, tn_next_task_to_run
    lw $t1, 0($t1)
    lw $sp, 0($t1) /* get new task's sp */
    la $t0, tn_curr_run_task
    j tn_restore_context
    sw $t1, 0($t0) /* tn_curr_run_task = tn_next_task_to_run */

    .end tn_start_exe

That's because tn_start_system() sets tn_curr_run_task to the idle task; so, after the call to tn_start_exe, the idle task becomes the running one, and it waits for the first timer interrupt, in which tn_tick_int_processing() is called, and only then is the context switched to the timer task.
In my opinion, it's better if tn_start_exe immediately restores the context of the highest-priority task (i.e. the timer task).
I ran into this when I wanted to do hardware setup (including timer interrupt setup) in the timer task's context (in the tn_app_init callback): it failed because the timer task never got executed. After my fix, it works.
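In C terms, the change amounts to something like this. This is a simplified sketch, not the real port code: TN_TCB and the globals are stand-ins, and the real routine is the MIPS assembly above.

```c
#include <assert.h>
#include <stddef.h>

enum { TN_SYS_STATE_NOT_RUN = 0, TN_SYS_STATE_RUNNING = 1 };

typedef struct { int stub; } TN_TCB;   /* stand-in for the real TCB */

static int     tn_system_state = TN_SYS_STATE_NOT_RUN;
static TN_TCB *tn_curr_run_task;
static TN_TCB *tn_next_task_to_run;

/* What the fixed tn_start_exe does before jumping to tn_restore_context */
static void tn_start_exe_sketch(void)
{
    tn_system_state = TN_SYS_STATE_RUNNING;

    /* The fix: become tn_next_task_to_run (the timer task), not whatever
       tn_curr_run_task happened to be (the idle task). */
    tn_curr_run_task = tn_next_task_to_run;

    /* ...the assembly then loads the new task's SP and jumps to
       tn_restore_context. */
}
```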
post edited by dimonomid - 2014/08/28 02:46:32
#41
andersm
Super Member
  • Total Posts : 2564
  • Reward points : 0
  • Joined: 2012/10/07 14:57:44
  • Location: 0
  • Status: online
Re: TNKernel 2014/08/28 11:40:20 (permalink)
Both the current and next run tasks are initialized to the idle task. You should set up the timer interrupt before calling tn_start_system.
#42
dimonomid
Re: TNKernel 2014/08/28 11:59:44 (permalink)
andersm: Both the current and next run tasks are initialized to the idle task.

No. From tn_start_system:
   tn_next_task_to_run = &tn_idle_task; //-- now, tn_next_task_to_run points at the idle task

   task_to_runnable(&tn_idle_task);//-- tn_next_task_to_run still points at the idle task
   task_to_runnable(&tn_timer_task);//-- now, tn_next_task_to_run points at the timer task, because its priority is higher than idle task

See the code of task_to_runnable for details.
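The priority selection in task_to_runnable boils down to something like the sketch below. This is a stand-in simplification, not the actual TNKernel source; as in TNKernel, numerically lower priority values are higher priorities.

```c
#include <assert.h>
#include <stddef.h>

typedef struct { int priority; } TN_TCB;   /* stand-in: only priority */

static TN_TCB *tn_next_task_to_run = NULL;

static void task_to_runnable(TN_TCB *task)
{
    /* The newly runnable task preempts the current candidate if its
       priority is higher (numerically lower). */
    if (tn_next_task_to_run == NULL ||
        task->priority < tn_next_task_to_run->priority)
    {
        tn_next_task_to_run = task;
    }
}
```

So after tn_idle_task and then tn_timer_task are made runnable, tn_next_task_to_run ends up pointing at the timer task.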
andersm: You should set up the timer interrupt before calling tn_start_system.
I could, but I shouldn't have to. What if I want to call some routines from a task context rather than from the context of main()?
 
After I performed the modifications I mentioned, everything works: peripherals are initialized from the application callback.
Anyway, what good is a useless idle system tick, as we have now? Isn't it better to get down to business (i.e. the timer task) right away?
post edited by dimonomid - 2014/08/28 12:01:22
#43
andersm
Re: TNKernel 2014/08/28 14:00:18 (permalink)
I committed the change.
#44
dimonomid
Re: TNKernel 2014/08/30 05:02:23 (permalink)
Do you have any idea why TNKernel needs tn_timer_task at all? Yes, it manages tn_wait_timeout_list, i.e. it wakes up tasks whose timeout has expired. But why do we need a special task for that? Why not do it right in tn_tick_int_processing(), which is called from the timer ISR? The presence of this special task adds significant overhead.
Look at what happens when the timer interrupt fires (assuming we use tn_soft_isr for it, which is almost always the case):
  • The current context (23 words) is saved to the interrupt stack;
  • The ISR is called; in particular, tn_tick_int_processing() is called;
  • tn_tick_int_processing() disables interrupts, manages round-robin (if needed), wakes up tn_timer_task, sets tn_next_task_to_run, and re-enables interrupts;
  • tn_tick_int_processing() finishes, so the ISR macro sees that tn_next_task_to_run differs from tn_curr_run_task and sets the CS0 interrupt bit, so that the context is switched as soon as possible;
  • The context (23 words) of whatever task we interrupted is restored;
  • The CS0 ISR is called immediately, so the full context (32 words) is saved on the task's stack, and the context of tn_timer_task is restored;
  • tn_timer_task disables interrupts, performs its rather small job (managing tn_wait_timeout_list), puts itself to wait, re-enables interrupts, and pends a context switch again;
  • The CS0 ISR is called immediately again, so the full context of tn_timer_task is saved on its stack, and then, at last, the context of my interrupted task is restored and my task continues to run.
I've just measured with the MPLAB X stopwatch how much time this takes: with just three tasks (the idle task, the timer task, and my own task with priority 6), i.e. without any sleeping tasks, the whole sequence takes 682 cycles.
So I tried to get rid of tn_timer_task and perform its job right in tn_tick_int_processing(). When the system starts, the application callback is therefore called from the idle task. Now the following steps are performed when the timer interrupt fires:
  • The current context (23 words) is saved to the interrupt stack;
  • The ISR is called; in particular, tn_tick_int_processing() is called;
  • tn_tick_int_processing() disables interrupts, manages round-robin (if needed), manages tn_wait_timeout_list, and re-enables interrupts;
  • tn_tick_int_processing() finishes, and the ISR macro sees that tn_next_task_to_run is the same as tn_curr_run_task;
  • The context (23 words) of whatever task we interrupted is restored.
That's all. It takes 251 cycles: 2.7 times less.
So, why do we need the timer task? Things seem to work better without it.
Well, the timer task could make sense if we didn't use a separate interrupt stack: then it might be needed to save stack RAM, because task_wait_complete() gets called, which calls other functions. But if we have a separate interrupt stack, it's worth getting rid of the timer task and just making sure the interrupt stack is large enough for this job.
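The tick-handler-only approach can be sketched in C like this. It's a hypothetical simplification: the real tn_wait_timeout_list is a linked list and also supports infinite waits, neither of which is modeled here.

```c
#include <assert.h>

#define MAX_TASKS 4

typedef struct {
    int      waiting;   /* 1 while the task is on tn_wait_timeout_list */
    unsigned timeout;   /* ticks left until the wait expires            */
} TN_TCB;

static TN_TCB tasks[MAX_TASKS];

static void task_wait_complete(TN_TCB *t)
{
    t->waiting = 0;     /* sketch: move the task back to the run queue */
}

/* Called from the timer ISR: expire waits in place -- no timer task. */
static void tn_tick_int_processing(void)
{
    for (int i = 0; i < MAX_TASKS; i++) {
        if (tasks[i].waiting && --tasks[i].timeout == 0)
            task_wait_complete(&tasks[i]);
    }
}
```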
Or have I missed something?
By the way, here's the Bitbucket page of my TNKernel port, forked from yours. (The changes related to timer_task aren't pushed there yet; I'm still running tests on them.)
post edited by dimonomid - 2014/08/30 05:04:25
#45
andersm
Re: TNKernel 2014/08/30 07:42:16 (permalink)
Based on comments in old versions of TNKernel, the main loop of the timer task didn't always run with interrupts disabled. Today, the task does seem unnecessary. The app init callback could just as well be moved to the idle task (though its required stack space would then depend on the application).
#46
dimonomid
Re: TNKernel 2014/08/30 08:30:11 (permalink)
andersm: though its required stack space would then depend on the application
Of course it depends on the application now too. With the timer task, its stack size depends on the application; but this way we can get rid of the timer task's stack entirely (a minimum of TN_MIN_STACK_SIZE words) and make the idle task's stack size depend on the app instead. RAM savings plus a significant speedup. Not bad.
 
By the way, why have you set TN_MIN_STACK_SIZE to 68? I thought much less was needed: 32 words for the context plus maybe 4 words for a stack frame. Have I missed something?
#47
andersm
Re: TNKernel 2014/08/30 08:49:12 (permalink)
It's for the task expansion area in tn_task_exit and tn_task_terminate.
#48
dimonomid
Re: TNKernel 2014/08/31 04:38:09 (permalink)
andersm: It's for the task expansion area in tn_task_exit and tn_task_terminate.
Hmm, do these functions really need that much? OK, I'll probably check.


I found a minor bug in TNKernel itself. In the file tn.c, the function tn_sys_tslice_ticks() looks as follows:
 
int tn_sys_tslice_ticks(int priority,int value)
{
   TN_INTSAVE_DATA

   TN_CHECK_NON_INT_CONTEXT

   tn_disable_interrupt();

   if(priority <= 0 || priority >= TN_NUM_PRIORITY-1 ||
                                value < 0 || value > MAX_TIME_SLICE)
      return TERR_WRONG_PARAM;

   tn_tslice_ticks[priority] = value;

   tn_enable_interrupt();
   return TERR_NO_ERR;
}

 
So, if wrong params are given, TERR_WRONG_PARAM is returned and interrupts remain disabled. I believe we should call tn_disable_interrupt() only after the parameter check, since interrupts don't need to be disabled for that check.
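The fix, sketched in C. The macro and constant definitions below are simplified stand-ins for the real TNKernel ones, just to make the sketch self-contained; the point is only that the error path returns before interrupts are ever disabled.

```c
#include <assert.h>

#define TN_NUM_PRIORITY  32
#define MAX_TIME_SLICE   0xFFFE
#define TERR_NO_ERR      0
#define TERR_WRONG_PARAM (-6)

static int interrupts_disabled;  /* models the interrupt state */
#define tn_disable_interrupt() (interrupts_disabled = 1)
#define tn_enable_interrupt()  (interrupts_disabled = 0)

static int tn_tslice_ticks[TN_NUM_PRIORITY];

int tn_sys_tslice_ticks(int priority, int value)
{
   /* Validate first: the error path never touches the interrupt state. */
   if (priority <= 0 || priority >= TN_NUM_PRIORITY - 1 ||
       value < 0 || value > MAX_TIME_SLICE)
      return TERR_WRONG_PARAM;

   tn_disable_interrupt();
   tn_tslice_ticks[priority] = value;
   tn_enable_interrupt();

   return TERR_NO_ERR;
}
```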
I've already written to the upstream author (Yuri Tiomkin), even though he doesn't seem to care much (he never answered anything about the terrible mess with the MAKE_ALIG() macro). But, just in case, I've written to him anyway.
#49
andersm
Re: TNKernel 2014/08/31 06:17:53 (permalink)
dimonomid: Hmm, do these functions really need that much? OK, I'll probably check.

The intent is to allow the stack to be re-initialized without the context switch interfering. However, since a task is not allowed to terminate itself, the expansion area should not be needed in tn_task_terminate. If you modified tn_task_exit so that the stack wasn't initialized there, but when the task is re-activated, you could remove it completely. However, that would change the API, especially for tn_task_iactivate.
 
dimonomid: So, if wrong params are given, TERR_WRONG_PARAM is returned and interrupts remain disabled.

Yep, good catch. It's partly for reasons like this that I personally favour the "one entry point, one exit point" strategy. (Of course, on a PIC32 that routine doesn't really need to disable interrupts at all.)
post edited by andersm - 2014/08/31 06:21:26
#50
dimonomid
Re: TNKernel 2014/08/31 06:23:49 (permalink)
andersm: It's partly for reasons like this that I personally favour the "one entry point, one exit point" strategy.
Exactly! I already wrote that I don't like the TNKernel sources because this rule is violated throughout. That's why I'm refactoring them aggressively: they're full of code like this:
int my_function(void)
{
   tn_disable_interrupt();
   //-- do something

   if (error()){
      //-- do something

      tn_enable_interrupt();
      return ERROR;
   }
   //-- do something

   tn_enable_interrupt();
   return SUCCESS;
}

The code above is simplified; in the real TNKernel code things are usually more complicated.
If you have multiple return statements, or worse, if you have to perform some action before returning (tn_enable_interrupt() in the example above), it's a great job for goto:
int my_function(void)
{
   int ret = SUCCESS;
   tn_disable_interrupt();
   //-- do something

   if (error()){
      //-- do something
      ret = ERROR;
      goto out;
   }
   //-- do something

out:
   tn_enable_interrupt();
   return ret;
}

I understand there are a lot of people who disagree with me on this (mostly because they believe goto is unconditionally evil). And yes, the original TNKernel code contains zero occurrences of goto, but plenty of code like the first example above. So, since I'm rewriting the code anyway, I don't deny myself the pleasure.
#51
dimonomid
Re: TNKernel 2014/08/31 07:10:11 (permalink)
By the way, the more I get into TNKernel, the less I like it. But maybe I'm wrong about something? I'd be glad to hear your opinion. Here's what I have so far:
  • Is it really a good idea that a task has a wakeup_count, so that you can call tn_task_wakeup() before tn_task_sleep() and the next call to tn_task_sleep() won't put the task to sleep? It seems like a dirty hack to cover up race conditions introduced by a careless programmer. It lets programmers get away without proper synchronization, which is a really bad idea IMO.
  • Is it really a good idea that we can suspend any non-current task? It should be entirely up to the task itself when it sleeps and when it doesn't. If we need to pause some task from outside, we should send a message to that task, and the task should put itself to sleep.
  • Even more, is it really a good idea that we can terminate any non-current task? Moreover, tasks in TNKernel don't even have a "destructor"-like callback that is called when a task is terminated, so any non-TNKernel resources held by the task (such as memory from the heap) will leak.
  • Code organization in general is far from perfect: there is just one header file with lots of stuff in it, including some private-to-TNKernel definitions like task_to_runnable, task_wait_complete, etc.
And I don't even like the fact that it has a SUSPENDED state. It just complicates a lot of TNKernel code for no good reason. Why would one ever suspend a task instead of putting it to sleep? It looks strange to me; I've never used it in 3-4 years of working with TNKernel on various projects.
Well, I don't ever need to terminate tasks either, but I can imagine situations where that would be useful.
 
After all of this, I want to change too much of TNKernel. I'm even thinking that I should probably roll my own simple RTOS that wouldn't have the things that tempt programmers into bad approaches.
 
post edited by dimonomid - 2014/08/31 07:13:17
#52
andersm
Re: TNKernel 2014/08/31 07:47:40 (permalink)
The functionality provided by the kernel is pretty standard, and most of the issues you raise are application design issues. Though your complaint about the task wakeup count is kind of ironic, considering your view on recursive mutexes...
 
The simple structure of TNKernel is one of the things I like about it, and why I ported it instead of going with one of the myriad of other kernels. It's not perfect, and it's not the best option for every purpose, but it has worked well enough for me.
#53
dimonomid
Re: TNKernel 2014/08/31 08:17:22 (permalink)
andersm: It's not perfect, and it's not the best option for every purpose, but it has worked well enough for me.
Yeah, it has worked for me too, and it's probably the best one I've found so far, but I really want RTOS code to be kind of perfect. Never mind.
 
andersm: Though your complaint about the task wakeup count is kind of ironic, considering your view on recursive mutexes...
Recursive mutexes actually have nothing in common with that wakeup count. They are useful when some module's public function (which should lock the mutex) calls other lower-level public functions of the same module (which should also lock it). Without recursive mutexes, we have to provide non-locking private versions of those functions and make the public functions just wrappers around them. That's not always convenient. And recursive mutexes have no depth limitation: they may be locked as many times as needed.
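Here's the pattern in a nutshell, with a minimal single-threaded model of a recursive mutex (owner id plus lock depth). These are stand-in names, not a real kernel API; in a real RTOS, lock would block instead of returning an error.

```c
#include <assert.h>

typedef struct { int owner; int depth; } rec_mutex;

static int current_task_id = 1;   /* stand-in for "the running task" */

static int rec_mutex_lock(rec_mutex *m)
{
    if (m->depth > 0 && m->owner != current_task_id)
        return -1;                /* would block in a real kernel */
    m->owner = current_task_id;
    m->depth++;                   /* no depth limit: just count locks */
    return 0;
}

static void rec_mutex_unlock(rec_mutex *m)
{
    assert(m->depth > 0 && m->owner == current_task_id);
    m->depth--;
}

static rec_mutex module_lock;
static int counter;

/* Public lower-level function: locks the module mutex itself. */
int module_low_level_op(void)
{
    rec_mutex_lock(&module_lock);
    counter++;
    rec_mutex_unlock(&module_lock);
    return counter;
}

/* Public higher-level function: locks, then calls the lower-level
   public function -- which locks again without deadlocking. */
int module_high_level_op(void)
{
    rec_mutex_lock(&module_lock);      /* depth becomes 1 */
    int r = module_low_level_op();     /* relock: depth 2, then back to 1 */
    rec_mutex_unlock(&module_lock);    /* depth back to 0 */
    return r;
}
```

Without recursion support, module_high_level_op would deadlock on the inner lock, which is exactly why the non-locking private wrappers become necessary.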
 
After all, there are third-party libraries written with recursive mutexes in mind (because they are common in our world); for example, this nice memory manager, which is a lot better than the default PIC32 malloc: http://hempeldesigngroup....stories/memorymanager/
 
If you look closely at this wakeup count, you'll see that its maximum value is 1. So some other task_A may call tn_task_wakeup(task_B) 10 times, and then the first time task_B calls tn_task_sleep() it won't sleep; on the second call, it will.
Well, even without this maximum of 1, it is just a plain dirty hack. Could you please give an example where this tn_task_wakeup/tn_task_sleep "feature" is more convenient than straightforward queues or events?
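To illustrate, here's a minimal model of the described wakeup_count behaviour. The types and return values are stand-ins; the real TNKernel functions also handle wait reasons, interrupts, and proper error codes.

```c
#include <assert.h>

typedef struct {
    int wakeup_count;   /* capped at 1, as described above */
    int sleeping;
} TN_TCB;

static int tn_task_wakeup(TN_TCB *t)
{
    if (t->sleeping) {
        t->sleeping = 0;          /* task was asleep: wake it */
        return 0;
    }
    if (t->wakeup_count == 0) {
        t->wakeup_count = 1;      /* remember exactly one wakeup */
        return 0;
    }
    return -1;                    /* further wakeups are lost */
}

static void tn_task_sleep(TN_TCB *t)
{
    if (t->wakeup_count > 0)
        t->wakeup_count--;        /* pending wakeup: don't sleep at all */
    else
        t->sleeping = 1;
}
```

Ten wakeups before the first sleep thus collapse into one skipped sleep, which is exactly the behaviour complained about.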
 
post edited by dimonomid - 2014/10/01 16:46:29

The story: How I ended up writing new real-time kernel: TNeo.
Microchip data sheet finder: allows to easily get latest revision of the datasheet by its id like "DS61118".
#54
andersm
Re: TNKernel 2014/08/31 09:13:29 (permalink)
dimonomid: Recursive mutexes have nothing in common with that wakeup count, actually.

They are both features that can be abused, and that some think lead to bad software design. The wakeup/sleep feature seems to be inherited from µITRON, and I guess the purpose is to avoid having to allocate a queue, event, or semaphore. The low wakeup-count ceiling does make it less useful, and the comments in the TCB declaration suggest that the suspend/wakeup/activate counts are for statistics, which is obviously not true.
#55
dimonomid
Re: TNKernel 2014/08/31 16:13:50 (permalink)
Well, it's probably a matter of taste. To me, recursive mutexes are much less hacky by design and much more useful than wakeup_count tricks.
Though maybe I'm wrong.
#56
dimonomid
Re: TNKernel 2014/09/04 08:02:42 (permalink)
andersm: The intent is to allow the stack to be re-initialized without the context switch interfering. However, since a task is not allowed to terminate itself, the expansion area should not be needed in tn_task_terminate. If you modified tn_task_exit so that the stack wasn't initialized there, but when the task is re-activated, you could remove it completely. However, that would change the API, especially for tn_task_iactivate.
Hmm, I'm confused. I see that stack_exp is defined in both tn_task_exit() and tn_task_terminate(), but it isn't used at all: I just grepped the sources, and there are no occurrences except the definition. tn_stack_init() is called exactly as in tn_task_create(), so the task is initialized as usual. Have I missed something? How does stack_exp get used?


And just FYI: along with almost totally rewriting TNKernel, I'm implementing detailed unit tests for it, to make sure I don't break anything; and I've found several bugs in the original TNKernel:
  • If a low-priority task_low locks mutex M1 with priority inheritance, and a high-priority task_high tries to lock M1 and gets blocked, task_low's priority is elevated to task_high's priority; if task_high then stops waiting for the mutex because of a timeout, task_low's priority remains elevated. The same happens if task_high is terminated by tn_task_terminate();
  • tn_mutex_delete(): mutex->holder is checked against tn_curr_run_task without disabling interrupts;
  • tn_mutex_delete(): if the mutex is not locked, TERR_ILUSE is returned. I believe a task should be able to delete a non-locked mutex;
  • if a task that waits for a mutex is in the WAITING_SUSPENDED state and the mutex is deleted, TERR_NO_ERR is returned instead of TERR_DLT.
All of them are, of course, fixed in my project.
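For the first bug, the fix amounts to recomputing the holder's priority whenever a waiter leaves the mutex's wait queue. A simplified model follows: the structures are stand-ins (an array instead of the real linked lists), and, as in TNKernel, a lower number means a higher priority.

```c
#include <assert.h>

#define MAX_WAITERS 4

typedef struct { int base_priority; int priority; } TN_TCB;

typedef struct {
    TN_TCB *holder;
    TN_TCB *waiters[MAX_WAITERS];
    int     n_waiters;
} TN_MUTEX;

/* Holder runs at the highest priority among its own base priority and
   all remaining waiters (lower number = higher priority). */
static void mutex_update_holder_priority(TN_MUTEX *m)
{
    int p = m->holder->base_priority;
    for (int i = 0; i < m->n_waiters; i++)
        if (m->waiters[i]->priority < p)
            p = m->waiters[i]->priority;
    m->holder->priority = p;
}

/* Called when a waiter gives up (timeout or termination). */
static void mutex_remove_waiter(TN_MUTEX *m, TN_TCB *t)
{
    for (int i = 0; i < m->n_waiters; i++) {
        if (m->waiters[i] == t) {
            m->waiters[i] = m->waiters[--m->n_waiters];
            break;
        }
    }
    mutex_update_holder_priority(m);  /* the step missing upstream */
}
```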
post edited by dimonomid - 2014/09/04 22:48:22
#57
dimonomid
Re: TNKernel 2014/09/04 08:09:47 (permalink)
Oh, never mind about stack_exp, I got it.
I was trying to figure it out for twenty minutes, then I decided to ask you, and right after asking, I understood. Huh.
#58
dimonomid
Re: TNKernel 2014/09/04 08:49:26 (permalink)
And, after all... I see that tn_task_exit() does its job with interrupts disabled. That means nobody can interfere while the stack is being re-initialized, so we don't need the stack expansion area. Or am I wrong?
#59
dimonomid
Re: TNKernel 2014/09/04 09:52:23 (permalink)
It seems there is a mistake in the tn_switch_context_exit routine:
tn_switch_context_exit:
    lw $k1, tn_curr_run_task /* = tn_next_task_to_run */
    j tn_restore_context
    lw $sp, 0($k1) /* load new task SP */
    .end tn_switch_context_exit

It restores the context of tn_curr_run_task, but at that point tn_curr_run_task doesn't yet point to the new task to run; we need to set tn_curr_run_task to tn_next_task_to_run first. Just like in tn_start_exe, but without setting tn_system_state.
post edited by dimonomid - 2014/09/04 14:12:04
#60