Re: 100% event driven programming vs CPU hogging
Typically, if I have something which needs fast reaction (such as receiving a fast UART stream), I use only one interrupt. This saves time where it matters - at the beginning of the interrupt - because you can drop all the code which detects what kind of interrupt has happened. I wouldn't be able to use other interrupts anyway, because they would increase the latency of my fast interrupt.
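A minimal sketch of that idea, assuming a classic mid-range PIC16F with one EUSART, compiled under XC8 (register names such as RCREG, RCIE and PIR1 vary by device, and the ring buffer is just an illustration):

    #include <xc.h>

    /* Only ONE interrupt source (UART RX) is ever enabled, so the
     * ISR can skip all flag detection and go straight to the work. */
    volatile unsigned char rx_buf[32];   /* illustrative ring buffer  */
    volatile unsigned char rx_head;

    void __interrupt() isr(void)
    {
        rx_buf[rx_head++ & 31] = RCREG;  /* reading RCREG clears RCIF */
    }

    void uart_rx_irq_init(void)
    {
        PIE1bits.RCIE   = 1;             /* the only enabled source   */
        INTCONbits.PEIE = 1;
        INTCONbits.GIE  = 1;
    }

With no flags to poll, the ISR is only a handful of instructions, which is exactly where the latency saving comes from.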
On PIC16, if there's no need for a fast interrupt, I usually don't use many interrupts. Often, I don't use interrupts at all - just a big loop checking all the tasks. What's the point of an interrupt when your ISR code must go through the list of flags to figure out what kind of interrupt has happened?
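The same big-loop pattern, sketched for a typical PIC16 (the flag and register names are device-dependent, and handle_rx, tick_task and background_task are hypothetical placeholders):

    #include <xc.h>

    static void handle_rx(unsigned char c) { (void)c; /* consume byte */ }
    static void tick_task(void)            { /* periodic work */ }
    static void background_task(void)      { /* everything else */ }

    void main(void)
    {
        /* no interrupts enabled - just poll the same peripheral
         * flags an ISR would have had to check anyway */
        for (;;) {
            if (PIR1bits.RCIF)           /* UART byte waiting?        */
                handle_rx(RCREG);        /* reading RCREG clears RCIF */

            if (PIR1bits.TMR1IF) {       /* timer period elapsed?     */
                PIR1bits.TMR1IF = 0;
                tick_task();
            }

            background_task();           /* keep each pass short      */
        }
    }

As long as every pass through the loop is short, the worst-case reaction time is bounded and perfectly predictable - no ISR needed.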
This is different on PIC24, where interrupts are vectored and there are multiple interrupt priority levels. Thus you can have a fast interrupt, and also a number of other lower-priority interrupts, which can be put in order of importance. Here interrupt-driven development makes much more sense. In combination with DMA, it leaves only very slow maintenance tasks in the main loop.
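To illustrate, a sketch under XC16 for a typical PIC24 (the vector and SFR names such as _U1RXInterrupt and IPC2bits.U1RXIP depend on the exact device; rx_byte and tick are hypothetical):

    #include <xc.h>

    void rx_byte(unsigned char c);  /* hypothetical fast consumer     */
    void tick(void);                /* hypothetical slow housekeeping */

    /* Fast path: UART1 RX at priority 6, preempts everything below */
    void __attribute__((interrupt, no_auto_psv)) _U1RXInterrupt(void)
    {
        IFS0bits.U1RXIF = 0;
        rx_byte((unsigned char)U1RXREG);
    }

    /* Slow path: Timer1 at priority 2, can be preempted by UART RX */
    void __attribute__((interrupt, no_auto_psv)) _T1Interrupt(void)
    {
        IFS0bits.T1IF = 0;
        tick();
    }

    void irq_init(void)
    {
        IPC2bits.U1RXIP = 6;   /* high priority for the fast one   */
        IPC0bits.T1IP   = 2;   /* low priority for housekeeping    */
        IEC0bits.U1RXIE = 1;
        IEC0bits.T1IE   = 1;
    }

Because the hardware vectors straight to the right handler and nests by priority, the fast ISR pays no detection overhead, and its latency stays bounded even while the slower ISRs are running.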