Most small-scale embedded applications - whether control or data-logging oriented - structure their software (broadly) as shown on the right.
A master timer periodically (typically every 1 ms to 100 ms) wakes the main loop. The loop gathers the various inputs, does the processing and sets all of the outputs, then goes back to sleep ready for the next tick of the timer.
This approach works well for simple applications but becomes increasingly problematic as (inevitably) the complexity of the application increases.
The rate of the timer tick is normally set either to achieve the desired dynamic response from the controller, or by the maximum latency (the input-to-output delay) that can be tolerated.
Most designs aim for the best possible control and resolution, so the timer tick rate is set as high as possible. This limits the amount of time available for the processing. As the complexity and sophistication of applications grows, the tension between the need for more time to complete the processing load and the need for less time to improve the response is bound to increase.
As applications become more complex the problem is compounded by the fact that each pass of processing must complete before the next can begin. Even if the software can cope, the control algorithms are likely to be thrown into disarray if the controller occasionally misses a beat.
Calculating the worst-case path through the processing is, for all but the simplest applications, nearly impossible, and any testing regime is likely only to yield information on the normal, quiescent timings. Guaranteeing that processing will always complete within the tick interval therefore means applying ever larger margins over the normal processing times - and so needing an ever faster processor.