Hello folks! In this post, I'm going to talk about the difference between two commonly used Static Timing Analysis methodologies: Graph Based Analysis (GBA) and Path Based Analysis (PBA).
I shall explain the difference with the help of an example, shown below:
Now, we have two slews: a fast one and a slow one. In Graph Based Analysis, worst slew propagation is ON: at every gate output, the timing engine propagates the worst slew produced by any of the gate's inputs, so downstream cell delays are computed with pessimistic slews. For example, suppose we need to compute the gate delays while doing setup analysis in a graph-based methodology for the path from FF1 to FF2:
- The delay of the A->Z (output) arc of the OR gate (in brown) would be computed assuming the real slew, i.e. the slew at pin A.
- However, the slew that will be propagated to the output pin of the OR gate would be the worst slew, which in this case would be computed from the load at the output of the OR gate and the slew at pin B.
- Similarly, the delay of the NAND gate (in blue) would be computed using the propagated slew coming from the previous stage, i.e. the slew at pin B, but the slew propagated to its output would correspond to the worst input slew, in this case the slew at pin A.
- And so on and so forth.
While performing hold analysis in a graph-based methodology, the situation reverses: the delays of all cells would be computed assuming the best propagated slews (fast slews) for all nodes along the timing path!
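To make the mechanism concrete, here is a minimal Python sketch of GBA-style slew handling at a single gate. The linear delay and slew equations, pin names, loads, and slew values are toy assumptions for illustration, not a real .lib lookup: each arc's delay uses its own input slew, but the single slew annotated on the output pin is the worst one for setup (and the best one for hold).

```python
# A minimal sketch of GBA-style slew handling at one gate. The linear
# delay/slew equations, pin names, and numbers below are illustrative
# assumptions, not a real .lib table lookup.

def arc_delay(input_slew, load):
    # Toy delay model: delay grows with input slew and output load.
    return 0.5 * input_slew + 2.0 * load

def arc_output_slew(input_slew, load):
    # Toy output-slew model.
    return 0.3 * input_slew + 1.5 * load

def gba_gate(input_slews, load, analysis="setup"):
    """Per-arc delays plus the single slew annotated on the output pin.

    input_slews: {input_pin: propagated slew at that pin}
    Each arc's delay uses its own input slew, but the slew written onto the
    output pin is the worst over all arcs for setup, the best for hold.
    """
    delays = {pin: arc_delay(s, load) for pin, s in input_slews.items()}
    out_slews = {pin: arc_output_slew(s, load) for pin, s in input_slews.items()}
    pick = max if analysis == "setup" else min
    return delays, pick(out_slews.values())

# OR gate of the example: the FF1->FF2 path enters at A, but B carries the slow slew.
delays, annotated_slew = gba_gate({"A": 0.10, "B": 0.40}, load=0.02, analysis="setup")
print(delays["A"])       # delay used for the A->Z arc on the path
print(annotated_slew)    # pessimistic slew (set by B) handed to the next stage
```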
This method of timing analysis is faster and has a lower memory footprint because the engine simply has to keep track of the worst propagated slew for every pin in the design. It is surely pessimistic, but it is fast and it bounds the problem, so it does not encumber the optimization tool. For example, for the OR gate, the slew propagated to its output is the worst slew, so the delays of the subsequent gates after the OR gate could be pessimistic. Path-Based Analysis comes to the rescue, at some cost.
In Path-Based Analysis, the tool takes into account the actual slew for the arcs encountered while traversing any particular timing path. For example, for the path shown above from FF1 to FF2, the arcs encountered are: A->Z for the OR gate, B->Z for the NAND gate, B->Z for the XOR gate, and A->Z for the inverted AND gate.
The tool would therefore consider the actual slews, and this dispenses with the unnecessary pessimism!
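For contrast, here is a similar toy sketch of PBA-style recalculation along the FF1 to FF2 path: the actual slew is carried from arc to arc, so no stage inherits a pessimistic slew from a side input. The pin names, loads, launch slew, and delay model are again assumptions for illustration only.

```python
# A toy sketch of PBA-style recalculation along one timing path.
# The delay/slew equations, loads, and launch slew are illustrative assumptions.

def arc_delay(input_slew, load):
    return 0.5 * input_slew + 2.0 * load

def arc_output_slew(input_slew, load):
    return 0.3 * input_slew + 1.5 * load

def pba_path_delay(launch_slew, arcs):
    """Walk the arcs in path order, always propagating the actual slew."""
    slew, total_delay = launch_slew, 0.0
    for gate, arc, load in arcs:
        total_delay += arc_delay(slew, load)   # delay from this arc's real input slew
        slew = arc_output_slew(slew, load)     # actual slew, never the GBA worst slew
    return total_delay

# The FF1 -> FF2 path from the example above (loads are made up).
path = [
    ("OR",   "A->Z", 0.02),
    ("NAND", "B->Z", 0.03),
    ("XOR",  "B->Z", 0.02),
    ("AND",  "A->Z", 0.01),   # the inverted AND stage
]
print(pba_path_delay(launch_slew=0.10, arcs=path))
```

Feeding the same path with the GBA-annotated (worst) slews instead of the actual ones would give a larger path delay, and that gap is exactly the pessimism PBA recovers.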
Why not use PBA instead of GBA? Who's stopping us?
The answer is the run-time and memory footprint. Since PBA needs to compute the delays of standard cells in the context of each particular timing path, it incurs a run-time penalty, as opposed to GBA, where the worst propagated slew is used to compute the delays. In a nutshell, PBA is more accurate at the cost of run-time.
Typically, design engineers tend to use GBA for the majority of the analysis. However, paths with a small violation (maybe of the order of tens of picoseconds) may be waived off by running PBA on the top critical paths when the tape-out of the design is impending. One might argue that the extra effort spent in optimizing many other paths might have been saved had we used PBA earlier. And it is true! But like any engineering problem, there exists a trade-off, and one needs to take a call between fixing the timing and the potential risk of delaying the tape-out!
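As a rough illustration of that flow, here is a hedged sketch of the "PBA clean-up before tape-out" step: everything is timed in GBA, and only the small GBA violations are re-timed in PBA to see which ones can be waived. The gba_timer and pba_timer callables and the 30 ps threshold are hypothetical stand-ins, not a real STA tool API; in practice the PBA recalculation is done by the tool itself, and this only shows the shape of the flow.

```python
# A hedged sketch of the sign-off flow described above: time everything in GBA,
# then re-time only the small GBA violations in PBA and see which ones go away.
# gba_timer / pba_timer are hypothetical callables returning slack (in ps).

def pba_cleanup(paths, gba_timer, pba_timer, waive_threshold_ps=30):
    still_violating = []
    for path in paths:
        gba_slack = gba_timer(path)
        if gba_slack >= 0:
            continue                       # clean in GBA, nothing to do
        if -gba_slack > waive_threshold_ps:
            still_violating.append(path)   # too large to hope PBA recovers it
            continue
        pba_slack = pba_timer(path)        # accurate but slower recalculation
        if pba_slack < 0:
            still_violating.append(path)   # real violation even without GBA pessimism
    return still_violating
```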