April 15, 2017

Tuning CTS Recipe

I've been trying to debug and tune my CTS recipe for quite a few weeks now, and the process has given me a basic insight into the CTS algorithm and the various knobs available to designers for tuning their CTS results to achieve the desired skew, transition and latency targets.

In this blog post, I'll discuss those knobs while trying my best not to go into tool-specific commands/constructs, so as to keep the discussion conceptual and tool-independent. Before we delve any deeper into these knobs, let's ask the basic question first: why do we need CTS to begin with, or what goals do we expect CTS to achieve for us? The answer is to be able to create a balanced clock tree. A balanced clock tree simply means minimum skew between the sequentials in the design (of course, we would only be interested in skew within the same clock group. Let me know in the comments if this part is not clear). In addition to minimizing the skew, we would also like to achieve minimum latency by adding the minimum number of clock buffers on the clock path, thereby ensuring less area, less routing congestion and, most importantly, no extra dynamic power dissipation!

Now, we have the required background to discuss the CTS knobs in detail! :)

1. Creating Skew Groups: Skew groups are basically groups of sink pins (clock endpoints) which need to be balanced against each other. Some skew groups are default, while others might need to be created explicitly to help the CTS engine. We'll take a look at some use-cases.
Default skew groups: Let's say you have 5 clocks in your design. 
Group1: CLK1, CLK2 and CLK3 are synchronous to each other.
Group2: CLK4, CLK5 are synchronous to each other.

Group1 and Group2 are logically exclusive, and therefore clocks within each group are implicitly asynchronous to the clocks in the other group.
In this case, by defining clock groups, we have implicitly defined skew groups. The CTS engine would try and balance the latencies of CLK1, CLK2 and CLK3, and independently try and balance the clock latencies of CLK4 and CLK5.
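These relationships are typically conveyed to the tool through SDC, and the default skew groups follow from them. A minimal, hedged sketch using the clock names above (SDC also has a -logically_exclusive variant, matching how the groups are described):

    # Clocks within a group are synchronous to each other; clocks across
    # groups are treated as asynchronous.
    set_clock_groups -asynchronous \
        -group {CLK1 CLK2 CLK3} \
        -group {CLK4 CLK5}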

Sometimes, however, designers might want to create some explicit skew groups on top of the implicit ones. Let's take a look at those use-cases.



The figure highlights the clouds of sequentials working on CLK1, CLK2 and CLK3 respectively. Assume there's heavy traffic and interaction between the CLK1 and CLK2 sequentials, while only a very few sequentials working on CLK3 interact with those working on CLK1 and CLK2. The clock enters the partition via three different clock ports on the left side, and the distance between the CLK3 port and the CLK3 sequentials is clearly the largest, so the CTS engine would need to insert more clock buffers to maintain the transition (ask yourself why. What would be the caveat if the clock transition goes bad? Puzzle: Clock Transition). Assume the average latency that CTS can manage for the CLK3 sequentials is 150 ps, while for the CLK1 and CLK2 sequentials it's 100 ps. In order to balance these three clocks, CTS will push the clock latency of the CLK1 and CLK2 sequentials to match the longest latency: 150 ps. If, as designers, we know that the interaction between the CLK3 sequentials and the CLK1/CLK2 sequentials is minimal, or even if it is substantial, that we have sufficient positive slack from a timing perspective (both hold and setup), we don't really need to balance these three clocks. We can create a separate skew group for the CLK3 sequentials, thereby preventing the extra latency on the CLK1 and CLK2 branches. This would help us minimize clock tree buffers, the associated area, routing resources, power, and perhaps even the detrimental impact of OCVs on the uncommon clock path. (Read the post: Common Path Pessimism for greater insight.)
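The exact command is tool-specific. As one illustration, Cadence's CCOpt flow exposes skew groups directly; a hedged sketch (the group name is hypothetical, and your tool may spell this differently):

    # Balance the CLK3 sinks only against each other, decoupled from
    # the CLK1/CLK2 skew group.
    create_skew_group -name sg_clk3 -sources {CLK3} -auto_sinks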

Another case could be a hard IP in your design which is placed far away from the rest of the sequentials working on the same clock. If you know that there's minimal interaction between those sequentials and the hard IP, you might want to create a separate skew group for the hard IP's clock pin.



2. Sequential Clustering (Different from Register Banking): CTS is performed after the placement step, by which time all the sequentials and standard cells have been placed. This placement of sequentials is invariably driven only by data path optimization constraints. In other words, the placement engine places sequentials at locations it finds convenient to meet timing, assuming an ideal clock distribution. As depicted in the figure below, for some reason the placer decided to place a small bunch of sequentials working on CLK1 far away from the port, thereby threatening to shoot up the clock latency of all the CLK1 sequentials. Now, either you can try and create a separate skew group to decouple these sequentials, or you can re-run placement with all the CLK1 sequentials tightly bounded together to prevent the latency (and hence clock skew) shoot-up.
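The bounding approach is again tool-specific; a hedged IC Compiler-style sketch (the bound name and coordinates are arbitrary illustrative values):

    # Soft placement bound pulling the stray CLK1 sequentials together,
    # close to the CLK1 port.
    create_bounds -name clk1_seq_bound -type soft \
        -coordinate {50 50 250 250} [all_registers -clock CLK1]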



3. Clock Ordering and "dont_touch_subtree": You might have cases in your design where there's clock multiplexing, let's say between functional and scan clocks, and you need to create a clock tree for both of them. compile_clock_tree usually works on a clock-by-clock basis. Let's say you were smart enough to enforce the order, commanding the CTS engine to build the clock network for the fast functional clock first and then for the slower scan clock. That's a reasonable approach, considering the skew, transition and latency targets would be more difficult and constrained to meet for faster clocks; by building the CTS for faster clocks first, you are giving the engine the leeway to do its best possible job. However, when it tries to balance the network for the scan clock, it can touch the functional clock network as well. One key difference between functional and scan clocks, in addition to the difference in clock frequencies, is that the scan clock would have a greater fan-out than the functional clocks, and therefore there is more scope for the CTS engine to goof up! To prevent this, we need to do two things:

a) Enforce the CTS order to construct the clock tree for the faster clocks first and the slower clocks next.
b) In order to prevent the slow clock from altering the clock tree network of the fast clocks, apply a dont_touch_subtree exception on the MUX input of the slower clock (see the sketch below).
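Both steps, in a hedged IC Compiler-style sketch (the MUX instance/pin names are hypothetical):

    # Step (a): build the faster functional clock tree first.
    compile_clock_tree -clock_trees {FUNC_CLK}
    # Step (b): fence off everything downstream of the scan input of the
    # clock MUX, then build the slower scan clock tree.
    set_clock_tree_exceptions -dont_touch_subtrees [get_pins u_clk_mux/B]
    compile_clock_tree -clock_trees {SCAN_CLK}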


4. Divided Clocks and "stop_pins": By default, for all the sequentials that act as flop-based dividers, the CLK pin is treated as a "non-stop pin", meaning CTS considers the clk -> out arc of these divider flops to be a "through pin" and tries to balance the latencies of the master clock and the generated clock. Now, consider the case shown below. There's more than one way to solve the problem, and which of the following two methods gives you better results would depend on the design:



a) Creating a different skew group for the sequentials placed far away. This would de-couple the sequentials placed nearby from the ones placed far away, and the CTS engine would be able to do a decent job.

b) Another experiment well worth a shot could be defining the CLK pin of the divider flop as a "stop_pin", so that the latency of the master clock stays in check: CTS will treat all of the master clock's sequentials, including the divider flop itself, as one group and do a relatively good job of balancing them out. This would avoid a latency shoot-up of the master clock.
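A hedged IC Compiler-style sketch of option (b) (the divider instance name is hypothetical):

    # Make CTS stop at the divider's clock pin instead of tracing the
    # clk -> out arc into the generated clock's network.
    set_clock_tree_exceptions -stop_pins [get_pins u_clk_div_reg/CK]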

5. Exclude Clock from CTS: If there are two clocks defined at the same pin/port with different clock periods, whether they are synchronous or asynchronous to each other, it might be a good idea to exclude the slower clock from CTS altogether, to prevent CTS from touching the same clock network twice and surprising you with the results.
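One hedged, IC Compiler-style way to achieve this is to simply leave the slower clock out of the CTS invocation:

    # Build the tree only for the faster of the two co-defined clocks;
    # the slower clock is deliberately not listed.
    compile_clock_tree -clock_trees {CLK_FAST}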

6. Clock Used as Data and "exclude_pins": You might have cases where the clock is being used as data inside your design. The CTS engine would be oblivious to this fact and might go crazy while building the clock tree. In such cases, it's a good idea to explicitly mark the beginning of the data path as an "exclude_pin" to guide the CTS engine to exclude everything beyond it from clock tree balancing!
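Again, a hedged IC Compiler-style sketch (the pin where the clock enters the data path, u_data_logic/A, is hypothetical):

    # Keep CTS away from everything downstream of the point where the
    # clock fans out into the data path.
    set_clock_tree_exceptions -exclude_pins [get_pins u_data_logic/A]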


I couldn't think of any more cases. If you have some interesting use cases that I might have missed, kindly share them in the comments. :)


March 02, 2017

Simultaneous Setup-Hold Critical Node

I've got this question multiple times: how do we fix timing violations on paths that have at least one node which is both setup-critical and hold-critical simultaneously? To answer that question, one must realize that (generally speaking) for the same PVT and the same RC corner, there cannot be paths where all nodes are simultaneously setup- and hold-critical.

Let's take an example:
Test Case

Now, if we buffer at node C, the path from B to C, which was already setup-critical, will start violating.

Buffering at C

If we buffer at node A, the path from A to D, which was already setup-critical, would start violating.

What shall we do here now? Any suggestions? Thoughts? I'd like to hear from you, and I'll post the right answer (at least one of the right answers) soon! Just like always, I'm looking forward to engaging in the comments section below.

March 01, 2017

OCV v/s AOCV


When I started my career around 6 years back, we were introduced to the term OCV. While the OCV concept was quite simple and fascinating, it didn't take me long to realize that OCV can be a nightmare for every STA engineer out there. I introduced OCV a long time back while explaining the difference between OCV v/s PVT. In this post, I intend to draw a distinction between OCV (On-Chip Variation) and AOCV (Advanced On-Chip Variation).

Before we discuss anything about OCVs, it would be prudent to talk about the sources and types of variations that any semiconductor chip may exhibit.

The semiconductor device manufacturing process exhibits two major types of variations:

  • Systematic Variations: As the name suggests, systematic variations are deterministic in nature, and they can usually be attributed to a particular manufacturing process parameter like the manufacturing equipment used, or perhaps even the manufacturing technique used. Systematic variations can be experimentally calibrated and modeled. They also exhibit spatial correlation, meaning two transistors close to each other would exhibit similar systematic variation, which makes them easier to gauge. An example would be inter-chip process variation between two different batches of manufactured chips.
    When a certain technology is in its nascent stage (let's say the 10-nm node), the process engineers would typically be more concerned about these variations; as the technology matures, they are able to calibrate and tune their manufacturing process to reduce this variation component.
  • Random Variations: These are totally random, and therefore non-deterministic in nature. Random variations do not show spatial correlation and are therefore very difficult to gauge and predict. Unlike systematic variations, random variations usually have a cancelling effect owing to their random nature. Examples are subtle variations in transistor threshold voltage.
As the semiconductor node shrinks, the susceptibility to these variations increases, and their effect needs to be taken into account while doing timing analysis, or perhaps during the overall design planning to some extent. Shifting our focus back to OCV and AOCV: at this point, one may ask in what form these variations would manifest themselves. Well, they can manifest themselves in the form of an increase or decrease in the threshold voltage of devices, a shift in the process of the manufactured devices, a variation in the oxide thickness, or a change in the doping concentration.

There might be infinite such manifestations, and we engineers like to make our lives easier, don't we? ;)

Experienced folks must have guessed where I am headed. If you haven't guessed it yet, stay with me, take a step back, and ask: what do all these parameters have in common? What's the one quantifiable metric that they will impact? The answer is the delay! OCV and AOCV are essentially models which guide us on how cell delay varies in light of the systematic and random variations.

On-Chip Variations (OCV): OCV is a simplistic and (generally) pessimistic way of modelling process variations. Here we assume that the delays of all cells can show, let's say, an X% variation. Now you would either model this variation as -X% to +X%, or perhaps as -(X/2)% to +(X/2)%. Let's say we choose the latter. We would then derate the delays of all cells in a manner that makes our timing pessimistic, and we can claim that, in the worst case, as long as the process guys can ensure that the variation stays within the bracket of -X% to +X%, we'd be safe.

  • Setup Analysis under OCV: In order to make setup analysis immune to process variations on silicon, we need to model the OCVs such that the setup check becomes more pessimistic. That would be the case if we increase the data path delay by X% (you can take a call on whether or not to apply a derate on the net delays; one could apply a net derate based on the net length and the metal layer in which the net is routed, a separate discussion for a separate post! :)), increase the launch clock path delay by X%, and decrease the capture clock path delay by X%. Here you might want to check the post on Common Path Pessimism to see which types of clock path cells need to be exempted from OCVs.
Setup Analysis under OCV
  • Hold Analysis under OCV: The hold check would be the exact opposite of what we did for setup, namely: decrease the data path delay by X% (you can take a call on whether or not to apply a derate on the net delays; usually, we don't), decrease the launch clock path delay by X%, and increase the capture clock path delay by X%. A hedged derate sketch follows the figure below.
Hold Analysis under OCV
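In SDC/PrimeTime-style terms, this +/-X% modelling is usually expressed with timing derates. A minimal, hedged sketch with X = 10% split as -5%/+5% (the numbers are purely illustrative):

    # Late paths (data and launch clock for setup; capture clock for
    # hold) are slowed down, early paths are sped up:
    set_timing_derate -late  1.05
    set_timing_derate -early 0.95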

We talked so much about spatial correlation and the inherent cancellation of random variations, but didn't use either of these concepts while explaining OCV. This is the precise reason OCV tends to be generally pessimistic. And as we shrink the technology node, a need arises for a more intelligent methodology to perform variation-aware timing analysis. The answer is AOCV.

Let's take a look at AOCV in detail:

Advanced On-Chip Variations (AOCV): AOCV methodology hinges on three major concepts:
  • Cell Type: Variations should take into account the cell type. Surely an AND gate and an OR gate can't exhibit the same variation pattern, nor could an AND3X and an AND6X cell. The impact of variation should be calculated for each individual cell.
  • Distance: As the distance in the x-y coordinates increases, the systematic variations increase, and we might need to use a higher derate value to reflect the uncertainty in timing analysis and mitigate any surprises on silicon.
  • Path Depth: For a given distance, if the path depth is larger, the impact of systematic variations stays constant, while the random variations tend to cancel each other out. Therefore, as the path depth increases (within the same unit distance), the AOCV derates tend to decrease.
Bounding Box Creation for AOCV


While performing reg2reg timing analysis, the AOCV methodology finds the bounding box containing the two sequentials, the clock buffers between them, and all the data path cells. Within a unit distance, if the path depth increases, the AOCV derate decreases due to the cancelling of random variations. However, if the distance increases, the AOCV derates increase due to the increase in systematic variations. These derates are modeled in the form of a LUT.

Sample AOCV Table for Setup Analysis
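In a PrimeTime-style flow, AOCV analysis is typically switched on and fed with such LUTs along these lines (a hedged sketch from memory; the file name and table values are purely illustrative):

    # Enable advanced OCV analysis and load the derate tables.
    set_app_var timing_aocvm_enable_analysis true
    read_aocvm ./derates/my_lib_setup.aocvm
    # Conceptually, each LUT maps path depth (and distance) to a derate:
    #   depth:   1     2     4     8     16
    #   derate:  1.12  1.09  1.06  1.04  1.03  <- shrinks as depth grows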

Now, some final comments on OCV vs AOCV.

  • For small path depths, OCV tends to be more optimistic than AOCV. (AOCV is more accurate).
  • For higher path depths, OCV tends to be more pessimistic than AOCV. (AOCV is still more accurate).
I hope you were able to draw the above inference. If not, I'd be happy to engage in a discussion down in the comments section. See you all next time! :)

February 05, 2017

Power Domain Crossings

With all the fuss about low power designs, the implementation of multiple power domains has gained significant traction in the past decade, and power domains still play a critical role in designing chips that are less power hungry. The use-cases have become more complicated and, keeping up with the pace, the implementation techniques have become even more intricate!

In this post I intend to talk about the motivation for power domains, draw a distinction between two commonly confused terms- the power domain and the voltage area- and cover some basics which can help newbies lay a foundation of what to expect while implementing such designs.

As we've already discussed, the basic intention behind creating multiple power domains is to reduce the power consumption. But how does it help? Let's take a look. Let's say your design has three IPs- Alpha, Beta and Gamma. The Alpha IP is the heart of the design; it's basically the computation engine- think of something like the ALU or the Execution Unit. The Beta IP does house-keeping jobs, mostly in a support role, while the Gamma IP does all the "dirty" stuff- think of something like analog IPs, say the Power Management Unit. Now, one important thing to appreciate here is that these three IPs serve different purposes, at different times, with distinct levels of significance.

Let's dig a little bit deeper into this. The Alpha IP does the most important task; it is critical for performance, should therefore run at the highest possible clock frequency, and hence burns a significant chunk of the overall power consumption. The Beta IP is not needed that often; it neither does any critical work, nor does it work all the time! The Gamma IP does the significant task of managing the power supplies and therefore needs to be always-on!

Having delineated the expectations for each IP, it's time to delve into some technical details that would be of particular interest to a physical design engineer, and to understand the rationale behind them. This time we shall start with the Gamma IP. The Gamma IP, being an analog IP (or at best a mixed-signal IP), usually operates on a high voltage because analog circuits are more susceptible to noise. Let's say the Gamma IP operates on 3V. (An unrealistic number when it comes to modern VLSI design, but I intend to give you a relative context! :)) The Beta IP is a digital IP, so it can operate on a lower voltage, let's say 1V in our chip, and it would be a switchable IP, meaning that we can turn off the power to this IP when it's not in use, thereby saving some power. The Alpha IP, being the heart of the chip, operates at the highest frequency and on an intermediate voltage, let's say 2V in our example.

Remember, while the Alpha IP dissipates a lot of power, its performance is also critical for our design: if we lower its operating voltage, the delays of the standard cells would increase, timing closure at such high frequencies would become difficult, and in the worst case we won't be able to meet the timing spec. Also, there may be times when I don't need such high performance from the chip, and I may choose to either lower the frequency (Dynamic Frequency Scaling, DFS), lower the voltage (Dynamic Voltage Scaling, DVS), lower both the voltage and the frequency (Dynamic Voltage and Frequency Scaling, DVFS), or simply turn off the power (Power Gating).

Now that we have the design perspective in place, let's go back to power domains. How many power domains do we have here? And how do we decide? Any logical hierarchy (or bunch of hierarchies) which shares the same power plane should comprise one power domain. Let's elaborate on that: any two blocks of logic (let's say two IPs here) are said to be on the same power plane iff they operate on the same voltage and have the same switchable properties.

In our example, all three IPs operate on different voltages, and therefore naturally comprise three different power planes and hence 3 different power domains! I shall elaborate on this concept later via examples. 
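In UPF terms, this maps naturally onto three power domains; a hedged sketch (the instance names u_alpha, u_beta and u_gamma are hypothetical):

    # One power domain per power plane, per the example above.
    create_power_domain PD_ALPHA -elements {u_alpha}   ;# 2V, always-on
    create_power_domain PD_BETA  -elements {u_beta}    ;# 1V, switchable
    create_power_domain PD_GAMMA -elements {u_gamma}   ;# 3V, always-on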

Power Domain View

Having understood the concept of a power domain, let's try and understand what a voltage area is. A power domain, as the definition suggests, refers to the logical view, wherein we've classified every logical hierarchy and defined the properties (operating voltage and power state table) for all the standard cells that eventually get synthesized under that logical hierarchy. A voltage area is nothing but the physical view of a power domain, where we assign a physical area on the chip for the logic under a given power domain to sit! Confusing, I know, but read it again and you'll be able to grasp the difference. A voltage area is always characterized by the fact that you need to specify the "rectilinear coordinates" within your chip area to define one.
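A hedged IC Compiler-style sketch of the physical side (the coordinates are arbitrary illustrative numbers):

    # Carve out the physical region where PD_BETA's logic will sit.
    create_voltage_area -power_domain PD_BETA -coordinate {100 100 400 400}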

Voltage Area View

Let's also talk about what I believe would be the most interesting take-away from this blog: how do we handle signals crossing from one power domain to another? This leads us to the discussion of isolation cells and level shifters, and perhaps enable level-shifters, which are a combination of the two.
Any signal crossing from a power domain operating at, let's say, voltage V1 to a power domain operating at voltage V2 would need to be level-shifted from V1 to V2. The special cells which help us achieve this are called level shifters. Level shifters are basically buffers with two power supply nets: the input supply net is connected to the voltage supply of the driver domain, and the output supply net is connected to the voltage supply of the receiver domain. Where exactly this LS should be placed (whether in the driver domain or the receiver domain) is another question, which will be answered in a different post.
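In UPF, the level-shifter requirement is typically captured along these lines (a hedged sketch; the strategy name is hypothetical):

    # Require level shifters on signals entering PD_ALPHA, placed
    # inside the receiver domain itself.
    set_level_shifter ls_alpha_in -domain PD_ALPHA \
        -applies_to inputs -location self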

Level-shifters

Any signal crossing from a switchable domain to an on domain needs to be isolated using an isolation cell to prevent any X's from being propagated. Consider a signal going from power domain A (switchable) to power domain B (on). Let's say at some point the supply to power domain A is turned off. All the outputs of power domain A would then be at an unknown state, referred to as "X". If these X's are not isolated, the always-on domain would be corrupted with X's. Most isolation cells are AND/OR gates where the other input is set to a controlling value (0 for an AND gate, 1 for an OR gate) to prevent X-propagation. We don't need any isolation from the always-on domain to the switchable domain. (Ask yourself why?! :P)
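In UPF, an isolation strategy for the switchable Beta domain might look roughly like this (a hedged sketch; the supply and control signal names are hypothetical):

    # Clamp PD_BETA's outputs to 0 (AND-style isolation) while it is off,
    # using the always-on supply to power the isolation cells.
    set_isolation iso_beta -domain PD_BETA \
        -applies_to outputs -clamp_value 0 \
        -isolation_power_net VDD_AON -isolation_ground_net VSS
    set_isolation_control iso_beta -domain PD_BETA \
        -isolation_signal iso_en -isolation_sense high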

Isolation Cells

On a side note: one physical voltage area may be mapped to multiple logical power domains, provided they all have the same primary and secondary logical power supplies.

Now I'd like you to work out everything we would need in our case with the Alpha, Beta and Gamma IPs to make sure signals cross from one power domain to another seamlessly (assume all permutations and combinations of crossings).

Concepts to be discussed in a future post:
  • Where exactly should this LS be placed: whether in the driver domain or the receiver domain.
  • The concept of primary v/s secondary power supply.