The real end of Moore's Law and the true cost of a monopoly

Jay Goldberg

Editor's take: If TSMC raises its prices as high as we are hearing they intend, then many companies will have no choice but to step off the curve of Moore's Law. Maybe having an alternative like Intel is not such a bad idea.

In recent weeks we've been hearing about some of the proposed price increases coming for TSMC's N2 process starting next year. We have been thinking through the implications ever since, and in light of the developments at Intel, we believe they have become even more significant.

Editor's Note:
Guest author Jonathan Goldberg is the founder of D2D Advisory, a multi-functional consulting firm. Jonathan has developed growth strategies and alliances for companies in the mobile, networking, gaming, and software industries.

Put simply, in the absence of viable competition, TSMC transitions from being an 'effective' leading-edge monopolist to a true monopolist. This allows them to raise prices as high as they want. Work through the math on that, and it quickly becomes apparent that many companies designing chips at the leading edge today will have to step off the Moore's Law curve because it is no longer economically viable.

Of course, TSMC is not going to raise prices to infinity and cut off all demand, but they will price to maximize their own value extraction. This will likely lead to a much smaller pool of customers who can afford to design chips at the leading edge.

Let's use an example. Imagine a sizable TSMC customer – not in the Top 3, but maybe in the Top 10. They likely pay TSMC $20,000 per wafer today, with lower-volume customers paying closer to $25,000. Let's say this company has a chip that is 170 mm². Using the handy Semi-Analysis Die Yield Calculator, that works out to 325 chips per wafer, or $61 per chip. If the company prices the chip at $140, they achieve gross margins of 55%, which is good but not great.

Now suppose TSMC raises its price to this customer to $40,000 for its next process. Estimates for density improvements for N2 are still coming in, but let's assume a 15% increase in die per wafer (roughly 375 known good dies, or KGD). The cost per chip, however, jumps to $107. This is the heart of the Moore's Law slowdown – density increases now greatly lag price increases. If the design company cannot pass on cost increases to customers, and is stuck at that $140 price, gross margins fall to 22%, which is not good.
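
For readers who want to check the arithmetic, below is a minimal sketch of the silicon-only cost math. The wafer prices, die counts (325 good dies today, roughly 375 KGD on N2), and the $140 selling price all come from the example above; packaging, test, and other costs are ignored, which is presumably why these margins land a point or two above the 55% and 22% cited.

```python
# Minimal sketch of the per-chip cost and gross-margin math in the example
# above. Die-per-wafer counts are taken from the article (via the
# SemiAnalysis Die Yield Calculator for a 170 mm^2 die), not recomputed.

def per_chip_cost(wafer_price: float, good_dies_per_wafer: int) -> float:
    """Silicon cost per known good die."""
    return wafer_price / good_dies_per_wafer

def gross_margin(selling_price: float, unit_cost: float) -> float:
    """Gross margin as a fraction of the selling price."""
    return (selling_price - unit_cost) / selling_price

CHIP_PRICE = 140  # assumed selling price per chip, from the example

# Today: ~$20,000 per wafer, 325 good dies per wafer
cost_today = per_chip_cost(20_000, 325)               # ~$61.5 per chip
margin_today = gross_margin(CHIP_PRICE, cost_today)   # ~56%

# N2 scenario: ~$40,000 per wafer, ~15% more dies (~375 KGD)
cost_n2 = per_chip_cost(40_000, 375)                  # ~$106.7 per chip
margin_n2 = gross_margin(CHIP_PRICE, cost_n2)         # ~24%

print(f"Today: ${cost_today:.0f}/chip, {margin_today:.0%} gross margin")
print(f"N2:    ${cost_n2:.0f}/chip, {margin_n2:.0%} gross margin")
```

The takeaway is the same either way: the wafer price doubles while the die count grows only about 15%, so the per-chip cost rises by roughly 75% against a fixed selling price.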

We can play around with the numbers and debate the extent to which chip designers can pass on these costs to their customers, but the conclusion remains the same: as TSMC raises prices, producing chips at the leading edge becomes increasingly unfeasible for a growing segment of customers.

The example above is loosely based on Qualcomm, so they would fall into this category, but the same applies to AMD. The hardest hit will be customers with smaller volumes, spanning from start-ups to hyperscalers. For many of them, staying on the Moore's Law curve becomes extremely challenging. Of course, Nvidia and a few other companies have significantly more flexibility to absorb these costs, but many – if not most – companies do not.

We expect that TSMC is unlikely to push its customers this hard, but the reality is that they could.

Some might argue that TSMC has effectively held a monopoly for several years and could have raised prices this way long ago; the fact that it hasn't, the argument goes, suggests it won't in the future. However, conditions are changing.

Until recently, a cautious and paranoid TSMC needed to worry about Intel or Samsung becoming competitive again. That now seems increasingly unlikely. And this is why Intel Foundry matters. Today, some may credibly argue that there is no commercial necessity for Intel Foundry in the industry – that customers do not need a second source beyond TSMC.

But look ahead a few years, to a world where TSMC can freely raise prices. In that scenario, everyone will be desperately searching for an alternative.

Masthead credit: Fritzchen Fritz

 
Now we see why AMD's chiplet strategy pays off. Instead of a 170-200 mm² monolithic CPU they have a 70 mm² compute die and a 130 mm² I/O die. They build the compute die on a fairly cutting-edge node and the I/O die on a trailing node. Sure, they incur some cost related to assembly, but they save a lot on wafer cost. Plus, they have more flexibility in building multicore chips from the same building blocks. So, if a good Zen die costs $25, a next-gen compute die on the next node would be $37. I think it's not that bad for them.
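
To put rough numbers on that chiplet argument, here is a sketch using the classic dies-per-wafer approximation. The $20,000 and $40,000 leading-edge wafer prices follow the article's example; the die sizes, the $6,000 trailing-node wafer price, and the zero-defect yield are illustrative assumptions, so the exact figures will not line up with the $25 and $37 quoted above.

```python
import math

# Rough illustration of the chiplet argument: only the small compute die
# pays leading-edge wafer prices, while the I/O die stays on a cheap
# trailing node. Leading-edge wafer prices follow the article's example;
# die sizes, the $6,000 trailing-node wafer price, and perfect yield are
# assumptions for illustration only.

WAFER_DIAMETER_MM = 300

def dies_per_wafer(die_area_mm2: float) -> int:
    """Classic dies-per-wafer approximation (ignores defects and scribe lines)."""
    radius = WAFER_DIAMETER_MM / 2
    return math.floor(
        math.pi * radius**2 / die_area_mm2
        - math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * die_area_mm2)
    )

def die_cost(wafer_price: float, die_area_mm2: float) -> float:
    return wafer_price / dies_per_wafer(die_area_mm2)

# Monolithic 170 mm^2 design: the whole die pays leading-edge prices.
mono_today = die_cost(20_000, 170)           # ~$55
mono_next = die_cost(40_000, 170 / 1.15)     # ~$95 (15% density gain shrinks the die)

# Chiplet design: 70 mm^2 compute die on the new node, 130 mm^2 I/O die
# stays on a mature node.
chiplet_today = die_cost(20_000, 70) + die_cost(6_000, 130)         # ~$34
chiplet_next = die_cost(40_000, 70 / 1.15) + die_cost(6_000, 130)   # ~$50

print(f"Monolithic: ${mono_today:.0f} -> ${mono_next:.0f}")
print(f"Chiplet:    ${chiplet_today:.0f} -> ${chiplet_next:.0f} (plus packaging and assembly)")
```

Even in this toy model, the chiplet design's silicon cost rises by far fewer dollars per part, because the trailing-node I/O die's cost does not move at all.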
 
Until the transition to 14nm and beyond, the cost per wafer remained relatively stable despite significant density improvements. Plus, the table above does not account for the full range of costs involved in chip design:

Back in 2018, the last time anyone made such an estimate, IBS published the chart shown in Figure 1 of this SemiEngineering article, which pegged the design cost of a 5nm chip at $542.2M: https://semiengineering.com/what-will-that-chip-cost/

 
Now we see why AMD's chiplet strategy pays off. Instead of a 170-200 mm² monolithic CPU they have a 70 mm² compute die and a 130 mm² I/O die. They build the compute die on a fairly cutting-edge node and the I/O die on a trailing node. Sure, they incur some cost related to assembly, but they save a lot on wafer cost. Plus, they have more flexibility in building multicore chips from the same building blocks. So, if a good Zen die costs $25, a next-gen compute die on the next node would be $37. I think it's not that bad for them.

Funny how Radeon 7000 failed when it comes to MCM and they go back to Monolithic with Radeon 8000 then.
 
What you are going to see is more and more companies just backing off the leading edge of chip design. Because let's face it: in the consumer space, is there *really* that much of a difference?
 
Funny how Radeon 7000 failed when it comes to MCM and they go back to Monolithic with Radeon 8000 then.
The MCD has the same problem that the I/O dies on Ryzen CPUs have when handling high-speed memory. It's just that graphics memory is significantly faster (and more important). Since the same problem is present on Ryzen CPUs, they're not planning on moving back to a chiplet GPU design until after they fix the memory speed issues on the I/O die with Zen 6.
 
Funny how Radeon 7000 failed when it comes to MCM and they go back to Monolithic with Radeon 8000 then.
Not that funny if you consider the number of distinct dies and the area of the io/cache dies vs the area of the GPU. Also they used different packaging that was probably more expensive.
 
Do you know how much a Taiwan foundry worker or EE earns? It's not much (1/4 of what Western designers earn). The free ride for Western chip designers is over.
 
Humanity sure does love putting important things in one basket, then acting surprised when the basket gets too powerful and bitchslaps us.
Yeah, that Russian gas drama was pretty interesting lol ;p And many other things like this. Even my friends put all their info on 1 drive and then when that drive dies... (no backups!) guess what happens :p

It must take a genius to think about diversity and backups in case something happens. I rarely see people doing that too, so it must be a really special thing.
 
The MCD has the same problem that the I/O dies on Ryzen CPUs have when handling high-speed memory. It's just that graphics memory is significantly faster (and more important). Since the same problem is present on Ryzen CPUs, they're not planning on moving back to a chiplet GPU design until after they fix the memory speed issues on the I/O die with Zen 6.
High speed memory with mediocre timings is useless anyway. Who cares.
I run 6400/28 on my 9800X3D at 1:1, which is preferable for performance anyway.

You can put fast CUDIMMs (with slow timings) in Arrow Lake, spend 800+ dollars on memory alone, and I will still wipe the floor with it.

If I needed actual productivity performance, I would buy a 9950X or Threadripper over anything Intel right now as well.

High-speed memory is a waste of money for most people, and gamers want tight timings over clock speed anyway. You don't get both, and you pay big money for high-speed kits even if the timings are subpar. Worst money you can spend.
 
28nm is still widely used and available. The latest, cutting-edge nodes are only for those who need to maximize performance.
 