
The Future of PC Gaming Hardware: View from 2019



What a Long, Strange Trip it's Been

 

PC gaming has been a hell of a ride.  I say "has been" both in the sense that exciting and dynamic things have happened, and in the sense that the most exciting and dynamic times are behind us.  No other form of video gaming is as closely tied to the latest developments in personal computing hardware, and current trends do not suggest that anything dramatically new and exciting is immediately around the corner.  Indeed, fundamental aspects of semiconductor physics suggest that chip technology is nearing, or is perhaps already on, a plateau where only slow, incremental improvement is possible.  This, in turn, will limit how much improvement game developers can deliver.  Gaming certainly will not disappear, and neither will PC gaming, although the PC gaming share of the market may contract in the future.  But I think it is reasonable to expect that, in the near term, new PC game titles will not be the dramatic technological improvements over older titles that they were in the past.  In the long term, current technology and hardware design will eventually be replaced with something entirely different and disruptive, but as always it is difficult, maybe impossible, to predict what that replacement will be.

 

The Good Old Days

 

The modern, hardware-driven PC gaming culture that we all know and love began with Id Software's early first-person shooter titles, most importantly 1993's Doom.

 

PC gaming was around before Doom, of course, but Doom's combination of cutting edge graphics technology and massive, massive appeal is what really got the ball rolling.

Doom was phenomenally popular.  There were, at one point, more installs of Doom than there were installs of the Windows operating system.  I don't think there is any subsequent PC title that can claim that.  Furthermore, it was Doom, and its spiritual successor Quake that really defined PC gaming as a genre that pushed the boundaries of what was possible with hardware.

 

Doom convincingly faked 3D graphics on computers that had approximately the same number-crunching might as a potato.  It also demanded radically more computing power than Wolfenstein 3D, but in those days computing hardware was advancing at such a rate that this wasn't really unreasonable.  This was followed by Quake, which was actually 3D, and demanded so much more of the hardware then available that it quickly became one of the first games to support hardware acceleration.

 

Id Software disintegrated under the stress of the development of Quake, and while many of the original Id team have gone on to do noteworthy things in PC gaming technology, none of it has been earth-shaking the way their work at Id was.  And so, the next important development occurred not with Id's games, but with their successors.

 

It had become clear, by that point, that there was a strong consumer demand for higher game framerates, but also for better-looking graphics.  In addition to ever-more sophisticated game engines and higher poly-count game models, the next big advance in PC gaming technology was the addition of shaders to the graphics.

 

Shaders could be used to smooth out the low-poly models of the time, apply lighting effects, and generally make the games look less like spiky ass.  But the important caveat about shaders, from a hardware development perspective, was that shader code ran extremely well in parallel, while the rest of the game code ran well in series.  The sort of chip that would quickly do the calculations for the main game, and the sort of chip that would quickly do the calculations for the graphics, were therefore very different (there's a toy illustration of this parallel-versus-serial split after the list of rules below).  Companies devoted exclusively to making graphics-crunching chips emerged (of these, only Nvidia is left standing), and the stage was set for the heyday of PC gaming hardware evolution from the mid-1990s to the early 2000s.  Initially, there were a great number of hardware acceleration options, and getting everything to work was a bit of an inconsistent mess that only enthusiasts really bothered with, but things rapidly settled down to where we are today.  The important rules of thumb which have hitherto applied are:

-The IBM-compatible personal computer is the chosen mount of the Glorious PC Gaming Master Race™. 

-The two most important pieces of hardware on a gaming PC are the CPU and the GPU, and every year the top of the line CPUs and GPUs will be a little faster than before.

-Even though Macs (since the mid-2000s) and gaming consoles (eventually) came to be built predominantly from IBM-compatible hardware, they are not suitable devices for the Glorious PC Gaming Master Race™.  This is because they have artificially-imposed software restrictions that keep them from easily being used the same way as a proper gaming PC.

-Even though they did not suffer from the same compatibility issues as consoles or Macs, computers with integrated graphics processors are not suitable devices for the Glorious PC Gaming Master Race™.

-Intel CPUs are the best, and Nvidia GPUs are the best.  AMD is a budget option in both categories.
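
As an aside, here is a toy sketch of why those two workloads want very different silicon.  This is just an illustration in Python with made-up numbers, not code from any real engine: the per-pixel "shader" work is independent for every pixel and can be spread across thousands of parallel units, while the game-logic tick depends on the result of the previous tick and has to run in series.

```python
# Toy illustration of parallel "shader-like" work vs. serial "game-logic-like" work.
# Hypothetical example with made-up numbers, not from any real engine.
import numpy as np

# Shader-like work: every pixel's result depends only on that pixel's inputs,
# so the whole frame can be split across thousands of parallel units.
def shade_frame(albedo: np.ndarray, light_dot: np.ndarray) -> np.ndarray:
    return np.clip(albedo * light_dot, 0.0, 1.0)   # simple per-pixel diffuse term

# Game-logic-like work: each tick needs the previous tick's state,
# so it cannot be spread across cores the same way.
def simulate_fall(position: float, velocity: float, ticks: int, dt: float = 1 / 60) -> float:
    for _ in range(ticks):          # inherently sequential loop
        velocity += -9.81 * dt      # this tick's state depends on last tick's state
        position += velocity * dt
    return position

frame = shade_frame(np.random.rand(1080, 1920), np.random.rand(1080, 1920))
print(frame.shape, simulate_fall(100.0, 0.0, ticks=600))
```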

 

The Victorious March of Moore's Law

 

Moore's Law, which is not an actual physical law, but rather an observation that the number of transistors that can be crammed onto a chip doubles roughly every two years, held so true for so long that it seemed like an actual fundamental law of the universe.

 

The most visible and obvious indication of the continuous improvement in computer hardware was that every year the clock speeds on CPUs got higher.

 

[Chart: CPU clock speeds over time]

 

Now, clock speed itself isn't actually particularly indicative of overall CPU performance, since performance is a complex interplay of clock speed, instructions per cycle, and pipeline length.  But at the time, CPU architecture was staying more or less the same, so the increase in CPU clock speeds was a reasonable enough, and very marketing-friendly, indicator of how swimmingly things were going.  In 2000, Intel was confident that 10 GHz chips were about a decade away.

 

This reliable increase in computing power corresponded with a reliable improvement in game graphics and design year on year.  You can usually look at a game from the 2000s and guess, to within a few years, when it came out because the graphical improvements were that consistent year after year.

 

The improvement was also rapid.  Compare 2004's Far Cry to 2007's Crysis.

 

[Screenshot: Far Cry (2004)]

[Screenshot: Crysis (2007)]

 

And so, for a time, game designers and hardware designers marched hand in hand towards ever greater performance.

 

The End of the Low-Hanging Fruit

 

But you know how this works, right?  Everyone has seen VH1's Behind the Music.  This next part is where it all comes apart after the explosive success and drugs and groupies, leaving just the drugs.  This next part is where we are right now.

 

If you look again at the chart of CPU clock speeds, you see that improvement flatlines at about 2005.  This is due to the end of Dennard Scaling.  Until about 2006, reductions in the size of transistors allowed chip engineers to increase clock speeds without worrying about thermal issues, but that isn't the case anymore.  Transistors have become so small that significant amounts of current leakage occur, meaning that clock speeds cannot improve without imposing unrealistic thermal loads on the chips.

 

Clock speed isn't everything.  The actual muscle of a CPU is a function of several things: the pipeline, the instructions per clock cycle, the clock speed, and, from 2005 with the introduction of the Athlon 64 X2, the core count.  And, even as clock speed remained the same, these other important metrics did continue to see improvement:

[Chart: CPU metrics other than clock speed continuing to improve after the mid-2000s]

The catch is that the raw performance of a CPU is, roughly speaking, a multiplicative product of all of these things working together.  If the chip designers can manage a 20% increase in IPC and a 20% increase in clock speed, and some enhancements to pipeline design that amount to a 5% improvement, then they're looking at a 51.2% overall improvement in chip performance.  Roughly.  But if they stop being able to improve one of these factors, then to achieve the same increase in performance, they need to cram the improvements into just the remaining areas, which is a lot harder than making modest improvements across the board.
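
Just to make that arithmetic concrete, here's the back-of-the-envelope version (a minimal sketch; the percentages are the same hypothetical ones as above):

```python
# Rough check of the compounding math above: small gains multiply together.
ipc_gain   = 1.20   # +20% instructions per clock
clock_gain = 1.20   # +20% clock speed
pipe_gain  = 1.05   # +5% from pipeline tweaks

combined = ipc_gain * clock_gain * pipe_gain
print(f"combined speedup: {combined:.3f}x  (+{(combined - 1) * 100:.1f}%)")   # 1.512x, i.e. +51.2%

# If clock speed and pipeline improvements dry up, IPC alone has to deliver
# the entire +51.2% to keep the same yearly cadence -- a much taller order.
ipc_alone_needed = combined
print(f"IPC gain needed on its own: +{(ipc_alone_needed - 1) * 100:.1f}%")
```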

 

Multi-core CPUs arrived on the market at around the same time that clock speed increases became impossible.  Adding more cores to the CPU did initially allow for some multiplicative improvements in chip performance, which bought time for the trend of ever-increasing performance.  The theoretical FLOPS (floating point operations per second) of a chip is a function of its IPC, core count and clock speed.  However, the real-world performance increase provided by multi-core processing is highly dependent on the degree to which the task can be parallelized, and is subject to Amdahl's Law:

[Figure: Amdahl's Law speedup curves for varying parallel portions of a workload]

Most games can be only poorly parallelized.  The parallel portion is probably around the 50% mark for everything except graphics, which can be parallelized excellently.  This means that by the time CPUs hit 16 cores, there is basically no additional improvement to be had in games from adding further cores.  That is, unless game designers start to code games specifically for better multi-core performance, but so far this has not happened.  On top of this, adding more cores to a CPU usually imposes a small reduction to clock speed, so the actual point of diminishing returns may occur at a slightly lower core count.
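
For anyone who wants to poke at the numbers themselves, here is a minimal sketch of both formulas: Amdahl's Law as charted above, and the theoretical peak-FLOPS estimate mentioned earlier.  The 50% parallel portion and the example chip are assumptions for illustration only.

```python
# Amdahl's Law: with a fixed serial portion, extra cores stop helping quickly.
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    # speedup = 1 / ((1 - p) + p / n)
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

for cores in (1, 2, 4, 8, 16, 32, 64):
    print(f"{cores:>3} cores -> {amdahl_speedup(0.5, cores):.2f}x speedup")
# With a 50% parallel portion, 16 cores already give ~1.88x out of a hard ceiling of 2.00x.

# Theoretical peak throughput, by contrast, scales with everything at once:
# FLOPS ~= cores x clock (Hz) x floating-point ops per core per cycle.
def peak_flops(cores: int, clock_hz: float, flops_per_cycle: int) -> float:
    return cores * clock_hz * flops_per_cycle

print(f"{peak_flops(8, 4.0e9, 16) / 1e9:.0f} GFLOPS peak")   # e.g. 8 cores at 4 GHz, 16 FLOPs/cycle
```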

 

On top of all that, designing chips on new, smaller process nodes has become harder and harder.  Intel first laid out its 10nm plans back in September 2017, showing a timeline with it straddling 2017 and 2018.  2018 has come and gone, and still no 10nm.  Currently Intel is hopeful that they can get 10nm chips to market by the end of 2019.

 

AMD have had a somewhat easier time of it, announcing a radically different mixed 14nm and 7nm "chiplet" architecture at the end of 2018, and actually bringing a 7nm discrete graphics card to market at the beginning of 2019.  However, this new graphics card merely matches NVIDIA's existing high-end cards, both in terms of performance and in terms of price.  This is still a significant development, since AMD's graphics cards have usually been second-best, or cost-effective mid-range models at best, so for them to have a competitive high-end model is noteworthy.  But, while CPUs and GPUs are different, it certainly doesn't paint a picture of obvious and overwhelming superiority for the new 7nm process.  The release of AMD's "chiplet" Zen 2 CPUs appears to have been delayed to the middle of 2019, so I suppose we'll find out then.  Additionally, it appears that the next-generation PlayStation will use a version of AMD's upcoming "Navi" GPU, as well as a Zen CPU, and AMD hardware will power the next-generation Xbox as well. 

 

So AMD is doing quite well servicing the console gaming peasant crowd, at least.  Time will tell whether the unexpected delays faced by their rivals along with the unexpected boost from crypto miners buying literally every fucking GPU known to man will allow them to dominate the hardware market going forward.  Investors seem optimistic, however:

[Chart: AMD stock price]

 

With Intel, they seem less sanguine:

[Chart: Intel stock price]

and with NVIDIA, well...

 

[Chart: NVIDIA stock price]

 

But the bottom line is: don't expect miracles.  While it would be enormously satisfying to see Intel and NVIDIA taken down a peg after years of anti-consumer bullshit, the reality is that hardware improvements have become fundamentally difficult.  For the time being, nobody is going to be throwing out their old computers just because they've gotten slow.  As the rate of improvement dwindles, people will replace their old PCs only when they break.

 

OK, but What About GPUs?

 

GPU improvements took longer to slow down than CPU improvements, in large part because GPU workloads parallelize well.  But the slowdown has arrived.

 

This hasn't stopped the manufacturers of discrete GPUs from trying to innovate, of course.  Not only that; the market is about to become more competitive, with Intel announcing plans for a discrete GPU in the near future.  NVIDIA has pushed their new ray-tracing-optimized graphics cards for the past few months as well.  The cryptomining GPU boom has come and gone; GPUs turn out to be better than CPUs at cryptomining, but ASICs beat out GPUs by a lot, so that market is unlikely to be a factor again.  GPUs are still relatively cost-competitive for a variety of machine learning tasks, although long-term these will probably be displaced by custom-designed chips like the ones Google is mass-ordering.

 

Things really do not look rosy for GPU sales.  Every time someone discovers some clever alternative use for GPUs like cryptomining or machine learning, they get displaced after a few years by custom hardware solutions even more fine-tuned to the task.  Highly parallel chips are the future, but there's no reason to think that those highly parallel chips will be traditional GPUs, per se.

And speaking of which, aren't CPUs getting more parallel, with their ever-increasing core count?  And doesn't AMD's "chiplet" architecture allow wildly differently optimized cores to be stitched together?  So, the CPU of a computer could very easily be made to accommodate capable on-board graphics muscle.  So... why do we even need GPUs in the future?  After all, PCs used to have discrete sound cards and networking cards, and the CPU does all of that now.  The GPU has really been the last hold-out, and will likely be swallowed by the CPU, at least on low and mid range machines in the next few years.

 

Where to Next?

 

At the end of 2018, popular YouTube tech channel LinusTechTips released a video about Shadow.  Shadow is a company that is planning to use centrally-located servers to provide cloud-based game streaming.  At the time, the video was received with a lot of (understandable) skepticism, and even Linus didn't sound all that convinced by Shadow's claims.
 

 

The technical problems with such a system seem daunting, especially with respect to latency.  This really did seem like an idea that would come and go.  This is not its time; the technology simply isn't good enough.

And then, just ten days ago, Google announced that they had exactly the same idea:
 

 

The fact that tech colossus Google is interested changed a lot of people's minds about the idea of cloud gaming.  Is this the way forward?  I am unconvinced.  The latency problems do seem legitimately difficult to overcome, even for Google.  Also, almost everything that Google tries to do that isn't search or Android fails miserably.  Remember Google Glass?  Google Plus?

 

But I do think that games that are partially cloud-based will have some market share.  Actually, they already do.  I spent a hell of a lot of time playing World of Tanks, and that game calculates all line-of-sight checks and all gunfire server-side.  Most online games do have some things that are calculated server-side, but WoT was an extreme example for its time.  I could easily see future games offloading a greater share of the computational load to centralized servers instead of the player's own PC.
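
For illustration, here's a minimal sketch of that server-authoritative pattern.  To be clear, this is a hypothetical toy, not World of Tanks' actual netcode: the point is only that the client sends intent, while the server does the line-of-sight and damage math and hands back the result.

```python
# Hypothetical sketch of server-authoritative hit resolution (not any real game's netcode).
import random
from dataclasses import dataclass

@dataclass
class ShotRequest:                       # client -> server: intent only
    shooter_pos: tuple[float, float]
    target_pos: tuple[float, float]

@dataclass
class ShotResult:                        # server -> client: the authoritative outcome
    hit: bool
    damage: int

def line_of_sight_clear(a, b, obstacles) -> bool:
    # Crude stand-in for a real visibility check: blocked if any obstacle
    # lies between shooter and target along the x axis.
    lo, hi = sorted((a[0], b[0]))
    return not any(lo < ox < hi for ox, _ in obstacles)

def resolve_shot_on_server(req: ShotRequest, obstacles) -> ShotResult:
    # All of this runs on the server; the client never does these calculations.
    if not line_of_sight_clear(req.shooter_pos, req.target_pos, obstacles):
        return ShotResult(hit=False, damage=0)
    return ShotResult(hit=True, damage=random.randint(240, 320))   # made-up damage roll

print(resolve_shot_on_server(ShotRequest((0, 0), (100, 0)), obstacles=[(50.0, 0.0)]))
```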

 

But there are two far greater harbingers of doom for PC gaming than cloud computing.  The first is smartphones, and the second is shitty American corporate culture.  Smartphones are set to saturate the world in a way desktop PCs never did.  American games publishers are currently more interested in the profits from gambling-esque game monetization schemes than they are in making games.  Obviously, I don't mean that in a generic anti-capitalist, corporation-bashing hippie way.  I hate hippies.  I fuck hippies for breakfast.  But if you look at even mainstream news coverage of Electronic Arts, it's pretty obvious that the AAA games industry, which had hitherto been part of the engine driving the games/hardware train forward, is badly sick right now.  The only thing that may stop their current sleaziness is government intervention.

 

So, that brings us to the least important, but most discussion-sparking part of the article; my predictions.  In the next few years, I predict that the most popular game titles will be things like Fortnite or Apex Legends.  They will be monetized on some sort of games-as-a-service model, and will lean heavily if not entirely on multiplayer modes.  They may incorporate some use of server-side calculation to offload the player's PC, but in general they will work on modest PCs because they will only aspire to have decent, readable graphics rather than really pretty ones.  The typical "gaming rig" for this type of game will be a modest and inexpensive desktop or laptop running integrated graphics with no discrete graphics card.  There will continue to be an enthusiast market for games that push the limits, but this market will no longer drive the majority of gaming hardware sales.  If these predictions sound suspiciously similar to those espoused by the Coreteks tech channel, that's because I watched a hell of a lot of his stuff when researching this post, and I find his views generally convincing.

 

Intel's Foveros 3D chip architecture could bring a surge in CPU performance, but I predict that it will be a one-time surge, followed by a return to relatively slow improvement.  The reason is that the Foveros architecture allows for truly massive CPU caches, and these could be used to create enormous IPC gains.  But after the initial boost from the change in architecture, the same problems that are currently slowing down improvement would be back.  It definitely wouldn't be a return to the good old days of Moore's Law.  Even further down the road, a switch to a different semiconducting material such as gallium nitride (which is already used in some wireless devices and military electronics) could allow further miniaturization and speed-ups where silicon has stalled out.  But that sort of prediction stretches my limited prescience and knowledge of semiconductor physics too far.

 

If you are interested in this stuff, I recommend diving into Coreteks' channel (linked above) as well as AdoredTV.


OK, here is some supplementary material:
 

 

This video is clearly intended for a lay audience, which is good.  I really don't "get" this computer hardware stuff to the degree that I get some other technology.

 

One thing that this video clarifies, which I had only sort of vaguely understood, is that the nanometer figures in manufacturing jargon are just marketing.  They loosely track the manufacturing node, but they no longer correspond to any actual physical feature size, so there's no real reason to believe that, say, Intel's 10nm transistors are actually 43% larger than AMD's 7nm ones.  The two processes are probably broadly comparable.

 

 

This video from an AMD higher-up also gives an overview of the recent trends in chip technology, albeit slanted towards server-type hardware.  At 17:40 he discusses CPUs and GPUs baked into the same chip.


A wonderful post! Thank you.

 

Infinitesimally small point of order, though: given that cells are essentially little computers unto themselves (and eukaryotic cells are an order of magnitude more complex in this regard than, say, bacterial cells), the computing power of the Pentium I-based PC I used to run DOOM (I came to it late, obviously) was almost certainly substantially less than that of a potato. Like, microscopically less. I'm not even sure how to rationally go about working out the relative computing power at play here (and I can't find anyone who has seriously tackled the problem to crib from), but it's certainly many orders of magnitude of difference.

 

So probably a better analogy would be 'a baked potato'?


An op-ed from some years ago, reiterating that node naming is misleading and doesn't really mean much.

 

A new video from Coreteks, predicting that some time within the next two years there will be hardware-level automated conversion of single-threaded sequential code into more parallelizable code.  Apparently there have been significant developments in this area lately:

 

 


On 4/9/2019 at 12:43 PM, LoooSeR said:

   Apparently high-quality assets can play a huge role in visuals. Vanilla Unreal Engine.

 

   Maybe we don't really NEED huge increases in computing power in a short amount of time.

 

Yes and no, I think.  They are setting up these real-time, photorealistic scenes in situations where conventional rasterization with shaders looks very good.  There aren't any shiny objects, there's nothing transparent, and the lighting is fairly simple.

 

With that said, the ability of the new NVIDIA cards to do shiny reflections and raytracing is limited.  Most of their examples of how it makes a scene look better are very contrived.  Battlefield V with raytracing on looks not very different from Battlefield V without raytracing... because most of the graphics are conventionally rasterized!  Raytracing is still too computationally expensive to use much, so they only use it on certain lighting and reflection details.

 

The most convincing argument I have heard for full raytraced games is that it will make game development cheaper if it completely replaces rasterization.  The artists wouldn't need to do shader mapping (bump maps, etc); all that stuff would be built into the game engine.  But that sort of technology is still many years away, and I am not convinced that the cost of paying artists is a large factor in video game development costs and timelines as compared to, say, EA's incompetence and greed.


  • 3 weeks later...

This video explains why, so far, integrated graphics have sucked:
 

 

and note how this explanation will not be true anymore once AMD starts producing their 7nm chiplet-based designs in earnest.

 

Coreteks has a new video talking about AMD's roadmap for the future:
 

 

(Note also his prediction of Amazon/Twitch announcing a streaming games service similar to Stadia.  We shall see!)

 

But something we should be keeping in mind is that Intel is getting into the GPU game soon, and that Intel is working on heterogeneous chiplet architecture as well.

 

So, most likely AMD will lead with the glued-together CPU+GPU chips, but Intel will only be months, or at most a couple of years, behind them.


Bad news for AMD's Navi GPUs:
 

 

 

Executive summary: The most recent information and rumors have some of AMD's next-gen Navi GPU lineup becoming available in Q3 of 2019.  However, the more muscular cards, the ones that are expected to (slightly) outpace NVIDIA's current best offerings, are delayed until 2020.

 

Furthermore, the Navi cards themselves may be a bit of a disappointment.  Analysts are already pretty sure that Navi has encountered one major delay, which is why the cards were not announced at the January CES show.  On top of that, the cards are reportedly experiencing thermal and clock speed issues.

 

So, optimistically, the Navi lineup will be hot and loud for its performance, but will offer better performance per price than the current NVIDIA 12nm Turing cards.  Moreover, both the PS5 and the next-generation Xbox will use some variant of the AMD Navi chip, so it is likely that the next generation of games (certainly console ports) will be optimized for AMD hardware.

 

However, with the delay of the high-end Navi cards until 2020, the time during which AMD will have the fastest GPUs on the market will be very short indeed.  The very best of the Navi GPUs are expected to best the very top of NVIDIA's current Turing lineup (again, most likely at the expense of running rather hot and requiring big, noisy fans), but by early-ish 2020 when they show up, whatever NVIDIA has planned for their next-generation GPUs will have to be very close to market as well.

 

I think the big takeaway is that the 10nm and 7nm nodes are very hard to design for.  Intel has had big delays, AMD has had delays, and NVIDIA and Samsung have probably had big delays that they've managed to keep more quiet.
 

 

Bottom line: the rate of improvement of gaming hardware is slowing down.  Compare the GTX 1080 Ti vs the RTX 2080.  NVIDIA was able to make a card that was better than their previous best effort, but only just barely, and at enormous cost.  Sure, part of the cost was NVIDIA's thick margins, but a large part of it was also that the development of new microlithography nodes is now yielding less juice per squeeze than it used to.

 

There's a good case to be made that the answer to "should I buy an RTX 2080 or a Radeon VII?" is "see if you can get a deal on last generation's GTX 1080 Ti, because those are about 90% as good."


  • 1 month later...

AMD has officially announced their 7nm chiplet-based CPU lineup.  Their value-oriented 8-core CPUs were announced at Computex, along with a hulking 12-core part, while they delayed the announcement of the monstrous 16-core Ryzen 9 3950X until E3.

 

[Chart: AMD Ryzen 3000 series lineup and specifications]

 

(taken from here)

 

Leaked benchmarks of what is probably an engineering sample indicate that the 3950X is likely the most powerful mainstream desktop CPU in the world.

 

The new Navi GPUs were also shown at E3.  As expected, the larger die-size "big Navi" is still nowhere to be seen.  But the outlook on the introductory Navi cards is better than expected.  They will be available starting in July, at the same time as the new CPUs.  Performance-wise, they're not going to set the world on fire.  But they have some interesting upscaling and latency reduction technology, and they do compete reasonably well with Nvidia's second-tier offerings.  This video summarizes things well:
 

 


  • 1 month later...
  • 1 year later...

2020 Update:


2019 and 2020 have proven to be turbulent years for computer hardware development.  Since I last posted in this thread, there have been several significant developments.

 

AMD's Zen 2-based Ryzen 3000 series 7nm CPUs came out in the summer of 2019, and matched Intel CPUs in all but a few, extremely single-threaded benchmarks:
 

 

The 3000 series sold more than anticipated, with significant price scalping occurring.

The AMD Radeon RX 5000 series GPUs, based on the new RDNA architecture, came out as well.  They were... OK, I guess.  Several of the mid- and low-end models were good value, but the large, high-performing models simply failed to materialize (and were likely cancelled late in development).  Driver support was also rocky at first, and unlike the competing RTX 2000 series, the RX 5000 series has no hardware-level support for ray tracing.

Intel, meanwhile, hasn't been having such a great time.  Several additional security vulnerabilities have been disclosed, and their own 7nm node was postponed by at least six months.  This threw their shareholders into a rage, and they are now facing a class-action lawsuit.

Global supply chains for electronics have been disrupted not only by COVID-19, but also by the trade war between South Korea and Japan.  Taiwan's TSMC is the undisputed industry leader in leading-edge chip fabrication, and the Trump Administration would like them to build a plant in the USA.  As that leader, TSMC has their pick of suitors and has no difficulty selling off fab capacity.  Currently, Qualcomm, Apple, AMD, Nvidia and others jockey for their limited supply of silicon.

In Q3 or Q4 the new video game consoles are expected out, presumably just in time for the holidays.  Both will feature AMD-designed APUs based on their Zen 2 CPU architecture and RDNA2 GPU architecture, with some degree of hardware-level raytracing support.  Innocenceii's YouTube channel does an excellent job going into the technical specifics of the designs.  Alas, he is constantly under siege by an army of unwashed console fanboys who keep accusing him of shilling for one side or the other.

Nvidia's next-generation GPUs, all but confirmed to be called the 3000 series, are expected out somewhat sooner, possibly as early as September 2020.  Interestingly, the persistent rumor is that they will be made at Samsung, and not at TSMC.  Samsung has invested heavily in ASML's extreme ultraviolet (EUV) technology, and while TSMC currently has the highest-density, best-yielding lithography technology, Samsung may contest that in the coming years.  It is likely that the future of lithography for microelectronic manufacture will be a contest between TSMC in Taiwan and Samsung in South Korea, with Intel trailing far, far behind.


  • 3 weeks later...
On 8/21/2020 at 5:14 AM, Collimatrix said:


 

At least TSMC is buying the majority of its equipment for its business in the USA.

From what I've read on NVIDIA, it's moving its business toward artificial intelligence and such, and their GPUs will follow that industry: parallelism-oriented GPUs for neural networks, etc. Here comes Cyberdyne and its multi-processor T-800; the dudes from the '90s knew it all along!


  • 4 months later...
On 3/29/2019 at 9:52 PM, Collimatrix said:

After all, PCs used to have discrete sound cards and networking cards, and the CPU does all of that now.  The GPU has really been the last hold-out, and will likely be swallowed by the CPU, at least on low and mid range machines in the next few years

Honestly, I believe that CPUs will be swallowed by the motherboard in the future, as CPU performance means less by the day to the average user, and most good mobos already have a miniaturized processor.


  • 7 months later...
  • 3 months later...

   Heh, I didn't know that some companies now have dedicated processors for neural network AIs/programs. I wonder if PCs will get something similar, in the form of additional chips on graphics cards. It's unlikely that we will get something like the separate "physics cards" that were briefly a thing in 2006-2008, IIRC.


  • 9 months later...



AMD just officially showed off their new Ryzen 7000 series CPUs.  They should be available by September 23.  How time flies!

Performance for these CPUs is on the high end of what leakers and analysts were estimating.  However, it remains to be seen exactly how these perform in games versus the handpicked averages shown off in this presentation.  We should be seeing actual benchmarks soon.  It also remains to be seen how this will compare with Intel's Raptor Lake 13th-generation CPUs.  Rumors and leaks suggest that, at least in single-core performance, the offerings from the two respective companies will be very close.  TSMC's 5nm node looks quite impressive here, as it apparently allows for a blistering 5.7 GHz max boost frequency.

In an interesting reversal, the AMD CPUs will require a new motherboard with the new AM5 socket.  Intel has confirmed that their upcoming 13th generation chips will use the same socket as the previous models, which will make upgrading easier for people with existing 12th generation Alder Lake systems.

The highest-performing AMD CPUs for this generation will be less expensive than had been anticipated.  This suggests that their production efficiency is very good, and also that they're gunning for more market share.  However, because these new CPUs will require a new motherboard and new DDR5 RAM, making a new system using them will still be expensive.

 


This rumor suggests, however, that AMD will decisively take the performance crown in Q1 of 2023.  Their vertically-stacked V-Cache will be back, and at least according to this source, it's considerably improved over the implementation in the Ryzen 7 5800X3D.  The CPU frequency penalty for the V-Cache is lower, and the performance boost is higher.  Again, the proof will be in the pudding.  I do also wonder what pricing and availability will be like.  The chiplet strategy has helped AMD's CPU production efficiency overall, but the more components they add to each chip, the more vulnerable they are to the various global supply chain disruptions that have been going on at least since the 2019 South Korea / Japan trade war, and which were accentuated by COVID-19 and the Ukraine War.


  • 4 weeks later...

As expected, Nvidia announced their 4000 series GPUs at the 2022 GTC Keynote:
 



Not too many surprises here.  Improvements in rasterization performance look solid, while improvements in raytracing look downright impressive.  Of note are all the new features Nvidia keeps pushing with their GPUs.  Nvidia has a really gigantic R&D budget, so they are nearly always the leaders in new software/hardware tricks while others follow.

I'm really not digging the prices, although the power consumption for the base models looks slightly less insane than what a number of board partner leaks had suggested.  I guaran-fucking-tee you that the weaker of the two 4080 models was originally supposed to be the 4070.

