Sturgeon's House
FaustianQ

Death of AMD


AMD's recent release of the 300 series cards indicates that AMD is in a downward spiral it's unlikely to recover from. Discuss the eventual 2016-17 problem of no effective competition even on the GPU side, and the effect on technological progression.

 

The dystopia is real.


The rather large fraction of their business that comes from semi-custom work, such as the console wins (who else will do decent x86, good graphics and cheap for you?), and the fact that their debt isn't due to mature for a while should get them another roll of the dice. If Fiji is technically good, they may very well have the fundamentals to put out a very good generation in what's likely to be one of the biggest generations of graphics cards in history.


The 3xx series has been released? Rebadged cards are not a death knell - using the R&D savings to hit a more aggressive price point than Nvidia's new designs should result in a very competitive range that sells well. Maxwell has been more expensive than comparable AMD cards since the 750 Ti, and if the 390X and 390 remain anywhere near as cheap as they were during the stock clearance, then they ought to fly off the shelves. It's understandable, with 14nm production scheduled so soon, that they're cautious about releasing an entirely new architecture - the defining competition is next year, when everyone gets to shrink their dies.

 

As for CPUs, pray for Zen to close the gap. Intel integrating eDRAM on package with their latest chips is concerning. Everyone has known for ages that integrated GPUs need more bandwidth than system RAM can supply, and by doing this before AMD, Intel has stolen a lead and is now faster in both CPU and IGP. Hopefully AMD can integrate something similar for Bristol Ridge - maybe we could even dream of 1GB of HBM as in-package cache! I've heard that Summit Ridge will not have an IGP, which isn't what I'd like to see - it would mean that their APUs are still relying on Excavator cores. Hopefully we see a Zen-based APU soon after the first Zen CPUs, because chasing the top end with CPU-only Zen products means very stiff competition from Intel.
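To put rough numbers on the "IGPs are starved by system RAM" point - a minimal back-of-the-envelope sketch. The helper function is mine, and the figures are approximate ballpark numbers (the eDRAM entry especially), not measurements:

```python
# Theoretical peak bandwidth: bytes per transfer x transfers per second.
# All figures are approximate, shown only to illustrate the relative gap.

def peak_bw_gbs(bus_width_bits, transfer_rate_mts):
    return (bus_width_bits / 8) * transfer_rate_mts / 1000  # GB/s

pools = {
    "Dual-channel DDR3-2133 (system RAM)":         peak_bw_gbs(128, 2133),   # ~34 GB/s
    "Dual-channel DDR4-2400 (system RAM)":         peak_bw_gbs(128, 2400),   # ~38 GB/s
    "Intel eDRAM, per direction (approx. figure)": 50.0,
    "One stack of HBM1":                           peak_bw_gbs(1024, 1000),  # ~128 GB/s
    "GDDR5, 256-bit bus @ 7 Gbps (midrange dGPU)": peak_bw_gbs(256, 7000),   # ~224 GB/s
}

for name, bw in pools.items():
    print(f"{name:46s} ~{bw:6.1f} GB/s")
```

Even the eDRAM figure is a big step over dual-channel DDR, which is exactly the gap being worried about here.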


Not sure I understand. Explain like you're working at the local ARC branch and have no one else to talk to about it.

EDIT: Also, link or die.

 

Title and assertion are hyperbolic; however, the 300 series rebrand, its poor performance, and its bad price point indicate that unless AMD beats Pascal to market with Arctic Islands, it's unlikely they'll recover enough in the GPU market to successfully release Zen in the required volume.

 


 

Fiji does not look to be technically good - all reports so far on the 300 series have proven true, so it's likely the leaked performance figures are true as well. Without an aggressive price point, AMD will not be able to sell Fury. Bulldozer was new tech; that didn't mean it sold or was good.

 

When will their debt mature, and in what amounts?

 


 

It is when market share is already so low that one could easily be pushed out of the market entirely - Savage, SiS and Voodoo all disappeared due to this. Price point is basically the concern, though - they're not competitively priced. No one is going to pay nearly $400 for an 8GB 290X or $350 for a 290, not when the same performance can be had so much cheaper and cooler. Nvidia is managing to supply gamers with what they need now, and no one needs 8GB now. AMD could have released a really competitive 380X Tonga XT to beat the snot out of a 960/960 Ti/965, but that seems MIA, which is worse because ~$230-270 is where most cards are bought and AMD has a gigantic gap there.

 

Bristol Ridge and AM4 are supposed to come really soon. AMD is basically trying to push the supporting elements for Zen out as quickly as possible, which is why I hope AMD has Arctic Islands complete and ready to ship in six months - "so why care about the 300 series?". It's possible AM4 will see the last generation of Construction cores from AMD, both APU and FX, as a fully mature product on 28nm, along with dual DDR3/DDR4 boards, just so there is plenty of volume.

 

Also, 1GB of HBM on an AMD APU would crush any Iris Pro.


How much does a 970 normally sell for? WCCFtech is claiming an RRP for the 390 of $320 vs. $350 for a 970, and looking at this review with a stock-clocked 970 vs. an overclocked 290 (running 10MHz short of the 390), the 290 is comfortably ahead. It looks like these cards slot very neatly into the existing price:performance structure, which is not a good thing - I'd like to see keener pricing to blow Nvidia out of the water, which is what AMD needs to regain that market share.



 

Pricing actually seems to be worse: the R9 390 is supposed to be $349, and there's plenty of 970s for $300-320. Factor in the better perf/watt of the 970 and it's a no-brainer, since it'll also have wider system support. The 290 is competitive at $250-270, which is where the 390 would also have to be priced.

 

No one is going to pick a 390 over a 970.
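To make that argument concrete - a quick sketch using the prices quoted above and approximate board-power figures; treat every number as illustrative rather than exact:

```python
# Rough price/perf and running-cost comparison, R9 390 vs GTX 970.
# Prices are the ones quoted in this thread; TDPs are approximate board figures.

price_390, price_970 = 349.0, 310.0   # USD
tdp_390, tdp_970 = 275.0, 145.0       # watts

# Performance premium the 390 needs just to match the 970 on price/performance:
premium = (price_390 / price_970 - 1) * 100
print(f"390 must be ~{premium:.0f}% faster to break even on price/perf")

# Illustrative electricity gap: 3 h/day of gaming for a year at $0.12/kWh (assumed).
hours_per_year, rate = 3 * 365, 0.12
extra = (tdp_390 - tdp_970) / 1000 * hours_per_year * rate
print(f"Extra power cost over a year: ~${extra:.0f}")
```

So the 390 would need a low-double-digit performance lead before the higher sticker price and power draw wash out.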



 

The thing is I'm not entirely sure if there really will be a big enough market for multiple GPU makers within a few years in the first place.

 

AAA gaming is pretty much the main driver of GPU development, but that's also an industry beginning to show cracks. The resurgence of PC gaming, for instance, was in large part driven not by AAA titles like Call of Duty, but by indie and smaller developers whose games don't need the latest and greatest hardware to run. In the current Steam Sale, for instance, the only "AAA" title I'm really considering is Witcher 3; everything else is indies (e.g. Darkest Dungeon), old games (e.g. Commandos), or mid-range titles filling the niches ignored by AAA (e.g. Cities: Skylines).

 

Which is why I still don't have a dedicated GPU, relying instead on the GPU built into my AMD processor. It has crazy heating issues, but a more heavy-duty heat sink to resolve that cost only $20, as opposed to $200 for a GPU that I would only really need for one game.


I plan to, but the PC is just borderline capable, so I'm holding off upgrading until I have multiple games that need it. Besides, I only recently started playing Witcher 2 (never got past Chapter 1 before) and managed to buy Witcher 1 dirt cheap.


Huh, I just noticed the 390 isn't an OC'd 290 like I previously assumed - it's the full-fat chip, unlike the 290. So 390s should perform like an OC'd 290X, which makes AMD's pricing look a lot better. It also makes the 390X's pricing look a lot worse - that's a lot of money for a 40MHz clock speed boost.



 

There is a theory that AMD is pivoting to do this - with 2-4GB of HBM backed by DDR4, the thought is that APUs will replace low-end and possibly mid-tier GPUs, and AMD is hoping to get temperatures under control for the mobile market, while having a blatantly superior ARM+GPU design (K12) or even a power-gated Zen SoC. Strong server chips would filter down to desktop, while AMD focuses exclusively on high-mid and better GPUs for supercomputing and lets those filter down to desktop as well.
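A crude way to see why a small HBM pool in front of DDR4 could behave like proper graphics memory - the bandwidth figures and hit rates below are assumptions purely for illustration:

```python
# Effective streaming bandwidth when a fraction of graphics traffic is served
# from a fast HBM pool and the rest spills to system DDR4. Illustrative only.

def effective_bw(hit_rate, fast_bw, slow_bw):
    time_per_gb = hit_rate / fast_bw + (1 - hit_rate) / slow_bw
    return 1 / time_per_gb

hbm_bw = 256.0    # GB/s, e.g. two HBM1 stacks (assumed configuration)
ddr4_bw = 38.4    # GB/s, dual-channel DDR4-2400

for hit in (0.7, 0.9, 0.99):
    print(f"{hit:.0%} of traffic in HBM -> ~{effective_bw(hit, hbm_bw, ddr4_bw):.0f} GB/s effective")
```

If most of the hot working set fits in the HBM, the APU sees bandwidth in the same league as a midrange discrete card rather than being capped at DDR4 speeds.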

 

If Zen/K12 can be successes, and their GPU share exceeds Nvidia's in "not-desktop" and isn't abysmal in desktop, then in two years' time we may instead ask "how can Nvidia survive?" - keeping in mind that Nvidia is basically locked out of the CPU market, and based on the Denver and Tegra SoCs they're shit compared to Samsung, Apple, Qualcomm and Intel. If AMD jumps on board, where the hell is Nvidia going to go?


TBF, with strong IGPs in APUs of both flavours, there's little point in staying in the low-end GPU market. When's the last time anyone released a new GPU?

 

By "anyone" who are you referring to? Apple, Qualcomm, AMD, Intel and Nvidia have all released new GPUs within the past year.


Sorry, I didn't finish before I hit post - I meant low-end discrete GPUs. Those are what the IGPs on every CPU are pushing out of the market, and there's been orders of magnitude less interest in them than in the top-end halo products.


Yeah, for the most part sales of desktop GPUs center around the mid to high end, tapering off at the low and halo tiers (BellCurve.jpg). OEMs do tend to buy the low-end stuff for office work, but even that is dying out, and will die out completely if Zen is a success - desktops will come standard with a competitive low-tier GPU out of the box, reducing cost.

 

Due to this, it's unlikely there will be any new discrete GPU maker - even being generous, PowerVR would still get trounced by anything AMD or Nvidia could offer, and so would the new Iris Pro GPUs if put on a discrete PCB with GDDR5 (the Iris Pro 6200 can run BF3 at 40fps on high, and that's with only 128MB of eDRAM; 2GB of GDDR5 would mean Iris Pro could run most games, even modern ones, on high at 60fps).

 

As ARM develops, though, and the market shifts away from dependence on x86, it's possible we will see new players crop up in the CPU/GPU space, simply because there are no restrictions on GPUs and ARM is relatively free (1 million is still free compared to Intel's "No, fuck off and die").



 

GTX 750 Ti is the lowest end video card that isn't basically a breakout box from a PCIe slot to video outputs.

 

This simultaneously says that a basically modern architecture has a low-end GPU, and shows just how high the bar is before it's worth making a chip that isn't part of an SoC.

 

On the other hand, NV are being lying liars who lie, shifting their naming scheme so their chips look bigger than they used to be - so that's really more of an x40 part. The class of chip that used to go into an x70 and x80 is now in the Titan and x80 Ti.

 

 


 

I think this is very likely. After all, why not follow the convergence strategy they bought ATI for, now that the market and technology seem ready? A full SoC like Zen+Radeon+HBM is something they can do better than anyone else, with HBM being their thing, Intel being okay-ish at graphics, and NV being godawful at CPUs before we even get into the lack of x86. And imagine what you could build with that. If I could go custom, that and a 280mm AIO cooler could probably fit into a case with room for a PSU over the board. Fitting it all into something like a Phanteks Enthoo Evolv ITX would make a heck of a small system.



 

oh god why does dell put this in PCs, somebody please kill it

 


 

I'm going to disagree with you about the 750 Ti being the smallest card worth buying - even a GT 640 will score well above an AMD IGP in Fire Strike, and double what an Intel IGP stutters out (ignoring the very latest Intel stuff, because 14nm is hax), while anything above an R7 250 will improve on an AMD IGP.

 

I'd love to see 4 Zen cores and a huge IGP (so it's the same die size as a current 28nm Construction APU) sitting on an M-ITX motherboard with a large passive cooler (so a 60-90W CPU TDP) - that'd be a pretty cute system, and very powerful.



 

 

Worth making, not worth buying, but fair point. I don't think there's really that much room for improvement in such cards, because they usually get bought for reasons other than gaming.

 

Yes, that sounds fun, but I wonder what you could do with two packages on the same interposer, going relatively big-core on both, for the smallest possible gaming computer.



 

I keep thinking about this - the surface area that server-sized CPUs have would allow some pretty impressive IGP performance.



 

Uuh, I think AMD was listening

https://www.techpowerup.com/213512/amd-announces-project-quantum.html

Weird-looking thing; I think the top section is a radiator. I wonder what CPU it has? I can't see it selling well, but I want it to sell well - if I were in the market for a new high-end rig, I'd like to buy it.



 

It's an Intel of some variety. A 4790K seems a likely bet.

 

 

AMD is making the claim that the Fury X will not only OC well, but beats the Titan X in performance at a 980 Ti price.

 

WTB if true.

 

If it does that, I'll take a GPU that will likely depreciate by $300 by the next gen.



 

The development mule has an LGA1150 processor, but the release system is bound to be AMD to the core - the AMD-branded RAM and SSDs finally make sense.



  • Similar Content

    • By Collimatrix
      What a Long, Strange Trip it's Been
       
      PC gaming has been a hell of a ride.  I say "has been" both in the sense that exciting and dynamic things have happened, and in the sense that the most exciting and dynamic times are behind us.  No other form of video gaming is as closely tied to the latest developments in personal computing hardware, and current trends do not suggest that anything dramatically new and exciting is immediately around the corner.  Indeed, fundamental aspects of semiconductor physics suggest that chip technology is nearing, or perhaps already on, a plateau where only slow, incremental improvement is possible.  This, in turn, will limit the amount of improvement possible for game developers.  Gaming certainly will not disappear, and PC gaming will also not disappear, although the PC gaming share of the market may contract in the future.  But I think it is reasonable to expect that, at least in the near term, future PC game titles will not be such dramatic technological improvements over older titles as was the case in the past.  In the long term, current technology and hardware design will eventually be replaced with something entirely different and disruptive, but as always it is difficult, maybe impossible, to predict what that replacement will be.
       
      The Good Old Days
       
      The start of the modern, hardware-driven PC gaming culture that we all know and love began with Id Software's early first person shooter titles, most importantly 1993's Doom.
       
      PC gaming was around before Doom, of course, but Doom's combination of cutting edge graphics technology and massive, massive appeal is what really got the ball rolling.

      Doom was phenomenally popular.  There were, at one point, more installs of Doom than there were installs of the Windows operating system.  I don't think there is any subsequent PC title that can claim that.  Furthermore, it was Doom, and its spiritual successor Quake that really defined PC gaming as a genre that pushed the boundaries of what was possible with hardware.
       
      Doom convincingly faked 3D graphics on computers that had approximately the same number-crunching might as a potato.  It also demanded radically more computing power than Wolfenstein 3D, but in those days computing hardware was advancing at such a rate that this wasn't really unreasonable.  This was followed by Quake, which was actually 3D, and demanded so much more of the hardware then available that it quickly became one of the first games to support hardware acceleration.
       
      Id Software disintegrated under the stress of the development of Quake, and while many of the original Id team have gone on to do noteworthy things in PC gaming technology, none of it has been earth-shaking the way their work at Id was.  And so, the next important development occurred not with Id's games, but with their successors.
       
      It had become clear, by that point, that there was a strong consumer demand for higher game framerates, but also for better-looking graphics.  In addition to ever-more sophisticated game engines and higher poly-count game models, the next big advance in PC gaming technology was the addition of shaders to the graphics.
       
      Shaders could be used to smooth out the low-poly models of the time, apply lighting effects, and generally make the games look less like spiky ass.  But the important caveat about shaders, from a hardware development perspective, was that shader code ran extremely well in parallel while the rest of the game code ran well in series.  The sort of chip that would quickly do the calculations for the main game, and the sort of chip that would quickly do the calculations for the graphics, were therefore very different.  Companies devoted exclusively to making graphics-crunching chips emerged (of these, only Nvidia is left standing), and the stage was set for the heyday of PC gaming hardware evolution from the mid 1990s to the early 2000s.  Initially, there were a great number of hardware acceleration options, and getting everything to work was a bit of an inconsistent mess that only enthusiasts really bothered with, but things rapidly settled down to where we are today.  The important rules of thumb which have hitherto applied are:

      -The IBM-compatible personal computer is the chosen mount of the Glorious PC Gaming Master Race™. 
      -The two most important pieces of hardware on a gaming PC are the CPU and the GPU, and every year the top of the line CPUs and GPUs will be a little faster than before.
      -Even though, as of the mid 2000s, both gaming consoles and Macs were made of predominantly IBM-compatible hardware, they are not suitable devices for the Glorious PC Gaming Master Race™.  This is because they have artificially-imposed software restrictions that keep them from easily being used the same way as a proper gaming PC.
      -Even though they did not suffer from the same compatibility issues as consoles or Macs, computers with integrated graphics processors are not suitable devices for the Glorious PC Gaming Master Race™.
      -Intel CPUs are the best, and Nvidia GPUs are the best.  AMD is a budget option in both categories.
       
      The Victorious March of Moore's Law
       
      Moore's Law, which is not an actual physical law, but rather an observation about the shrinkage of the physical size of transistors, has held so true for most of the 21st century that it seemed like it was an actual fundamental law of the universe.
       
      The most visible and obvious indication of the continuous improvement in computer hardware was that every year the clock speeds on CPUs got higher.
       

       
      Now, clock speed itself isn't actually particularly indicative of overall CPU performance, since that is a complex interplay of clock speed, instructions per cycle and pipe length.  But at the time, CPU architecture was staying more or less the same, so the increase in CPU clock speeds was a reasonable enough, and very marketing-friendly, indicator of how swimmingly things were going.  In 2000, Intel was confident that 10 GHz chips were about a decade away.
       
      This reliable increase in computing power corresponded with a reliable improvement in game graphics and design year on year.  You can usually look at a game from the 2000s and guess, to within a few years, when it came out because the graphical improvements were that consistent year after year.
       
      The improvement was also rapid.  Compare 2004's Far Cry to 2007's Crysis.
       

       

       
      And so, for a time, game designers and hardware designers marched hand in hand towards ever greater performance.
       
      The End of the Low-Hanging Fruit
       
      But you know how this works, right?  Everyone has seen VH1's Behind the Music.  This next part is where it all comes apart after the explosive success and drugs and groupies, leaving just the drugs.  This next part is where we are right now.
       
      If you look again at the chart of CPU clock speeds, you see that improvement flatlines at about 2005.  This is due to the end of Dennard Scaling.  Until about 2006, reductions in the size of transistors allowed chip engineers to increase clock speeds without worrying about thermal issues, but that isn't the case anymore.  Transistors have become so small that significant amounts of current leakage occur, meaning that clock speeds cannot improve without imposing unrealistic thermal loads on the chips.
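      As a toy illustration of why that matters, the dynamic-power relation usually quoted here is P ≈ C·V²·f.  The sketch below uses the textbook Dennard scaling factors with normalized numbers rather than real chip parameters:

      ```python
      # Dynamic power of a chip: P ~ C * V^2 * f (capacitance x voltage squared x frequency).
      # Numbers are normalized for illustration, not real silicon parameters.

      def dynamic_power(c, v, f):
          return c * v ** 2 * f

      base = dynamic_power(c=1.0, v=1.0, f=1.0)

      # Classic Dennard shrink: C, V and gate delay all scale by ~0.7, so f can rise ~1.4x.
      # Power per transistor roughly halves, so doubling transistor density keeps power flat.
      dennard = dynamic_power(c=0.7, v=0.7, f=1.4)

      # Post-2006: leakage keeps V from dropping, so a 1.4x clock bump alone costs ~1.4x power,
      # and any voltage increase needed to reach that clock hurts quadratically.
      stuck = dynamic_power(c=1.0, v=1.0, f=1.4)

      print(f"baseline {base:.2f}, Dennard shrink {dennard:.2f}, voltage stuck {stuck:.2f}")
      ```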
       
      Clock speed isn't everything.  The actual muscle of a CPU is a function of several things: the pipeline, the instructions per clock cycle, the clock speed, and, after 2005 with the introduction of the Athlon 64 X2, the core count.  And, even as clock speed remained the same, these other important metrics did continue to see improvement:



      The catch is that the raw performance of a CPU is, roughly speaking, a multiplicative product of all of these things working together.  If the chip designers can manage a 20% increase in IPC and a 20% increase in clock speed, and some enhancements to pipeline design that amount to a 5% improvement, then they're looking at a 51.2% overall improvement in chip performance.  Roughly.  But if they stop being able to improve one of these factors, then to achieve the same increases in performance, they need to cram in the improvements into just the remaining areas, which is a lot harder than making modest improvements across the board.
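      The multiplicative arithmetic above, spelled out as a quick sketch (the percentages are just the example figures from the paragraph):

      ```python
      # Overall CPU speedup as the product of independent improvements.
      improvements = {"IPC": 0.20, "clock": 0.20, "pipeline": 0.05}

      total = 1.0
      for gain in improvements.values():
          total *= 1 + gain
      print(f"overall improvement: {(total - 1) * 100:.1f}%")   # 1.2 * 1.2 * 1.05 -> 51.2%

      # Lose one lever (say clocks stay flat) and hitting the same 51.2% target
      # requires ~44% more IPC on top of the 5% pipeline gain.
      ipc_needed = total / 1.05 - 1
      print(f"IPC gain needed with flat clocks: {ipc_needed * 100:.0f}%")
      ```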
       
      Multi-core CPUs arrived on the market at around the same time that clock speed increases became impossible.  Adding more cores to the CPU did initially allow for some multiplicative improvements in chip performance, which did buy time for the trend of ever-increasing performance.  The theoretical FLOPS (floating point operations per second) of a chip is a function of its IPC, core count and clock speed.  However, the real-world performance increase provided by multi-core processing is highly dependent on the degree to which the task can be parallelized, and is subject to Amdahl's Law: the speedup on N cores is at most 1 / ((1 - p) + p/N), where p is the fraction of the work that can run in parallel.


      Most games can be only poorly parallelized.  The parallel portion is probably around the 50% mark for everything except graphics, which can be parallelized excellently.  This means that as soon as CPUs hit 16 cores, there was basically no additional improvement to be had in games from multi-core technology.  That is, unless game designers start to code games specifically for better multi-core performance, but so far this has not happened.  On top of this, adding more cores to a CPU usually imposes a small reduction to clock speed, so the actual point of diminishing returns may occur at a slightly lower core count.
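      For concreteness, here is Amdahl's Law evaluated with the roughly 50% parallel fraction suggested above (a sketch, not a benchmark):

      ```python
      # Amdahl's Law: maximum speedup on N cores with parallel fraction p.
      def amdahl_speedup(p, cores):
          return 1.0 / ((1 - p) + p / cores)

      p = 0.5
      for cores in (2, 4, 8, 16, 64):
          print(f"{cores:3d} cores -> {amdahl_speedup(p, cores):.2f}x")
      # 2 -> 1.33x, 4 -> 1.60x, 8 -> 1.78x, 16 -> 1.88x, 64 -> 1.97x:
      # the curve is nearly flat well before 16 cores and can never exceed 2x at p = 0.5.
      ```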
       
      On top of all that, designing new and smaller chip architecture has become harder and harder.  Intel first announced 10nm chip architecture back in September 2017, and showed a timeline with it straddling 2017 and 2018.  2018 has come and gone, and still no 10nm.  Currently Intel is hopeful that they can get 10nm chips to market by the end of 2019.
       
      AMD have had a somewhat easier time of it, announcing a radically different mixed 14nm and 7nm "chiplet" architecture at the end of 2018, and actually brought a 7nm discrete graphics card to market at the beginning of 2019.  However, this new graphics card merely matches NVIDIA's top-of-the-line cards, both in terms of performance and in terms of price.  This is a significant development, since AMD's graphics cards have usually been second-best, or cost-effective mid-range models at best, so for them to have a competitive top-of-the-line model is noteworthy.  But, while CPUs and GPUs are different, it certainly doesn't paint a picture of obvious and overwhelming superiority for the new 7nm process.  The release of AMD's "chiplet" Zen 2 CPUs appears to have been delayed to the middle of 2019, so I suppose we'll find out then.  Additionally, it appears that the next-generation of Playstation will use a version of AMD's upcoming "Navi" GPU, as well as a Zen CPU, and AMD hardware will power the next-generation XBOX as well. 
       
      So AMD is doing quite well servicing the console gaming peasant crowd, at least.  Time will tell whether the unexpected delays faced by their rivals along with the unexpected boost from crypto miners buying literally every fucking GPU known to man will allow them to dominate the hardware market going forward.  Investors seem optimistic, however:


       
      With Intel, they seem less sanguine:



      and with NVIDIA, well...
       

       
      But the bottom line is don't expect miracles.  While it would be enormously satisfying to see Intel and NVIDIA taken down a peg after years of anti-consumer bullshit, the reality is that hardware improvements have fundamentally become difficult.  For the time being, nobody is going to be throwing out their old computers just because they've gotten slow.  As the rate of improvements dwindles, people will start throwing out their old PCs and replacing them only because they've gotten broken.
       
      OK, but What About GPUs?
       
      GPU improvements took longer to slow down than CPU improvements, in large part because GPU workloads can be parallel processed well.  But the slowdown has arrived.
       
      This hasn't stopped the manufacturers of discrete GPUs from trying to innovate, of course.  Not only that; the market is about to become more competitive, with Intel announcing their plans for a discrete GPU in the near future.  NVIDIA has pushed their new ray-tracing optimized graphics cards for the past few months as well.  The cryptomining GPU boom has come and gone; GPUs turn out to be better than CPUs at cryptomining, but ASICs beat out GPUs by a lot, so that market is unlikely to be a factor again.  GPUs are still relatively cost-competitive for a variety of machine learning tasks, although long-term these will probably be displaced by custom designed chips like the ones Google is mass-ordering.
       
      Things really do not look rosy for GPU sales.  Every time someone discovers some clever alternative use for GPUs like cryptomining or machine learning, they get displaced after a few years by custom hardware solutions even more fine-tuned to the task.  Highly parallel chips are the future, but there's no reason to think that those highly parallel chips will be traditional GPUs, per se.

      And speaking of which, aren't CPUs getting more parallel, with their ever-increasing core count?  And doesn't AMD's "chiplet" architecture allow wildly differently optimized cores to be stitched together?  So, the CPU of a computer could very easily be made to accommodate capable on-board graphics muscle.  So... why do we even need GPUs in the future?  After all, PCs used to have discrete sound cards and networking cards, and the CPU does all of that now.  The GPU has really been the last hold-out, and will likely be swallowed by the CPU, at least on low and mid range machines in the next few years.
       
      Where to Next?
       
      At the end of 2018, popular YouTube tech channel LinusTechTips released a video about Shadow.  Shadow is a company that is planning to use centrally-located servers to provide cloud-based games streaming.  At the time, the video was received with (understandably) a lot of skepticism, and even Linus doesn't sound all that convinced by Shadow's claims.
       
       
      The technical problems with such a system seem daunting, especially with respect to latency.  This really did seem like an idea that would come and go.  This is not its time; the technology simply isn't good enough.

      And then, just ten days ago, Google announced that they had exactly the same idea:
       
       
      The fact that tech colossus Google is interested changed a lot of people's minds about the idea of cloud gaming.  Is this the way forward?  I am unconvinced.  The latency problems do seem legitimately difficult to overcome, even for Google.  Also, almost everything that Google tries to do that isn't search on Android fails miserably.  Remember Google Glass?  Google Plus?
       
      But I do think that games that are partially cloud-based will have some market share.  Actually, they already do.  I spent a hell of a lot of time playing World of Tanks, and that game calculates all line-of-sight checks and all gunfire server-side.  Most online games do have some things that are calculated server-side, but WoT was an extreme example for the time.  I could easily see future games offloading a greater amount of the computational load to centralized servers vis a vis the player's own PC.
       
      But there are two far greater harbingers of doom for PC gaming than cloud computing.  The first is smart phones and the second is shitty American corporate culture.  Smart phones are set to saturate the world in a way desktop PCs never did.  American games publishers are currently more interested in the profits from gambling-esque game monetization schemes than they are in making games.  Obviously, I don't mean that in a generic anti-capitalist, corporation-bashing hippie way.  I hate hippies.  I fuck hippies for breakfast.  But if you look at even mainstream news outlets on Electronic Arts, it's pretty obvious that the AAA games industry, which had hitherto been part of the engine driving the games/hardware train forward, is badly sick right now.  The only thing that may stop their current sleaziness is government intervention.
       
      So, that brings us to the least important, but most discussion-sparking part of the article; my predictions.  In the next few years, I predict that the most popular game titles will be things like Fortnite or Apex Legends.  They will be monetized on some sort of games-as-service model, and will lean heavily if not entirely on multiplayer modes.  They may incorporate some use of server-side calculation to offload the player PC, but in general they will work on modest PCs because they will only aspire to have decent, readable graphics rather than really pretty ones.  The typical "gaming rig" for this type of game will be a modest and inexpensive desktop or laptop running built-in graphics with no discrete graphics card.  There will continue to be an enthusiast market for games that push the limits, but this market will no longer drive the majority of gaming hardware sales.  If these predictions sound suspiciously similar to those espoused by the Coreteks tech channel, that's because I watched a hell of a lot of his stuff when researching this post, and I find his views generally convincing.
       
      Intel's Foveros 3D chip architecture could bring a surge in CPU performance, but I predict that it will be a one-time surge, followed by the return to relatively slow improvement.  The reason why is that the Foveros architecture allows for truly massive CPU caches, and these could be used to create enormous IPC gains.  But after the initial boon caused by the change in architecture, the same problems that are currently slowing down improvement would be back, the same as before.  It definitely wouldn't be a return to the good old days of Moore's Law.  Even further down the road, a switch to a different semiconducting material such as Gallium Nitride (which is already used in some wireless devices and military electronics) could allow further miniaturization and speed ups where silicon has stalled out.  But those sort of predictions stretch my limited prescience and knowledge of semiconductor physics too far.
       
      If you are interested in this stuff, I recommend diving into Coretek's channel (linked above) as well as Adored TV.
    • By SergeantMatt
      tl;dr, does not appear to be great for gaming.
      http://www.gamersnexus.net/hwreviews/2822-amd-ryzen-r7-1800x-review-premiere-blender-fps-benchmarks
      With GPU bottlenecking removed by running at low resolutions, the 1800X gets beaten in most games by the 7700K, which is half the price.

