This highlights really well the importance of competition. Lack of competition results in complacency and stagnation.
It’s also why I’m incredibly worried about AMD giving up on enthusiast graphics. I have very little hope in Intel Arc.
I expect them to merge the enthusiast segment into the pro segment: it doesn’t make sense for them to make large RDNA cards because there are too few customers, just as it doesn’t make sense to make small CDNA cards. But in the future there’s only going to be UDNA, and the high end of gaming and the low end of professional will overlap.
I very much doubt they’re going to do compute-only cards, as then you’re losing sales to people wanting a (maybe overly beefy) CAD or Blender or whatever workstation, just to save on some DisplayPort connectors. Segmenting the market only makes sense when you’re a (quasi-)monopolist and want to abuse that situation, that is, if you’re Nvidia.
True. In simple terms, AMD is moving towards versatile solutions that satisfy both corporate and ordinary customers with the same products: their APU and XDNA architectures are examples. APUs are used in the PlayStation and Xbox, XDNA and EPYC are used in datacenters, and AMD is unifying its B2B and B2C lineups to simplify manufacturing.
I wonder, what is easier: Convincing data centre operators to not worry about the power draw and airflow impact of those LEDs on the fans, or convincing gamers that LEDs don’t make things faster?
Maybe a bold strategy is in order: Buy cooling assemblies exclusively from Noctua, and exclusively in beige/brown.
AMD making cases and fans? That’s the first I’ve heard of it. Even the boxed versions with fans could go the Apple way: they could start shipping only the SoCs they sell.
There are no non-reference designs of Radeon PROs, I think, and even fewer of Instincts. If the ranges bleed into each other they might actually sell reference designs down into the gamer mid-range, but I admit that I’m hand-waving. But if, as a very enthusiastic enthusiast, you’re buying something above the intended high-end gaming point and well into the pro region, it’s probably going to be a reference design.
And as a side note, they’re finally selling boxed CPUs without a fan.
True, you’ve got a point, but I don’t think there are going to be many reference designs, simply because AMD is cutting expenses as much as possible: selling its fabs in the past, simplifying its product lineup now. I’d guess they’ll outsource as much as possible to non-reference manufacturers as part of those cost-cutting measures, which also include having outsourced manufacturing to TSMC and open-sourcing most of their software stack so the community would step in to maintain it.
Data centers don’t give a shit if your GPU has LEDs. Compared to the rest of the server, the power draw is nearly insignificant. And servers push enough air that imperfect flow doesn’t matter.
They honestly seem to be done with high-end “enthusiast” GPUs. There is probably more money/potential in iGPUs and low/mid-range products optimized for laptops.
Their last few generations of flagship GPUs have been pretty underwhelming but at least they existed. I’d been hoping for a while that they’d actually come up with something to give Nvidia’s xx80 Ti/xx90 a run for their money. I wasn’t really interested in switching teams just to be capped at the equivalent performance of a xx70 for $100-200 more.
The 6900XT/6950XT were great.
They briefly beat Nvidia until Nvidia came out with the 3090 Ti. Even then, it was so close you couldn’t tell them apart with the naked eye.
Both the 6000 and 7000 series have had cards that compete with the 80-class cards, too.
The reality is that people just buy Nvidia no matter what. Even the disastrous GTX 480 outsold ATI/AMD’s cards in most markets.
The $500 R9 290X was faster than the $1000 Titan, with the R9 290 being just 5% slower at $400, and yet AMD lost a huge amount of money on it.
AMD has literally made cards faster than Nvidia’s for half the price and lost money on them.
It’s simply not viable for AMD to spend a fortune creating a top-tier GPU only to have it not sell well because Nvidia’s mindshare is arguably even better than Apple’s.
Nvidia’s market share is over 80%. And it’s not because their cards are the rational choice at the price points most consumers are buying at. It really cannot be stressed enough how much of a marketing win Nvidia is.
Yup, it’s the classic name-brand tax. That, and Nvidia also wins on features, like RTX and AI/compute.
But most people don’t actually use those features, so they seem to be buying Nvidia due to brand recognition. AMD has dethroned Intel on performance and price, yet somehow Intel remains dominant on consumer PCs, though the lead is a lot smaller than before.
If AMD wants to overtake Nvidia, they’ll need consistently faster GPUs and lower prices with no compromises on features. They’d have to invest a ton to get there, and even then Nvidia would probably sell better than AMD on name recognition alone. Screw that! It makes far more sense for them to stay competitive, suck up a bunch of the mid-range market, and transition the low-end market to APUs. Intel can play in the low-to-mid-range market, and AMD can slot itself in as a bit better than Intel and a better value than Nvidia.
That said, I think AMD needs to go harder on the datacenter for compute, because that’s where the real money is, and it’s all going to Nvidia. If they can leverage their processors to provide a better overall solution for datacenter compute, they could translate that into prosumer compute devices. High end gaming is cool, but it’s not nearly as lucrative as datacenter. I would hesitate to make AI-specific chips, but instead make high quality general compute chips so they can take advantage of whatever comes after the current wave of AI.
I think AMD should also get back into ARM and low-power devices. The Snapdragon laptops have made a big splash, that market could explode once the software is refined, and AMD should be poised to dominate it. They already have ARM products; they just need to make low-power, high-performance products for the laptop market.
They don’t need to go with ARM. There’s nothing inherently wrong with the x86 instruction set that prevents them from making low power processors, it’s just that it doesn’t make sense for them to build an architecture for that market since the margins for servers are much higher. Even then, the Z1 Extreme got pretty close to Apple’s M2 processors.
Lunar Lake has also shown that x86 can match or beat Qualcomm’s ARM chips while maintaining full compatibility with all x86 applications.
it’s just that it doesn’t make sense for them to build an architecture for that market since the margins for servers are much higher
Hence ARM. ARM already has designs for low-power, high-performance chips for smaller devices like laptops. Intel is chasing that market, and AMD could easily get a foot in the door by slapping their label on some designs, perhaps with a few tweaks (might be cool to integrate their graphics cores?). They already have ARM cores for datacenter workloads, so it probably wouldn’t be too crazy to try it out on business laptops.
The 6000 series from AMD was so great because they picked the correct process node. Nvidia went with the far inferior Samsung 8 nm node over TSMC’s 7 nm. Yet Nvidia still kept up with AMD in most areas (ignoring ray tracing).
Even the disastrous GTX 480 outsold ATI/AMD’s cards in most markets.
The “disastrous” Fermi cards were also compute monsters. Even after the 600 series came out, people were buying the 500 series over them because they performed so much better for the money. Instead of picking up a Kepler Quadro card to get double precision, you could get a regular-ass GTX 580 and do the same thing.
Were the 6000 series not competitive? I got a 6950 XT for less than half the price of the equivalent 3090. It’s an amazing card.
Yes, they were, and that highlights the problem really. Nvidia’s grip on mindshare is so strong that AMD releasing cards that matched or exceeded Nvidia at the top end didn’t actually matter, and you still have people saying things like the comment you responded to.
It’s actually incredible how quickly the discourse shifted from ray tracing being a performance hogging gimmick and DLSS being a crutch to them suddenly being important as soon as AMD had cards that could beat Nvidia’s raster performance.
The 6000 series is faster in raster but slower in ray tracing.
Reviews have been primarily pushing cards based on RT since it became available. Nvidia has a much larger marketing budget than AMD, and ever since they have been able to leverage having the fastest ray tracing, AMD’s share has been nosediving.
I mean I guess? But the question here was about value and no way is RT worth double the price.
It is if that’s the main thing you care about.
Wouldn’t be the first time they did this, though. I wouldn’t be surprised if they jump back into the high end once they’re ready.
I don’t see this happening with both consoles using AMD. Honestly, I could see Nvidia going less hard on graphics and pushing more towards AI and other related stuff, and with the leaked prices for the 5000 series they are going to price themselves out of the market.
Crypto and AI hype destroyed the prices for gamers.
I doubt we are ever going back, though.
I am on a 5-10 year upgrade cycle now anyway. Sure, the new shit is faster, but shit from 2 generations ago still does everything I need. New features like ray tracing are hardly even worth it. Like, sure, it is cool, but what is the actual value proposition?
If you bought hardware for ray tracing, it’s kinda meh.
With that being said, local LLMs are a fun use case.
Lack of competition results in complacency and stagnation.
This is absolutely true, but it wasn’t the case with 64-bit x86. That was a very bad miscalculation, where Intel wanted a bigger share of the more profitable server market.
Intel was so busy with profit maximization that they wanted to sell Itanium for servers and keep x86 for personal computers.
The result, of course, was that 32-bit x86 couldn’t compete once AMD extended it to 64 bits, and Itanium failed, despite HP-Compaq killing the world’s fastest CPU at the time, the DEC Alpha, because they wanted to jump on Itanium instead. Frankly, Itanium was an awful CPU built around an idea they couldn’t get to work properly.
This was not complacency, and it was not stagnation, in the sense that Intel actually made real new products and tried to be innovative; the problem was that the product sucked and was too expensive for what it offered.
Why the Alpha was never brought back, I don’t understand. As mentioned, it was AFAIK the world’s fastest CPU when it was discontinued.
they wanted to sell Itanium for servers and keep x86 for personal computers
That’s still complacency. They assumed consumers would never want to run workloads that could use more than 4 GiB of address space.
Sure, they’d already implemented Physical Address Extension, but that just let the OS address more physical memory through wider page-table entries and an extra paging level. It didn’t increase the virtual address space available to applications.
The application didn’t necessarily need to use 4 GiB of RAM to hit those limits, either. Dylibs, memory-mapped files, thread stacks, and various paging tricks all eat up the available address space without needing to be resident in RAM.
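To put a number on it, here is a rough sketch of the effect (my own illustration, assuming a 32-bit Linux build, e.g. gcc -m32 vaspace.c): it only reserves address space and never touches physical memory, yet it still runs out after roughly 2-3 GiB of reservations, depending on the kernel’s user/kernel split.

    /* Rough sketch: exhaust the virtual address space of a 32-bit process
       without consuming any RAM or swap. */
    #define _DEFAULT_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void) {
        const size_t chunk = 64u * 1024 * 1024;   /* 64 MiB per reservation */
        size_t reserved_mib = 0;

        for (;;) {
            /* PROT_NONE + MAP_NORESERVE: claim address space only; the pages
               are never readable, writable, or backed by RAM or swap. */
            void *p = mmap(NULL, chunk, PROT_NONE,
                           MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
            if (p == MAP_FAILED)
                break;
            reserved_mib += 64;
        }
        printf("address space exhausted after %zu MiB of reservations\n",
               reserved_mib);
        return 0;
    }

A real application hits the same wall implicitly, just spread across shared libraries, memory-mapped files, and thread stacks.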
Even successful companies take care not to put all their eggs in one basket in anything they do. Having alternatives is a lifesaver. We should make sure we have alternatives too.