The AMD Trinity
#1
Boost Pope
Thread Starter
iTrader: (8)
Join Date: Sep 2005
Location: Chicago. (The less-murder part.)
Posts: 33,019
Total Cats: 6,587
So I was reading an article recently touting the release of AMD's new Trinity CPU lineup. Cliffs: these chips are what they call APUs: the combination of a CPU and a low-end 3d graphics chipset on a single die.
The new A10 represents AMD's flagship APU. It's positioned against Intel's Core i3 3220.
Really? The flagship of the fleet is poised to compete against the third-cheapest processor in the entire Intel Core i lineup? I mean, yeah. A quad-core processor running at 3.8 / 4.2 GHz is obviously going to out-perform a dual-core processor running at 3.3 GHz. And the gamer-oriented GPU unit will no doubt appeal to that segment of the market which wants gaming performance nowhere near as good as a moderately-priced standalone video card, and is willing to pay nothing at all to get it.
Talk about picking your battles wisely.
#3
Elite Member
iTrader: (1)
Join Date: May 2009
Location: Jacksonville, FL
Posts: 5,155
Total Cats: 406
Apparently they're awesome sauce in netbooks and such.
I was disappointed with Bulldozer. It takes 8 cores to keep up with an Ivy Bridge quad core.
My now-old Phenom II X4 is still the fastest quad core from AMD...
#5
As with the Intel Core processors, the AMD processors come in different versions for laptop vs. desktop use. Because the laptop processors are designed to operate at much lower power and generate much less heat, they inherently contain up to 60% less awesome than their otherwise identically-named desktop counterparts.
If you compare a Clarkdale (desktop) i3 to an Arrandale (mobile) i3, you find a lot of similarities. They are both based on the Westmere / Nehalem architecture, they both contain 382 million transistors on an 81 mm² die, they both contain the Ironlake-architecture GPU core, etc. On paper, they appear to be the exact same processor.
Except that the Clarkdale i3-530 consumes 73 watts, while the Arrandale i3-380UM consumes 18 watts. This is mostly accomplished by cranking the clock waaaaaaay down, and the performance reflects this.
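For a sense of how cranking the clock down cuts power that hard: dynamic CPU power is commonly approximated as P ≈ C·V²·f, so voltage and frequency drops compound. A minimal sketch, using the real parts' base clocks (2.93 GHz vs. 1.33 GHz) but purely illustrative voltage and capacitance figures, which are assumptions, not Intel specs:

```python
# Dynamic power approximation: P ~ C * V^2 * f.
# Clocks match the i3-530 (2.93 GHz) and i3-380UM (1.33 GHz);
# voltage and capacitance values are illustrative guesses, NOT specs.
def dynamic_power(cap, volts, freq_ghz):
    """Relative dynamic power: capacitance * voltage^2 * frequency."""
    return cap * volts ** 2 * freq_ghz

desktop = dynamic_power(cap=1.0, volts=1.2, freq_ghz=2.93)  # Clarkdale-class
mobile = dynamic_power(cap=1.0, volts=0.9, freq_ghz=1.33)   # Arrandale-class

print(round(desktop / mobile, 1))  # ~3.9x the dynamic power
```

Even with these rough numbers, the mobile part lands at a fraction of the desktop part's dynamic power, which is the whole point of the lower clock and voltage bins.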
Well, I agree and I disagree.
I grant you that nobody building a high-end machine is going to need or want an onboard GPU. No argument there.
On the other hand, of those people who are NOT serious gamers, how many of them are going to CARE whether the CPU has a good on-board GPU?
In other words, I'm not sure who the target market for this chip is.
- Serious gamers are going to buy a high-end CPU and pair it with a high-end graphics card.
- Serious computer users who are not gamers are going to buy a high-end CPU and use the crappy-*** onboard GPU that comes with it.
- Everyone else is going to buy whichever CPU is cheapest (Intel Atom / AMD-E series / VIA nano), and not even think about the video chipset.
I just can't imagine that there are a huge number of people who are going to say "Well, I'd like something that has a reasonably powerful CPU, and I also want it to come with a GPU that's a bit better than nothing at all, but not quite good enough to do any serious gaming on."
#6
I just can't imagine that there are a huge number of people who are going to say "Well, I'd like something that has a reasonably powerful CPU, and I also want it to come with a GPU that's a bit better than nothing at all, but not quite good enough to do any serious gaming on."
That is true. The funny thing is, most of those people say they want a "reasonably powerful CPU" because they know the more gigahurtz the better, right? When in fact an i3 (or this chip) would be perfectly fine for playing farmville.
The point is, I don't see this as a chip they're targeting at the home-built market; this is to go in the laptops at Best Buy for epic mafia wars pwnage.
#8
AMD is probably targeting that low to mid-low segment. They can't compete with the Core series, and frankly, the Athlon XP was probably the last series that could compete in all market segments. This chip will probably be pretty successful with the run-of-the-mill browse teh interwebs, check mah emails, what's a gigahurtz? crowd and college kids looking for a cheap notebook. There are still a lot of people and organizations that don't give a **** about PC performance. If you throw a low-power or EnergyStar sticker on there, you're already looking pretty good to many people. All of the computers at my shop (USAF) were AMD-based until recently. We don't need anything beyond the ability to browse poorly designed websites, read PDFs, and run Outlook.
I would imagine that having an onboard GPU, memory controller, etc., as seems to be the usual case with mobile-oriented platforms, does wonders in saving fabrication costs for OEMs, beyond the obvious power savings / battery life improvements.
#10
But again, it raises the question as to why they would bother putting a (relatively) high-performance GPU onto the die. Doing this inherently raises the cost, lowers the yield, and raises the power dissipation of the chip. And for what? I honestly just can't see buyers in this market segment knowing or caring whether the CPU has a better on-board GPU than the equivalently-targeted CPU from Intel.
I would imagine that having an onboard GPU, memory controller, etc., as seems to be the usual case with mobile-oriented platforms, does wonders in saving fabrication costs for OEMs, beyond the obvious power savings / battery life improvements.
The big difference is that the Intel chips feature very basic GPUs that satisfy the requirements of normal 2d apps without making any serious concession to 3d gamers. They don't waste silicon and watts on stuffing the CPU with shaders-o-plenty that will never get used in most applications.
And that, again, circles back around to why this doesn't make sense for AMD. Putting a higher-performance GPU onto the main die increases the cost of the CPU and decreases its thermal efficiency, without providing any obvious benefit that I can see insofar as attracting market share.
#11
See above.
The people who go to Best Buy to buy a desktop PC simply aren't going to care how many more polygons-per-second the Trinity's onboard GPU can shade as compared to the integrated HD Graphics GPU of a comparable Intel Core. They want to know three things:
How many gigahertz does this have?
How much does it cost?
Can I use it to (play farmville / send emails / look at cat pictures / etc)?
#13
As a matter of historical precedent, software has always grown in size and complexity to fill the capacity of the hardware, even when such growth serves no useful purpose other than cosmetic appeal. (See Windows Vista / 7 "Aero" modes, etc.)
Once a point is reached at which the presence of a moderately heavy GPU can be assumed in even the lowest-end machines, then even the simplest flash-style applications will require a heavy GPU. That'll be another -1 for laptop battery life.
#14
Is this maybe a future-proofing type move that provides video hardware for some benefit other than gaming? Maybe hardware decoding of 1080p formats or something along those lines? Home theater is one of the applications mentioned on the press page.
I'd like to think there is a reason a company would invest that kind of money into a platform, but then again, this being AMD/ATI, the merged provider of the mediocre, I can't be sure.
#15
From what I can gather, the GPU section of the Trinity processors is essentially a scaled-down port of the Radeon HD6900 series architecture. The vast majority of the A10's GPU footprint consists of 384 shader processors. This is essentially just a large array of tiny little subprocessors optimized for SIMD (Single Instruction, Multiple Data) execution, which is a fancy way of saying "Do the exact same operation to a million pieces of sequential data all in a row."
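As a toy illustration of that SIMD idea, here's the same "one operation, many data" pattern with NumPy's vectorized arithmetic standing in for the shader array (the halving operation is just an arbitrary example, not anything Trinity-specific):

```python
import numpy as np

# SIMD in miniature: apply one operation to a whole array of data at once,
# instead of looping over it element by element.
pixels = np.arange(8, dtype=np.float32)  # pretend these are pixel values

# Scalar mindset: one pixel at a time.
darkened_loop = np.array([p * 0.5 for p in pixels], dtype=np.float32)

# SIMD mindset: "multiply EVERYTHING by 0.5" as a single operation.
darkened_simd = pixels * 0.5

print(np.array_equal(darkened_loop, darkened_simd))  # True
```

Same result either way; the difference is that the second form is a single instruction stream marching over all the data, which is exactly the workload shape those 384 shader processors are built for.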
They are great for tasks which lend themselves to massively parallel execution. The most commonly-known of these, of course, is doing pixel-shading and texture processing in 3d games. A 1920 x 1080 display, for instance, consists of 2,073,600 pixels, and for each frame you can process as many of them in parallel as you have the computational resources to handle. A GPU with 1,024 shader cores can process that whole screen in 2,025 blocks, with each block consisting of 1,024 pixels all getting rendered and dumped out into memory at the same time, and then having another 1,024 pixels loaded up right behind them. Or put another way, it can finish rendering the scene 1,024 times faster than a single processor handling one pixel at a time, and thus provide a framerate 1,024 times as high.
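A quick sanity check on the block math (the 1,024-core GPU here is the hypothetical from the paragraph, not the A10's actual 384 shaders):

```python
# A 1080p frame processed by a hypothetical 1,024-core shader array,
# one pixel per core per step.
width, height = 1920, 1080
pixels = width * height          # total pixels per frame
shader_cores = 1024              # hypothetical core count from the text

blocks = pixels // shader_cores  # sequential batches needed per frame
speedup = pixels // blocks       # vs. a single core doing one pixel at a time

print(pixels, blocks, speedup)   # 2073600 2025 1024
```

The speedup over a one-pixel-at-a-time processor equals the number of cores working in parallel, since each of the 2,025 batches retires 1,024 pixels in the time the scalar machine retires one.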
There are other tasks which lend themselves well to this sort of computational architecture, but they're mostly the kind of applications that you'd typically throw a supercomputer at. Things like brute force cryptography or modelling the folding of proteins in a cell. (In fact, many so-called supercomputers these days are in fact arrays built out of huge numbers of gamer-grade video cards loaded into commodity PCs with consumer-grade CPUs.)
By comparison, decoding and playing back a compressed video stream is a task not well-suited to this method of execution, and most computers available today already have sufficient resources in the main CPU to do it quite easily. Heck, a lot of high-end cellphones these days can play high-quality video, and their processors are absolute weaksauce by comparison to even an entry-level Atom.
#17
I'm certainly no GPU guru, but I do tend to take things mentioned on press pages with a grain of salt. Most corporate marketing departments are simply tasked with throwing as many buzzwords as possible at the product which can even remotely be construed as having some relevance to it. In theory, the Miata could be described as being suitable for use as a military transport vehicle, although it would not be particularly good at this task compared to pretty much every imaginable alternative, up to and including the Dacia Sandero.
#18
Actually, I think you may have a point here.
As a matter of historical precedent, software has always grown in size and complexity to fill the capacity of the hardware, even when such growth serves no useful purpose other than cosmetic appeal. (See Windows Vista / 7 "Aero" modes, etc.)
Once a point is reached at which the presence of a moderately heavy GPU can be assumed in even the lowest-end machines, then even the simplest flash-style applications will require a heavy GPU. That'll be another -1 for laptop battery life.