State409c Suspended 19558 Posts
http://arstechnica.com/news.ars/post/20061025-8070.html
Hmmm... I don't know what to think about this. Will it become more economical to buy a new combined CPU/GPU more often, rather than just buying a new vid card when you need to upgrade the video side of things? 10/25/2006 1:02:31 PM
xvang All American 3468 Posts
I don't get it. So they are basically saying you won't need a video card anymore because dedicated GPUs will be processing the graphics? I don't see any performance benefits. Maybe cost benefits, but that's about it. 10/25/2006 1:36:09 PM
agentlion All American 13936 Posts
Quote : | "I don't see any performance benefits. Maybe cost benefits, but that's about it." |
ummm.... so, what's the problem? If someone were to tell you that you can get the same performance for a lower price, is that not a good thing? 10/25/2006 1:39:55 PM
State409c Suspended 19558 Posts
You don't see the performance benefit of being able to talk in silicon, rather than over a bus? 10/25/2006 1:41:29 PM
gs7 All American 2354 Posts
Quote : | "ummm.... so, what's the problem. If someone were to tell you that you can get the same performance for a lower price, is that not a good thing?" |
10/25/2006 2:25:49 PM
synapse play so hard 60939 Posts
this is kinda old news, isn't it?
isn't this a major reason why AMD bought ATI in the first place? 10/25/2006 2:49:04 PM
agentlion All American 13936 Posts
well, i'm going to revisit and expand on that statement.
Quote : | "So they are basically saying you won't need a video card anymore because dedicated GPU's will be processing the graphics?" |
Video cards right now already use a dedicated GPU. That's how the cards work - they have a GPU and some memory, then the PCI/AGP/PCI-E bus back to the northbridge, then to the CPU and main memory. The bottleneck in that system is the bus connecting the RAM, CPU, and GPU. Buses keep getting faster, like PCI-E, but still - you're talking about wire buses on the order of tens of centimeters long on a PCB with a fixed-width datapath (e.g. 64 bits).
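To put rough numbers on that bus bottleneck - these are the standard published peak rates, and the little Python sketch below is just back-of-envelope arithmetic, nothing measured:

```python
# Back-of-envelope peak bandwidth for the off-chip buses mentioned above.
# Published peak rates, not measured throughput.

def pcie_gb_s(lanes, gt_per_s=2.5, encoding=8 / 10):
    """PCIe 1.x: 2.5 GT/s per lane, 8b/10b encoded -> 0.25 GB/s per lane, per direction."""
    return lanes * gt_per_s * encoding / 8  # signaling bits -> data bytes

agp_8x = 66e6 * 32 / 8 * 8 / 1e9  # 32-bit bus, 66 MHz base clock, 8x strobing -> ~2.1 GB/s
pcie_x16 = pcie_gb_s(16)          # -> ~4 GB/s per direction

print(f"AGP 8x   : {agp_8x:.1f} GB/s")
print(f"PCIe x16 : {pcie_x16:.1f} GB/s per direction")
```

A few GB/s shared between the CPU and GPU, versus the tens of GB/s a current GPU has to its own local memory - that's the gap the bus creates.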
With Fusion, we're basically coming full circle. In early computers, all graphics processing was done by the CPU, sharing time with everything else the CPU did. As graphics requirements increased, GPUs were introduced as a separation of powers of sorts - graphics got its own dedicated processor so as not to drag down the CPU, so it went into a separate chip. Whenever you have two separate chips (i.e. separate silicon), you have to run an external bus between them, which has been the bottleneck for graphics performance ever since, and it basically requires you to buy two separate, powerful processors for your computer.
Theoretically, the GPU was probably never strictly necessary - it was always possible to keep graphics processing on the CPU as added functionality, or as a separate core on the same die. But to keep die size down (read: power and heat requirements) and costs down (read: better silicon yield for the GPU and CPU separately than for one larger, combined chip - there's a rough sketch of the yield math below), it made more sense at the time to separate them.
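For the yield point, the usual first-order model is Y = e^(-D*A): yield falls off exponentially with die area A at defect density D. A toy comparison, with the defect density and die sizes being made-up round numbers rather than anything from AMD:

```python
import math

# First-order Poisson yield model: Y = exp(-D * A).
# D = defect density (defects/cm^2), A = die area (cm^2).
# All numbers below are illustrative assumptions, not real process data.

D = 0.5                      # assumed defects per cm^2
A_cpu, A_gpu = 1.0, 1.0      # assumed die areas
A_fused = A_cpu + A_gpu

y_cpu = math.exp(-D * A_cpu)
y_gpu = math.exp(-D * A_gpu)
y_fused = math.exp(-D * A_fused)

# Good CPU+GPU pairs per cm^2 of wafer, splitting the wafer evenly
# between CPU and GPU dies in the separate-chip case:
pairs_separate = min(y_cpu / A_cpu, y_gpu / A_gpu) / 2
pairs_fused = y_fused / A_fused

print(f"separate dies: {pairs_separate:.3f} good CPU+GPU pairs per cm^2")
print(f"fused die    : {pairs_fused:.3f} good CPU+GPU pairs per cm^2")
```

Under those assumptions the separate dies give about 65% more usable CPU+GPU pairs per wafer, because a defect only kills the one small die it lands on.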
Moore's law now lets us take a single die roughly 1 cm square and fit extremely powerful CPU and GPU cores on it, with a smaller physical footprint and lower power requirements than either a single CPU or GPU die of a couple years ago. And with the CPU and GPU cores on the same die, the bus bottleneck problem is taken care of. There is now vastly more bandwidth between the CPU and GPU: the datapath can be as wide as will fit on the die, the bus can run at the speed of the processor instead of orders of magnitude slower, and the distance between the cores is measured in micrometers and millimeters instead of centimeters.
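The width-times-clock arithmetic makes that concrete. The on-die figures here are hypothetical round numbers, not actual Fusion specs:

```python
# Peak bandwidth of a parallel bus: width (bits) x clock (Hz).
# The on-die width and clock below are assumed round numbers, not Fusion specs.

def bandwidth_gb_s(width_bits, clock_hz):
    return width_bits / 8 * clock_hz / 1e9

pcie_x16_gen1 = 4.0                # GB/s per direction, published peak
on_die = bandwidth_gb_s(512, 2e9)  # assume a 512-bit path at 2 GHz

print(f"PCIe x16 gen1: {pcie_x16_gen1:6.1f} GB/s")
print(f"on-die path  : {on_die:6.1f} GB/s (~{on_die / pcie_x16_gen1:.0f}x)")
```

Even with those modest assumptions the on-die path comes out around 32x the external bus, before you even count the latency saved by never leaving the package.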
Ironically, for much of the history of the computer, the trend has been to pull functionality off the CPU into other dedicated chips, to free the CPU to grow and become more powerful. Now, as we begin to reach practical limits of consumer CPU functionality, I think we'll start to see more functions pulled back onto the CPU to increase performance - functions that just a few years ago were pulled off the CPU, also in the name of performance. 10/25/2006 3:03:03 PM
Noen All American 31346 Posts
^I agree for the most part with what you said, but this isn't coming "full circle".
It's more that this is the beginning of another consolidation period. With tech you get a cycle: new technology -> physical expansion as it adapts to existing form factors -> new form factors -> size reduction and consolidation, and then it loops.
This is much in the same vein as bringing the memory controller on-die, putting multiple cores on a single die, and the "all-in-one" chip design trend in general. In another year or two, as these really start hitting the mainstream, new tech will arrive and kick off the expansion phase again.
The "old" way of CPU video processing really was completely different from what Fusion should be (speculating of course). I think the idea is moving from generic multi-core CPU's to specialized multi-core CPU's. So each core isn't sharing the entire workload, but instead each core is highly specialized to do one sort of common task. Which, if well designed, would make for MUCH larger performance gains in a variety of tasks. 10/25/2006 3:17:33 PM |
quagmire02 All American 44225 Posts
i like this thread 10/25/2006 6:05:29 PM