Processing Units of the Future

I’m sure I’m not the first to think about this, but with multi-core/cell processors, will graphics cards eventually become obsolete? After all, why not put one normal core and one graphics core in? And then, for the people who need specialist stuff, one raytracing core, one physics core, one whatever else.

And with people finding that some non-graphics work runs faster on graphics cards, perhaps this will lead to code not being written for a specific CPU/GPU/whatever, but instead being dispatched, somewhere along the line, to wherever it'll run fastest.
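That "run it wherever it's fastest" idea can be sketched in a few lines. This is a toy illustration, not a real scheduler: the two "backends" below are just two Python implementations standing in for, say, a CPU path and a GPU path, and the probe-then-dispatch logic is an assumption about how such a system might pick a winner.

```python
import time

# Hypothetical "backends": in a real system these might be a CPU path
# and a GPU path; here they are just two Python implementations.
def sum_squares_loop(data):
    total = 0
    for x in data:
        total += x * x
    return total

def sum_squares_builtin(data):
    return sum(x * x for x in data)

BACKENDS = [sum_squares_loop, sum_squares_builtin]

def fastest_backend(sample):
    """Time each backend on a small sample and return the quickest."""
    timings = []
    for backend in BACKENDS:
        start = time.perf_counter()
        backend(sample)
        timings.append((time.perf_counter() - start, backend))
    return min(timings, key=lambda t: t[0])[1]

data = list(range(100_000))
backend = fastest_backend(data[:1000])   # probe with a small slice
result = backend(data)                   # full job goes to the winner
```

Either backend gives the same answer; only the probe decides which one does the work.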

On the other hand, this would make it impossible for people to upgrade just their graphics card… bah, what do I know? I’m just a web geek.

One Response to “Processing Units of the Future”

  1. JohnO says:

    Well, I’m no hardware geek, but I’ve read a number of the articles on the Cell processor, so I can at least engage in a discussion.

    As I understand it, the SPEs (the “graphical” units) on the Cell processor sit on the same die (chip/silicon) as the PPE (the normal processor), so you upgrade them as a unit. An SPE isn’t so much a “GPU” as a core optimized for certain types of (streaming) math. The SPEs in Sony’s new PlayStation can handle audio as well as graphics, because the math is similar. Today’s GPUs are optimized even further (shaders and such), so they have a more limited task set (though, as your link shows, people are finding more and more things to do with them).

    One thing that slows down processing is bandwidth to the caches/memory. So if you were to put specialized processors on their own boards, they wouldn’t help much, because *all* the data would have to shuttle between them and the CPU over a slow bus (PCI Express, AGP, etc.). That creates a huge bottleneck that nullifies the benefit of putting the processor there.

    The Cell manages seven SPEs with one PPE by putting them all on the same die and giving them access to the same L2 cache. So if you wanted to diversify the SPEs (physics, graphics, audio, raytracing, etc.), the programmers would have to work extra hard: they’d have to write code for the case where you have one of these cores (and then worry about multi-threading between them), and again for the case where you don’t.

    I myself have wondered why multi-threading support hasn’t been moved into hardware rather than left as a software issue (I’ve programmed with threads once, and it wasn’t fun — though I certainly don’t know what stands in innovation’s way here). Only when the hardware can work out which cores you have, and how best to run your code on them, will we get to truly task-dependent processing. But that would absolutely mean making threads a hardware problem and not a software problem.

    I’ve gotten most of my info from Ars Technica’s reviews of the Cell, and from a recent (today, I believe) long, detailed, technical write-up of Xbox 360 vs. PlayStation 3 (can’t remember where, though).

    Hope you enjoyed what meager info I can offer :)
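JohnO’s bottleneck point can be put into back-of-the-envelope numbers. The figures below are illustrative assumptions (roughly AGP-8x-class bus bandwidth versus on-die cache bandwidth), not measured specs:

```python
# Rough, illustrative numbers (assumptions, not measured specs):
# AGP 8x peaks around 2.1 GB/s, while a mid-2000s CPU might stream
# tens of GB/s from its on-die L2 cache.
data_gb = 0.5           # hypothetical data set shipped each second
bus_gb_per_s = 2.1      # AGP-8x-class expansion bus (assumed)
cache_gb_per_s = 25.0   # on-die cache bandwidth (assumed)

bus_time = data_gb / bus_gb_per_s
cache_time = data_gb / cache_gb_per_s
print(f"over the bus:   {bus_time * 1000:.0f} ms")
print(f"from the cache: {cache_time * 1000:.0f} ms")
print(f"the bus is ~{bus_time / cache_time:.0f}x slower")
```

Under these made-up numbers the off-board trip is an order of magnitude slower than staying on-die — exactly the “humongous bottleneck” the comment describes.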
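The “code it twice” burden JohnO mentions — one path for when a specialized core exists, one for when it doesn’t — can also be sketched. Everything here is hypothetical: `physics_core` is a made-up capability name, and the probe is faked with a hard-coded set.

```python
# Sketch of the two-path burden: the program must handle both the case
# where a specialized core exists and the case where it doesn't.
# "physics_core" is a made-up capability name, not a real API.

AVAILABLE_CORES = {"cpu"}   # pretend probe result: no physics core found

def physics_step_specialized(bodies):
    # Would run on the dedicated physics core.
    return [(x + vx, vx) for x, vx in bodies]

def physics_step_fallback(bodies):
    # Plain-CPU version; must be kept in sync with the specialized one,
    # which is exactly the extra work the comment is pointing at.
    result = []
    for x, vx in bodies:
        result.append((x + vx, vx))
    return result

def physics_step(bodies):
    if "physics_core" in AVAILABLE_CORES:
        return physics_step_specialized(bodies)
    return physics_step_fallback(bodies)

bodies = [(0.0, 1.0), (5.0, -0.5)]
print(physics_step(bodies))
```

Both paths must produce identical results, so every change has to be made twice and tested twice — before you even get to threading work across the cores.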