Revival of PC Gaming, PC Gamer United

The recent conflict between Intel and Nvidia has moved from the marketplace to the personal arena, with the two competitors trading jabs at will. The latest hit comes from one of Nvidia's vice presidents, in a private e-mail message claiming that the CPU is dead and has long since run out of steam.

The private e-mail was intercepted by tech website The Inquirer and contains Roy Tayler's opinions regarding Intel's central processors. The message is dated April 10, but its intended recipient is currently unknown.

"Basically the CPU is dead. Yes, that processor you see advertised everywhere from Intel. It’s run out of steam. The fact is that it no longer makes anything run faster. You don’t need a fast one anymore. This is why AMD is in trouble and it’s why Intel are panicking," Tayler claimed in the message.

"They are panicking so much that they have started attacking us. This is because you do still [need] one chip to get faster and faster – the GPU. That GeForce chip. Yes honestly. No I am not making this up. You are my friends and so I am not selling you. This s*** is just interesting as hell," he continued.

However, Nvidia claims that the above message does not reflect any official stance whatsoever. According to the company’s spokesman Brian Burke, the message is not a public statement and "the views in Roy Tayler's e-mail do not mirror the views of Nvidia."

It may well be that the e-mail reflects only Tayler's personal opinions, yet the company itself stated a while ago that "you need nothing beyond the most basic CPU" in order to get things done. In other words, Nvidia may not consider the CPU dead quite yet, but it does see it as one step closer to its grave.

Intel, of course, completely disagrees with Nvidia's allegations. It could hardly be otherwise, given that the company is currently the biggest CPU manufacturer in the world and its CPU business accounts for the lion's share of its revenue.

"We believe that both a great CPU and great graphics are important in a PC. Any PC purchase - including the capability level of components inside it - is a decision that each user must make based on what they will be doing with that PC," said Intel spokesperson Dan Snyder.

This was taken from Softpedia.com


Comments (Page 2)
on Jul 06, 2008
One thing to bear in mind is that all the graphics card reviews now seem to focus on 1920x1200 and top-end GPUs. Lower (and more relevant for most people) resolutions mean that CPU performance becomes a bigger factor.


Depends on what you're using the video card for.

For videos, this may be true: A lot of video processing involves scaling, conversion, de-interlacing, decompression, decryption if you're using DRM, and a whole bunch of other stuff that varies based upon resolution.

For games, however, this is not true. Games are resolution-independent: they're all mathematical representations of polygons until near the end. Rasterization is one of the last steps in the pipeline. By the time you're taking a hit on performance based on resolution, everything is already on the GPU, and the vast majority of difficult calculations have been done.
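
To put rough numbers on where the resolution-dependent work sits, here's a back-of-the-envelope cost model; every constant in it is invented for illustration and doesn't come from any real engine or GPU:

    #include <cstdio>

    // Rough per-frame cost model (all constants are invented for illustration).
    // Vertex work is per-triangle and does not depend on resolution; fragment
    // work is per-pixel and grows with the number of pixels to be filled.
    int main() {
        const double triangles    = 500000.0;  // scene complexity (assumed)
        const double vertexCost   = 200.0;     // cycles per triangle (assumed)
        const double fragmentCost = 50.0;      // cycles per shaded pixel (assumed)
        const double overdraw     = 2.5;       // times each pixel gets shaded (assumed)

        const int resolutions[][2] = { {1024, 768}, {1600, 1200}, {1920, 1200} };
        for (const auto& r : resolutions) {
            double pixels       = double(r[0]) * double(r[1]);
            double vertexWork   = triangles * vertexCost;            // resolution-independent
            double fragmentWork = pixels * overdraw * fragmentCost;  // grows with resolution
            std::printf("%dx%d: vertex %.0fM cycles, fragment %.0fM cycles\n",
                        r[0], r[1], vertexWork / 1e6, fragmentWork / 1e6);
        }
        return 0;
    }

The point of the sketch is just that the per-polygon side of the pipeline (the part the CPU feeds) costs the same at every resolution; only the per-pixel side scales with it, and that side lives on the GPU.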

One thing that people should avoid is mixing up movie and game performance. They have very, very little in common.
on Jul 07, 2008
I seem to recall an article in PC Gamer or the defunct GFW interviewing Gabe Newell (I believe) about the death of the GPU. Something about why have two processors when they can now be combined into one multicore processor.

Why not have a multicore processor where 4 cores are CPUs and 2 are GPUs? One processor to rule them all.
on Jul 07, 2008
Look at how big your high-end graphics card is and consider just how much would need to be added.
on Jul 07, 2008
I seem to recall an article in PC Gamer or the defunct GFW interviewing Gabe Newell (I believe) about the death of the GPU. Something about why have two processors when they can now be combined into one multicore processor.

Why not have a multicore processor where 4 cores are CPUs and 2 are GPUs? One processor to rule them all.


I would agree - I do think it's possible, with multicore becoming the norm, that some cores can be specialized. In fact, I think that's the path AMD, which now owns ATI, intends to take.
on Jul 07, 2008
Isn't much of what comes on a graphics card there because it has to be duplicated for the GPU? Just place the GPU in the same chip as the CPU and suddenly you can remove most of the hardware on the graphics card.

Look at notebooks. The graphics chip is already embedded in the motherboard and it doesn't add that much to it. Fold the graphics processor into the CPU and it probably gets easier to make. RAM is already shared on notebooks. And system RAM is rather cheap and easy to upgrade. I'd add another 2 gigs to my system if it added to my graphics processing.

We just need software that understands which cores to use for which tasks.
on Jul 07, 2008
Well there are a few problems with that: graphics embedded in motherboards is much lower performance than discrete graphics cards. Similarly, system RAM is much slower than graphics RAM, and the path from CPU to your system memory is still much less effective than that on a graphics card. Graphics relies very heavily on memory bandwidth.
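
For a sense of scale, peak theoretical bandwidth is just bus width times effective transfer rate; the parts and figures below are rough 2008-era ballpark numbers quoted from memory, so treat them as illustrative rather than exact:

    #include <cstdio>

    // Peak theoretical bandwidth = bus width in bytes * effective transfer rate.
    // The parts and figures below are rough 2008-era ballpark numbers, quoted
    // from memory for illustration, not exact specs.
    static double peak_gb_per_s(int bus_bits, double mega_transfers_per_s) {
        return (bus_bits / 8.0) * mega_transfers_per_s / 1000.0;
    }

    int main() {
        // Dual-channel DDR2-800 system memory: 128-bit effective bus, 800 MT/s.
        std::printf("System RAM  : ~%.1f GB/s\n", peak_gb_per_s(128, 800.0));
        // GDDR3 on a 256-bit bus at ~1800 MT/s, typical of a mid-range card.
        std::printf("Graphics RAM: ~%.1f GB/s\n", peak_gb_per_s(256, 1800.0));
        return 0;
    }

Several times the bandwidth, and the graphics card doesn't have to share that bus with everything else the CPU is doing.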

I like the idea of integrating a GPU into a multicore processor, but it's probably going to start out as just improved performance for the baseline. Getting it into enthusiast machines will likely take a bit more work.

Lastly, there's not much point in deliberately making a multicore GPU. GPUs are inherently parallel anyway. SLI and Crossfire and all that exist only as a sensible commercial alternative to making one incredibly powerful card that wouldn't have enough demand to be marketable.

Meanwhile, Cobra, are you sure about games being resolution independent? That would run contrary to just about every graphics card test I've ever seen. It would also run contrary to my results when I load up a game and change the resolution and watch the frames per second. Perhaps we've misunderstood each other somewhere here?
on Jul 07, 2008
Well there are a few problems with that: graphics embedded in motherboards is much lower performance than discrete graphics cards.


The gap is closing, though. The latest laptops I've seen are starting to come out with pretty impressive graphics. A year ago, I would've completely agreed with you. Today, though, it appears there is a serious push for better graphics on notebooks.

Similarly, system RAM is much slower than graphics RAM, and the path from CPU to your system memory is still much less effective than that on a graphics card. Graphics relies very heavily on memory bandwidth.


Agreed. This is the case even in desktop machines.

Lastly, there's not much point in deliberately making a multicore GPU.


The idea they are discussing is to place the GPU onto a CPU.

Meanwhile, Cobra, are you sure about games being resolution independent?


Yes. Triangles don't get blocky as you scale to higher resolutions. They maintain a sharp edge at each of the three sides, although shading might hide the edge.

It would also run contrary to my results when I load up a game and change the resolution and watch the frames per second.


The frames per second should go down as the rasterizer has to draw more pixels.

My point is not that resolution won't affect your framerate. Indeed, resolution very much affects framerate!

My point is that rasterization happens on the GPU, not the CPU.
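
Just to quantify what the rasterizer is being asked to do at a higher resolution, here's plain pixel-count arithmetic, nothing engine- or card-specific:

    #include <cstdio>

    // How much more the rasterizer has to fill when the resolution goes up:
    // plain pixel-count arithmetic, nothing engine- or card-specific.
    int main() {
        const double low  = 1024.0 * 768.0;   //   786,432 pixels
        const double high = 1920.0 * 1200.0;  // 2,304,000 pixels
        std::printf("%.0f vs %.0f pixels: about %.1fx more fill work per frame\n",
                    low, high, high / low);
        return 0;
    }

Nearly three times the pixels to shade and fill, and all of that extra work lands on the GPU side of the pipeline.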
on Jul 07, 2008
I would agree - I do think it's possible, with multicore becoming the norm, that some cores can be specialized. In fact, I think that's the path AMD, which now owns ATI, intends to take.


Intends to take? They're going to ship one by the end of the year.

Isn't much of what comes on a graphics card there because it has to be duplicated for the GPU? Just place the GPU in the same chip as the CPU and suddenly you can remove most of the hardware on the graphics card.


No. That's not how it works.

A dedicated graphics card does have its own memory bus for its own memory pool. However, these are not duplicates of what the CPU has. Oh yes, the CPU has a memory bus and a memory pool. But the graphics card's memory architecture is designed for one thing: rendering. Blisteringly fast sequential access of memory. Most graphics cards use specialized memory, specialized memory controllers, and specialized memory caches to achieve monstrous performance.

Calling these duplicates of the CPU's version is simply wrong.

AMD's combined CPU/GPU has strengths and weaknesses. Today, one of the biggest obstacles to doing real (non-graphics) work on the GPU is that talking to the GPU is slow: transferring a texture from main memory to GPU memory and back takes a long time.

With the GPU on the CPU's die, and both of them using the same memory pool, transferring data back and forth is very fast. Dynamically generating textures and meshes on the CPU will be much more performance friendly than with a graphics card.

Of course, the biggest downside is that, without dedicated memory and a huge, high-speed memory bus, GPU graphics performance will be significantly lower than for a dedicated graphics card. The GPU-on-CPU die will need some very smart caching.
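
To get a feel for why the round trip hurts and why a shared memory pool helps, here's a rough timing model; the bandwidth and latency numbers are assumptions picked for illustration, not measurements of any particular chipset:

    #include <cstdio>

    // Rough model of shipping a texture to the GPU and back:
    //   one-way time = fixed latency + size / bandwidth
    // Bandwidth and latency figures below are assumptions for illustration.
    static double round_trip_ms(double megabytes, double gb_per_s, double latency_us) {
        double one_way_s = latency_us * 1e-6 + (megabytes / 1024.0) / gb_per_s;
        return 2.0 * one_way_s * 1000.0;  // there and back, in milliseconds
    }

    int main() {
        const double textureMB = 16.0;  // roughly a 2048x2048 RGBA texture

        // Discrete card over the expansion bus (assume ~4 GB/s usable, ~20 us latency).
        std::printf("Over the bus : %.2f ms\n", round_trip_ms(textureMB, 4.0, 20.0));

        // GPU sharing the CPU's memory pool (assume ~10 GB/s usable, ~1 us latency).
        std::printf("Shared memory: %.2f ms\n", round_trip_ms(textureMB, 10.0, 1.0));
        return 0;
    }

With only about 16 ms in a 60 fps frame, several milliseconds spent moving data is exactly why dynamically generated textures and meshes favor the shared-pool design.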
on Jul 08, 2008
Whoa, whoa, whoa. I see a lot of people saying CPUs will always need to be ridiculously fast, but think about this for a second...

You've seen all those random Linux boxes running on very little hardware but puttering along serving web pages forever, right? That's not very CPU-intensive; not many calculations needed, just simple file transfer really...

WWW Link

"I expect the relationship between CPU and GPU to largely be a symbiotic one: they're good at different things. But I also expect quite a few computing problems to make the jump from CPU to GPU in the next 5 years. The potential order-of-magnitude performance improvements are just too large to ignore."

He talks about some Folding software that runs on both a GPU and a CPU, and the GPU completely smokes it. Certain math runs much, much faster on GPUs than on CPUs.
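
The kind of math that makes the jump well is work where every output element is independent of every other. Here's a minimal CPU-side sketch of such a loop; the claim is only that this shape of computation is what GPUs chew through so quickly, not that this particular snippet is what any folding client actually runs:

    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // A minimal example of the kind of math that moves well from CPU to GPU:
    // each output element depends only on its own inputs, so thousands of GPU
    // stream processors could each take a slice with no coordination at all.
    int main() {
        const std::size_t n = 1 << 20;
        std::vector<float> x(n, 1.0f), y(n, 2.0f);
        const float a = 3.0f;

        // On the CPU this runs (mostly) one element at a time; a GPU version
        // would launch one lightweight thread per element instead.
        for (std::size_t i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];

        std::printf("y[0] = %.1f across %zu elements\n", y[0], n);
        return 0;
    }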
on Jul 14, 2008
The architectures behind a CPU and a GPU are radically different. I mean, you're talking about some pretty intense microcircuitry design differences between a GPU built to handle polygons, shaders, and textures and a CPU built to handle pure mathematical computation.

The guy from NVIDIA is essentially right: there's really not a lot of growth left for CPUs, whereas GPUs have a lot of room to grow. Plus, if you add a dedicated physics processor, that's probably going to work a lot better the closer it is to the GPU.

I think what's going on is Intel screaming at NVIDIA that they're about to seriously enter the graphics market, and NVIDIA pretty much answering back: go ahead, we have years of R&D on you, and oh, just like you're trying to incorporate the GPU and CPU, we can do the same thing on our end. NVIDIA has the upper hand in terms of the technology and Intel in terms of the market; it should be interesting to see how it plays out over the next 5-10 years.
on Jul 15, 2008
Plus, if you add a dedicated physics processor, that's probably going to work a lot better the closer it is to the GPU.


Um, no.

There are two uses for physics in a game. One use is for something that is entirely graphical. Particle systems that don't affect gameplay (outside of obscuring the screen in some way), exploding bits of stuff that don't affect gameplay, etc. That can be done through GPUs entirely, especially now with write-back in GPUs thanks to geometry shaders. You don't need a separate physics processor for it.

The other kind of use for physics is for things that do affect gameplay. HL2, for example. Unfortunately, this is not the kind of physics that you can just say, "Do this," and retrieve the answer afterwards. The game's code, possibly even the script, has to be involved. You need to be able to set rules that are entirely arbitrary. Collision detection needs to be forwarded to the AI and game systems, so that they can assign damage, delete entities, and all of that good stuff. In short, this is not stuff that's good for assigning to a separate processor. There's a reason why physics chips didn't take off.

It is much easier for a game developer to just use more CPU cores to do physics than to use an off-board physics chip.
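
A minimal sketch of that coupling, with types and names invented for illustration rather than taken from any real engine or physics API:

    #include <cstdio>
    #include <functional>

    // Sketch of why gameplay physics stays close to the game code: every
    // collision immediately calls back into arbitrary game rules, so shipping
    // the simulation to an off-board chip means a chatty round trip each step.
    // All names and structure here are invented for illustration.
    struct Contact { int bodyA, bodyB; float impulse; };

    struct PhysicsWorld {
        std::function<void(const Contact&)> onContact;  // game-supplied rule

        void step(float /*dt*/) {
            // ... integrate bodies, detect collisions (omitted) ...
            Contact hit{1, 2, 42.0f};
            if (onContact) onContact(hit);  // game logic runs right here, mid-step
        }
    };

    int main() {
        PhysicsWorld world;
        world.onContact = [](const Contact& c) {
            // Arbitrary gameplay consequences: assign damage, delete entities, wake AI.
            std::printf("bodies %d and %d collided (impulse %.1f) -> apply damage\n",
                        c.bodyA, c.bodyB, c.impulse);
        };
        world.step(1.0f / 60.0f);
        return 0;
    }

Extra CPU cores can run that callback directly; a separate chip would have to stream every contact back before the game could react.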

I think what's going on is Intel screaming at NVIDIA that they're about to seriously enter the graphics market, and NVIDIA pretty much answering back: go ahead, we have years of R&D on you, and oh, just like you're trying to incorporate the GPU and CPU, we can do the same thing on our end. NVIDIA has the upper hand in terms of the technology and Intel in terms of the market; it should be interesting to see how it plays out over the next 5-10 years.


Yeah, the problem is that, while nVidia has years of R&D for graphics chips, they don't have something that Intel does: the x86 Instruction Set Architecture.

As terrible and annoying as x86 is, it is the closest thing to a lingua franca that assembly has. Millions of man-hours have been invested in compiler design for x86. People have millions of lines of code written in it. There are terabytes of x86 executables out there.

Using x86, or a derivative thereof, as a GPU's shading language may be slightly less efficient than a specialized shading language, but it's x86. Every time something has gone against x86, claiming better efficiency, it has lost. Why? Because the efficiency difference is never enough to trump the value of x86 and its backwards compatibility with code compiled 15 years ago.
on Jul 15, 2008
The guy from NVIDIA is essentially right: there's really not a lot of growth left for CPUs


There is, actually. They're going parallel, and in the future massively parallel. And if there's anybody who knows all about the technology on microchips, it's Intel. I'm not buying this idea that nVidia has the upper hand in microprocessor design. If Intel wants something done, they have both the R&D and the manpower to do it. They're like a battleship: They may take a while to change to a new direction - but once they've changed direction, watch out - they have a lot of firepower. AMD knows this quite well.

As terrible and annoying as x86 is, it is the closest thing to a lingua franca that assembly has. Millions of man-hours have been invested in compiler design for x86. People have millions of lines of code written in it. There are terabytes of x86 executables out there.


Yup. Totally agreed. Again and again, the x86 has reigned simply because of its tremendous backwards compatibility. Pretty much everything we care about is written for the x86. If you can't beat Intel in x86 code performance, you're never going to penetrate the CPU market, I don't care what new features you bring to the table.
on Jul 15, 2008
Another 'PC is dead' topic; time to shut the fuck up and use some real logic.
on Jul 15, 2008
Wasn't this news like 2 months ago?

I think the launch of the GTX 200 series shows just how much hot air nVidia is full of. I'm not saying the HD 4k series is the best thing to ever grace this earth, but for all its talk, nVidia pulled a very large citrus fruit out of its backside, put green stickers on it, and tried to tell us it's great.

A note to the general public: nVidia is trying to create a public image. It's the same one that Johnny Knoxville is going for: a big, adolescent jack@$$.
on Jul 15, 2008
Another 'PC is dead' topic; time to shut the fuck up and use some real logic.


Pay attention; this is about CPUs, not PCs. Perhaps you should take your own advice.

I think the launch of the GTX 200 series shows just how much hot air nVidia is full of. I'm not saying the HD 4k series is the best thing to ever grace this earth, but for all its talk, nVidia pulled a very large citrus fruit out of its backside, put green stickers on it, and tried to tell us it's great.


To be fair, 6 months ago nVidia's 8800GT was the best price vs. performance there was. Now ATi holds that spot (though it isn't nearly enough of an advantage to make me upset at getting an 8800GT). nVidia will take it back next time. That's pretty much how graphics cards go.