Revival of PC Gaming, PC Gamer United

The ongoing conflict between Intel and Nvidia has moved from the marketplace to the personal arena, with the two competitors trading insults at will. The latest blow comes from one of Nvidia's vice presidents, who claimed in a private e-mail message that the CPU is dead and has long since run out of steam.

The private e-mail was intercepted by tech website The Inquirer and contains Roy Tayler's opinions on Intel's central processors. The message is dated April 10, but its intended recipient is currently unknown.

"Basically the CPU is dead. Yes, that processor you see advertised everywhere from Intel. It’s run out of steam. The fact is that it no longer makes anything run faster. You don’t need a fast one anymore. This is why AMD is in trouble and it’s why Intel are panicking," Tayler claimed in the message.

"They are panicking so much that they have started attacking us. This is because you do still [need] one chip to get faster and faster – the GPU. That GeForce chip. Yes honestly. No I am not making this up. You are my friends and so I am not selling you. This s*** is just interesting as hell," he continued.

However, Nvidia claims that the above message does not reflect any official stance whatsoever. According to the company’s spokesman Brian Burke, the message is not a public statement and "the views in Roy Tayler's e-mail do not mirror the views of Nvidia."

The e-mail may well reflect only Tayler's personal opinions, yet the company itself stated a while ago that "you need nothing beyond the most basic CPU" to get things done. In other words, Nvidia may not consider the CPU dead just yet, but it sees it as one step closer to the grave.

Intel, of course, completely disagrees with Nvidia's allegations. It could hardly be otherwise, given that the company is currently the world's biggest CPU manufacturer and its CPU business accounts for the lion's share of its revenue.

"We believe that both a great CPU and great graphics are important in a PC. Any PC purchase - including the capability level of components inside it - is a decision that each user must make based on what they will be doing with that PC," said Intel spokesperson Dan Snyder

This was taken from Softpedia.com


Comments (Page 3)
on Jul 15, 2008

I like my CPU. I think it's as important as my graphics card. And it works better.

on Jul 15, 2008
Because the QC Extremes are so underpowered.... yeah i really need to take my own advice...
on Jul 21, 2008
To be fair, 6 months ago, nVidia's 8800GT was the best price vs. performance there was. Now ATi is (though it isn't nearly enough of an advantage to make me upset at getting an 8800GT). nVidia will take it back next time. That's pretty much how graphics cards go.


this is true, no doubt. i almost pressed the button on a G92 8800GTS myself. and i'll admit that i'm biased against nvidia. i just don't like the way they do business. between the confusing over-abundance of product variations, the UMAP program, their penchant for public trash talking, and their implementation of SLI being tied to their horrible chipsets, i'm just over them. unfortunately eVGA, XFX and BFG Tech seem to be some of the best AIB partners around, while several of ATI's partners have earned reputations for poor product support. c'est la vie.
on Jul 21, 2008
The real point is that the growth path of the GPU is much steeper than that of the CPU.

There simply are not enough applications forcing you to need faster and faster CPUs, whereas on the graphics end there is still plenty of need.

What's missing from this email, however, is that CPUs now need to start getting much more parallel in nature. The limiting factor there has been software design and complexity. To me, that's the path the CPU folks need to take: make it easier to write multi-processor friendly software.
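As a rough illustration of that complexity (a toy sketch of my own, not anything from the article or the e-mail): the same loop written once serially and once split across CPU cores with OpenMP, one of the tools meant to make multi-processor friendly code easier to write.

```cpp
// Toy sketch: a dot product written serially and then with OpenMP.
// The parallel version forces an extra decision (how the partial sums
// are combined, via the "reduction" clause), a small taste of the design
// and complexity burden mentioned above.
// Build (hypothetical file name): g++ -O2 -fopenmp dot.cpp -o dot
#include <cstdio>
#include <vector>

int main() {
    const int n = 1 << 20;
    std::vector<float> a(n, 1.0f), b(n, 2.0f);

    // Serial: one core does all the work.
    double serial = 0.0;
    for (int i = 0; i < n; ++i)
        serial += a[i] * b[i];

    // Parallel: iterations are divided across cores, and each thread's
    // partial sum is merged at the end by the reduction clause.
    double parallel = 0.0;
    #pragma omp parallel for reduction(+:parallel)
    for (int i = 0; i < n; ++i)
        parallel += a[i] * b[i];

    std::printf("serial=%.1f parallel=%.1f\n", serial, parallel);
    return 0;
}
```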

on Jul 21, 2008
The CUDA (Compute Unified Device Architecture) tools in Nvidia's new line of graphics cards are supposed to assist with intensive computational work that is not graphics-related. The thing is, many people don't need that type of processing power for their applications. The processes that require such computational power are things like fluid dynamics or astrophysics calculations. But then... if you work in those areas and really need that kind of computational muscle, there are always supercomputers that can be accessed via the web.
on Jul 22, 2008
The CUDA (Compute Unified Device Architecture) tools in Nvidia's new line of graphics cards are supposed to assist with intensive computational work that is not graphics-related. The thing is, many people don't need that type of processing power for their applications. The processes that require such computational power are things like fluid dynamics or astrophysics calculations. But then... if you work in those areas and really need that kind of computational muscle, there are always supercomputers that can be accessed via the web.


Some companies don't want their data traveling over the internet, as it isn't secure. They would benefit because their own computers would see the computational gains. Besides, most groups doing tasks that intense generally have large funds backing them or are associated with large companies that can provide a practically unlimited amount of cash.
on Jul 22, 2008
The CUDA (Compute Unified Device Architecture) tools in Nvidia's new line of graphics cards are supposed to assist with intensive computational work that is not graphics-related.


CUDA is basically a way to translate C into nVidia's native instruction set. it was mainly developed to run PhysX, but it makes nvidia GPUs capable of performing a great many types of tasks. this is what led nvidia to "trumpet the death of the CPU," but the GPU is far from perfect as a central processor (and the claim was mainly marketing fluff). mainly, highly parallelized applications heavy on floating-point operations will see a significant benefit. i guess that could change in theory if they ever manage to get the SP clocks into the gigahertz range, but i wouldn't bet on seeing that anytime soon, since from what i understand, GPU development tends to favor more SPs rather than faster SPs. some applications might always benefit from a single, faster processor, and 'multi-tasking' can only go so far, at least for the home user.

of course, there are far greater experts on this than i, so take it with a grain of salt.
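to make the "translate C for the GPU" idea a little more concrete, here's a minimal SAXPY sketch of my own (not nvidia's code; the file name is made up): an ordinary C-style function marked __global__ gets compiled by nvcc for the GPU and run by thousands of lightweight threads at once.

```cuda
// saxpy.cu (hypothetical file name); build with: nvcc saxpy.cu -o saxpy
// Each GPU thread handles one element: y[i] = a * x[i] + y[i].
#include <cstdio>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Fill two arrays on the CPU side.
    float *hx = new float[n];
    float *hy = new float[n];
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Copy to GPU memory, launch one thread per element, copy back.
    float *dx = 0, *dy = 0;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);

    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    std::printf("y[0] = %.1f (expect 4.0)\n", hy[0]);

    cudaFree(dx);
    cudaFree(dy);
    delete[] hx;
    delete[] hy;
    return 0;
}
```

the point isn't this exact kernel, just that the per-element work has to be completely independent, which is why highly parallelized, floating-point-heavy stuff benefits and ordinary serial code doesn't.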

The thing is, many people don't need that type of processing power for their applications.


more applications are starting to be multi-threaded, but i don't think there will be many that can use all 240 stream processors on a GTX 280.

The processes that require such computational power are things like fluid dynamics or astrophysics calculations. But then... if you work in those areas and really need that kind of computational muscle, there are always supercomputers that can be accessed via the web.


well, yes and no. some problems are too complex even for IBM's roadrunner and its 1.0 petaFLOPS. take stanford's folding@home client. if you're unfamiliar with this distributed computing project, it's designed to use your home desktop's unused computation cycles to help us understand the way proteins fold and mis-fold, hopefully helping us (well, stanford researchers anyway) find cures for a number of diseases. this is where CUDA comes in. it'll allow you to run a GPU version of the F@H DCP as well as one on your CPU(s).

and since i'm on the subject, i gotta plug my home team:
www.hardfolding.com

on Aug 04, 2008
I seem to recall an article in PC Gamer or the defunct GFW interviewing Gabe Newell (I believe) about the death of the GPU. Something about why have two processors when they can now be combined into one multicore processor. Why not have a multicore processor where 4 cores are CPUs and 2 are GPUs? One processor to rule them all.


Here is the first proof that this will take place...okay, rumored proof. But still, AMD is supposedly going to combine the CPU and GPU into one processor chip. It should be interesting to see how it works out.

http://www.tgdaily.com/html_tmp/content-view-38703-135.html

Sorry, forgot the link.
on Aug 04, 2008
i don't see that article as "proof" necessarily. it could simply be that they're slapping a CPU and GPU onto the same die the way intel slapped two dual cores onto the same die for its quad core.

does that matter? well, not to joe consumer, at least probably not. technically this isn't the death of the CPU or GPU, but rather the death of the discrete graphics card. there's potential to lower latency for CPU-GPU communication, but GPU performance will be severely limited by its need to rely on DDR2 memory (phenoms don't support DDR3, and even DDR3 is still far from optimal for graphics; that's why graphics cards moved to GDDR in the first place).

when i read "it will be a dual-core phenom with an R800 graphics core," i interpret "budget solution." so the technical specifics won't matter to the consumer aiming at the bottom line, but tech-savvy enthusiasts and quality OEMs will still be using dedicated discrete graphics cards for a while yet.

what nVidia would have you believe is that GPU architecture can completely supplant the CPU, and what AMD has here is, by contrast, a fraternal-Siamese core. all they're really doing is moving their IGPs off the motherboard and into the CPU die.

/$0.02