Nvidia CEO Predicts 570X Performance Increase in 3 Years
Nvidia CEO Jen-Hsun Huang predicted a whopping 570-fold increase in GPU Compute performance over the next three years, while traditional CPUs would merely triple in processing power during that time, according to TG Daily. Huang made his claims during his keynote at the Hot Chips symposium. What could you use such power for? Huang suggests that this massive performance increase could power applications like language translation and augmented reality, in addition to bolstering traditional GPU Compute applications like oil and natural gas exploration, ray tracing, and so forth.
His comment probably needs further explanation and clarification, which we don't have at this time. First, don't expect frame rates in games to get 570 times faster in three years. Huang is talking about general computation applications on the GPU, not traditional graphics acceleration.
Second, let's talk about that 570x number. A GeForce GTX 285--the fastest single GPU Nvidia sells--is a nearly 500-square-millimeter chip that delivers just barely more than one teraflop. In three years, GPU technology will be basically two, maybe three, architectural generations ahead of where it stands today. The chip used in the GeForce GTX 285 is made with a 55-nanometer manufacturing process--the cutting edge in GPUs today is 40nm. In three years we'll be at around 22nm--maybe some sort of 20nm or 18nm half-node type process. From a pure transistor density standpoint, you're only going to get about eight times the transistors on a chip three years from now as you can on a 55nm chip today.
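That back-of-envelope density math is easy to check. Under the idealized assumption that transistor area shrinks with the square of the feature size (real-world shrinks usually fall short of this), the scaling works out like so:

```python
# Idealized transistor-density scaling: area per transistor shrinks with
# the square of the process feature size, so density grows as
# (old_node / new_node) ** 2. Real process shrinks fall short of this ideal.
def density_gain(old_nm: float, new_nm: float) -> float:
    return (old_nm / new_nm) ** 2

# The GTX 285's 55nm process versus plausible nodes three years out:
for node in (22, 20, 18):
    print(f"55nm -> {node}nm: {density_gain(55, node):.2f}x")
# A full-node 22nm shrink gives about 6.25x; a 20nm or 18nm half-node
# process lands in the 7.5x-9.3x range, matching the "about eight times"
# figure in the paragraph above.
```

Either way, the density gain alone is an order of magnitude short of 570x.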
So how do you get a 570x performance increase out of that? I don't know. The stream processing units in GPUs are one of the denser parts of the die, so you might pack in well more than 8 times the processing units responsible for GPU Compute number crunching, but you won't get anywhere near 100x, let alone 570x. Memory bandwidth won't increase by that amount, either.
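To see how far short even optimistic assumptions fall, here's a quick sketch that compounds generous gains in density, clock speed, and utilization. The multipliers are my own illustrative guesses, not anything Nvidia has claimed:

```python
# Hypothetical compounding of generous three-year gains (illustrative
# numbers, not Nvidia's): even stacked together, they multiply out to
# a small fraction of the claimed 570x.
density_gain = 8.0      # process-shrink transistor budget, from above
clock_gain = 1.5        # optimistic clock-speed improvement
efficiency_gain = 3.0   # better utilization from architectural advances

total = density_gain * clock_gain * efficiency_gain
print(f"Compound speedup: {total:.0f}x, versus a claimed 570x")  # 36x
```

Even with every multiplier padded upward, the product sits around 36x, which is why the 570x figure needs some other explanation.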
I think it's likely that Huang is taking into account advances in the flexibility and programmability of future GPU architectures. Support for better flow control, bigger and more robust caches, and lots of other things coming in future GPUs will help developers utilize all the teraflops available to them; it's hard to get anywhere near the theoretical maximum performance out of a GPU today in non-graphics applications. Even so, 570x sounds well beyond what I would expect, and there's probably some missing context--maybe Huang was talking about servers using arrays of GPUs being 570x faster than today's CPU-based servers, or something similar?