You're forgetting the C in CPU. While the GPU does all the rendering, it needs the CPU to feed it. Higher resolution means more feeding to do, and more stuff you can include in the graphics that needs CPU support.
Wrong. I'm not a game developer, but I have written a few hardware-accelerated 3D applications. The only situation where your example matters at all is real-time 3D graphics, and in that case what is being fed to the GPU is primarily vertex and other geometry data, which is entirely resolution-independent. You can feed the GPU the exact same data whether the scene is being rendered to a 360 x 640 display or a 2560 x 1440 display... it makes no difference. In fact, in the overwhelming majority of cases the CPU isn't even aware of the display's resolution. That is how irrelevant display resolution is to the CPU.
Where display resolution does come into play is in the final stages of the rendering pipeline: rasterization, pixel shaders, anti-aliasing and so on. That is all the domain of the GPU; the CPU isn't involved in any of those stages.
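To make that concrete, here's a minimal sketch of the CPU side of a GLES 2.0 renderer (the function names are just illustrative, and it assumes an EGL context and a compiled shader program with the position attribute at location 0 already exist). Notice that the geometry upload never sees the resolution, and the only place width/height appear at all is the glViewport call that configures the GPU's rasterizer:

```c
/* CPU-side sketch of a GLES 2.0 renderer. Context/shader setup omitted. */
#include <GLES2/gl2.h>

static GLuint vbo;

/* Called once at startup: the geometry upload has no idea what the
   display resolution is -- the same bytes go to the GPU either way. */
void upload_geometry(void)
{
    static const GLfloat tri[] = {
        -0.5f, -0.5f, 0.0f,
         0.5f, -0.5f, 0.0f,
         0.0f,  0.5f, 0.0f,
    };
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(tri), tri, GL_STATIC_DRAW);
}

/* Called every frame: identical whether the target is 360x640 or
   2560x1440. The only mention of resolution is glViewport, which just
   tells the GPU's rasterizer how big the render target is; all the
   per-pixel cost lands on the GPU, not here. */
void render_frame(int fb_width, int fb_height)
{
    glViewport(0, 0, fb_width, fb_height);
    glClear(GL_COLOR_BUFFER_BIT);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
    glEnableVertexAttribArray(0);
    glDrawArrays(GL_TRIANGLES, 0, 3);
}
```

Everything per-pixel (rasterization, the pixel shader, any anti-aliasing resolve) happens downstream of that draw call, inside the GPU.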
Video encode (after editing) and image editing? That's CPU. If you don't think Cinemagraph would work faster (if written properly) on a quad core, you're sorely mistaken.
First, note that I was referencing only those statements I had quoted. It's a fact that most smartphone video processing is not done on the CPU. End of story. Obviously, we'll always be able to find a specific app with a specific feature where the CPU plays a more prominent role, but that isn't a point worth making.
My point was that general-purpose CPU cores aren't involved in nearly as many operations as most people assume they are, particularly on smartphones.
Second, your certainty about where video encoding and image editing happen suggests that you don't work in the software industry. Blanket statements of that sort are almost always wrong. Even on desktops and laptops, more and more of those functions are being shifted away from the general-purpose computing cores to more specialized units:
- Photoshop has been shifting ever more image editing computations to the GPU.
- QuickSync technology shows that even Intel believes video encoding is more appropriately done on dedicated hardware than on the CPU.
Considering that smartphones aren't just computationally constrained but also power-constrained, that approach makes even more sense there than on desktops and laptops. In fact, that is exactly what is being done.
Here is a blog post from a Qualcomm employee stating precisely that, which I'll quote:
"The technology stage is set for video acceleration in hardware on the phone. It consists of dedicated hardware in the chipset that does the compute-intensive work of
encoding and decoding"
That employee also mentions that the only task the CPU is left with, is shovelling the raw video data to and from the dedicated encode/decode hardware, which is negligible.
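For the skeptical, this is roughly what "shovelling data" looks like from the application's side. The sketch below uses Android's NDK AMediaCodec interface purely as an illustration (read_camera_frame is a hypothetical placeholder, and error handling/end-of-stream are omitted): the CPU copies raw frames into the encoder's input buffers and compressed packets out of its output buffers, while the H.264 compression itself runs on the dedicated block.

```c
/* Illustrative sketch: feeding a phone's dedicated video encoder. */
#include <media/NdkMediaCodec.h>
#include <media/NdkMediaFormat.h>
#include <stdint.h>

/* Hypothetical helper provided elsewhere: fills 'dst' with one raw
   camera frame in the encoder's expected layout, returns its size. */
extern size_t read_camera_frame(uint8_t *dst, size_t capacity);

void encode_some_video(void)
{
    /* Ask for an H.264 encoder; on phones this is normally the
       dedicated hardware block in the chipset. */
    AMediaCodec *enc = AMediaCodec_createEncoderByType("video/avc");

    AMediaFormat *fmt = AMediaFormat_new();
    AMediaFormat_setString(fmt, AMEDIAFORMAT_KEY_MIME, "video/avc");
    AMediaFormat_setInt32(fmt, AMEDIAFORMAT_KEY_WIDTH, 1280);
    AMediaFormat_setInt32(fmt, AMEDIAFORMAT_KEY_HEIGHT, 720);
    AMediaFormat_setInt32(fmt, AMEDIAFORMAT_KEY_BIT_RATE, 4000000);
    AMediaFormat_setInt32(fmt, AMEDIAFORMAT_KEY_FRAME_RATE, 30);
    AMediaCodec_configure(enc, fmt, NULL, NULL, AMEDIACODEC_CONFIGURE_FLAG_ENCODE);
    AMediaCodec_start(enc);

    for (int i = 0; i < 300; i++) {          /* ~10 seconds at 30 fps */
        /* CPU job #1: copy one raw frame into an encoder input buffer. */
        ssize_t in = AMediaCodec_dequeueInputBuffer(enc, 10000 /* us */);
        if (in >= 0) {
            size_t cap = 0;
            uint8_t *buf = AMediaCodec_getInputBuffer(enc, in, &cap);
            size_t sz = read_camera_frame(buf, cap);
            AMediaCodec_queueInputBuffer(enc, in, 0, sz, i * 33333LL, 0);
        }

        /* CPU job #2: collect the compressed output and hand it to a
           muxer. The actual compression happened in dedicated hardware. */
        AMediaCodecBufferInfo info;
        ssize_t out = AMediaCodec_dequeueOutputBuffer(enc, &info, 0);
        if (out >= 0) {
            size_t cap = 0;
            uint8_t *pkt = AMediaCodec_getOutputBuffer(enc, out, &cap);
            (void)pkt; /* write pkt[info.offset .. info.size) somewhere */
            AMediaCodec_releaseOutputBuffer(enc, out, false);
        }
    }

    AMediaCodec_stop(enc);
    AMediaCodec_delete(enc);
    AMediaFormat_delete(fmt);
}
```

The per-frame CPU work here is a couple of buffer copies and some bookkeeping, which is exactly why it's negligible.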
The only reason to do any of the things you mentioned on the CPU (particularly since all modern smartphones include dedicated hardware) is portability. If we are developing software to run on many different hardware platforms but don't want to write hardware-specific variants (because developing all those variations of the same thing costs money), then the ARM CPU is the only thing that is guaranteed to exist everywhere.
In regard to Cinemagraph, you may be right, but you may also be the one who is sorely mistaken. All I know is that Scalado is the company that delivers the technology Nokia uses to build Cinemagraph, and that some of Scalado's features are hardware-accelerated (they don't use the general-purpose CPU) while others aren't. I don't know which is true of Cinemagraph. Do you?
Either way, the situation is a lot more complicated and nuanced than you are making it out to be.
But you can add as big a GPU as you want; at some point the CPU becomes the bottleneck. You wouldn't pair a 7990 with an Atom, would you?
At some point the GPU will crank out frames fast enough that we hit the smartphone screen's refresh limit of 60 Hz. Whether the software becomes CPU-bound before we hit that limit depends entirely on the software we're running. Maybe the CPU will become the bottleneck, maybe it won't.
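The arithmetic is simple: at 60 Hz the entire frame budget is 1000/60 ≈ 16.7 ms. Here's a toy sketch of that bookkeeping, with made-up timings standing in for whatever the app actually measures:

```c
/* Toy bottleneck check at 60 Hz; the timings are illustrative, not data. */
#include <stdio.h>

int main(void)
{
    const double frame_budget_ms = 1000.0 / 60.0;  /* ~16.7 ms per frame */
    double cpu_ms = 4.0;   /* time spent preparing/submitting the frame on the CPU */
    double gpu_ms = 12.0;  /* time the GPU takes to render it */

    if (cpu_ms > frame_budget_ms)
        printf("CPU-bound: the CPU can't keep up with 60 Hz\n");
    else if (gpu_ms > frame_budget_ms)
        printf("GPU-bound: the GPU can't keep up with 60 Hz\n");
    else
        printf("Neither is the bottleneck; vsync at 60 Hz caps the frame rate\n");
    return 0;
}
```

If the CPU-side work per frame fits comfortably inside that budget, more CPU cores buy you nothing for frame rate; only if it doesn't is the CPU the bottleneck.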
Not saying the CPU is end-all be-all of performance.
This was my original point. I just wanted to note that the CPU isn't involved in as many things as most people assume. It's involved in even fewer things than you assumed.
I still can't understand why people keep saying: I don't want options! Leave it the way it is! I don't need it better!
I can speak only for myself, but as far as I can tell, nobody is saying any such thing... or at least I'm not!
IMHO we're saying we don't want worse options than we already have. We'll gladly take any better option, but merely observing that 4 > 2 comes nowhere close to determining what makes a CPU more powerful.