Well, it's not absolute; I just wouldn't consider it advisable unless you're going to put in the time to get the GPU to do the work. You'll probably get jerky video from dropped frames because the (non-GPU) hardware isn't fast enough. I get jerky video playing (low-resolution) movies with programs like smplayer, but they're fine in omxplayer, which uses the GPU. The Pi's CPU isn't that fast, and video is very demanding: putting all those pixels on the screen and redoing it 25 times a second or more. The GPU is made for exactly that.
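A rough back-of-envelope calculation shows what "demanding" means here. The 1080p resolution, 25 fps, and 4 bytes per pixel are my illustrative assumptions, not figures from anywhere official:

```python
# Rough estimate of the raw pixel traffic in software-rendered 1080p video.
# 1080p, 25 fps, and 4 bytes/pixel are illustrative assumptions.
width, height, fps = 1920, 1080, 25
bytes_per_pixel = 4  # e.g. 32-bit RGBA after decode

pixels_per_second = width * height * fps
bytes_per_second = pixels_per_second * bytes_per_pixel

print(pixels_per_second)       # ~52 million pixels every second
print(bytes_per_second / 1e6)  # ~207 MB/s the CPU would have to move
```

That's a lot of memory traffic for a small ARM core to sustain on top of decoding the video in the first place.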
From the intro section of the VideoCore PDF there's some horn-blowing:
- 25M rendered triangles/s.
- 1G pixels/s with single bilinear texturing, simple shading, 4x multisampling.
- Supports 16x coverage mask antialiasing for 2D rendering at full pixel rate.
- 720p standard resolution with 4x multisampling.
- Supports 16-bit HDR rendering.
- Fully supports OpenGL-ES 1.1/2.0 and OpenVG 1.1.
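To put the fill-rate claim in context, here's a quick sanity check of my own: the 720p and 4x multisampling figures come from the spec list above, while the 60 Hz refresh rate is my assumption:

```python
# Sanity-check the quoted 1 Gpixel/s fill rate against 720p with 4x multisampling.
# 720p and 4x multisampling are from the spec list; 60 Hz is my assumption.
fill_rate = 1_000_000_000          # pixels/s, from the VideoCore brochure
samples = 1280 * 720 * 4 * 60      # 720p, 4 samples/pixel, 60 frames/s

print(samples)                     # ~221 million samples/s
print(fill_rate / samples)         # roughly 4.5x headroom
```

So even multisampled 720p at 60 Hz uses only a fraction of the claimed fill rate.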
See
https://docs.broadcom.com/docs-and-down ... G100-R.pdf
Look at Bitcoin (and similar) mining: it's not at all feasible on CPUs. You really want an ASIC, but some people do it on monster PCs with several video cards each, because even in the i386/586 world, video cards (GPUs) are much faster for the limited things they can do. The reason some diehards use GPUs instead of ASICs is versatility: you can load different software for the different algorithms (bitcoin, litecoin, etc.), whereas an ASIC is hardwired to one algorithm.
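The "certain algorithm" in Bitcoin's case is double SHA-256 over the 80-byte block header; miners grind through nonces looking for a small enough hash. A minimal sketch of just the hash step (the all-zero header is a dummy, not real block data):

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    # Bitcoin's proof-of-work applies SHA-256 twice to the block header
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

# Dummy 80-byte header; a real one packs version, previous block hash,
# merkle root, timestamp, difficulty bits, and the nonce being searched.
header = b"\x00" * 80
print(double_sha256(header).hex())
```

It's exactly this kind of small, fixed, massively repeated computation that GPUs (and, even more so, ASICs) run far faster than a general-purpose CPU.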
The GPU boots the Pi, then hands control to the lesser CPU. They share some memory, but communication between them is cumbersome. Yet X runs on the CPU and then has to send its drawing instructions over to the GPU. X is technology from 1985, before there were GPUs, but it's also more universal and runs on many more kinds of hardware.
The assembly language for the GPU is probably worth learning; see
https://github.com/maazl/vc4asm
but it will only work on Broadcom hardware.
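For a taste of what that looks like, here is a tiny, untested fragment in roughly vc4asm's syntax (register names like unif and r0, and the thrend delay-slot convention, are from its documentation; treat this as a sketch, not a working QPU kernel):

```
mov r0, unif      # read the first uniform into accumulator r0
fadd r0, r0, r0   # double it in the floating-point ADD ALU
thrend            # signal end of thread
nop               # two delay-slot instructions must follow thrend
nop
```

Real programs also have to set up VPM/DMA transfers to get results back to memory, which is where most of the complexity lives.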