TheMindVirus wrote: ↑Sat Jul 04, 2020 12:23 am
I'm assuming this limitation is one of firmware, since this kind of allocation is theoretically possible and already works on x86 systems to share large portions of system memory with PCIe-connected GPUs. Since they can access the memory directly, the CPU is not involved in any framebuffer copy operations, which are a waste of time and resources. Is there any plan to implement a zero-copy mechanism in the Raspberry Pi firmware?

It's a design/hardware limit: the GPU can only access the lower 1 GB of RAM.
cleverca22 wrote: ↑Sat Jul 04, 2020 12:27 am
It's a design/hardware limit: the GPU can only access the lower 1 GB of RAM.

Zero copy is supported in that you allocate the buffer using DRM or similar. Those are then allocated from the CMA heap, and you can pass them back in to DRM to display them.
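The zero-copy idea described above can be sketched in plain Python, independent of the Pi. This is illustrative only: the real mechanism is DRM/dma-buf handles shared between devices, not Python objects. The point is that the consumer reads through a view of the producer's buffer instead of taking its own copy.

```python
# Illustrative sketch of zero copy (NOT the DRM API): the "display"
# reads straight out of the producer's buffer through a memoryview
# instead of receiving a copied framebuffer.

buf = bytearray(1920 * 1080 * 4)   # pretend framebuffer, 32 bpp

def render(frame: bytearray) -> None:
    """Producer writes pixels in place."""
    frame[0:4] = b"\xff\x00\x00\xff"   # one BGRA pixel

def scanout(frame: memoryview) -> bytes:
    """Consumer reads without copying the whole buffer."""
    return bytes(frame[0:4])           # only the bytes actually needed

render(buf)
view = memoryview(buf)                 # zero-copy window onto buf
assert scanout(view) == b"\xff\x00\x00\xff"
# Changes by the producer are visible through the view immediately,
# because there is only one underlying buffer:
buf[0] = 0x00
assert view[0] == 0x00
```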
On the Pi 4, I believe all of the 2D planes for rendering come from gpu_mem, but those don't total up to very much.
It will heavily depend on what resolution of display you're trying to drive, and what graphical tasks you're trying to do.
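To give a rough sense of scale for why resolution matters (my arithmetic, not a firmware figure): a double-buffered 32 bpp framebuffer alone costs width × height × 4 × 2 bytes.

```python
def framebuffer_bytes(width: int, height: int,
                      bytes_per_pixel: int = 4, buffers: int = 2) -> int:
    """Memory cost of a simple double-buffered framebuffer."""
    return width * height * bytes_per_pixel * buffers

# 1080p vs 4K, 32 bpp, double-buffered:
assert framebuffer_bytes(1920, 1080) == 16_588_800   # ~16 MB
assert framebuffer_bytes(3840, 2160) == 66_355_200   # ~66 MB
```

So a 4K desktop needs roughly four times the buffer memory of a 1080p one before any textures or compositing overhead are counted.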
bensimmo wrote: ↑Sat Jul 04, 2020 3:34 pm
It's 'restrictive' because that's how it started, and legacy software/hardware things need it. That's why it is there.
As the skilled RPT engineering team keep at it, more 'things' may be able to move across.
But what I don't understand is why you need more.
The camera doesn't need it.
I doubt the VC6 can push stuff around quickly enough to make use of it for the display either.
The Intel iGPUs are dynamic too: they sit low until needed, and even then they don't use a lot.
My main GPU, a 1660 Ti thing, is only 6 GB.
What do you want to push around in that sort of GPU or GPGPU task?
Or are you just trying to learn (as I am)?
TheMindVirus wrote: ↑Sat Jul 04, 2020 12:57 pm
So it begs the question: on a scalable system, why even have legacy restrictive settings like gpu_mem on the standard image in the first place?
It would be part of the allocation procedure to make sure larger address spaces translate to < 4 GB.
(By PCIe I meant it is present in x86 systems. I wasn't referring to the Pi and Raspberry Pi OS, which has its own mechanisms for this purpose.)
It's good that there are mechanisms like the AXI bus and DMA to make sure the CPU doesn't have to be used for things it doesn't have to do
(e.g. kernels, repetitive and real-time tasks: the responsibility of a Programmable Real-Time Unit (PRU) as found on the BeagleBone Black).
I'm thinking about the following:
- How many applications are actually able to make full use of the underlying hardware optimisation mechanisms?
- Is there a way to temporarily monitor the usage of these mechanisms without affecting performance?
- Aside from a 64-bit version of Raspberry Pi OS and Gentoo, is it possible to use a workflow such as Arduino to safely program the Pi?
- Say that further down the line I have a Pi with 8 CPU cores, 8 GB SDRAM and a SoC which had more hardware that required system RAM...
...would each additional piece of hardware (e.g. 8K video encoders) be able to access higher portions of system memory with the same address space as < 1 GB?

What are you on about? This is word salad, not a coherent post.
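On that last question: a peripheral that only drives, say, 30 address bits simply cannot see physical memory above 1 GiB, so the kernel either allocates its buffers low (the CMA heap mentioned earlier) or bounces data through a low-memory copy. A toy sketch of that constraint (the function names are mine, not a kernel API):

```python
ONE_GIB = 1 << 30

def dma_visible(phys_addr: int, addr_bits: int) -> bool:
    """Can a device with this many address lines reach this physical address?"""
    return phys_addr < (1 << addr_bits)

def place_buffer(phys_addr: int, size: int, addr_bits: int = 30) -> int:
    """Toy 'allocator': leave the buffer where it is if the device can see
    all of it; otherwise 'bounce' it to low memory (address 0 here)."""
    end = phys_addr + size - 1
    if dma_visible(end, addr_bits):
        return phys_addr          # zero copy: device reads it in place
    return 0                      # bounce: CPU must copy into low memory

# A buffer at 3 GiB is out of reach for a 30-bit device and must be bounced:
assert not dma_visible(3 * ONE_GIB, 30)
assert place_buffer(3 * ONE_GIB, 4096) == 0
# A buffer just under 1 GiB can be used in place:
assert place_buffer(ONE_GIB - 8192, 4096) == ONE_GIB - 8192
```

The bounce path is exactly the framebuffer copying the thread is trying to avoid, which is why buffer placement matters more than total RAM.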
Moonmarch wrote: ↑Sat Jul 04, 2020 5:07 pm
TheMindVirus is asking: does adjusting GPU memory to 4 GB equal 4 GB of GPU RAM? Even if you had access to 4 GB of GPU memory, what program would use it? I can imagine a scenario where you were designing a map full of buildings and textures in a 3D modelling program; you would need access to more RAM. I saw a video on YouTube of the Raspberry Pi 4 8 GB model running the GIMP image editor, and even editing a 4K picture did not use 8 GB of RAM. Here is what I know about graphics engines on the Raspberry Pi: the engine needs to be usable on ARM computers.
Even if you find a graphics engine for Linux, it is usually intended for x86 or x64 computers. If you run the Quake engine using OpenGL on the Raspberry Pi, it will use less than 128 MB of GPU memory, probably 32 MB. Even when the hardware on the Raspberry Pi improves, that does not mean software will utilise the additional hardware. That is the real limitation of open source software: much of it was released over 20 years ago, and the hardware standards of the early 2000s are completely different from the present. The maximum GPU memory split on the Raspberry Pi 4 is 944 MB.

In short, this post is entirely nonsense. You are not doing yourself, or anyone else, any favours with these stream-of-consciousness postings. Please make sure you have your facts straight.
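On checking the split in practice: on a running Pi the current values can be read with `vcgencmd get_mem arm` and `vcgencmd get_mem gpu`, which print lines like `gpu=76M`. A small parser for that output format (the sample strings below are examples in the documented format, not captured from a real board):

```python
def parse_get_mem(output: str) -> int:
    """Parse 'vcgencmd get_mem' output such as 'gpu=76M' into MB."""
    _, _, value = output.strip().partition("=")
    if not value.endswith("M"):
        raise ValueError(f"unexpected vcgencmd output: {output!r}")
    return int(value[:-1])

# Example strings in the documented key=valueM format:
assert parse_get_mem("gpu=76M") == 76
assert parse_get_mem("arm=948M\n") == 948
```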
I'm not entirely sure what that means.
Moonmarch wrote: ↑Sat Jul 04, 2020 6:52 pm
Here is what I know about OpenGL on the Raspberry Pi. Eric Anholt developed the OpenGL drivers for the Raspberry Pi; eventually he was succeeded by a team developing the Vulkan drivers. Here is a link to the article on the raspberrypi.org website:
VC4 and V3D OpenGL drivers for Raspberry Pi: an update
https://www.raspberrypi.org/blog/vc4-an ... an-update/
Before the OpenGL drivers were released on the Raspberry Pi, I don't know what graphics libraries were being used, to be honest. When you load a game on the Raspberry Pi, say OpenArena, you will see your GPU information.
Here is a picture of a Doom engine map editor program:
GZDoom Builder 2.3
Call me crazy, go ahead. I have seen the source code of user-made maps released by mod authors. These maps are huge and involve many geometries. You run these mods using a source port of a game; for example, GZDoom is used to run Doom (1993) engine games. There are maps that are really intricate in design, and I'd say they take a long time to build, perhaps years, who knows. Running certain Doom mods in GZDoom requires a fast computer, and editing the map of a Doom mod requires even more resources. I don't know if that made sense to anyone, but I at least understand the context.

None of which is related in any way to the topic of this thread, even the bits that are vaguely correct.
None of which is related in any way to the topic of this thread, even the bits that are vaguely correct.

I'm not sure what planet some of us are on, but I think it's absolutely related to the topic.