Problems in general are broken down into manageable pieces. So are computers and everything that goes with them.
Assembly language arguments? No, there are not really any. You have general purpose registers, which are really no different from variables in a high level program. You assign values to them, and just like any language there is a fixed set of things you can do with them at the lowest level for that language. For assembly you can do things like add, subtract, and, or, xor, etc., as well as use them to hold addresses or data when performing a store or load.
How do we use that to make the computer do things? Well, first off there is the processor core, with its registers and the operations you can do on that core. Then outside that is a layer where addresses, data, and read/write controls leave the processor. This address space can be just plain memory, sometimes rom, and often peripherals: a uart, a usb controller, a disk controller, video, etc.
These peripherals are blobs of logic themselves, with their own layers. A uart, for example, manages the timing of the signals on its tx output, as well as sampling and decoding (or at least trying to decode) the signal coming into its rx input. It converts to and from serial and parallel, and provides registers through which we talk to it. These are called registers, which is the correct name, but they are a little different from the registers in the cpu core. The registers we often call general purpose registers in the cpu core are wired to the logic that does adding and subtracting and memory operations and such. The registers in the uart are wired to the logic in the uart that does shifting and clock dividing and decoding and such.

At the next higher level there needs to be address decoding between the processor core and the peripherals. To mail something to your mom it has to leave your house and make it to your mom's house; that requires some higher level infrastructure: the post office, drivers, delivery people, and an address encoding that everyone understands and uses. Same here: to get data to and from the peripherals we have that infrastructure built into the chips, as well as between chips.
Video is just another one of these layers, a bit more complicated in that you need a lot of bits in order to maintain a separate color for each pixel, for many pixels. You need a way to determine which pixels within that memory belong to which rows and columns. Within the monitor you need a way to display those things, and so on. This is somewhat "simplified" by again defining a transport mechanism between the video card and the monitor. Go back to analog televisions and ntsc and pal, which defined the number of scan lines, how the data was timed, what the voltage levels mean, how to know the beginning of a scan line, what a refresh looks like, and so on. The tv people work their side of that interface and the source people work their side. Over time we have created more definitions to handle larger numbers of pixels and colors and higher refresh rates, but the fundamentals have not changed in any way. The processor does its thing; programmers know how to program the processor and are given an address map of where the peripherals are. The peripherals are designed to do a job, with internal layers of tasks divided up, and they also provide a local map for programmers on how to control that peripheral. Peripherals with external interfaces conform to some agreed upon protocol: usb, serial/uart, spi, i2c, video (hdmi, vga), etc.
With older video cards the main cpu had to do all the video processing, deciding each pixel and color. But now most video cards have processors on board that can do that for you, depending on what you are doing. There is a GPU on the raspberry pi's main chip that can assist you. Even if your arm software determines every pixel and color, you still put it in a shared memory space so that the GPU can drive the actual video peripheral for you; we don't have direct exposure to that peripheral. It is similar to having the postal worker deliver a card to your mom for you rather than you delivering it yourself.
It doesn't matter how old or new the technology is; we can generally only handle so much information at one time, so we break problems down into manageable parts, divide the tasks up among different individuals or groups, define interface specifications between each member or team, and then go off and implement each of those parts. Even when writing a software application ourselves it makes sense to break the problem into manageable parts. Computers are built and operate this way as well: cpu, peripherals, address and data buses between them. Even the transistors and the wiring between them are done this way. There is very little if any magic to it... And it is all easily understandable once you realize there is no magic...