For very short periods of time, yes. Of course it is proportional to the size of the data being worked on; a multi-megapixel image may take a few seconds to transform.
Agreed, still not enough to need a more powerful system (unless you really cannot wait a couple of seconds for the entire operation).
Image processing from a camera is certainly one; codecs are another. Trying to decode 1080p H.265, even on a decent set of CPUs, will use most of your cycles (and a lot of SDRAM bandwidth). Voice recognition and AI are other tasks that require a lot of oomph.

LOL, and yes. Unfortunately too often the case nowadays. Some of these programs could probably be written to run faster in fully interpreted ARM BASIC on an early RISC PC than they do on modern systems.

But word processing, web pages, etc.? If they are slow on a modern CPU then someone is writing some pretty awful software somewhere.
That's an interesting start. The Dhrystone appears well-suited to the out-of-order optimizations in the A72. Since the DMIPS score contains no information on floating-point performance, I'm still wondering how HPL would perform.

W. H. Heydt wrote: ↑Fri Nov 09, 2018 2:49 pm
Take a look at the chart here: https://en.wikipedia.org/wiki/Compariso ... v8-A_cores
The last column (DMIPS/MHz) gives relative instruction rates. A53 is 2.24. A72 is 4.72. So something over twice the IPC rating.
Only once Chrome has used most of it up first.

W. H. Heydt wrote: ↑Fri Nov 09, 2018 5:56 pm
The quip was directed at two particular companies, but it could be generalized...
Microsoft can burn up every processor cycle Intel can manufacture.
Slicing does take a few, that is the nature of it. I just do my slicing in the background and let it take however long it is going to; I am usually waiting for my 3D printer to finish the previous print anyway. The Raspberry Pi can slice many times faster than current FDM 3D printers can print, so if I can pre-slice 100 models in the time it takes to print 1, what is the hurry?

bensimmo wrote: ↑Fri Nov 09, 2018 7:59 pm
Only once Chrome has used most of it up first.

W. H. Heydt wrote: ↑Fri Nov 09, 2018 5:56 pm
The quip was directed at two particular companies, but it could be generalized...
Microsoft can burn up every processor cycle Intel can manufacture.
We need more 'power' because... just look at games and the visuals. It's not long before the current big things slow to a crawl and people (media and shareholders) demand more from a game.
Does it make them better... not really the point.
For most people turning an LED or similar switch on and monitoring some temperatures, then no, you don't need any more; in fact, a Pi Zero is even overkill.
Programming that in a nice GUI way does need the power of the Pi for the average person, and the Pi 3 at that.
I assume most of the Pi engineers need more power than a Pi, and few actually use one to do any work on?
I could do with more power than the Pi 3 and my desktop to slice some 3D objects, though it's not the rate-limiting step there.
Just now I have to say that MS is not the worst offender.

Microsoft can burn up every processor cycle Intel can manufacture.
Maybe it's actually mining bitcoins instead of doing nothing.
Well, that is a tough one indeed: if the A72 is a 64-bit-only implementation, then comparison becomes difficult. The reasoning is that many things will perform better in a 32-bit environment, all else being equal, and most of what is run on the Raspberry Pi is 32-bit. Remember that with 64 bits you are transferring more data to and from memory for many operations, since pointers and longs double in size; there are other considerations as well that I will not get into here.
We are OK there; it's only the recent A76 and friends that are 64-bit only.
Like you, I used to be old skool and considered IDEs and syntax highlighting pointless fluff: a useless gimmick that sucks memory and CPU, a crutch for bad programmers. If you know your language syntax, know the libraries and APIs you are using, and think about what you are writing, why would you need syntax highlighting?

Fortunately syntax highlighting can be turned off. Does IntelliJ IDEA include some sort of artificial intelligence or deep neural network to help people write better programs? Can it be turned off?
I've never used a big.LITTLE architecture system. I wonder how the scheduler decides which type of core each process gets. I think you are right that one would need to disable the A53 cores in order to test the A72.

With a lot of complexity!
You need people with longer memories. CDC was doing something remarkably like this with the CDC 6600... over 50 years ago. Granted, it only had one "big" processor (the CP), and 8 "little" processors (PPs). The scheme was that the OS ran on PP0, and PP1 through PP7 were for handling I/O. User programs ran on the CP. The CP was a 60-bit machine while the PPs were 18-bit machines.

6by9 wrote: ↑Sat Nov 10, 2018 6:26 pm
With a lot of complexity!
There were also two alternative modes of operation for big.LITTLE: either have a matched number of big and little cores and migrate all processes to the corresponding core, or just spin up the big cores on demand (i.e. when the overall load hits a certain threshold) and migrate the heavyweight operations onto a big core.
It took a fair amount of reworking of the scheduler to manage the switchover, with hysteresis to avoid bouncing up and down, and yet I still recall having fun on Android phones when the platform team messed it up.
See https://en.wikipedia.org/wiki/Heterogen ... processing for a reasonable description of the principles.
big.LITTLE has benefits where you really need to save power (e.g. mobile devices). The Pi isn't really aiming at that market, so I'm not sure there is a significant gain from the added complexity.
It looks like schedtool can be used to set processor affinity and force an application to run on any subset of cores you want. Maybe there is also a way to force specific Linux kernel worker threads onto specific cores. I expect for performance reasons you generally want to stay on the same core when switching rings into a system call. I wonder if anyone with A72 hardware will agree to run some performance tests.

W. H. Heydt wrote: ↑Sat Nov 10, 2018 9:57 pm
So...why not run the Linux kernel on a "little" core and use other "little" cores to handle things like USB and other interfaces and run the applications on the "big" cores?