raspem wrote: I think you missed my point completely. It was actually EXACTLY this that was my point. It was designed for specific purposes, and number crunching, or anything else that requires lots of CPU, memory, or disk speed, is not one of them. I just wanted to show an example to illustrate this fact.
Nothing is being missed. The "the Pi CPU is a slug" discussion has been done to death since well before the Pi even existed in beta hardware. I'll leave it as an exercise for those who need the practice to find the correct Google search terms for the unnecessary discussions that have already been waged. The fundamental point is that the specs for the Pi have been clearly stated by the Foundation on its own FAQ page and the eLinux.org Pi pages since at least February 2012 (with RAM doubling to 512 MB on the Model B in October 2012), and no Johnny-come-lately commentary is useful.

The BCM2835 SoC was chosen for reasons of cost, cost, and cost. Did I mention the primary criterion was cost? That ruled out using a CPU core more recent than the ARMv6-generation ARM11 in the BCM2835. It's also not a good idea to broadcast misperceptions that ignore the GPU in front of a newbie audience who might not know a megabyte from a mosquito bite and just want to have fun learning something, anything, about computing hardware, software, algorithms, etc. Ten years ago, the GPU in the BCM2835 would have been considered a supercomputer, classified as a munition, and its design blocked from export from the U.S. because nuclear weapons could be designed with it.
I fully support an animated discussion about performance as long as it's about system performance, with the intended application domain considered first and foremost. However, this started as yet another descent into the long-abandoned waste of time called specsmanship, which has accompanied every new system since the Atanasoff-Berry Computer was born in Iowa in the late 1930s. In more recent times, systems were routinely compared on a single attribute (e.g., the clock-speed wars) without regard for other differences that were often application- or use-specific, or many times necessary for legacy compatibility. The latter is routinely ignored by specsmanshippers in their quest for yet another iota of incremental improvement in a single parameter.
As for the speed of the development cycle: if you're building a large project and don't mind becoming a personal medical Superfund site, with clogged arteries, diabetes, etc., from not getting up and moving around for hours on end (a major occupational hazard for developers), then by all means don't bother to cross-compile on faster hardware. I find it amusing how some complain about the speed of a target system while surrounded by all manner of amazing higher-performance hardware. Often, though, better design will minimize development-cycle time through proper segmentation of code into the smallest practical build components. The days of compiling large monoliths of statically linked code are in the distant past for those who know what they're doing. As computer scientists like to say, late binding is better, and run-time binding is best of all, especially given the nearly ubiquitous distributed nature of software these days (e.g., the cloud).
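To make the late-binding point concrete, here's a minimal sketch of run-time binding on Linux using the standard dlopen()/dlsym() calls. The library name (libgreet.so) and its greet() function are hypothetical stand-ins for whatever component you'd split out of the monolith:

    /* runtime_bind.c - bind a component at run time instead of link time.
       Build: gcc -o runtime_bind runtime_bind.c -ldl
       (or with arm-linux-gnueabihf-gcc if you do cross-compile for the Pi) */
    #include <stdio.h>
    #include <dlfcn.h>

    int main(void) {
        /* libgreet.so and greet() are hypothetical examples. */
        void *lib = dlopen("./libgreet.so", RTLD_LAZY);
        if (!lib) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return 1;
        }
        void (*greet)(void) = (void (*)(void))dlsym(lib, "greet");
        if (!greet) {
            fprintf(stderr, "dlsym: %s\n", dlerror());
            dlclose(lib);
            return 1;
        }
        greet();   /* resolved at run time, not at link time */
        dlclose(lib);
        return 0;
    }

Rebuild libgreet.so and the main program never needs recompiling - that's the payoff of segmenting a build into small, late-bound components.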
Even Charles Babbage faced commentary from the uninvolved when he began working on his Difference and Analytical Engines back in the 1820s. I suspect Gutenberg suffered comments about how abysmal the performance of his screw press was, and then there's the poor schlub who invented marking Sumerian clay urns with a stylus to indicate their contents and value. He probably had his invention turned against him as styli were used to invent graffiti on the freshly stuccoed interior walls of outbuildings, which likely described how to call him for a good time by blowing a sheep's horn a certain number of times. Yes, I'm being more than a bit facetious, but it helps to always take the long view.
There still has been no demonstrated comprehension on the part of some as to what "number-crunching" really is in most computing applications, not just now, but going back to the days of Babbage. His Analytical Engine design reflects his realization that the purely integer arithmetic that was all his Difference Engines could perform, even to 30 decimal places, was insufficient. To put this in perspective, the 64-bit integers common in today's microprocessors span fewer than 20 decimal digits. His DEs were unsuitable in a production environment for computing the astronomical tables they were originally designed to produce (and essentially every other kind, it turns out). To be fair, Babbage never lived to see any of his designs completely built, so the DEs built and operated over the past 20-plus years are pre-alpha prototypes at best, and it's amazing how reliable they are when properly maintained and operated.
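If you want to see that limit for yourself, a few lines of C will print it (nothing Pi-specific here, just the standard fixed-width types):

    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    int main(void) {
        /* Signed 64-bit tops out at 19 decimal digits; unsigned at just
           under 2 x 10^19 - a far cry from Babbage's 30 (or 50) places. */
        printf("INT64_MAX  = %" PRId64 "\n", INT64_MAX);  /* 9223372036854775807 */
        printf("UINT64_MAX = %" PRIu64 "\n", UINT64_MAX); /* 18446744073709551615 */
        return 0;
    }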
However, it's frequently necessary to tweak the coefficients of the polynomial approximations used on the DEs to accurately track non-linear function values, often after calculating fewer than 100 values. Babbage grew to understand this just through demonstrating his Beautiful Fragment in his home to luminaries with names like Darwin, Dickens, and Lisle. Hence his decades of work on the design of the Analytical Engine, which remained incomplete at the time of his death. Had it been built, the Analytical Engine would have been not only the first true computer but the first supercomputer, capable of both 50-place decimal integer calculations (100-place using double-precision techniques he designed) and 50-place (mantissa plus exponent) floating-point calculations. In the mid-1800s, he designed mechanical hardware multiply and divide at least 60 years before they appeared in the electromechanical calculating machines of the first half of the 20th century.
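For anyone who hasn't met the method of finite differences that the Engine mechanizes, here's a sketch in C with a sample cubic of my own choosing (p(x) = x^3 + 2x + 1 is purely an illustration). Seed the difference table from four true values, and every further value falls out of plain additions - exactly the operation the Engine's columns of figure wheels perform on each crank:

    #include <stdio.h>

    /* Sample cubic; any polynomial works, only the table depth changes. */
    static long p(long x) { return x*x*x + 2*x + 1; }

    int main(void) {
        /* Seed the table: value, 1st, 2nd, and 3rd differences at x = 0. */
        long v0 = p(0), v1 = p(1), v2 = p(2), v3 = p(3);
        long d0 = v0;
        long d1 = v1 - v0;
        long d2 = (v2 - v1) - (v1 - v0);
        long d3 = ((v3 - v2) - (v2 - v1)) - d2;

        /* From here on, additions only - no multiplication anywhere. */
        for (long x = 0; x < 10; x++) {
            printf("p(%ld) = %ld (direct check: %ld)\n", x, d0, p(x));
            d0 += d1;
            d1 += d2;
            d2 += d3;   /* d3 is constant for a cubic */
        }
        return 0;
    }

This also shows why the coefficients need periodic resetting: a difference table tracks a polynomial exactly, so a non-polynomial function stays within tolerance only over a limited run of values before the table must be re-seeded.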
So, exactly what kind of "number-crunching" is it that average people can't get done in a reasonable amount of time with the Pi's 24 GFLOPS GPU (IEEE 80-bit standard, with double-precision available)? Most people don't have a use for all of the performance in their computing devices as it is; almost every software application is bloated with features that go unrecognized for their existence or purpose, let alone used, by probably 95-plus percent of users. It's no accident that large software companies increase their already significant investments in commodity hardware component manufacturers (memory, storage, etc.) before announcing the release of each major product update (notice I didn't say "upgrade").
Most computing activity these days is bound by network congestion anyway, not internal system performance - we're wallowing in an embarrassment of computing riches, individually and collectively. Also, the performance in most laptops, desktops, and even mobile devices (which the Pi is, for all intents and purposes) sits idle the vast majority of the time, with desktops often consuming electrical power at 100 to 200 times the rate of the Pi. It's a crime that all of that capability isn't required to be made available for projects such as molecular-biochemistry protein folding for cancer research when systems otherwise go unused.
I would strongly suggest that everyone join those of us who are improving and extending the software base for the Pi, especially in support of STEM education goals, instead of spending any more time doing specsmanship analysis or, worse yet, defending it. Much of the software developed over the last couple of decades could stand even a cursory analysis and some implementation improvements. Companies like Google have to do this with everything they develop, starting with their home page, which is loaded in browsers billions of times each day. That's the kind of concept students (and some old-timers) need to learn for the current and future increasingly network-distributed, media-intensive (especially video and 3-D graphics) world. To paraphrase the desk sergeant at the end of roll call at the beginning of each episode of the "Hill Street Blues" police drama, "There's a lot of cruft - let's be careful out there."
BTW, I've got you young punched-card whippah-snappahs beat by at least 120 years technology-wise, if not strictly chronologically by age. I operate and maintain a Babbage Difference Engine, where men are men and are not only the hand-cranking power supply but also the clock! Go too slow and it jams; go too fast and it also jams - by Babbage's fully intentional design. That ensures all 8,000-plus parts stay in precise alignment, because if they don't, the calculations won't be correct (error detection, 1827 style), and some of those parts can be irreparably damaged if you try to force it. A single replacement part can easily cost $1,500 a pop, and that's using CNC lathes, drill presses, and milling machines driven by cutting files generated from 3-D CAD models, with final hand tuning using extremely fine wet emery cloth. You can actually feel when a jam is coming because the rhythm of the cranking starts going wonky (the technical term). We cranky old fahts call it Babbage's Infernal Goldilocks Machine - you have to crank it juuuust right.
Well, I'll get back to work (yes, on a sunny Sunday afternoon, sadly), as I don't want to make this one of my long, rambling, senseless posts ...