How fast (slow) is the Raspberry Pi?


by raspem » Sun Apr 21, 2013 5:48 am
Holliss wrote:
raspem wrote:The test that takes the longest time to run took 3.2 seconds (CPU time, running on the normal 700 MHz). This puts it between a Pentium II 450MHz (4.2 s) and a Pentium III 800MHz (2.4 s).


raspem, did you run the same test at all the levels of overclocking on the Pi out of interest?


No, but I just tried "turbo" (1GHz). Made it in 2.4 seconds.

... AND it seems it corrupted my memory card. Doesn't boot after I reset it to 700MHz again. :evil:
My other computer is a MacBook Pro
Posts: 26
Joined: Mon Apr 15, 2013 7:14 am
by Snailface » Sun Apr 21, 2013 6:06 am
raspem wrote:
Holliss wrote:
raspem wrote:The test that takes the longest time to run took 3.2 seconds (CPU time, running on the normal 700 MHz). This puts it between a Pentium II 450MHz (4.2 s) and a Pentium III 800MHz (2.4 s).


raspem, did you run the same test at all the levels of overclocking on the Pi out of interest?


No, but I just tried "turbo" (1GHz). Made it in 2.4 seconds.

... AND it seems it corrupted my memory card. Doesn't boot after I reset it to 700MHz again. :evil:

Pis don't like to be told they're slow. 8-)

Anyway, it's common to get corruption with full Turbo overclocking. Best just to overclock the CPU, and not the GPU. And remember to treat your Pi with more respect next time -- it hears what you're saying on these boards. ;)
Posts: 17
Joined: Sun Feb 10, 2013 8:32 pm
by shuckle » Sun Apr 21, 2013 8:33 am
From 3.2 to 2.4 seconds! Not bad. That's basically a 25% improvement (0.8 s saved out of 3.2 s).
Posts: 408
Joined: Sun Aug 26, 2012 11:49 am
by jamesh » Sun Apr 21, 2013 9:37 am
simplesi wrote:PS
when I was a lad, we had 6502 instruction sets running on 2 MHz machines

Phiff - 2 MHz!!!! Luxury! We only had 1 MHz on our UK101s :) We used to have to LDA with -4 and INC until midnight, and then we finally got to 255; our dad used to thrash us to within an inch of our lives :)


Hey, I'd forgotten, but I used to have a UK101 (actually, it was built by a sailing friend [Thanks Ivatt!] and I borrowed it for a while before I got a BBC Micro). Nice bit of kit.
Soon to be unemployed software engineer currently specialising in camera drivers and frameworks, but can put mind to most embedded tasks. Got a job in N.Cambridge or surroundings? I'm interested!
Raspberry Pi Engineer & Forum Moderator
Posts: 11516
Joined: Sat Jul 30, 2011 7:41 pm
by simplesi » Sun Apr 21, 2013 10:18 am
OT, I know, but
Best just to overclock the CPU, and not the GPU


Got any good settings I can try?

I gave up overclocking 3 months ago as I was fed up with my SD cards getting trashed, but if I could get a bit of a boost back it would be nice :)

Simon
Seeking help with Scratch and I/O stuff for Primary age children
http://cymplecy.wordpress.com/ @cymplecy on twitter
Posts: 1990
Joined: Fri Feb 24, 2012 6:19 pm
Location: Euxton, Lancashire, UK
by pluggy » Sun Apr 21, 2013 12:36 pm
simplesi wrote: We only had 1 MHz on our UK101s :)


Brings back memories: fond memories of a home-made wooden case and playing 'Sheep Dog Trials'.
Don't judge Linux by the Pi.......
Posts: 2276
Joined: Thu May 31, 2012 3:52 pm
Location: Barnoldswick, Lancashire,UK
by alexeames » Sun Apr 21, 2013 1:32 pm
simplesi wrote:I gave up overclocking 3 months ago as fed up with my SD cards getting trashed but if I could get a bit of a boost back it would be nice :)


Same as that. I stopped overclocking back in September 2012. With multiple Pis and SD cards being swapped between them it works best for me to use the lowest common denominator.
Alex Eames RasPi.TV HDMIPi.com RasP.iO
Posts: 2054
Joined: Sat Mar 03, 2012 11:57 am
Location: UK
by shuckle » Sun Apr 21, 2013 3:14 pm
An easy way to overclock is to start with the raspi-config Medium setting.
If that works reliably, then just increase the CPU frequency (arm_freq) and leave the other values alone. Test with 950, 1000, etc.
I have 5 different models of Raspberry Pi; they all went to 1000, and two went to 1100.
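
For anyone who would rather edit /boot/config.txt by hand instead of going through raspi-config, a minimal sketch of that "CPU only" approach might look like the lines below. The numbers are only illustrative starting points of my own, not values raspi-config necessarily writes and not guaranteed safe on every board, so test at each step:

arm_freq=950      # the value you raise; try 950, then 1000, checking stability each time
core_freq=250     # leave the core/GPU clock at its default
sdram_freq=400    # leave the RAM clock at its default
over_voltage=2    # a small voltage bump often helps at higher arm_freq
# leaving force_turbo unset keeps dynamic scaling, so the clock only ramps up under load

Reboot after each change and give the CPU a long, sustained workload (a big compile, for instance) before trusting the new setting; SD card corruption is usually the first sign you have pushed too far.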
Posts: 408
Joined: Sun Aug 26, 2012 11:49 am
by pluggy » Sun Apr 21, 2013 3:49 pm
It's hardly a speed demon at 1 GHz. (After an Intel i7 running Ubuntu, everything else is pedestrian.) But I want long-term reliability, so I have most of mine at 700 MHz. I've got 5 Pis and never had any corruption of their SD cards, which is often the first sign you're pushing it too far.
Don't judge Linux by the Pi.......
Posts: 2276
Joined: Thu May 31, 2012 3:52 pm
Location: Barnoldswick, Lancashire,UK
by MarkDaniels » Sun Apr 21, 2013 4:39 pm
"When I was a lad an' dinosaurs roamed t' Earth, computers were nowt more than a few pebbles in t' sand on a primordial beach . . ." The Raspberry Pi, by comparison, is a breath of fresh air. I learnt to code on a Motorola 6502 with 256 bytes (yes, bytes) of RAM!

I can also recall a PCB design CAD programme running on an 8086 at 8 MHz and it was lightning fast. Why? Because the code was well written by hand.

Eventually Moore's Law should run out and what are we going to do then? Cry? Scream? Run away? Hide? I think not! What we are going to do is learn to write good code, again. Probably better than before, or produce compilers that are capable of generating tighter code.

When I first got my hands on a Pi, the performance was truly abysmal, but that was no great surprise, as it needed an awful lot more work to optimise code to run fast on the "new" architecture. I think the Foundation (and others) have achieved an incredible amount in such a short space of time.

@ Jim: I'm not sure that we should stay away from this topic as it serves as a reminder that there is much to be done and helps demonstrate what has been achieved already. You make very good points regarding the GPU of the Pi and perhaps in future the ARM will just be a slave to it, passing the information and letting the GPU do the real work. With regard to it being an education machine, the Pi appears to be ideal for that situation. It does not over-complicate things, but also allows incredible results to be achieved for very little cost, either financial or effort-wise. However, it should also result in a new generation of coders who appreciate the need to optimise their code, which should result in greatly improved performance from machines more powerful than the Pi and result in an extended life for Moore's Law with the results coming from software rather than hardware.

I like the Pi and although I do not class myself as "green" I can appreciate the energy savings that result from running these machines rather than PCs. I have replaced a number of PCs in my business and have two at home for OU course work and do not really notice the lower performance. In fact, they are very often quicker than my PC as they are not bogged down so much by Bloatware.

Finally, Jim, when are we going to see the camera? ;)
Posts: 53
Joined: Sun Oct 28, 2012 2:01 pm
by Bakul Shah » Sun Apr 21, 2013 7:54 pm
OtherCrashOverride wrote:
Engineering is not about an "ideal" design but about "good enough" design [..]

My work ethic revolves around the concept of personal pride in one's work and "good enough" is never good enough; you should always do your best.

You misunderstand. Pride in one's work and a good enough design are not in conflict. Engineers do have a work ethic. What I was getting at is that often people ignore some practical constraints such as time or cost when talking about an "ideal" or "best" solution. Just talking about fast or slow doesn't make sense.

In general, if developers have to work under tighter constraints than the end users (less speed, less memory, smaller screen, etc.), they are more likely to be innovative.

Developers often get the blame but I feel that statement to be more of an urban legend. The faster and easier it is for someone to get something done, the more likely it will get done. Constraints directly affect productivity and quality. I don't write code on the Pi because of the constraints. I would never get anything done. Using the 'bloated' PC with an IDE allows me to maintain concentration on the objective rather than taking 'compiler breaks'. But to the point, there are additionally managers in the great development circle of life that may mandate something is "good enough" and no further time should be devoted to it as tomorrow's faster computer is a cheaper solution.

Compiler breaks can be helpful in stepping away from the computer and thinking things through! I too like a quick edit, compile, debug cycle, but when you are doing that, you are focused very tightly. It is sort of like looking at the road right in front of your car while driving. You may get used to bloated tools like Eclipse, but one can be productive with just vi, cscope and grep, or with the acme editor & grep. Acme runs fast enough on the Raspi and its neat integration with other tools makes it a joy to use. (Right-click on a compiler error message and you are taken to the line in question in the editor window, etc.) If compiles take a long time, you're going to be more reluctant to add new code, and I claim that is usually a good thing. So if you don't want long compiler breaks, write less code!

Raspis are being sold for improving CS education, but if seasoned developers don't use them for developing code, then in essence you are telling newbies that Raspis are just toys to play with and not for serious work. If, on the other hand, you actually try using them, you're likely to find, improve, or write tools that work well on a Raspi.

Phiff - 2 MHz!!!! Luxury! We only had 1 MHz on our UK101s

I have used punch cards in real life. They taught you to be correct the first time. Did I win the 'old man' contest? :)

They taught me how to reconstruct a program after dropping its card deck on the floor. The night before the assignment was due : )
Posts: 292
Joined: Sun Sep 25, 2011 1:25 am
by Jim Manley » Sun Apr 21, 2013 11:10 pm
raspem wrote:I think you missed my point completely. It was actually EXACTLY this that was my point. It was designed for specific purposes, and number crunching or anything else that requires lots of CPU, memory, or disk speed is not one of them. I just wanted to show an example to illustrate this fact.

Nothing is being missed. "The Pi CPU is a slug" kind of discussion has been done to death since well before the Pi was even implemented in beta hardware. I'll leave it as an exercise to those who need the practice as to the correct Google search terms to find the unnecessary discussions that have already been waged. The fundamental point is that the specs for the Pi have been clearly stated by the Foundation on its own FAQ page and eLinux.org Pi pages (with RAM doubling to 512 MB on the Model B in October 2012) since at least February 2012, and no Johnny-come-lately commentary is useful. The BCM2835 SoC was chosen for reasons of cost, cost, and cost. Did I mention the primary criterion was cost? That ruled out using a CPU more recent than the ARM11v6 in the BCM2835. Also, it's not a good idea to broadcast misperceptions that ignore the GPU where there's a newbie audience who might not know a megabyte from a mosquito bite and just wants to have fun learning something, anything about computing hardware, software, algorithms, etc. Ten years ago, the GPU in the BCM2835 would have been considered a supercomputer, classified as a munition, and its design blocked from export from the U.S. because nuclear weapons could be designed using it.

I fully support an animated discussion about performance as long as it's about system performance and the intended application domain is considered first and foremost. However, this started as yet-another descent into the long-ago abandoned waste of time called specsmanship that's accompanied every new system since the Atanasoff-Berry Computer was born in Iowa in the late 1930s. In more recent times, systems were routinely compared based on a single attribute (e.g., the clock speed wars) without regard for other differences that were often application/use-specific, or many times necessary for legacy compatibility. The latter is often ignored by specsmanshippers in their quest for yet-another iota of incremental performance increase in a single parameter.

As for the speed of the development cycle, if you're building a large project and don't care about becoming a personal medical Superfund site with clogged arteries, diabetes, etc., due to not getting up and moving around for hours on end (this is a major deadly professional hazard for computing developers), don't bother to do cross-compiles on faster hardware. I find it amusing how some complain about the speed of a target system while surrounded by all manner of amazing higher-performance hardware. Often, though, better design will minimize development cycle time through proper segmentation of code into the smallest possible build components. The days of compiling large monoliths of statically-linked code are in the distant past for those who know what they're doing. As computer scientists like to say, late binding is better, and run-time binding is best of all, especially given the nearly-ubiquitous distributed nature of software these days (e.g., the cloud).

Even Charles Babbage faced commentary by the uninvolved when he began working on his Difference and Analytical Engines going back to the 1820s. I suspect Gutenberg suffered comments about how abysmal the performance of his screw press was, and then there's the poor schlub who invented marking of Sumerian clay urns with a stylus to indicate their contents and value. He probably had his invention turned against him as styli were used to invent graffiti on the freshly-stuccoed interior walls of outbuildings, which would have likely described how to call him for a good time by blowing a sheep's horn a certain number of times. Yes, I'm being more than a bit facetious, but it helps to always look at things from the long view.

There still has been no demonstrated comprehension on the part of some as to what "number-crunching" really is in most computing applications, not just now, but going back to the days of Babbage. His Analytical Engine design reflects his realization that the purely integer arithmetic, which was all his Difference Engines could perform, even to 30 decimal places, was insufficient. To put this in perspective, the 64-bit integers commonly used in microprocessors today range to fewer than 20 decimal digits. His DEs were unsuitable in a production environment for computing the astronomical tables which they were originally designed to produce (and essentially every other kind, it turns out). To be fair, Babbage never lived to see any of his designs completely built, so the DEs built and operated over the past 20-plus years are pre-alpha prototypes at best, and it's amazing how reliable they are when properly maintained and operated.

However, it's frequently necessary to tweak the coefficients of terms in the linear equations used on the DEs to accurately approximate non-linear function values, often after calculation of fewer than 100 values. Babbage grew to understand this just through the demonstration of his Beautiful Fragment in his home to Royal Society fellows with names like Darwin, Dickens, and Lisle. Hence his decades of work on the design of the Analytical Engine, which still remained incomplete at the time of his death. If it had been built, the Analytical Engine would have been not only the first true computer, but the first supercomputer, capable of both 50-place (100-place using double-precision techniques he designed) decimal integer and 50-place (mantissa plus exponent) floating-point calculations. In the mid-1800s, he designed mechanical hardware multiply and divide at least 60 years before it appeared elsewhere in electromechanical calculating machines in the first half of the 20th Century.

So, exactly what kind of "number-crunching" is it that average people can't get done in a reasonable amount of time with the Pi's 24 GFLOPS GPU (IEEE 80-bit standard, with double-precision available)? Most people don't have a use for all of the performance in their computing devices as it is; almost every software application is bloated with features that go unrecognized for their existence/purpose, let alone are used, by probably 95-plus percent of users. It's no accident that large software companies increase their significant investments in commodity hardware component manufacturers (memory, RAM, etc.) before announcing the release of each major product update (notice I didn't say "upgrade").

Most computing activity these days is bound by network congestion anyway, not system internal performance - we're wallowing in an embarrassment of computing riches individually and collectively. Also, the performance in most laptops, desktops, and even mobile devices (which the Pi is, for all intents and purposes) is sitting idle the vast majority of the time, with desktops often consuming electrical power at rates more than 100 to 200 times that used by the Pi. It's a crime that all of that capability isn't required to be made available for projects such as molecular biochemistry protein-folding for cancer research when systems otherwise go unused.

I would strongly suggest that everyone join those of us who are improving and extending the software base for the Pi, especially in support of STEM education goals, instead of spending any more time doing specsmanship analysis and, worse yet, defending it. Much of the software developed over the last couple of decades could stand even just a cursory analysis and implementation improvements. Companies like Google have to do this with everything they develop, starting with their home page which is loaded in browsers billions of times each day. That's the kind of concept that's important to ensure students (and some old-timers) are learning in the current and future increasingly network-distributed, media-intensive (especially video and 3-D graphics) world. To paraphrase the desk sergeant at the end of roll call at the beginning of each episode of the "Hill Street Blues" police drama TV show, "There's a lot of cruft - let's be careful out there."

BTW, I've got you young punched-card whippah-snappahs beat by at least 120 years technology-wise if not strictly chronologically by age. I operate and maintain a Babbage Difference Engine where men are men, and are not only the hand-cranking power supply, but also the clock! Go too slow, it jams, go too fast, and it also jams - by Babbage's fully-intentional design. That ensures all 8,000-plus parts are maintained in precise alignment because if they're not, the calculations won't be correct (error detection, 1827 style), and some of those parts can be irreparably damaged if you try to force it. A single replacement part can easily cost $1,500 a pop at a minimum, and that's using CNC lathes, drill presses, and milling machines operating with cutting files generated from 3-D CAD files, with final hand tuning using extremely fine wet Emery cloth. You can actually feel when a jam is coming because the rhythm of the cranking starts going wonky (the technical term). We cranky old fahts call it Babbage's Infernal Goldilocks Machine - you have to crank it juuuust right ;)

Well, I'll get back to work (yes, on a sunny Sunday afternoon, sadly) as I don't want to make this one of my long, rambling, senseless posts ... :lol:
The best things in life aren't things ... but, a Pi comes pretty darned close! :D
"Education is not the filling of a pail, but the lighting of a fire." -- W.B. Yeats
In theory, theory & practice are the same - in practice, they aren't!!!
Posts: 1356
Joined: Thu Feb 23, 2012 8:41 pm
Location: SillyCon Valley, California, USA
by OtherCrashOverride » Mon Apr 22, 2013 5:02 am
We should probably change this thread's name to "The grumpy old man thread". So here is some more:

How exactly are the evil software engineers supposed to know what is bloat and what is not unless they test and measure? In order to know what features to include or cut, you have to know what the system is capable of. So, yes, threads like this that compare and contrast performance are necessary. To many of us, as stated before, this knowledge is not forbidden fruit simply because it does not portray the device in the best possible light.

There is a lot more to the Pi ecosystem than just 'the kids'. It's fair to say the Pi gets by with 'a little help from its friends'. This means there are experienced and talented folks, not employed by the Foundation, working on different aspects that improve the device as a whole. They do this voluntarily for their own reasons. I have made my own contributions to this body of work.

In a few days [Edit: It will be announced; it will probably be months before backlogs are filled], there will be a new Pi-like device on the market. It will be sold by the same places that currently sell the Pi, and it will cost slightly more than the Model B. It will offer a 1 GHz Cortex-A8 (ARMv7), 512MB of DDR3 memory, 2GB of on-board flash, a power button, USB in both host and device mode, and it will run Ubuntu and Android. You can bet I will be spending time on it and thus not contributing that time to the Pi. So the argument that *not* releasing a new/improved Pi will somehow keep the ecosystem strong is a fallacy. As I stated before, I have no need to petition the Foundation for such a device since someone else will make it.

The point to take away from this is that just because someone is not a kid who needs a Pi to change their life does not make them unimportant or discardable. We all come together to make something greater. And just like a garden, sometimes you need to throw a little manure on it to help things grow.
Posts: 582
Joined: Sat Feb 02, 2013 3:25 am