Heater
Posts: 13878
Joined: Tue Jul 17, 2012 3:02 pm

Re: Are ARM based computers the future?

Fri Feb 28, 2014 8:15 am

jessie,

"Clock's don't define performance." True enough. That's why we have caches, pipelines, multiple ALUs, out-of-order execution, and God knows what other optimizations in CPU design. And now multiple cores.

However, Hz helps :)

I'm pretty sure that speed wall is not imaginary.

Take a look at that graph I linked to above. Notice that it has a logarithmic scale on the MHz axis, and that it is pretty much a straight line up from 1972 until about 2001, where it dramatically levels off and even declines.

Had that graph continued on the path it started in 1972, a regular PC would be running at over 100GHz today!

Clearly we are not even close. Your 3 and 4GHz machines are a testament to that. Worse still, as you point out, you have been running that for 3 years already. In that time you should have been able to replace it with a machine over twice as fast.

"Its all about IPC and core count these days."

Exactly, and that's because we can't physically get more Hz. At least not until someone comes up with a serious technology change, if ever.

Throw into the mix that ARM and mobile have raised expectations of small size and low power consumption, and we see Intel has nowhere to go except to follow ARM's lead.

DrDominodog51,
Just as PowerPC faded away x86 will too.
I don't see how that is relevant. The PowerPC never had an overwhelming majority of the computer market. It never dominated anything. One might as well have said MIPS or SPARC or Alpha.
Memory in C++ is a leaky abstraction.

jamesh
Raspberry Pi Engineer & Forum Moderator
Posts: 24130
Joined: Sat Jul 30, 2011 7:41 pm

Re: Are ARM based computers the future?

Fri Feb 28, 2014 8:58 am

The graph was always going to start to level off, just because of physical limitations. But the reason it levels off so rapidly isn't just down to physical limitations (we haven't reached the ultimate limit yet) but, as Jessie said, market conditions. One of which: there is no need on the vast majority of desktops for an 8GHz processor. There really isn't. 99.9% of the market is served well enough by slower multi-core devices (which are cheaper to design, make, and run).

We certainly haven't hit a technical wall for processor speed (although it gets closer), just a cost/benefit wall. If a single-core 8GHz device costs $1000, but a quad-core 3GHz device costs $100, which do you think is going to sell? Modern OSes and software make fairly good use of those multiple cores, so you have about 12GHz of performance for a tenth of the price of 8GHz. It's a no-brainer.
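To put rough numbers on that, here's a back-of-envelope sketch in Python. The price points are the hypotheticals above, and the naive cores-times-clock figure is an assumption that only holds for work that actually spreads across all four cores:

    # Naive price/performance comparison using the hypothetical figures above.
    # "cores x clock" is only fair for perfectly parallel workloads - see
    # Amdahl's law, which comes up later in the thread.
    chips = {
        "8GHz single core, $1000": (8.0, 1000),
        "4 x 3GHz quad core, $100": (12.0, 100),
    }
    for name, (ghz, price) in chips.items():
        print(f"{name}: {ghz / price:.3f} GHz per dollar")
    # The quad core wins by a factor of 15 on this measure.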
Principal Software Engineer at Raspberry Pi (Trading) Ltd.
Contrary to popular belief, humorous signatures are allowed. Here's an example...
“I think it’s wrong that only one company makes the game Monopoly.” – Steven Wright

achrn
Posts: 376
Joined: Wed Feb 13, 2013 1:22 pm

Re: Are ARM based computers the future?

Fri Feb 28, 2014 9:17 am

jamesh wrote:there is no need on the vast majority of desktops for an 8GHz processor. There really isn't. 99.9% of the market is served well enough by slower multi-core devices (which are cheaper to design, make, and run).
While this is true, it is annoying to find yourself in the 0.1%. I spend a good chunk of my time inverting large square matrices that, though sparse, mostly do not respond especially well to multifrontal approaches. So most of the time this 8-core machine sits at 12% CPU utilisation. My last two machines have each been slightly slower at this main task than the one that came before.
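For the curious, a minimal sketch in Python/SciPy of that kind of job. The 1D Laplacian here is just a stand-in (my real matrices are far less structured), but a direct sparse solve like this runs essentially single-threaded:

    # Build a large sparse symmetric system and solve it directly.
    # scipy's spsolve is essentially single-threaded, so on an 8-core
    # machine it sits near 1/8 = 12.5% total CPU utilisation.
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 500_000
    ones = np.ones(n)
    A = sp.diags([-ones[:-1], 2 * ones, -ones[:-1]], [-1, 0, 1], format="csc")
    b = ones.copy()

    x = spla.spsolve(A, b)                    # watch CPU usage while this runs
    print("residual:", np.linalg.norm(A @ x - b))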

On the bright side, they provide eye-candy and let me browse forums at full speed while I'm waiting.

Heater
Posts: 13878
Joined: Tue Jul 17, 2012 3:02 pm

Re: Are ARM based computers the future?

Fri Feb 28, 2014 10:04 am

I think we are arguing the same thing from a slightly different angle.

Let's assume there is a physical limit to processor clock speeds. With the technology we can imagine at the moment, that might be 100GHz or 1000GHz. No matter what the number is, the limit is there. You are saying that is the physical "wall" and we are far from reaching it. Perhaps so.

We could also assume there is an asymptotically rising cost in terms of economics and/or energy to get to that limit.

That means you are going to give up trying long before you reach the limit. It's just not possible economically and/or energetically. That is the point where the cost outweighs the benefit. Perhaps so much so that you cannot actually pay the cost even if you didn't care what it is.

So, effectively the speed wall is reached long before you actually get to the theoretical maximum possible.

My argument is simply that we reached that point already back in 2002 or so.

Clearly Intel and Co agree.

Shame. Progress was looking good as we switched computer building from cog wheels to relays to tubes to transistors and integrated circuits.

What next? There does not seem to be any place to go.

Aside: As I shine a laser pointer onto the surface of a CD and it reflects an array of red dots around the room, that system seems to be performing a very large Fourier transform in a matter of nanoseconds. That's what I call computing.
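In the far field, the diffraction pattern is essentially the Fourier transform of the aperture - the CD's regular track pitch is what makes the evenly spaced dots. A toy 1D version in Python, with grid size and pitch invented purely for illustration:

    # Fraunhofer diffraction ~ |FT(aperture)|^2. A periodic grating, like
    # a CD's tracks, concentrates the light into discrete orders ("dots").
    import numpy as np

    N = 4096
    x = np.arange(N)
    aperture = (x % 64 < 8).astype(float)     # crude grating, pitch 64 samples

    intensity = np.abs(np.fft.fft(aperture)) ** 2
    orders = np.argsort(intensity[1:N // 2])[-3:] + 1   # skip the DC term
    print("brightest orders at bins:", sorted(orders.tolist()))
    # -> [64, 128, 192]: evenly spaced, like the dots on the wall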
Memory in C++ is a leaky abstraction.

jamesh
Raspberry Pi Engineer & Forum Moderator
Posts: 24130
Joined: Sat Jul 30, 2011 7:41 pm

Re: Are ARM based computers the future?

Fri Feb 28, 2014 10:25 am

Heater wrote:I think we are arguing the same thing from a slightly different angle.

Let's assume there is a physical limit to processor clock speeds. With the technology we can imagine at the moment, that might be 100GHz or 1000GHz. No matter what the number is, the limit is there. You are saying that is the physical "wall" and we are far from reaching it. Perhaps so.

We could also assume there is an asymptotically rising cost in terms of economics and/or energy to get to that limit.

That means you are going to give up trying long before you reach the limit. It's just not possible economically and/or energetically. That is the point where the cost outweighs the benefit. Perhaps so much so that you cannot actually pay the cost even if you didn't care what it is.

So, effectively the speed wall is reached long before you actually get to the theoretical maximum possible.

My argument is simply that we reached that point already back in 2002 or so.

Clearly Intel and Co agree.

Shame. Progress was looking good as we switched computer building from cog wheels to relays to tubes to transistors and integrated circuits.

What next? There does not seem to be any place to go.

Aside: As I shine a laser pointer onto the surface of a CD and it reflects an array of red dots around the room, that system seems to be performing a very large Fourier transform in a matter of nanoseconds. That's what I call computing.
It's not a technical wall - that's the point. (Although the physical wall isn't that far away, I don't think.)

Nowhere to go? Quantum computers? Massively parallel but user-programmable chips (Parallax)? Easier FPGA-type programming for the end user, so you can write your code and make it run in HW? Clock speed isn't the be-all and end-all.

As tech gets better, of course, we may well see a rise in clock rates, but I think it will still be heading towards parallelism as the best/cheapest means of getting lots of work done.
Principal Software Engineer at Raspberry Pi (Trading) Ltd.
Contrary to popular belief, humorous signatures are allowed. Here's an example...
“I think it’s wrong that only one company makes the game Monopoly.” – Steven Wright

blc
Posts: 465
Joined: Mon Sep 05, 2011 9:28 am

Re: Are ARM based computers the future?

Fri Feb 28, 2014 12:30 pm

I think that in terms of home computing, the ARM revolution is already happening: the proliferation of tablets over the last couple of years, all powered by ARM chips, has gone a long way towards helping that. People generally don't want a whacking great power-hungry beast under their desk just to access Facebook and Gmail - a tablet is more than adequate for most.

However, I highly doubt that Intel or x86 will ever go away, especially when it comes to the server room and high-performance computing. As has been discussed so extensively in this thread, ARM chips as they stand simply cannot compete with the raw power of x86 chips, and if they want to play catch-up then they will have to become almost as complex and power-hungry as x86 chips. It's been widely predicted for years that ARM will take over the data centre, and it just isn't happening - at least not yet. ARM servers may find a niche in applications that require less CPU grunt, but you won't be running an enterprise-class server room on ARM servers, and that's where the big money is. Much of the software out there still relies heavily on a high level of single-core/single-thread performance and isn't easy to parallelise (or the performance doesn't scale across multiple cores/threads). Intel still win the single-core/thread performance war hands down.

There is always going to be a small minority of people who will want, or need, to have high-power x86 systems, whether at home or at work. I'm one of those: my work environment is purely Office/Windows/Microsoft, so an ARM machine running Office/MS applications through something like WINE is needless complexity - especially when I'm writing code that has to be compatible with a wide range of Windows-based machines or Windows-based servers. The company I work for (about as "enterprise class" as you can get) will never switch from Windows/x86 because there simply isn't the level of support, experience and maturity that a Linux/ARM ecosystem would demand.

I play a lot of games on PC, and if you're really serious about PC gaming then that means Windows and x86 (at least until Valve convince more big-name publishers to start supporting Linux and move away from DirectX, but even then you probably won't have games compatible with both x86 and ARM). Yeah, sure, there are cross-compatibility wrappers available for DirectX & Linux, but the performance suffers and, honestly... why bother with all the fussing about and configuration when it'll just work under Windows without any messing around?

Even the two biggest names in console hardware have moved away from custom architectures and are now using an x86 chip at the core of their machines - you won't find these chips at a PC parts retailer, but it's still an AMD x86 chip running both the Xbox One and PS4.

I'm staring down the barrel of a near-£900 upgrade for my PC at the moment, and... yeah... that's a lot of money, but a lot of the stuff that I do as a hobby/for fun will be much more enjoyable after spending it. Plus, I have the money available and it's time for an upgrade, so I'm going to make sure that I get the best I can with the budget I have. I'm even ditching my ARM Chromebook for a Core i5 ThinkPad because it's too restrictive (I'm well aware that you can install Linux on the Samsung ARM Chromebook, but it's hacky and not a pleasant user experience).

Heater
Posts: 13878
Joined: Tue Jul 17, 2012 3:02 pm

Re: Are ARM based computers the future?

Fri Feb 28, 2014 12:43 pm

Jamesh,

"technical wall". This is really confusing things. "technical wall", "physical wall", "economic wall".

It all amounts to the same thing. We hit it about 2002.

Quantum computers: something I know nothing about. Last I read, they were not going to be of much benefit to general-purpose computing, only making some specific jobs quicker. Then there is the case of D-Wave, where they are still trying to figure out if what Google bought from them is a quantum computer or not!

Parallelism is the best bet at the moment. Sadly it does not suit a large class of problems. Amdahl's law is there to bite us as well :)
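For anyone who hasn't met it, Amdahl's law in a few lines of Python (the parallel fractions are made-up examples):

    # With a fraction p of the work parallelisable, N cores give a
    # speedup of 1 / ((1 - p) + p / N), capped at 1 / (1 - p).
    def amdahl_speedup(p: float, n_cores: int) -> float:
        return 1.0 / ((1.0 - p) + p / n_cores)

    for p in (0.5, 0.9, 0.99):
        print(f"p={p}: 4 cores -> {amdahl_speedup(p, 4):.2f}x, "
              f"cap {1 / (1 - p):.0f}x")
    # Even 99% parallel code can never run more than 100x faster.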
Memory in C++ is a leaky abstraction.

jamesh
Raspberry Pi Engineer & Forum Moderator
Posts: 24130
Joined: Sat Jul 30, 2011 7:41 pm

Re: Are ARM based computers the future?

Fri Feb 28, 2014 1:44 pm

Heater wrote:Jamesh,

"technical wall". This is really confusing things. "technical wall", "physical wall", "economic wall".

It all amounts to the same thing. We hit it about 2002.
Physical wall. The point at which universal physical limitations stop things from going any further. For example, once you get to track widths of one atom (exaggerating, but that's a physical limitation).

Technical wall. We cannot (yet) make track widths that small, but may be able to in the future.

Economic wall. Where the expense of doing something that is within our technical ability is too high for it to be done.

No. It most certainly doesn't amount to the same thing. Let's use space travel as an example.

We have had the technical ability to go to the Moon since the 60s, but not the money to do it. Economic wall.
We don't quite have the technical ability to go to Mars. Technical wall (and economic).
With our current understanding, we will not be able to travel to a different star in less than 4.5 years. Physical wall.

All of these, of course, fit with our current physical understanding. Which may or may not change.
Principal Software Engineer at Raspberry Pi (Trading) Ltd.
Contrary to popular belief, humorous signatures are allowed. Here's an example...
“I think it’s wrong that only one company makes the game Monopoly.” – Steven Wright

Heater
Posts: 13878
Joined: Tue Jul 17, 2012 3:02 pm

Re: Are ARM based computers the future?

Fri Feb 28, 2014 2:07 pm

Space is a great example.

I take the view that we hit that wall already.

We might make it to Mars. Makes no difference, it's still in our back yard relatively speaking.

The nearest stars, at about 4 light years away, are on the other side of the wall. Perhaps we could get some hulking great spaceship up to one thousandth of the speed of light. So 4000 years to get there. Call me pessimistic, but that just seems unlikely.

Of course, those few stars around us at that kind of distance very likely don't actually hold anything useful. Like a nice cozy planet to live on.

That means we are in for a problem orders of magnitude bigger to get to the next ones.

So we hit the space wall and nobody noticed, same with the compute speed wall it seems.
Memory in C++ is a leaky abstraction.

totoharibo
Posts: 4246
Joined: Thu Jan 24, 2013 8:43 am

Re: Are ARM based computers the future?

Fri Feb 28, 2014 3:46 pm

PiRanger wrote:I've been reading a few articles where the authors state that they believe that ARM based computers are the future. What do people here think? Are we going to see an influx of more powerful ARM based mini-computers, or will we be seeing major ARM based computers on the desktop taking over?
x86 is CISC - powerful instructions: 1 instruction takes many clock cycles.
ARM is RISC - reduced instruction set: 1 instruction = 1 clock cycle.

RISC is less complicated, so it consumes less. And it is not yet at its limits.
CISC is coming to its limit and needs a lot of power (with fans).

And compilers do a better job with CISC as they are less complicated.

jamesh
Raspberry Pi Engineer & Forum Moderator
Posts: 24130
Joined: Sat Jul 30, 2011 7:41 pm

Re: Are ARM based computers the future?

Fri Feb 28, 2014 3:51 pm

Heater wrote:Space is a great example.

I take the view that we hit that wall already.

We might make it to Mars. Makes no difference, it's still in our back yard relatively speaking.

The nearest stars, at about 4 light years away, are on the other side of the wall. Perhaps we could get some hulking great spaceship up to one thousandth of the speed of light. So 4000 years to get there. Call me pessimistic, but that just seems unlikely.

Of course, those few stars around us at that kind of distance very likely don't actually hold anything useful. Like a nice cozy planet to live on.

That means we are in for a problem orders of magnitude bigger to get to the next ones.

So we hit the space wall and nobody noticed, same with the compute speed wall it seems.
Pretty sure Einstein noticed the space speed wall in about 1905.

Making it to Mars does make a difference - a big asteroid strike doesn't wipe out the whole human race. Two asteroid strikes, one on each planet, would be the Buggers (http://en.wikipedia.org/wiki/Buggers).

Worthwhile reading the Wikipedia page on the Orion spaceship.

"Later studies indicate that the top cruise velocity that can theoretically be achieved by a Teller-Ulam thermonuclear unit powered Orion starship, assuming no fuel is saved for slowing back down, is about 8% to 10% of the speed of light (0.08-0.1c).[2] An atomic (fission) Orion can achieve perhaps 3%-5% of the speed of light. A nuclear pulse drive starship powered by Fusion-antimatter catalyzed nuclear pulse propulsion units would be similarly in the 10% range and pure Matter-antimatter annihilation rockets would be theoretically capable of obtaining a velocity between 50% to 80% of the speed of light. In each case saving fuel for slowing down halves the max. speed. The concept of using a magnetic sail to decelerate the spacecraft as it approaches its destination has been discussed as an alternative to using propellant, this would allow the ship to travel near the maximum theoretical velocity.[16]
At 0.1c, Orion thermonuclear starships would require a flight time of at least 44 years to reach Alpha Centauri, not counting time needed to reach that speed (about 36 days at constant acceleration of 1g or 9.8 m/s2). At 0.1c, an Orion starship would require 100 years to travel 10 light years. The astronomer Carl Sagan suggested that this would be an excellent use for current stockpiles of nuclear weapons.[17]"
Principal Software Engineer at Raspberry Pi (Trading) Ltd.
Contrary to popular belief, humorous signatures are allowed. Here's an example...
“I think it’s wrong that only one company makes the game Monopoly.” – Steven Wright

Heater
Posts: 13878
Joined: Tue Jul 17, 2012 3:02 pm

Re: Are ARM based computers the future?

Fri Feb 28, 2014 4:27 pm

@jamesh,

Oh yeah. That Einstein was a clever dude. People should take notice. Perhaps that's too depressing; better to have faith in the Star Trek or Star Wars future.

One asteroid strike is unfortunate, two asteroid strikes would be careless :)

Those propulsion figures sound impressive. We are still in for a long ride to nowhere. There is unlikely to be a safe place to exist at the other end of the trip. That's if you can survive ploughing through interstellar dust at a significant fraction of the speed of light.

Turns out Carl Sagan was wrong. We have just about used up all the nuclear weapons-grade material from the old USSR. It has provided ten percent or so of the USA's power since the collapse of the USSR. It is now proposed to use the American half of the cold-war stockpiles likewise. All that, and we didn't even have enough to get back to the Moon.

@totoharibo,
x86 is CISC - powerful instructions: 1 instruction takes many clock cycles.
ARM is RISC - reduced instruction set: 1 instruction = 1 clock cycle.
This has not been true for years now. Both RISC and CISC can dispatch more than one instruction per cycle.

I have always understood that compilers do better with RISC, that being because it was just too hard to make use of a lot of odd instructions. In fact, a big part of the idea of RISC was to remove any silicon that was wasted implementing instructions that are never or rarely used by compilers.

Having said that, the Intel i860 was a fast RISC machine. Well, it would have been fast if anyone could have figured out how to deal with its parallel execution units and pipelines. Compilers were just not smart enough to optimize code for it and performance suffered. It flopped as a result.
Memory in C++ is a leaky abstraction.

Lob0426
Posts: 2198
Joined: Fri Aug 05, 2011 4:30 pm
Location: Susanville CA.

Re: Are ARM based computers the future?

Fri Feb 28, 2014 5:15 pm

I have been running multi-core machines for many years now. I have rarely seen more than two of the cores being used. I have forced applications onto the other cores (affinity) and seen some performance gains. But that is the key: I had to "force" it. And consumer versions of software rarely allow you to use persistent affinity settings. Server software does allow persistent settings.
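On Linux you can do the forcing from code with nothing but the standard library; a minimal sketch (this API is Linux-only - on Windows it's Task Manager or SetProcessAffinityMask - and it assumes the machine has at least four cores):

    # Pin the current process (pid 0 = self) to cores 2 and 3.
    import os

    print("allowed CPUs before:", os.sched_getaffinity(0))
    os.sched_setaffinity(0, {2, 3})
    print("allowed CPUs after: ", os.sched_getaffinity(0))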

Additional cores, above two, really are a marketing tool rather than a real gain for most of us. A ploy to separate us from more of our money. These extra cores are actually shut down (in inverse order) in favor of turbo boost technology when you really start to load your system up. This is done to keep the total wattage below the package's design TDP. So setting affinity could actually cause poorer performance under some circumstances.

So why have the extra processors if you shut them off to turbo boost?
I would rather have that die area used for additional caching or, better yet, for moving some of main memory onto the actual die!

At some point ARM will have to start using larger packages to raise their TDP. This will need to happen if they want to perform at the "desktop" level.
512MB version 2.0 as WordPress Server
Motorola Lapdock with Pi2B
Modded Rev 1.0 with pin headers at USB

http://rich1.dyndns.tv/
(RS)Allied ships old stock to reward its Customers for long wait!

billio
Posts: 71
Joined: Thu Dec 15, 2011 8:25 am

Re: Are ARM based computers the future?

Fri Feb 28, 2014 7:23 pm

Apart from the technical difference between ARM and Intel, there is a significantly different business model.

ARM = a design company, licensing intellectual property to the chip makers.
Intel = an end-to-end design and manufacturing company.

At the moment the ARM approach has driven its phenomenal growth inside small systems. I am not sure that approach will work in desktops and servers, due to the significantly smaller number of devices.

However, it is a good thing to have two almost diametrically opposed, competing approaches, as one drives the other to new and better things. If one approach "won", development would inevitably slow down. For a parallel scenario, just think of Internet Explorer.

Jim Manley
Posts: 1600
Joined: Thu Feb 23, 2012 8:41 pm
Location: SillyCon Valley, California, and Powell, Wyoming, USA, plus The Universe

Re: Are ARM based computers the future?

Fri Feb 28, 2014 11:00 pm

The reason that parallel processing isn't used much more commonly than it could be is that the tools to make best use of it don't yet exist. Dedicating a single core to an application is an extremely coarse-grained, dumb-bunny way to under-utilize an entire core, just as most applications don't push even the single core on most systems anywhere near its limits. Most people aren't efficiently using the abundance of processing power available even in their cell phones, yet ads for quad-core, GHz-plus processors in the latest-and-greatest devices abound.

It would be a great idea if the device/system/OS manufacturers/producers put one of the many distributed-processing clients on systems (e.g., protein-folding used to find matches for disease/cancer-causing entities so that drugs can be custom-built to fight them, SETI@home, etc.) and set them to run when the devices/systems weren't otherwise doing something useful (such as when being charged for an hour or so at night, and then just lying around receiving NTP updates, e-mail, Tweets, etc., until it's time for the alarm to go off). At least those distributed software packages have been designed to make maximal use of multiple processors, whether they're idle cores, underutilized CPUs/GPUs, etc.

The world is really much more parallel than almost anyone seems to realize from a compute perspective, not to mention reality, where there are over seven billion people on Earth and, based on recent discoveries within just 100 light years of Earth so far, there are now likely to be 100 million Earth-like planets in our Milky Way galaxy alone, which is one of 100 billion-plus galaxies of which we're even aware. The lack of awareness of how to best use parallel computing is partly because (autonomic systems keeping our hearts beating, lungs breathing, etc., aside) our puny conscious minds are pretty much limited to focusing on a single train of thought (i.e., a pipeline) at a time. It's not clear what's really going on when we sleep, but reinforcement in long-term memory of what was learned during the day may be at least one thing going on in parallel. Apparently, chasing squirrels may be done in parallel during dogs' dreams, and thinking of diabolical ways to screw with humans is almost certainly going on massively in parallel in cats' dreams :lol:

A U.S. Department of Defense study concluded that only about one out of three professional software engineers working on massively-parallel systems has the abilities needed to think rigorously enough to be able to develop efficient parallel code. The most productive software developers, in general, are an average of 30 times as productive (in terms of correct lines of code produced per unit time after bugs were eliminated as much as was perceivable) as the average software developer, and if only about a third of them can produce efficient parallel-processing software, well, you can kinda see where the bottleneck is. If anything, our computing education efforts should be helping today's students to at least become aware of the need to use our computing infrastructure for things a lot more meaningful than relentlessly posting photos of foofy caffeinated beverages to dozens of social networking sites all day and night long.

Even if they never touch a line of code, those who become workers and professionals of any kind should learn to ask how their organizations can vastly improve the use of distributed computing in parallel. We don't teach drivers education in order to turn everyone into a world-class race-car driver, but there is a greater good in making people aware of the limitations of vehicles, thoroughfares, and especially drivers to improve use of transportation resources. Likewise, we should be impressing on everyone the value of using computing infrastructure to its maximum potential, to include seeking out opportunities to take advantage of distribution and parallelism.
The best things in life aren't things ... but, a Pi comes pretty darned close! :D
"Education is not the filling of a pail, but the lighting of a fire." -- W.B. Yeats
In theory, theory & practice are the same - in practice, they aren't!!!

DougieLawson
Posts: 36540
Joined: Sun Jun 16, 2013 11:19 pm
Location: Basingstoke, UK

Re: Are ARM based computers the future?

Sat Mar 01, 2014 12:22 am

Jim Manley wrote:The reason that parallel processing isn't used much more commonly than it could be is that the tools to make best use of it don't yet exist.
We've had multi-processing since 1964 with the IBM S/360-M65MP.

The problem with most small computers (and the fundamental difference from mainframes) is that they all tend to have a single I/O device, with a single channel for read/write. Mainframes are highly parallel when it comes to I/O; that's how we can get them running 100,000 transactions per second (with a transaction being a process with thousands of machine instructions plus some I/O operations).

Until we can get microcomputers running highly parallel I/O operations, we're going to have to accept that the CPU will sometimes be spinning waiting for I/O completion (depending on how well we can schedule the multi-tasking).
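A rough micro-scale illustration of the point in Python: overlap the waits so the CPU isn't spinning on each I/O in turn. (asyncio only hides latency, of course - it's no channel processor.)

    # 100 simulated I/O waits of 0.1s each: ~10s if run one after another,
    # ~0.1s when overlapped, because nothing blocks on each individual wait.
    import asyncio
    import time

    async def fake_io(i: int) -> int:
        await asyncio.sleep(0.1)              # stand-in for a disk/network wait
        return i

    async def main() -> None:
        t0 = time.perf_counter()
        await asyncio.gather(*(fake_io(i) for i in range(100)))
        print(f"overlapped: {time.perf_counter() - t0:.2f}s")

    asyncio.run(main())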

We need to remember that multi-processing (more than one core, in modern-day parlance) is NOT EQUAL to multi-tasking (time-slicing the processor between tasks on a ready queue vs a wait queue).

If you look at the raw speed of mainframes (in terms of x86_64 processor frequency) you'll be shocked at how slow they appear to run. The 100,000 transactions/sec are possible by taking the I/O system off the main processor on to "channel processors" - another layer of parallelism.

Back in 1997, with a very early version of Linux on an S/390 G6 processor, they got 30,000 copies running on one mainframe under VM. Each "virtual penguin" wasn't much more powerful than a Raspberry Pi. Again, it was the massively parallel I/O subsystem that made that possible.
Note: Having anything humorous in your signature is completely banned on this forum. Wear a tin-foil hat and you'll get a ban.

Any DMs sent on Twitter will be answered next month.

This is a doctor free zone.

Heater
Posts: 13878
Joined: Tue Jul 17, 2012 3:02 pm

Re: Are ARM based computers the future?

Sat Mar 01, 2014 12:51 am

DougieLawson,

This whole "mainframe" myth has to die.

It's no good talking about 100,000 transactions per second. What on earth is a transaction?

The reality is that the guys like Google, Twitter, Facebook and co., who really have to handle heavy loads - the likes of which "mainframes" have never seen - do not do it with "mainframes".

No, they do it with shedloads of microprocessors.

By the way, what is a "mainframe"? Does anyone still make them?
Memory in C++ is a leaky abstraction.

DougieLawson
Posts: 36540
Joined: Sun Jun 16, 2013 11:19 pm
Location: Basingstoke, UK

Re: Are ARM based computers the future?

Sat Mar 01, 2014 12:56 am

A transaction is what happens when you get $10 from an ATM.

ATMs can't work without a centralised database - else how would the customer from New York be able to get $10 from the ATM in LA? Centralised databases can't run without a mainframe.

The old dinosaur ain't dead yet.

And that old dinosaur uses less power than hundreds of thousands of x86_64 blades.
Note: Having anything humorous in your signature is completely banned on this forum. Wear a tin-foil hat and you'll get a ban.

Any DMs sent on Twitter will be answered next month.

This is a doctor free zone.

Jim Manley
Posts: 1600
Joined: Thu Feb 23, 2012 8:41 pm
Location: SillyCon Valley, California, and Powell, Wyoming, USA, plus The Universe

Re: Are ARM based computers the future?

Sat Mar 01, 2014 3:44 am

Dougie - You should be aware that I'm older than Sputnik and am both the power supply and the clock for Babbage Difference Engine Design #2, Serial #2 at the Computer History Museum in SillyCon Valley (it's the twin of the engine in the Science Museum in London, built right next to its brother). So I completely understand what you're trying to convey, as we also have examples of Big Iron (not quite as much Iron as in a Babbage Difference Engine, though! :lol: ) in our collection and in the tours I give about my personal experiences with S/360 through 3090s. We used such systems in Naval Intelligence and the Naval Security Group via NSA when I did that for a living (or dying, as the case may be), and I worked next door to the U.S. Census Bureau where I had lunch with the geeks there all the time, swapping stories with them about each of our mainframe catastrophes.

There's parallel and then there's parallel, sorta by definition. You're assuming that there has to be one centrali(z/s)ed point for all transactions to take place, but as you will have to readily admit, no one is running all of their accounts on a single machine any more, and hasn't since the beginning of ATMs. The back side of that technology originally ran on a very specially-designed class of systems (e.g., Tandem Computers) that were not only redundant to an extreme, but were highly fault-tolerant way beyond what just mere redundancy can provide, including geographic distribution. My sister and I both have accounts with the same national-level bank, but even my local branch in California can't transfer money directly between my account and hers in New Jersey instantaneously because they're not on the same system, and the same is true in the 26 other states where that bank operates.

Banks actually still do things the old-fashioned way: they batch things overnight, reflecting the time-worn principle of bookkeepers "balancing the books", or in bank parlance, "reconciliation". That's why they have to post big disclaimers in their branches and on their ATMs, "Deposits made after X PM will not be reflected until after midnight of the next business day", or something similar. In fact, they rely on your ability to overdraw your account when making an ATM withdrawal just so that they can charge you a hefty fee, even when you've made a cash deposit at a branch earlier the same day. Replication today isn't just used for backup the way it's used in mainframes, it's also used for dynamic load balancing, which systems supporting massive web traffic have to contend with on a continuing basis. Mainframes make particularly bad systems for problems that, by definition, aren't centrali(z/s)ed in any way, shape or form.

Squarespace.com certainly doesn't use mainframes to guarantee that their customers' sites can't be "slashdotted" when unexpected demand suddenly appears, oh, say, like when certain mainframe-based systems supporting old-hat electronics suppliers suddenly couldn't cope with a couple of hundred thousand potential new customers clamoring for a certain single-board computer, named for a fruit-flavored pastry, at 6 AM GMT on February 29, 2012, and were still reeling for days afterward just trying to support their existing customers. E*trade/TDAmeritrade.com isn't relying on mainframes to handle billions of financial transactions for their customers' trading accounts, and neither are any of the other trading companies that weren't established as an extension of an existing bank.

Mainframes only exist because of legacy corporate lethargy, fear/uncertainty/doubt (FUD) marketing, and nothing else, as in "Nobody ever got fired for buying IBM." If it weren't for Java making it feasible to transition legacy code within them, mainframes would have died off by Y2K as customers ran, not walked toward the exits when it became obvious that a true paradigm shift needed to occur. Well, IBM just announced it's laying off 25% of its hardware workforce, so that suggests there isn't as much of a future in Big Iron as some might think. Microsoft is now riding the same kind of gradual downward slope as its legacy base slowly dribbles away because they have repeatedly failed to adapt in a timely manner. That really means accurately predicting where demand will be when new technologies are made available to customers that actually solves problems for them, and isn't just "compatible with the way we've always done it", which should also read, "compatible with the way we're going to force you to do it because it enhances our bottom line when we don't have to change anything, and who cares about what you think".

Will there still be mainframes operating in the future? Sure, we're running an IBM 1401 as a public exhibit at the museum (I helped hand desolder and replace thousands of discrete transistors that had gone bad), where you can keypunch your name into a punch card and get it sorted after the machine punches the URL for the museum (http://www.ComputerHistory.org) into the card. It's just down the hall from the Babbage Difference Engine and mere yards from the other mainframes of bygone eras (we have really cool abacuses, slide rules, and adding machines, too, that no one thought could ever be replaced). Old stuff (including me) is fun to play with, but I believe the operative phrase is, "This, too, shall pass." ;)
The best things in life aren't things ... but, a Pi comes pretty darned close! :D
"Education is not the filling of a pail, but the lighting of a fire." -- W.B. Yeats
In theory, theory & practice are the same - in practice, they aren't!!!

Heater
Posts: 13878
Joined: Tue Jul 17, 2012 3:02 pm

Re: Are ARM based computers the future?

Sat Mar 01, 2014 10:11 am

DougieLawson,

I should have asked "What is a transaction?" a bit differently.

I know almost nothing of mainframes. But this Transactions Per Second (TPS) thing seems to get bandied around a lot in that world.

Well, what actually is a transaction? How much I/O bandwidth does it require? How much code needs to be executed? How many hits on the database? How much data needs to be fetched, updated and saved?

It seems like such a nebulous thing that talking about transactions per second is meaningless.

Or, is it so that there is actually some standardized transaction defined and used as a benchmark?

For those of us who are not in the mainframe/banking/database world it's impossible to relate to TPS.

I have no doubt that such machines are optimized for that TPS thing. That's great, but it makes them all but useless for anything else. If mainframes were so hot, the world's supercomputers would be built out of them. That is not the case.
Memory in C++ is a leaky abstraction.

DougieLawson
Posts: 36540
Joined: Sun Jun 16, 2013 11:19 pm
Location: Basingstoke, UK
Contact: Website Twitter

Re: Are ARM based computers the future?

Sat Mar 01, 2014 10:37 am

I used to work for a bank. I now work for IBM on zSeries mainframes. I've been doing this stuff for 32 years. They've been telling me the mainframe is dead for most of that time. The old dinosaur hasn't died yet and I don't expect it to go before I retire.

The definition of transaction can be a bit fuzzy.

But if we talk about it in terms of IMS (Information Management System) or CICS (Customer Information Control System) then the definition gets easier. The general flow for a transaction is: read input message, read database, write database, send reply message.

So with IMS that's (at least) four calls to the database and transaction manager (using Data Language/I, aka DL/I), plus some COBOL to extract values from the input message, extract values from the existing database record, write the new database records and build the reply message. Ideally not much more than a few hundred (maybe a thousand) lines of code, which compiles to about 100K of machine code. I've seen transaction programs that tried to update every record in a database; there's always an opportunity to explain to application programmers the difference between online and batch when you see that.
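For the non-mainframe readers, that flow maps quite naturally onto code. A toy sketch - the SQLite schema and message shapes are my inventions for illustration, not how IMS actually looks:

    # Read input message -> read database -> write database -> send reply.
    import sqlite3

    def withdraw(db: sqlite3.Connection, account: str, amount: int) -> str:
        row = db.execute("SELECT balance FROM accounts WHERE id = ?",
                         (account,)).fetchone()          # read database
        if row is None or row[0] < amount:
            return "DECLINED"                            # reply message
        db.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                   (amount, account))                    # write database
        db.commit()
        return f"DISPENSE {amount}"                      # reply message

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE accounts (id TEXT PRIMARY KEY, balance INTEGER)")
    db.execute("INSERT INTO accounts VALUES ('12345', 50)")
    print(withdraw(db, "12345", 10))                     # -> DISPENSE 10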

From your point of view, inserting your card in an ATM probably generates one transaction (check PIN and verify credentials), requesting $10 generates another transaction (probably using scratchpad data from the first one). Requesting a cheque account balance would be another transaction (again using the verified credentials).

IMS dates back to 1969. It was developed by IBM with Rockwell & Caterpillar and used for the parts database for the NASA Apollo program. IMS put a man on the moon. Your favourite bank probably uses IMS or CICS along with Database 2 (DB2) [or another mainframe based SQL database like Oracle].

There are some "Industry Standard" benchmarks like TPC-C, but as usual with benchmarks they're mostly only good for measuring how fast you can run the benchmark. When IBM tested 100,000 transactions they were using the same basic tests from 1987 (before the TPC standard was created, when we got 1000 TPS). Those transactions were following the read message, read database, update database & send message model. In 26 years we've improved by two orders of magnitude.
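As a sanity check on those figures: 1,000 TPS in 1987 to 100,000 TPS in 2013 is a factor of 100, which works out to roughly 19% compound improvement per year:

    # 100x over 26 years -> annual growth factor of 100 ** (1/26).
    rate = (100_000 / 1_000) ** (1 / 26) - 1
    print(f"~{rate:.1%} per year")            # ~19.4% per year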

The mainframes I use are general-purpose computers, just like a Raspberry Pi (in a bigger box).
Note: Having anything humorous in your signature is completely banned on this forum. Wear a tin-foil hat and you'll get a ban.

Any DMs sent on Twitter will be answered next month.

This is a doctor free zone.

KeithSloan
Posts: 321
Joined: Tue Dec 27, 2011 9:09 pm

Re: Are ARM based computers the future?

Sat Mar 01, 2014 3:31 pm

Note to Jim Manley: I think banks in the UK may well do things differently from the USA. If I pay some money into my bank account at the local village branch, by the time I get home and can check it on the Internet the money is there. From memory, I think they introduced some banking regulation in the UK a while back that required instant updates. We have no such warnings on our ATMs as far as I know.

DrDominodog51
Posts: 79
Joined: Sun Sep 29, 2013 6:16 pm

Re: Are ARM based computers the future?

Sat Mar 01, 2014 4:27 pm

Heater wrote:jessie,

"Clock's don't define performance." True enough. That's why we have caches, pipelines, multiple ALUs, out-of-order execution, and God knows what other optimizations in CPU design. And now multiple cores.

However, Hz helps :)

I'm pretty sure that speed wall is not imaginary.

Take a look at that graph I linked to above. Notice that it has a logarithmic scale on the MHz axis, and that it is pretty much a straight line up from 1972 until about 2001, where it dramatically levels off and even declines.

Had that graph continued on the path it started in 1972, a regular PC would be running at over 100GHz today!

Clearly we are not even close. Your 3 and 4GHz machines are a testament to that. Worse still, as you point out, you have been running that for 3 years already. In that time you should have been able to replace it with a machine over twice as fast.

"Its all about IPC and core count these days."

Exactly, and that's because we can't physically get more Hz. At least not until someone comes up with a serious technology change, if ever.

Throw into the mix that ARM and mobile have raised expectations of small size and low power consumption, and we see Intel has nowhere to go except to follow ARM's lead.

DrDominodog51,
Just as PowerPC faded away x86 will too.
I don't see how that is relevant. The PowerPC never had an overwhelming majority of the computer market. It never dominated anything. One might as well have said MIPS or SPARC or Alpha.
The point is, do you see a PowerPC today? I think not.

Heater
Posts: 13878
Joined: Tue Jul 17, 2012 3:02 pm

Re: Are ARM based computers the future?

Sat Mar 01, 2014 4:55 pm

DougieLawson,

Thank you for that glimpse into the mainframe world.
...extract values from the input message, extract values from the existing database record, write the new database records and build the reply message. Ideally not much more than a few hundred (maybe a thousand) lines of code,
What you are describing there could be any web server responding to HTTP requests.

Which is why I was wondering about TPS. The world of web servers deals with billions of TPS (HTTP requests) all the time. What matters is what action that request initiates. Could be just a request for a static web page, could be a complex database query.

DrDominodog51

The point is, do you see a PowerPC today? I think not.
True. But then I never did. My point was "Did you ever see a PowerPC?"

Apart from a few Mac users and a couple of embedded systems I have worked on, nobody knows what a PowerPC is.

In a conversation about ARM and x86 the PowerPC does not even show up on the radar.

In terms of the microprocessor economy the PowerPC does not exist.

Aside: The last time I visited IBM, a year or so ago, they were pushing a PowerPC chip. Multiple cores, an on-chip encryption engine, an on-chip XML parsing engine. Great, when can I buy one? Or a dev board? Never did hear back from them about that.
Memory in C++ is a leaky abstraction.

Burngate
Posts: 6091
Joined: Thu Sep 29, 2011 4:34 pm
Location: Berkshire UK Tralfamadore

Re: Are ARM based computers the future?

Sat Mar 01, 2014 5:03 pm

Didn't Apple move from PowerPC to x86 not because it was better, but because it was cheaper - because x86 was shipping in larger numbers?
