Dougie - You should be aware that I'm older than Sputnik and am both the power supply and the clock for Babbage Difference Engine Design #2, Serial #2, at the Computer History Museum in SillyCon Valley (it's the twin of the engine in the Science Museum in London, built right alongside its brother). So I completely understand what you're trying to convey; we also have examples of Big Iron (not quite as much iron as in a Babbage Difference Engine, though!) in our collection, and in the tours I give I draw on my personal experiences with everything from the S/360 through the 3090s. We used such systems in Naval Intelligence and the Naval Security Group via NSA when I did that for a living (or dying, as the case may be), and I worked next door to the U.S. Census Bureau, where I had lunch with the geeks there all the time, swapping stories about each of our mainframe catastrophes.
There's parallel and then there's parallel, sorta by definition. You're assuming there has to be one centrali(z/s)ed point for all transactions to take place, but as you'll readily admit, no one runs all of their accounts on a single machine any more, and no one has since the beginning of ATMs. The back end of that technology originally ran on a very specially designed class of systems (e.g., Tandem Computers) that were not only redundant to an extreme but highly fault-tolerant, far beyond what mere redundancy can provide, including geographic distribution. My sister and I both have accounts with the same national-level bank, but even my local branch in California can't transfer money directly between my account and hers in New Jersey instantaneously, because they're not on the same system, and the same is true in the 26 other states where that bank operates.
Banks actually still do things the old-fashioned way: they batch transactions overnight, reflecting the time-honored bookkeepers' principle of "balancing the books", or in bank parlance, "reconciliation". That's why they have to post big disclaimers in their branches and on their ATMs: "Deposits made after X PM will not be reflected until after midnight of the next business day", or something similar. In fact, they rely on your ability to overdraw your account when making an ATM withdrawal just so they can charge you a hefty fee, even when you've made a cash deposit at a branch earlier the same day. Replication today isn't used just for backup the way it is in mainframe shops; it's also used for dynamic load balancing, which systems supporting massive web traffic have to contend with on a continuing basis. Mainframes make particularly bad systems for problems that, by definition, aren't centrali(z/s)ed in any way, shape, or form.
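To make the "batch overnight" idea concrete, here's a minimal sketch of how a posting cutoff and end-of-day reconciliation batch might work. Everything here is illustrative: the 3 PM cutoff, the account names, and the cents-based amounts are all made-up assumptions, and real banking systems handle holidays, multi-leg settlement, and far more besides.

```python
from collections import defaultdict
from datetime import datetime, time, timedelta

CUTOFF = time(15, 0)  # hypothetical 3 PM same-day posting cutoff

def posting_date(ts: datetime):
    """Transactions after the cutoff post on the next business day."""
    d = ts.date()
    if ts.time() >= CUTOFF:
        d += timedelta(days=1)
    # roll weekends forward to Monday (holidays omitted for brevity)
    while d.weekday() >= 5:
        d += timedelta(days=1)
    return d

def reconcile(transactions):
    """Group transactions into overnight batches by posting date,
    netting the amounts per account within each batch."""
    batches = defaultdict(lambda: defaultdict(int))
    for account, amount_cents, ts in transactions:
        batches[posting_date(ts)][account] += amount_cents
    return batches

txns = [
    ("alice", 10_000, datetime(2024, 6, 3, 11, 30)),  # Monday, before cutoff
    ("alice", -2_500, datetime(2024, 6, 3, 16, 45)),  # after cutoff -> Tuesday
]
batches = reconcile(txns)
```

Note the point of the sketch: the 4:45 PM withdrawal lands in a different batch than the morning deposit, which is exactly why a same-day deposit may not save you from an overdraft fee.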
Squarespace.com certainly doesn't use mainframes to guarantee that their customers' sites can't be "slashdotted" when unexpected demand suddenly appears. Say, for instance, when certain mainframe-based systems supporting old-hat electronics suppliers suddenly couldn't cope with a couple of hundred thousand potential new customers clamoring for a certain single-board computer, named for a fruit-flavored pastry, at 6 AM GMT on February 29, 2012, and were still reeling for days afterward just trying to support their existing customers. E*trade and TDAmeritrade.com aren't relying on mainframes to handle billions of financial transactions for their customers' trading accounts, and neither are any of the other trading companies that weren't established as an extension of an existing bank.
Mainframes only exist because of legacy corporate lethargy and fear/uncertainty/doubt (FUD) marketing, and nothing else, as in "Nobody ever got fired for buying IBM." If it weren't for Java making it feasible to transition legacy code within them, mainframes would have died off by Y2K as customers ran, not walked, toward the exits once it became obvious that a true paradigm shift needed to occur. Well, IBM just announced it's laying off 25% of its hardware workforce, so that suggests there isn't as much of a future in Big Iron as some might think. Microsoft is now riding the same kind of gradual downward slope as its legacy base slowly dribbles away, because they have repeatedly failed to adapt in a timely manner. Adapting means accurately predicting where demand will be and delivering new technologies that actually solve problems for customers, not ones that are merely "compatible with the way we've always done it", which should really read, "compatible with the way we're going to force you to do it, because it enhances our bottom line when we don't have to change anything, and who cares what you think".
Will there still be mainframes operating in the future? Sure: we're running an IBM 1401 as a public exhibit at the museum (I helped hand-desolder and replace thousands of discrete transistors that had gone bad), where you can keypunch your name into a punch card and get it sorted after the machine punches the URL for the museum (http://www.ComputerHistory.org) into the card. It's just down the hall from the Babbage Difference Engine and mere yards from the other mainframes of bygone eras (we have really cool abacuses, slide rules, and adding machines, too, that no one thought could ever be replaced). Old stuff (including me) is fun to play with, but I believe the operative phrase is, "This, too, shall pass."