We're approaching the edge of yet another technological canyon, of a type that's been encountered before in computing history. The earliest computers were hard-wired by hand to perform a single specific function. The bombes used at Bletchley Park, for example, assisted in analyzing, discovering and exploiting vulnerabilities in, and bulk-decrypting military and diplomatic messages that had been encrypted using a variety of techniques and technologies. These ranged from truly random one-time pads (which can still be impossible to break using the most capable high-performance computing systems on the planet) to machine ciphers: the moving-rotor German Enigma, countered by the special-purpose bombes, and the Lorenz teleprinter cipher, countered by the Colossus, both using machine-assisted but still somewhat manual processing. Special-purpose computers were also built to calculate ballistics tables for military artillery and naval gunfire systems, among other domains.
After WWII ended, computing expanded into more general scientific and business domains. Hand-built circuits cost millions of dollars in today's money for far less power than what's in any consumer device today, and the large teams of maintenance and operations technicians needed to keep them running added to their overall cost. The name of the game was to cram as much processing power into a system as possible and run data through it every moment of the day and night, year-round, in non-interactive batch mode. Special-purpose systems were still developed and operated, but they were far outnumbered by general-purpose systems because that's how the relative needs were distributed. By the 1960s, when the number of general-purpose computer manufacturers in the U.S. alone had dropped from over 250 to fewer than 10, most development effort was focused on doing as much as possible on ever-more-capable centralized systems.
Minicomputers were developed to reduce the total cost of particular systems, even though their performance/cost ratios were well below those of the larger mainframes. The key was that, instead of leasing only part of a mainframe (along with the required staff hours), an organization could outright own, program, and operate a system to do whatever it wanted, on its own budget and schedule. This was particularly attractive to scientific and engineering groups that already had technically sophisticated people on their payrolls, capable of maintaining, programming, and operating a computing system.
When microcomputers first appeared, their performance/cost ratios were even worse than those of minicomputers (which were improving), but again, the customer, now an individual, could afford the total cost of a system. Their demands weren't that great anyway; lightweight office productivity, games, and hobby applications were the major usage domains.
Inevitably, as semiconductor and storage technology advanced in density and speed, the lines between the various computing strata began to blur, and capabilities increased while costs decreased exponentially across the board. Eventually, low-end mainframes, minicomputers, and high-end microprocessor-based servers morphed to occupy the same market, and high-end mainframes migrated up toward the high-performance computing domains (aka supercomputers). Desktop personal computers continued to improve in performance/cost to the point where they have largely been eclipsed by laptop systems for routine uses, and mobile devices are beginning to move up into low-end laptop and even desktop markets.
The problem for manufacturers of higher-end microprocessors, such as Intel, is that they've developed products that exceed the performance needs of most individuals and even many smaller businesses. Many of the techniques developed for high-end computing products can also be applied to products in all other markets, including ARM-based designs (e.g., steadily improving circuit density puts GBs of RAM and flash memory even in the lowest-end mobile devices).
One of the differences between the marketing of desktop/laptop systems and mobile products is that the mobile variety tends to be paid for as part of a services package, e.g., phone and data. A typical smartphone doesn't really cost $199; its real price is around $2,600 once the average $100-per-month, 24-month service contract is properly included, and many people continue to pay beyond the 24-month period, which increases the price and the profits that much more. Of course, the cost of providing the services is mixed in, but since unlocked mobile devices are typically priced at $800 or more, we can assume that figure is somewhere above their actual cost.
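The arithmetic behind that "real price" can be made explicit. This is just a back-of-the-envelope sketch using the illustrative figures from the paragraph above ($199 up-front, $100/month, 24-month term), not actual market data:

```python
# Rough total-cost-of-ownership calculation for a subsidized smartphone.
# All dollar figures are the illustrative values from the text above,
# not measured market data.
upfront = 199        # advertised "price" of the handset
monthly = 100        # average monthly service charge (assumed)
term_months = 24     # length of the service contract

total = upfront + monthly * term_months
print(total)  # 2599 -- roughly the "$2,600 real price" cited above
```

Every month of service paid beyond the contract term simply adds another `monthly` to the total, which is why the true lifetime cost keeps climbing past $2,600.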
So the price differential between commercial product categories isn't as large as it might seem. The Pi is not representative of typical commercial systems, because much of the cost that would have to be recovered in a commercial product has been eliminated: near-cost component prices (for parts that are also a generation behind the commercial state of the art), volunteer engineering and marketing labor, and minimal packaging costs, with assistance from the very medium you're using to read this - the Internet. Relying on open-source software, and on further volunteer effort to adapt what can't simply be recompiled, saves a huge amount of money as well.
You generally get what you pay for - the trick is to only pay as little as possible for what you get.
The best things in life aren't things ... but, a Pi comes pretty darned close!
"Education is not the filling of a pail, but the lighting of a fire." -- W.B. Yeats
In theory, theory & practice are the same - in practice, they aren't!!!