Don't know what it's called. Cluster?
Someone told me about putting many pis together to form a single PC.
What's that called?
How's it done?
Heater wrote: Yes you can use clustering to make a somewhat more fault tolerant system.

That's why my Con Reg system uses 2 Cubieboards with a replicated database. (And I'm using Cubieboards because they have a SATA port... if a version of the Pi came out with a SATA port, I'd generate Cerenkov radiation on my way to switch over to them.)
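The replicated database mentioned above boils down to making every write land on every node. A minimal sketch of that dual-write idea using SQLite, with in-memory databases standing in for the two boards' drives; the table name and schema are made up for illustration:

```python
import sqlite3

def init(path):
    # One connection per replica; ":memory:" stands in for a file on
    # each board's SATA drive.
    con = sqlite3.connect(path)
    con.execute("CREATE TABLE IF NOT EXISTS badges (id INTEGER PRIMARY KEY, name TEXT)")
    con.commit()
    return con

def replicated_insert(cons, name):
    # Write to every replica. A real system would also handle partial
    # failure, e.g. queue and replay the write for a node that was down.
    for con in cons:
        con.execute("INSERT INTO badges (name) VALUES (?)", (name,))
        con.commit()

cons = [init(":memory:"), init(":memory:")]
replicated_insert(cons, "attendee-0001")
counts = [c.execute("SELECT COUNT(*) FROM badges").fetchone()[0] for c in cons]
print(counts)  # both replicas hold the same row: [1, 1]
```

If one board dies, the other still has every record, which is the fault-tolerance goal of the setup described above.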
solar3000 wrote: I have a feeling it sounds like the answer is "too damn hard" and not a weekend project.

To me it sounds like the answer is that you had better have a useful purpose for the cluster. From what I've read, there are very few practical uses for Pi clusters. In general, a cluster is not going to help you run more demanding applications; it doesn't give you one powerful single PC. But if you have a workload that a single Pi can already handle, and that can be distributed across other Pis, a cluster might be useful to you.
W. H. Heydt wrote: The long term project is to eliminate as many single points of failure as possible.

Sounds like you are into the Byzantine failure problem. The upshot of that is that if you want to tolerate N points of failure you need 3N + 1 nodes in your system, all fully interconnected. So tolerating even a single point of failure requires 4 machines.
Heater wrote:
W. H. Heydt wrote: The long term project is to eliminate as many single points of failure as possible.
Sounds like you are into the Byzantine failure problem. The upshot of that is that if you want to tolerate N points of failure you need 3N + 1 nodes in your system, all fully interconnected. So tolerating even a single point of failure requires 4 machines.

I should have said "where practical". I thought it was implicit that I know there will be single points of failure I can't do anything about, though I may be able to mitigate them somewhat. One prime example: I have no control over the power utility (PG&E), and since it's not practical to bring my own generator or run off batteries for the entire run of the convention, if the power goes out (and that has happened), I have enough UPS capacity to determine whether it's a short outage or a long one, and in the long case to shut down in a controlled manner.
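The "ride out short outages, shut down cleanly on long ones" policy above can be expressed as a tiny decision function. A sketch, with the threshold as a made-up placeholder; a real setup would read the time on battery from the UPS daemon (e.g. apcupsd) and invoke the system's own shutdown command rather than return a string:

```python
# Hypothetical threshold: outages shorter than this are ridden out on UPS.
SHORT_OUTAGE_LIMIT_S = 300

def outage_action(seconds_on_battery):
    """Decide what to do given how long we've been running on UPS power."""
    if seconds_on_battery == 0:
        return "mains ok"
    if seconds_on_battery < SHORT_OUTAGE_LIMIT_S:
        return "ride it out"
    # Past the limit: assume a long outage and halt while the UPS
    # still holds enough charge for a controlled shutdown.
    return "controlled shutdown"

print(outage_action(0), outage_action(60), outage_action(600))
```

The point of the threshold is exactly the one made above: the UPS doesn't need to power the whole convention, only to buy enough time to tell a blip from a real outage and to halt cleanly.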
http://en.wikipedia.org/wiki/Byzantine_fault_tolerance
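The 3N + 1 rule above (in the literature usually written 3f + 1, where f is the number of simultaneous Byzantine faults tolerated) is easy to tabulate:

```python
def nodes_required(f):
    # Classical Byzantine fault tolerance result: tolerating f arbitrary
    # (Byzantine) failures needs at least 3f + 1 nodes, so that a
    # 2f + 1 quorum of honest nodes can always outvote the f faulty ones.
    return 3 * f + 1

for f in range(1, 4):
    print(f"tolerate {f} fault(s): {nodes_required(f)} nodes")
# tolerating even a single Byzantine fault takes 4 machines
```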
Of course, you had better be sure each of your four nodes is on a different planet, in case one planet blows up.
paulie wrote: I have read the examples of failures in reference 8 in the above link ( https://c3.nasa.gov/dashlink/resources/624/ ), and they are fascinating. Might the 'Xenon Death Flash' be a candidate for addition to the list? If so, how would this be done?

Bear in mind that a red laser pointer can also cause a reset of a Pi 2B, the same way a xenon flash tube does.