User avatar
diereinegier
Posts: 166
Joined: Sun Dec 30, 2012 5:45 pm
Location: Bonn, Germany
Contact: Website

Re: Cluster (Bramble...) Design Discussion (Advanced)

Tue Jan 01, 2013 9:47 pm

Wendo wrote:I don't want to ruin the fun here but there are some factual errors here that I think need correcting, if only so someone with no knowledge doesn't take them as fact.

...

Now, in saying that don't feel discouraged. Even though there is unlikely to be any useful purpose if you do get a cluster running, it'll still be damn fun to try :)
I very much agree with all of that, and despite it I will still set up my little cluster - or cluster module, that is - of 4 Pis and a 5-port switch. So no spoiled fun here!
Download my repositories at https://github.com/GeorgBisseling

User avatar
yv1hx
Posts: 372
Joined: Sat Jul 21, 2012 10:09 pm
Location: Zulia, Venezuela
Contact: Website

Re: Cluster (Bramble...) Design Discussion (Advanced)

Wed Jan 02, 2013 1:35 am

jardino wrote:Node 3 (a.k.a. Slave02) of my Beowulf was installed with only one slight difficulty - I'd cloned its SD card from my Master image, rather than from my Slave01 image, so it declined to accept MPI requests. Repeating the last item in Step 33 of the "Southampton" instructions quickly fixed this.

Perhaps Node 4 (when it arrives from Farnell) will install immediately in a "plug 'n' play" fashion?

The message here - "learning about learning" - is that only 2 Beowulf nodes are needed to get you through the heavy initial learning curve. After that, things should just get smoother.

Three RPis dangling off the back of my BT Home Hub - and using up all of its ports - eventually persuaded me to give my Beowulf a decent home, so I bought a cheap plastic storage box from Tesco and put my 3 RPis (in their ModMyPi cases) into it, together with my 8-port switch and a power block. See photo, which also shows an empty case waiting for its RPi.

It's interesting, I think, that there are only two cables emanating from my Beowulf box - one for power and the other for delivering computational results to the world!

On my journey thus far, I've uncovered a fair amount of on-line resources about parallel processing, MPI and demonstration programs. I'm happy to share this research here, if anyone is interested ...

Alan.
Alan,

Like you, I'm also trying to get beyond the point where Professor Cox left off, but I've been unable to successfully compile a single example out of the tons I've downloaded. Could you kindly post some of your code examples?

Thanks in advance,
Marco-Luis
Telecom Specialist (Now Available for Hire!)

http://www.meteoven.org
http://yv1hx.ddns.net
http://twitter.com/yv1hx

jardino
Posts: 129
Joined: Wed Aug 08, 2012 9:03 am
Location: Aberdeenshire, Scotland

Re: Cluster (Bramble...) Design Discussion (Advanced)

Wed Jan 02, 2013 11:09 am

Hello Marco-Luis:

I'll be delighted to post my examples.

However, there will be a short delay.

First, my Beowulf is currently down. Not for any technical reason, but because I keep cannibalising it to build other interesting RPi projects. Time to buy more Pi, I think!

Secondly, my keyboard and mouse keep getting commandeered by my visiting 15-month-old granddaughter - she's getting into computing at a very early age :D .

Good to see that this forum is becoming active again!

Alan.
IT Background: Honeywell H2000 ... CA Naked Mini ... Sinclair QL ... WinTel ... Linux ... Raspberry Pi.

User avatar
yv1hx
Posts: 372
Joined: Sat Jul 21, 2012 10:09 pm
Location: Zulia, Venezuela
Contact: Website

Re: Cluster (Bramble...) Design Discussion (Advanced)

Wed Jan 02, 2013 6:06 pm

jardino wrote:Hello Marco-Luis:

I'll be delighted to post my examples.

However, there will be a short delay.
Hello Alan, Happy New Year,
jardino wrote:First, my Beowulf is currently down. Not for any technical reason, but because I keep cannibalising it to build other interesting RPi projects. Time to buy more Pi, I think!

Secondly, my keyboard and mouse keep getting commandeered by my visiting 15-month-old granddaughter - she's getting into computing at a very early age :D .
I hope you can recover your assets soon :D
jardino wrote:Good to see that this forum is becoming active again!

Alan.
I have been following your posts for a long time, but the last few months have been very complicated in this corner of the world (well, I have to accept the fact that I was born in, and live in, a very complicated country :cry: )

Also, I spent some time trying to install the Weather Forecasting Model (http://www.mmm.ucar.edu/wrf/OnLineTutorial/index.htm) - with unsuccessful results, BTW.

Best regards,
Marco-Luis
Telecom Specialist (Now Available for Hire!)

http://www.meteoven.org
http://yv1hx.ddns.net
http://twitter.com/yv1hx

jardino
Posts: 129
Joined: Wed Aug 08, 2012 9:03 am
Location: Aberdeenshire, Scotland

Re: Cluster (Bramble...) Design Discussion (Advanced)

Fri Jan 04, 2013 5:33 pm

OK - grand-daughter gone home :( and Beowulf restored to two working nodes :) , so here goes.

The following assumes that you have followed the U. of Southampton instructions - including installing the Fortran compiler - and got as far as running the cpi program on two nodes.

I'm focussing on Fortran in this post because I've been working more with that language than C. I'll do a separate post for C later.

1) Go to the folder where you downloaded and extracted the mpich2 sources from Argonne.

2) Go to the extracted folder. In my case, this is mpich2-1.5, which is a later version than Southampton's.

3) Go to the "examples" folder within this. Here you will find the source program for various versions of cpi, in C and Fortran, and many other source code examples. (The Fortran sources are in sub-directories f77 and f90.)

Here is the code for pi3f90.f90, in case you can't find it:

Code: Select all

!*****************************************************************
!  pi3f90.f90 - compute pi by integrating f(x) = 4/(1 + x**2)
!
!  (C) 2001 by Argonne National Laboratory.
!      See COPYRIGHT in top-level directory.
!
!   Each node:
!    1) receives the number of rectangles used in the approximation.
!    2) calculates the areas of its rectangles.
!    3) Synchronizes for a global summation.
!   Node 0 prints the result.
!
!  Variables:
!
!    pi  the calculated result
!    n   number of points of integration.
!    x           midpoint of each rectangle's interval
!    f           function to integrate
!    sum,pi      area of rectangles
!    tmp         temporary scratch space for global summation
!    i           do loop index
!****************************************************************************
program main

 use mpi

 double precision  PI25DT
 parameter        (PI25DT = 3.141592653589793238462643d0)

 double precision  mypi, pi, h, sum, x, f, a
 integer n, myid, numprocs, i, rc
!                                 function to integrate
 f(a) = 4.d0 / (1.d0 + a*a)

 call MPI_INIT( ierr )
 call MPI_COMM_RANK( MPI_COMM_WORLD, myid, ierr )
 call MPI_COMM_SIZE( MPI_COMM_WORLD, numprocs, ierr )
 print *, 'Process ', myid, ' of ', numprocs, ' is alive'

 sizetype   = 1
 sumtype    = 2

 do
    if ( myid .eq. 0 ) then
       write(6,98)
 98    format('Enter the number of intervals: (0 quits)')
       read(5,99) n
 99    format(i10)
    endif

    call MPI_BCAST(n,1,MPI_INTEGER,0,MPI_COMM_WORLD,ierr)

!                                 check for quit signal
    if ( n .le. 0 ) exit

!                                 calculate the interval size
    h = 1.0d0/n

    sum  = 0.0d0
    do i = myid+1, n, numprocs
       x = h * (dble(i) - 0.5d0)
       sum = sum + f(x)
    enddo
    mypi = h * sum

!                                 collect all the partial sums
    call MPI_REDUCE(mypi,pi,1,MPI_DOUBLE_PRECISION,MPI_SUM,0, &
                    MPI_COMM_WORLD,ierr)

!                                 node 0 prints the answer.
    if (myid .eq. 0) then
        write(6, 97) pi, abs(pi - PI25DT)
 97     format('  pi is approximately: ', F18.16, &
               '  Error is: ', F18.16)
    endif

 enddo

 call MPI_FINALIZE(rc)
 stop
end
4) Copy the source code for pi3f90.f90 to a folder of your choice. You can compile from the original folder, of course, but I prefer to make a copy to tinker with, while preserving the original.

5) Don't waste time trying to compile with the standard Fortran compiler on its own - it won't find the MPI module, so it won't work; use the mpif90 wrapper instead. (This misunderstanding cost me a day or two!)

6) cd to the mpi_testing folder that was set up during the Southampton installation process.

7) Do: mpif90 -o pi3f90 ~/(your-directory)/pi3f90.f90

8) Returning to your prompt with no error messages indicates that the compiler found the MPI libraries and that the program compiled successfully.

9) Edit machinefile to contain the IP address of your master node.

10) Do: mpiexec -f machinefile -n 1 ~/mpi_testing/pi3f90

11) Enter the required parameter and it runs!

12) To run on multiple nodes, you need to copy the compiled binary to your various RPis. Assuming that each has the same folder structure and SSH is running, do:

scp pi3f90 (node's IP address):/home/pi/mpi_testing

(You could also copy over the source code, but I prefer to keep a single copy on the master node -
with local backup - to manage version control of the code.)

13) Edit machinefile appropriately and set n in the mpiexec command line to calculate pi with parallel processes!
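
To make steps 12 and 13 concrete, here is a rough sketch of a two-node run - the addresses below are only placeholders for your own master and slave nodes:

Code: Select all

# machinefile (placeholder addresses - master node first):
#   192.168.1.10
#   192.168.1.11

# run two processes, one per node listed in machinefile
mpiexec -f machinefile -n 2 ~/mpi_testing/pi3f90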

Let me know if the foregoing procedure works and I'll replicate it for the C version of the program - and also explain what I've learned about MPICH2 procedure calls!

Regards,
Alan.
IT Background: Honeywell H2000 ... CA Naked Mini ... Sinclair QL ... WinTel ... Linux ... Raspberry Pi.

User avatar
diereinegier
Posts: 166
Joined: Sun Dec 30, 2012 5:45 pm
Location: Bonn, Germany
Contact: Website

Re: Cluster (Bramble...) Design Discussion (Advanced)

Sat Jan 12, 2013 9:53 am

I want to suggest an alternative to copying MPI binaries around, since that is very tedious and error-prone.

I let all my nodes export their root fs via NFS, with the nohide option to re-export all mounts.

The nodes are called crumb0 to crumb3. I set up the NFS mounts so that the directories
/net/crumb0
/net/crumb1
/net/crumb2
/net/crumb3
everywhere. The tricky bit is to get the root directory of crumbN to reappear as /net/crumbN on the node itself without actually using NFS. The man page for mount explains how to achieve this with the "bind" mount option.

If you now work not in your normal home directory, say /home/itsme, but in the directory /net/crumb0/home/itsme, then your MPI programs can be started right from there, since that directory appears under the same name on every node.

This is not really a substitute for a dedicated shared filesystem and it may not scale to a bigger cluster, but it works!
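
For anyone who wants to try this, here is a minimal sketch of the relevant configuration - the subnet, hostnames and options are only assumptions, so check exports(5), fstab(5) and nfs(5) for your own setup:

Code: Select all

# /etc/exports on every node: export the root filesystem to the cluster subnet
# (add nohide or crossmnt to the options if nested mounts must be re-exported)
/           192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)

# /etc/fstab on crumb0: make / reappear as /net/crumb0 locally via a bind mount (no NFS involved)
/           /net/crumb0   none   bind       0  0

# /etc/fstab on crumb0: mount the other nodes' roots over NFS
crumb1:/    /net/crumb1   nfs    defaults   0  0
crumb2:/    /net/crumb2   nfs    defaults   0  0
crumb3:/    /net/crumb3   nfs    defaults   0  0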
Download my repositories at https://github.com/GeorgBisseling

User avatar
yv1hx
Posts: 372
Joined: Sat Jul 21, 2012 10:09 pm
Location: Zulia, Venezuela
Contact: Website

Re: Cluster (Bramble...) Design Discussion (Advanced)

Tue Jan 15, 2013 1:09 am

jardino wrote:OK - grand-daughter gone home :( and Beowulf restored to two working nodes :) , so here goes.

The following assumes that you have followed the U. of Southampton instructions - including installing the Fortran compiler - and got as far as running the cpi program on two nodes.

I'm focussing on Fortran in this post because I've been working more with that language than C. I'll do a separate post for C later.

1) Go to the folder where you downloaded and extracted the mpich2 sources from Argonne.

2) Go to the extracted folder. In my case, this is mpich2-1.5, which is a later version than Southampton's.
In my case it was mpich2-1.4.1p1
jardino wrote:[steps 3 to 13 and the pi3f90.f90 listing snipped - see Alan's post above]
Alan, an image is worth a thousand words - see below:
screen2.png
Attachments
screen1.png
Marco-Luis
Telecom Specialist (Now Available for Hire!)

http://www.meteoven.org
http://yv1hx.ddns.net
http://twitter.com/yv1hx

jardino
Posts: 129
Joined: Wed Aug 08, 2012 9:03 am
Location: Aberdeenshire, Scotland

Re: Cluster (Bramble...) Design Discussion (Advanced)

Wed Jan 16, 2013 2:25 pm

Very good, Marco-Luis.

I assume this was the Fortran program?

Let me know if you need help getting the C version to compile. However, I'm not very strong in C.

Alan.
IT Background: Honeywell H2000 ... CA Naked Mini ... Sinclair QL ... WinTel ... Linux ... Raspberry Pi.

User avatar
yv1hx
Posts: 372
Joined: Sat Jul 21, 2012 10:09 pm
Location: Zulia, Venezuela
Contact: Website

Re: Cluster (Bramble...) Design Discussion (Advanced)

Thu Jan 17, 2013 9:58 pm

jardino wrote:Very good, Marco-Luis.

I assume this was the Fortran program?

Let me know if you need help getting the C version to compile. However, I'm not very strong in C.

Alan.
Yes Alan, it was the Fortran program.

BTW, I'm not exactly a skilled C or Fortran programmer ;)

I don't know why the images are so small after I posted them :cry:
Marco-Luis
Telecom Specialist (Now Available for Hire!)

http://www.meteoven.org
http://yv1hx.ddns.net
http://twitter.com/yv1hx

User avatar
diereinegier
Posts: 166
Joined: Sun Dec 30, 2012 5:45 pm
Location: Bonn, Germany
Contact: Website

Re: Cluster (Bramble...) Design Discussion (Advanced)

Thu Jan 17, 2013 10:28 pm

yv1hx wrote:
I don't know why the images are so small after I posted them :cry:
That did confuse me too!

Right click on the image to open it in original size!
Download my repositories at https://github.com/GeorgBisseling

User avatar
yv1hx
Posts: 372
Joined: Sat Jul 21, 2012 10:09 pm
Location: Zulia, Venezuela
Contact: Website

Re: Cluster (Bramble...) Design Discussion (Advanced)

Thu Jan 17, 2013 10:40 pm

diereinegier wrote:
yv1hx wrote:
I don't know why the images are so small after I posted them :cry:
That did confuse me too!

Right click on the image to open it in original size!
LOL! good trick diereinegier!
Marco-Luis
Telecom Specialist (Now Available for Hire!)

http://www.meteoven.org
http://yv1hx.ddns.net
http://twitter.com/yv1hx

jardino
Posts: 129
Joined: Wed Aug 08, 2012 9:03 am
Location: Aberdeenshire, Scotland

Re: Cluster (Bramble...) Design Discussion (Advanced)

Sat Jan 26, 2013 11:32 am

Congratulations on getting your cluster running! I'm back up to 4 nodes again.

I am interested in going through your MPI tutorial but have failed at the first hurdle - compiling the apple_serial.c program. I get compiler messages like "undefined references to initRanges / XPIX / YPIX" and so on (11 in all).

I am trying to compile with mpicc and have invar.h in the same folder as the source program.

I am obviously doing something silly, but what?

Regards,
Alan.
IT Background: Honeywell H2000 ... CA Naked Mini ... Sinclair QL ... WinTel ... Linux ... Raspberry Pi.

User avatar
diereinegier
Posts: 166
Joined: Sun Dec 30, 2012 5:45 pm
Location: Bonn, Germany
Contact: Website

Re: Cluster (Bramble...) Design Discussion (Advanced)

Sat Jan 26, 2013 2:17 pm

Assuming you have a standard MPI installation, just change the Makefile as follows:

Code: Select all

# Set MPIHOME in your environment or here
MPIHOME=/usr

CC              = gcc
MPICC           = $(MPIHOME)/bin/mpicc
MPICCFLAGS      = -O3 -Wall -std=gnu99
And then type "make" to see how it builds!

You could however use this single line to build apple_serial:

Code: Select all

mpicc -O3 apple_serial.c invar.c -o apple_serial
The reason is that invar.c contains the parts of the code that are INVARiant across all the incremental steps. You either compile it together with the main program, as above, or compile it into an object file invar.o and link that with your program. The latter approach is used in the Makefile to save time.
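
For instance, the object-file route from the Makefile boils down to something like this (same flags as above - just a sketch):

Code: Select all

# compile the invariant code once into an object file
mpicc -O3 -Wall -std=gnu99 -c invar.c -o invar.o

# link it with the main program (only this step needs repeating as apple_serial.c changes)
mpicc -O3 -Wall -std=gnu99 apple_serial.c invar.o -o apple_serial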
Download my repositories at https://github.com/GeorgBisseling

jardino
Posts: 129
Joined: Wed Aug 08, 2012 9:03 am
Location: Aberdeenshire, Scotland

Re: Cluster (Bramble...) Design Discussion (Advanced)

Sun Jan 27, 2013 11:38 am

You could however use this single line to build apple_serial:

Code: Select all

mpicc -O3 apple_serial.c invar.c -o apple_serial
Thanks for that - it worked!
Here are my results using the "time" command:

real 0m46.164s
user 0m45.950s
sys 0m0.070s.

Onwards and upwards!

(Just curious: why do you refer to "mandelbrot" as "apple" in Germany?)

Thanks and regards,
Alan.
IT Background: Honeywell H2000 ... CA Naked Mini ... Sinclair QL ... WinTel ... Linux ... Raspberry Pi.

User avatar
diereinegier
Posts: 166
Joined: Sun Dec 30, 2012 5:45 pm
Location: Bonn, Germany
Contact: Website

Re: Cluster (Bramble...) Design Discussion (Advanced)

Sun Jan 27, 2013 11:50 am

jardino wrote: (Just curious: why do you refer to "mandelbrot" as "apple" in Germany?)
Good to hear that someone not only downloads the stuff but actually tries to work it out!

Well, if you look at the central structure of the Mandelbrot set with the positive X (real) axis pointing down, the figure looks a little like two apples stacked on top of each other, very much like a snowman made from snowballs. So the name Apfelmännchen, or Apple-Man, came up and stuck.
Download my repositories at https://github.com/GeorgBisseling

thinktankted
Posts: 1
Joined: Tue Sep 17, 2013 3:40 pm

Re: Cluster (Bramble...) Design Discussion (Advanced)

Tue Sep 17, 2013 3:45 pm

What about avoiding network cable clutter entirely and going with stubby Wireless N 150 Mbps adapters? I know the throughput wouldn't be 150 Mbps, but could you eliminate the router by using an ad-hoc point-to-point network? Or maybe dedicate one of the Pis as a router?

User avatar
diereinegier
Posts: 166
Joined: Sun Dec 30, 2012 5:45 pm
Location: Bonn, Germany
Contact: Website

Re: Cluster (Bramble...) Design Discussion (Advanced)

Tue Sep 17, 2013 8:47 pm

Ultimately the wired Ethernet passes through USB 2.0, so that bottleneck is the same for both options.
And 4 (or more) WLAN devices would have to time-share the "air".
Additionally, you suggest a router, which would serialize all traffic and add extra overhead compared to mere switching.

I do not see any advantages in your suggestion. Maybe I am missing something?
Download my repositories at https://github.com/GeorgBisseling

Marianne
Posts: 4
Joined: Sun Oct 13, 2013 12:34 am

Re: Cluster (Bramble...) Design Discussion (Advanced)

Sun Oct 13, 2013 1:57 am

willanth wrote:Hi All, just been reading this discussion and had some suggestions..

I'm new to HPC and to linux administration in general.  I'm a hardware guy mostly.

I wanted to address the question of a power supply for a "bramble" cluster of b-pi.. my suggestion would be to actually do the work, and lay out a power supply PCB.
...
Cheers,

Will
Hi All,
I am just starting on a Bramble cluster design. Thanks for all the excellent posts and information.

I am going for something around the Beowulf cluster size that can be separated for smaller projects if required.

Regarding the problem of power supply, I have looked at the options of several USB hubs, or a DIY power supply. Most USB hubs with many ports seem to be underpowered, and a DIY power supply has some risks if not done properly. I would prefer an off-the-shelf solution and have found the USBgear Industrial 16-port rack-mounted option; a bit pricey, but it should do the job. I also found a power board company (Kopi) that has developed an 8-port USB power strip that can supply 96W (currently on the Indiegogo crowdfunding site), more than enough for 8 RasPis. Alternatively, I could go for the new PiHUB for smaller clusters of 4 RasPis.

Does anyone know of other power supply alternatives that are available off the shelf?
Regards,
Marianne.

Marianne
Posts: 4
Joined: Sun Oct 13, 2013 12:34 am

Re: Cluster (Bramble...) Design Discussion (Advanced)

Tue Oct 15, 2013 1:36 am

Some information from the distributor of the USBgear Industrial 16-port rack-mounted option: although it can provide 1.5A per port, a device needs to request more than the default 100mA.
"USB 2.0 device will be served 500mA if it correctly asks for it, and a USB 3.0 device will get 900mA, again if it correctly asks for it"

My understanding (please correct me if I'm wrong) is that the Raspberry Pi's micro-USB power socket does not have the data lines connected, only the power pins. I would assume that it therefore cannot request the amount of power it needs.

I have decided to test the Kopi 96W 8 port USB power strip. Will post results when available.

Does anyone know of other power supply alternatives that are available off the shelf?
Regards,
Marianne.

User avatar
ab1jx
Posts: 868
Joined: Thu Sep 26, 2013 1:54 pm
Location: Heath, MA USA
Contact: Website

Re: Cluster (Bramble...) Design Discussion (Advanced)

Sat Sep 30, 2017 3:15 am

Fafler wrote:
Mon Feb 13, 2012 4:15 pm
What about Bitcoin mining? I don't think it has been discussed before, but a central concept in Bitcoin mining is machines collaborating in mining pools. The network requirements are low, as each client downloads a chunk of data and works on it for some time, the same way as [email protected], compared to applications where clients need to exchange data in near real time.

You are paid by the Bitcoin network itself for mining coins, and it actually seems some people are able to earn back the price of expensive ATI GPUs by using them for mining.

If the Raspberry in any way has the raw CPU power to compete with GPUs, or even better, if it becomes possible to do GPGPU on the Raspberry GPU at some point, it could be really interesting to pay off the cost of a Bramble by mining.

ARM benchmarks are here: https://en.bitcoin.it/wiki/Mining_hardw ... arison#ARM
This is 5 years old, but probably not. If you look at the numbers in terms of Mhash/J, no CPU comes close to the efficiency of an ASIC. At viewtopic.php?f=63&t=186721&p=1178606&h ... r#p1178606 I measured a Pi on all 4 threads doing 1.84 khash/sec at 1.577 watts (probably Litecoin). ASICs do giga- or terahashes, but they also eat a lot of electricity. Scaling to a dozen Pis isn't going to help. Running on solar power is about the only way to make a profit. Speaking of which, I'm looking at adding a BeagleBone Black because it comes with enough A/D converters to monitor battery charging rates.
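
Just to spell out the arithmetic behind that figure (numbers taken straight from the measurement above):

Code: Select all

# 1.84 khash/s at 1.577 W -> hashes per joule
awk 'BEGIN { printf "%.0f hash/J (%.5f Mhash/J)\n", 1840/1.577, 1840/(1.577*1e6) }'
# prints roughly: 1167 hash/J (0.00117 Mhash/J)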

mandolin666
Posts: 4
Joined: Mon Aug 27, 2012 10:11 am

Re: Cluster (Bramble...) Design Discussion (Advanced)

Mon Apr 30, 2018 9:28 pm

I'm very new to this idea of building a small cluster and I'm still waiting for the delivery of my first two Pis, an Ethernet switch and some cabling. Would it be an advantage to install the Lite version of Raspbian on all but the head node? It sounds a bit grand to say "all but the head node" considering I'll only have two nodes to start with, but it's an eight-port switch, so there's room for six more before I need to get a bigger one!
One of the videos I watched suggested using Python, as there is MPI support for it, but surely a compiled language would be faster... or am I totally barking up the wrong tree?
