I, your humble programmer, who shares with you the distress, the bugs and the debugging we all endure with computers, found myself sitting in front of a system called the Pi, because I had coded a big-number multiplication and produced the one-million-digit Fibonacci number. I was caught up in the Karatsuba shuffle when I heard behind me a voice as loud as an active cooling solution, which said, "Write a new program and run it on seven Raspberry Pi computers: the B, the B+, the 2B, the 3B, the Zero, the 3B+ and the 4B."
After weeks of traveling I was too tired to visit the Pi store and buy a 4B, so instead I returned to the liberal frontier just in time for the explosion. While I pondered the new Windows text-editor challenge and new windows in general, I looked to the sky and noted that the phase of the moon had indeed changed. Upon consulting the seven titles, I saw that "Project Digital Apocalypse Not Now" was next and everything suddenly made sense.
When user-programmable personal computers went out of fashion at the end of the golden age, they were replaced by office computers that ran shrink-wrapped software, then by tablets, smartphones and Chromebooks, which ran only digitally-signed software purchased directly from a vendor-specific online marketplace.
In the same way Luke Skywalker was the last hope for the Galaxy, so too was the Raspberry Pi. The second age of personal computing, for which this thread was originally named, was conceived in 2006 with a prototype based on the ATmega644 microcontroller and became reality in 2012 with the release of the original Pi B based on the BCM2835. As in the science-fiction franchise, the future of the Rebel Alliance appeared doomed until the appearance of the Pi 4B led to a sudden victory in the Clone Wars.
But was this victory similarly hollow? Could the involved development cycle have allowed deep-learning neural networks to influence things behind the scenes as happened with the Dark Side and the Old Republic? Did any of the new hardware have hidden support for the half-precision 16-bit floating-point formats which powered those dangerous artificial intelligences? If so, the spinning bits of the digital apocalypse may already be upon us without anyone having noticed.
I carefully checked the Cortex-A72. That troublesome half-precision floating-point format was not there. However, in the midst of those processor cores I saw one like a GPU, wearing an ankle-length wafer with a silicon sash around his interconnect. His transistors were white as hydrogen corrosion, his microcode was like the sound of flushing water and in his QPUs was held a multitude of half-precision floating-point units.
After the second age of personal computing, not only were computers no longer user programmable, but neither were they human programmable. A sure sign of the digital apocalypse is when all software is created by emotional but powerful artificial intelligences. In a different way than the Jedi, who mistook the clones for the real enemy, the hobbyist and maker communities overlooked the dangers of widely available half-precision hardware suitable for constructing neural networks.
Before moving on, if that is even possible, I wanted to post the scripts used for generating the graphs which depict the results of the Fibonacci roundups that appear in this thread.
Recall that fibench works by making rudimentary remote-procedure calls to a separately running Fibonacci calculator program. This provides a uniform way to measure the speed at which different codes run without contaminating the results with load/link time, program startup or JIT-compiler warmup. In this way three output files, with names such as classic-1.out, classic-2.out and classic-3.out, were created for each Fibonacci code. After doing this for five different Fibonacci codes, the resulting fifteen files were preprocessed by a simple shell script and then graphed using gnuplot.
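Since those output files are what the scripts below consume, it may help to see how one could be produced. The following is only a toy sketch of a single run, not the actual fibench RPC protocol: the fib shell function, the choice of n values and the toy-1.out name are invented for illustration, and each emitted line holds n followed by the elapsed seconds.

```shell
#!/bin/sh
# Toy stand-in for one benchmark run (NOT the actual fibench
# RPC protocol): time a naive shell Fibonacci for a few values
# of n and write "n seconds" lines resembling the contents of
# a file such as classic-1.out.
fib() {
    a=0 b=1 k=0
    while [ $k -lt $1 ]
    do
        t=$((a + b)); a=$b; b=$t; k=$((k + 1))
    done
    echo $a
}
for n in 10 20 30
do
    # whole-second resolution is far too coarse for real
    # measurements, but keeps this sketch portable
    start=`date +%s`
    fib $n >/dev/null
    end=`date +%s`
    echo $n `expr $end - $start`
done >toy-1.out
```

Real timings would of course need sub-second resolution and a far more expensive calculation than this loop.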
One of the representative shell scripts looked like
Code:
data='"classic-?.out" "visual-?.out" "integervol-?.out" "serial-?.out" "gmp-?.out"'
color='0x00CF0000 0x009F009F 0x00008F00 0x000000FF 0x00505050'
n=0
for i in $data
do
    # select the color corresponding to this Fibonacci code
    n=`expr $n + 1`
    rgb=`echo $color | cut -d " " -f $n`
    # expand the quoted glob into the three per-run output files
    f=`eval echo $i`
    for j in $f
    do
        # append the color as a third column to every data line
        cat $j | while read s
        do
            echo $s $rgb
        done
    done
done | shuf >zerotime.dat
Note that the above script creates the file zerotime.dat that contains a concatenation of the fifteen output files with a third column added to indicate color. As the data points will be graphed as a scatter plot, the last thing the script does is shuffle the order of the lines.
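To see that preprocessing in isolation, here is a self-contained toy version of the same tagging and shuffling steps; the toy-*.out file names, the two colors and the data values are invented for the demonstration and are not actual measurements.

```shell
#!/bin/sh
# Demonstrate the color-tagging and shuffling steps on two
# small files of made-up data instead of the fifteen .out files.
printf '1 0.001\n2 0.002\n' >toy-1.out
printf '4 0.004\n8 0.008\n' >toy-2.out
n=0
for f in toy-1.out toy-2.out
do
    # pick the color for this file
    n=`expr $n + 1`
    rgb=`echo 0x00CF0000 0x000000FF | cut -d " " -f $n`
    # append the color as a third column to every data line
    while read s
    do
        echo $s $rgb
    done <$f
done | shuf >toy-zerotime.dat
# toy-zerotime.dat now holds four three-column lines in random order
cat toy-zerotime.dat
```

The shuffle matters because points are drawn in file order: without it, the last code plotted would paint over the others wherever their points overlap.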
The gnuplot script looks like
Code:
set terminal pngcairo enhanced color font "Courier-Bold" ps 2 size 1.8*5in,1.8*3.5in background rgb "0xFFFFFFFF"
set output 'zerotime.png'
set key left
set title "Measured Time Complexity of Fibonacci Codes"
set ylabel "seconds"
set xlabel "n"
set logscale x
set logscale y
set style data points
plot [1:] [0.0001:1000] \
"empty.dat" pt 7 ps 1 lc rgb "0x00CF0000" title "classic",\
"" pt 7 ps 1 lc rgb "0x009F009F" title "visual",\
"" pt 7 ps 1 lc rgb "0x00008F00" title "integervol",\
"" pt 7 ps 1 lc rgb "0x000000FF" title "serial",\
"" pt 7 ps 1 lc rgb "0x00505050" title "gmp",\
"zerotime.dat" using 1:2:3 pt 7 ps 0.2 lc rgb variable ti "",\
"zerotime.dat" using 1:2:($3+0xF0000000) pt 7 ps 0.2 lc rgb variable ti ""
Note that the data for the scatter plot is graphed twice, the second time with transparency for a slightly more artistic effect. Since gnuplot treats the highest byte of an ARGB color as the alpha channel, adding 0xF0000000 to the color in the third column renders the second set of points mostly transparent. This works with version 5.2 of gnuplot; older versions may not have the same pngcairo output driver or alpha-channel support.