I got special permission to write this, and James has kindly agreed to use his supar-mod-powarz to post it. Unfortunately, our ex-friend managed to get the thread killed before I could respond to some of his assertions, and it seemed worthwhile to nip them in the bud before they're used to fuel the next "Wah! Wah! Wah! Why won't you implement Windows" thread.
I should note that I am in no way representing the Foundation, merely pointing out some misunderstandings and fallacies as I see them.
[Admin note: I wouldn't normally do this, but the points originally made, and Simon's replies to them, are particularly relevant, and indeed quite interesting and well informed, which is why his post has been added after the thread lock.]
The original NT kernel was quite nice, but at around the NT 4.0 mark, it started going off the rails, with an awful lot of stuff being brought into kernel space "for performance reasons".
I do believe – if I am not mistaken – that the Linux kernel is monolithic, and that means that however much NT has in the kernel, there simply can't be any more than what Linux currently has.
The Linux kernel is, indeed, monolithic. Unfortunately, the rest of your assertion is totally wrong.
In kernel terms, what "monolithic" actually means is that drivers run in kernel space, rather than being separate user-space processes, as they would be in a purely microkernel architecture. In terms of "amount of code required" and "amount of code running", of course, this is irrelevant - the drivers need to run no matter what if you want your hardware to work. The main difference between the two approaches is the tradeoff between performance and stability; while a monolithic kernel runs like a doped whippet, it is also subject to broken drivers bringing down the entire system.
The NT kernel circa NT 3.5 was a fairly pure microkernel, but with NT 4.0 it started to pull not only drivers, but the entire windowing system, into kernel space for performance reasons. This was, in large part, the reason for the infamous "blue screen of death" so frequently encountered by MS users.
First of all, let's assume we have access to the codebase of whatever Windows variant we want to start from.
Irrelevant. Unless we are looking at some very convoluted division of responsibilities here and the total absence of well-defined APIs, there is no reason for access to source code being essential for any party other than MS itself for the incorporation of Windows onto the RPi.
In short, "only MS can port Windows to the Pi".
It may be that the Vista/Win7/Win8 family of kernels are still multi-platform, but I strongly doubt it. You'd probably have to start with something at around the Win2K point.
Again, don't get me wrong here, but I do believe I have phrased my question carefully enough to exclude any speculation that is without technical merit. Unless you have something to substantiate your doubt, then I'll simply have to kindly ask you, again, for some substantial evidence to support your claim.
Okay. I cannot point you to code, as what I have seen was under an NDA. I have not seen the Vista/7/8 codebase. I have, however, seen the XP codebase, and there's good reason it doesn't form the base of Windows CE. Windows XP Embedded does exist (my chairlift runs it), and is allegedly at least partially based on the XP desktop kernel codebase, but it only runs on x86-alike platforms.
If you want to start from the Win8/ARM codebase, you'd have to backport all the ARMv7 stuff it uses to use ARMv6 equivalents where possible
I am not sure if I am missing something here – and pushing aside the fact that the Windows embedded lines have supported ARMv6 from the word "go" - but isn't this supposed to be the kind of stuff that you take care of in the compiler?
No. No it's not. It's the kind of stuff that you take care of in source code. There's not an enormous amount of it, but it needs to be extremely stable.
slim the whole thing down to work within the amount of memory available
As I have pointed out several times, this has been done once on the server side of things, and I am not seeing that the Server Core install option is going to vanish in the next release of Windows Server, which, I must add, has always been based on the same codebase released for the client editions of Windows.
So, like I said, "slim it down". Server Core requires 512MB of ram. You've got 224MB, tops, to play with on the Pi. That's what makes it difficult and a lot of work.
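To put numbers on that headroom problem, here's a back-of-the-envelope sketch. It assumes the 512MB Server Core figure quoted above and a 224MB ARM-side allocation on the Pi (256MB total minus an assumed 32MB GPU memory split); the exact split is configurable, but 224MB is the most you'll plausibly get:

```python
# Memory headroom sketch, using the figures from the post.
# 224 MB assumes a 256 MB Pi with a 32 MB GPU split (assumed figures).
server_core_min_mb = 512  # stated Server Core minimum
pi_arm_ram_mb = 224       # best-case RAM left for the ARM on the Pi

shortfall_mb = server_core_min_mb - pi_arm_ram_mb
ratio = server_core_min_mb / pi_arm_ram_mb
print(f"Short by {shortfall_mb} MB; Server Core's minimum is "
      f"{ratio:.1f}x what the Pi can offer")
```

So even the already-slimmed Server Core wants well over twice the memory the Pi has to give, before you've run a single application.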
and somehow work out a way of getting it to run at anything above a snail's pace (the Pi's ARMv6 core is significantly slower per-cycle than a single ARMv7 core, which Win8 is demanding 2 of, and at a significantly higher clockspeed).
...I expect reasoning well-substantiated with technical details...
Okay, then. Win8/ARM requires, as I understand it, at least a dual-core Cortex-A9 processor running at 1GHz+. That's not "this device has this processor", that's the baseline. A single Cortex-A9 core is approximately twice as fast (2.5 DMIPS/MHz) as the Pi's ARMv6 (1.2 DMIPS/MHz); a dual core running at 1.3 times the clock speed is - well - you do the math.
Sure, you can run stuff under the baseline. But platform optimisation is aimed at making the baseline responsive, which leaves you with a platform that's got a processor executing code at best (okay, okay, I'll do the math) 2 * 2 * 1.3 = around five times slower than the baseline.
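The math above, spelled out with the post's own figures (the DMIPS/MHz values are coarse marketing numbers, and the 1.3 clock ratio is the post's assumption, not a measurement):

```python
# Rough throughput gap between the Pi and the assumed Win8/ARM baseline.
pi_dmips_per_mhz = 1.2   # the Pi's ARMv6 core
a9_dmips_per_mhz = 2.5   # one Cortex-A9 (ARMv7) core

per_cycle = a9_dmips_per_mhz / pi_dmips_per_mhz  # ~2x per core, per cycle
cores = 2          # Win8/ARM baseline demands a dual-core part
clock_ratio = 1.3  # the post's assumed baseline-vs-Pi clock ratio

slowdown = per_cycle * cores * clock_ratio
print(f"The Pi sits roughly {slowdown:.1f}x below the baseline")
```

Which lands at the "around five times slower" figure above.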
You see that 2 GHz machine on your desk that's just about usable for Vista? Downclock it to 400MHz and start developing games on it. That's your Pi running Win8, that is.
The "secure boot" issue is not thorny for ideological reasons, it's thorny for hardware reasons – you can't implement it on the Pi platform. Thus, you are not going to get MS's approval to run Win8/ARM on the Pi unless they decide to throw away that restriction.
I don't see why that would be an issue at all unless I have already formed an opinion on whether they will hesitate to forgo this supposed "restriction". (Needless to say, I have not.)
Meh. MS won't forgo it unless they're forced to do so. It really is that simple. Win8/ARM == hardware lockdown. Remember, they aren't licensing Win8/ARM to just anyone, they are licensing to specific suppliers, for specific platforms, and one of their requirements is that those platforms feature UEFI lockdown.
In what way would it be in the interests of the foundation to do so?
Because, say, Windows is a popular operating system?
Mere popularity does not make it a good tool for education. I'm not convinced that Linux is the right tool either.
In what way would it be beneficial to have Windows of some variant running on the Pi, given that it's not going to be able to run any existing software?
I do believe that the userland side of the issue is well taken care of by the existing development frameworks of the platform and, if possible, the involvement of MS.
"It's only a recompile away". How many times have we heard that? The same has been said of Windows CE, but go find yourself some software to run on it. Software doesn't magically become available unless developers take the time to port, debug, and package it.
Oh, and Win8/ARM software will only be available through MS's "Windows Store", and *only* code targeting the Win8/ARM APIs will be allowed on that store. So forget getting that odd application you wanted running on it unless the developer is willing to shell out for an account on the store.
This is not to mention that development tools for Windows are also pretty well-established due to the commercial nature of the platform - and, if I am not mistaken, this ought to take care of the "Computer Science" side of the picture pretty thoroughly.
It might, if MS were to target anything other than their own languages. However, it would be significantly easier and cheaper for MS to simply provide a "loss leader" teaching pack to be used on the PCs that schools already have.