Sorry, don't know how quotes work on this forum :s
Since when? The embedded systems I work on use RS-232 and USB… And who says the R-Pi is an embedded system? It's not. It's a full-blown Linux computer running a multi-user, multi-tasking operating system with local storage, a built-in TCP/IP stack and thousands (nearly 30,000 in Debian) of software packages already available.
And lots of people use debuggers and emulators, but a cheap and fast way to get the code onto an ARM is through TFTP etc.
It's not just an ARM though. It's a Broadcom GPU that happens to have an ARM built into it. The GPU boots (from what I've been able to gather) by reading specific files from the SD card, and that's hard-wired in (this is the binary "blob"). Once the GPU is running, it graciously allows the ARM to boot, which can then access the SD card and memory...
What happens when you want something that isn't in a repository? I've had this lots with my ARM development.
Write it. But you don't need to write it on the R-Pi. You can write and test it on any Linux PC, then cross compile it for the ARM, and as a bonus you now have something that will run on any Linux PC.
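That workflow really is just a couple of commands. A minimal sketch, with the caveat that the cross compiler name (`arm-linux-gnueabi-gcc`) is an assumption and varies by distribution and toolchain:

```shell
# Write the program once:
cat > hello.c <<'EOF'
#include <stdio.h>
int main(void) { printf("hello from the same source\n"); return 0; }
EOF

# Develop and test natively on any Linux PC:
cc -O2 -o hello-native hello.c
./hello-native

# Then cross compile the identical source for the ARM target
# (toolchain name is an assumption; it varies by distro):
# arm-linux-gnueabi-gcc -O2 -o hello-arm hello.c
```

The same source builds for both, which is the bonus mentioned above: the native binary runs on any Linux PC, the cross-compiled one on the ARM.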
Obviously I don't know as I don't have one, but why should it?
I brought this up as I've read it several times over the last few days, including (I think) from Gert when asked on a thread.
It's not going to be fast, but is that important? For a start, I don't think people will be actively developing BIG programs for the R-Pi.
How do you know this? I think a lot of people will be running video applications, games, emulators and so on.
If it runs Quake, and decodes H264 1080p video out of the box, and also runs other media stuff out of the box then what else is there? Sure - other games, emulators. But they can run on other Linux boxes too.
I'm sure there will be some applications developed just for the R-Pi, but I suspect they will be to do with doing weird stuff directly with the GPU. However, once the 3D-accelerated routines are written, then, as these are standard libraries on all Linux platforms, it shouldn't matter where you run your code.
I'm a professional programmer. I use text windows (xterm), vi (vim) and Makefiles. I use gdb and valgrind on occasion. When developing code for embedded systems (AVR and PIC), I do cross compile on my Linux system and use a (USB) serial line for downloading, testing and debugging, but the environment is the same – vim and Makefiles. A personal project I did recently was a dive computer running on a PIC processor. That was about 12K lines of .c and .h files, compiling into about 90KB of object code, all done with vim and Makefiles (and Gimp, which I used for some graphics and the bit-mapped font), but all the paid embedded work I do is still text editor, Makefiles, (cross) compilers and serial downloads.
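For anyone curious, that kind of Makefile-driven embedded loop can be sketched like this. This is a hypothetical AVR example: the MCU, programmer type and serial port are placeholder assumptions, not my actual setup.

```make
# Hypothetical AVR build; MCU, programmer and port are assumptions.
MCU    = atmega328p
CC     = avr-gcc
CFLAGS = -Os -mmcu=$(MCU) -Wall

firmware.elf: main.c
	$(CC) $(CFLAGS) -o $@ $<

firmware.hex: firmware.elf
	avr-objcopy -O ihex $< $@

# Download over the (USB) serial line:
flash: firmware.hex
	avrdude -p $(MCU) -c arduino -P /dev/ttyUSB0 -b 115200 -U flash:w:$<
```

Edit in vim, type `make flash`, watch the serial port – the whole cycle the paragraph above describes.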
Lots of people will be using this to run things like video applications etc. I also work on AVR, PIC and Coldfire with lots of I/O applications, but I personally find it hard to develop and debug video stuff on an ARM without a cross compiler and debugger set up.
Maybe it's just a mindset thing. Or target application. Fortunately I can choose what I work on and the closest I get to video is small OLED displays.
Well now you know (of) someone who doesn't.
But I also wonder if you/others are missing the real point: the Raspberry Pi is just another Linux box – we've had Linux software development for many, many years now – so develop and test your software on any other Linux box, then simply copy it over and type "make".
Except when people call libraries which don't exist in their build.
These non-existent libraries... My guess is that they're very application specific. Linux does support just about every common library under the sun, though. I would be very surprised if a program which compiles OK under a "normal" Linux installation wouldn't compile on the R-Pi. There may well be a good reason for it, but I really am struggling to think what. (Hardware specifics, I guess, but offhand I can't think what.)
You will not be TFTP booting the R-Pi!
I'm not talking about doing that; I'm talking about using TFTP in the same way you use serial, as a means to get files onto the box.
It's already running Linux. Just NFS/SMB/CIFS mount the remote filestore, or scp/rsync the files over. Or take out the SD card and write to it on another system. The supplied Debian image already has reference to an NFS mount, so it looks like the folks who put that together already use NFS. I use NFS in my home/office, so from that point of view, it will mount my home directory off my server just like all the other Linux boxes I use do.
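The copy-it-over step is one command once the filesystem is mounted. A sketch, where the paths are placeholders: `/tmp/rpi-root` stands in for wherever the Pi's filesystem (or the SD card) is mounted on the development box.

```shell
# A work tree on the development box:
mkdir -p myproject
echo 'int main(void) { return 0; }' > myproject/main.c

# Stand-in for the Pi's filesystem, e.g. an NFS/SMB mount or the SD
# card's mount point; the path is a placeholder assumption:
mkdir -p /tmp/rpi-root/home/pi

# Incremental sync of the tree onto the (mounted) Pi:
rsync -av myproject/ /tmp/rpi-root/home/pi/myproject/

# Over the network you'd sync to the running box itself instead, e.g.:
# scp -r myproject pi@raspberrypi:/home/pi/
```

rsync only copies what changed, so repeated edit/sync cycles are quick even over a slow link.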
If you are writing your own OS from scratch, or porting something into it to run natively, then that's obviously another story. Will people do this? Are you planning on it? It would be nice to see what people do, but I suspect it's not going to happen except in exceptional circumstances, and even then there are probably other boards more suited to that purpose.
I mean: what are you going to write on the R-Pi running Linux that will not run on any other Linux box, with any other processor that you can compile it for (other than low-level hardware drivers)? There is nothing special about the R-Pi. It's just another Linux box!
Sorry, I've just never made that claim.
But you may be right; it might just be like a PC, and if it does work out like that it will be brilliant!
It is, it will!
The Raspberry Pi is a PC (Personal Computer) that runs Linux out of the box. It's not designed to be an embedded controller sitting in the seat-back of an airline seat providing video on demand. It could conceivably be used for that, but I'm sure there are more suitable devices.
There are Linux images already out there waiting for the hardware to run on. You can run them (slowly!) under QEMU on another Linux box or under Windows - I have done this purely as an academic exercise, because I know that the code I'm writing will simply run on the R-Pi and all I need to do is copy it over and type 'make' (which I've already done in QEMU as a proof of concept).
Here is a screenshot of QEMU running an ARM emulator, having booted the ARM Debian image being supported by the R-Pi, then running X Windows with the LXDE desktop environment, then running my own BASIC interpreter which I'd previously compiled on the same system under QEMU. (It takes 5 minutes to compile under QEMU vs. 30 seconds on my 1.7GHz Athlon, although running it is surprisingly quick.)
When I get my R-Pi, the binary image that I'm using to boot QEMU will be copied onto a 2GB SD card, that card plugged into the R-Pi, and it will boot and look identical to that screenshot. (Although it'll get to that stage in under a minute, whereas it takes 5 in QEMU.)
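For anyone wanting to try the QEMU route, the invocation looks roughly like this. Everything here is an assumption: `versatilepb` with an ARM1176 core is the usual QEMU stand-in for the R-Pi's ARM11, and `kernel-qemu`/`debian-arm.img` are placeholders for whatever QEMU-ready kernel and Debian image you downloaded.

```shell
# Save the (hypothetical) boot command as a script; actually running it
# needs QEMU installed plus a kernel and image under those names:
cat > boot-rpi-qemu.sh <<'EOF'
#!/bin/sh
qemu-system-arm -M versatilepb -cpu arm1176 -m 256 \
    -kernel kernel-qemu -hda debian-arm.img \
    -append "root=/dev/sda2"
EOF
chmod +x boot-rpi-qemu.sh
```

The `-M`/`-cpu` pair is the judgement call: QEMU doesn't model the Broadcom board itself, so the versatilepb machine with a matching ARM1176 CPU is the closest common approximation.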