Why n*x


91 posts   Page 3 of 4
by AndrewS » Thu Jun 14, 2012 11:14 am
rurwin wrote:
tufty wrote:We need to abstract away the idea of Applications, the great hulking monolithic things that we install and then flounder with. Take the design ethos of the Unix™ utilities - "do one thing, and do it well".
Putting the Unix ethos into a GUI environment would be a significant challenge, but it should not be impossible, and it too is long overdue.

Isn't that vaguely what ActiveX/GNOME/etc. set out to do originally?
(me goes and hides in his bunker :twisted: )

Sounds like a massive undertaking, and like any "alternative OS" I expect your biggest problems will be inertia and application compatibility. Good luck with it.
Posts: 3626
Joined: Sun Apr 22, 2012 4:50 pm
Location: Cambridge, UK
by jamesh » Thu Jun 14, 2012 12:04 pm
Quite. People have been trying to make decent frameworks for years. Yes, they save time at first, but they are not very good at adapting to the 'next big thing' in UI development. Which was my previous point.

So then someone needs to go and write another framework.

And then another one.

In the end, you still end up with a lot of code.
Unemployed software engineer currently specialising in camera drivers and frameworks, but can put mind to most embedded tasks. Got a job in N.Cambridge or surroundings? I'm interested!
Raspberry Pi Engineer & Forum Moderator
Posts: 11782
Joined: Sat Jul 30, 2011 7:41 pm
by nick.mccloud » Thu Jun 14, 2012 12:20 pm
jamesh wrote:Quite. People have been trying to make decent frameworks for years. Yes, they do save time at the time, but are not very good at adapting to the 'next big thing' in UI development.


The next big thing always just involves something Wooooo and Whizzzzy for the marketing people.

90% of the UI we use now we had with Windows 3.1.

I'd vote for a nice shiny straightforward decrudded microprocessor with a nice simple OS & GUI. Then the feature addicts can add what they like on top whilst the rest of us get on with something more productive - like using the damn thing for something useful.
Posts: 795
Joined: Sat Feb 04, 2012 4:18 pm
by jamesh » Thu Jun 14, 2012 12:24 pm
I'm torn - I think that mobile OSes/GUIs like Android and iOS are really rather good, and considerably better and easier to use than what we had in Win3.1, for example (or LXDE, which is about that level), but to get to that level a lot of code is/was required. That's for consumer-level stuff. For average dev purposes, on the whole, yes, Win3.1-level stuff would probably be sufficient. But there are more consumers than devs.
Raspberry Pi Engineer & Forum Moderator
Posts: 11782
Joined: Sat Jul 30, 2011 7:41 pm
by rurwin » Thu Jun 14, 2012 12:26 pm
@nick

The situation for Windows is different from that for Linux.

In Windows, Microsoft has to keep the developers busy learning new stuff, lest they decide to learn Linux or MacOS instead.

In Linux, most of the changes are people having new ideas. There is certainly far less turnover of new frameworks, but more duplication and dead-ends.

My cynicism is showing, your cynicism may vary.
Forum Moderator
Posts: 2913
Joined: Mon Jan 09, 2012 3:16 pm
by johnbeetem » Thu Jun 14, 2012 6:34 pm
nick.mccloud wrote:90% of the UI we use now we had with Windows 3.1.

I'd vote for a nice shiny straightforward decrudded microprocessor with a nice simple OS & GUI. Then the feature addicts can add what they like on top whilst the rest of us get on with something more productive - like using the damn thing for something useful.

I've used lots of GUI libraries over the years, including Apollo (pre-HP), Atari ST, X11 (plain Xlib or with Xt + Xm), and Win32. My favorite is still the Apple Macintosh's QuickDraw. It was well-documented and very clean until Mac OS 7 showed up and added so much complexity that it became impractical for an individual to learn everything about the Mac, and software had to become teamwork. I was sad to see Mac OS 6 go, sadder to lose the 680X0 -- though PowerPC is indeed a nice architecture, especially for embedded applications -- and downright disgustipated when Apple went Intel.
Posts: 942
Joined: Mon Oct 17, 2011 11:18 pm
Location: The Coast
by Bakul Shah » Fri Jun 15, 2012 2:44 am
tufty wrote:We haven't learned anything in 25 years. We've churned out more and more code, reused less and less of it, and it's, almost without exception, slower and buggier than it ever was before. Grace Hopper must be turning in her grave. John McCarthy would have told you how badly you're doing it all wrong.

Have you read the "Systems Software Research is Irrelevant" paper?

So, there you go. Higher levels of abstraction. Much higher levels of abstraction. Operating systems, with the exception of the little bits that *need* to be coded in assembler, written from the ground up in Scheme, or Haskell, or Erlang[5]. Or, more to the point, in a domain-specific language layered over Scheme, or Haskell, or Erlang, designed exactly for implementing operating systems at a high level of abstraction.

Note that there have been hardly any decent OSes in HLL even though a lot of people believe what you say. I wonder why.

My concept at a system level is based largely on treating everything as a signal-processing closure. I could go on at length as to why this is a better idea than the POSIX "everything is a stream", but I'll desist for the moment.

We need to abstract away the idea of Applications, the great hulking monolithic things that we install and then flounder with. Take the design ethos of the Unix™ utilities - "do one thing, and do it well". Expand that to the higher level. Deconstruct your applications. Destroy your applications. Look at workflow, not at UI design[6]. Work with concepts, not with mechanics.

I tend to think that the term operating system is essentially a glorified term for a bag of tricks. An OS is basically a scheduling library, multiple address space mapping for sandboxing computation, IO & papering over h/w specific peculiarities. It is the OS that needs to be deconstructed! Ideally I simply want a set of Scheme processes, each running on a very thin hypervisor layer and communicating with each other using S-expressions. And similarly I want the UI layer to communicate with the application specific code in s-exprs. No callbacks!
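As a toy illustration of that "no callbacks" message-passing style (a hypothetical sketch in Python standing in for Scheme processes; every name in it is invented, not part of any actual design):

```python
# Hypothetical sketch: two "processes" (modelled here as queues) exchange
# s-expression messages instead of registering callbacks with each other.
from queue import Queue

def sexpr(x):
    """Linearize a nested tuple/atom structure into an s-expression string."""
    if isinstance(x, tuple):
        return "(" + " ".join(sexpr(e) for e in x) + ")"
    return str(x)

ui_to_app = Queue()
app_to_ui = Queue()

# The UI layer sends a request as an s-expr; no callback is registered:
ui_to_app.put(sexpr(("button-clicked", "save", 42)))

# The application code pulls the message and replies in kind:
msg = ui_to_app.get()
if msg.startswith("(button-clicked"):
    app_to_ui.put(sexpr(("ack", "save")))

print(app_to_ui.get())  # -> (ack save)
```

The point of the sketch is only the shape of the interaction: both sides deal in linearized s-exprs, so neither ever hands the other a function pointer.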

Stop following the crowd. Stop aping the merely successful. Aim for the superb, and never settle for the mediocre. Write your own OS. Do your own thing. Don't let the naysayers piss in your chips. You've got nothing to lose but a bit of time (and even that is no loss, because you'll be learning, and hopefully teaching, something). Document what you're doing, if you want people to follow in your footsteps. But, above all,

REBEL!

But beware! It is not enough to not follow the crowd. Example: Linux. Why did Linus have to replicate a (then) 20 year old OS, which in turn had used OS concepts that were 5 to 10 years old in 1970? Only in hindsight can you say what you produced is superb. Superb in the real world as opposed to an ideal world.

Meet the new bOSs.
Same as the old bOSs : )
Posts: 293
Joined: Sun Sep 25, 2011 1:25 am
by rurwin » Fri Jun 15, 2012 6:36 am
Bakul Shah wrote: Ideally I simply want a set of Scheme processes, each running on a very thin hypervisor layer and communicating with each other using S-expressions. And similarly I want the UI layer to communicate with the application specific code in s-exprs. No callbacks!


And that is the problem with an operating system written in a very high level language. What about if I want to write a C program or an assembler program? What is the ABI?
Forum Moderator
Posts: 2913
Joined: Mon Jan 09, 2012 3:16 pm
by Bakul Shah » Fri Jun 15, 2012 9:13 am
rurwin wrote:
Bakul Shah wrote: Ideally I simply want a set of Scheme processes, each running on a very thin hypervisor layer and communicating with each other using S-expressions. And similarly I want the UI layer to communicate with the application specific code in s-exprs. No callbacks!


And that is the problem with an operating system written in a very high level language. What about if I want to write a C program or an assembler program? What is the ABI?

Ah, I was afraid someone was going to ask that : )

Short answer: the basic API consists of send & recv and any s-expr argument or result is linearized. Your C program would be in its own address space and as long as it can send/receive s-exprs it can communicate. Yes, you'd probably need a library to manipulate s-exprs.

There is a much longer answer but I am not prepared to get into any details until I am satisfied with the design.
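As a rough idea of what that linearization (and the accompanying s-expr library) might involve; this is a sketch in Python for brevity, and the function names, example message, and wire format are all assumptions, not the actual design:

```python
# Hypothetical sketch of the "library to manipulate s-exprs" that a foreign-
# language process might use: linearize a nested structure before send(),
# and parse it back after recv().

def linearize(x):
    """Turn nested lists/atoms into a flat s-expression string."""
    if isinstance(x, list):
        return "(" + " ".join(linearize(e) for e in x) + ")"
    return str(x)

def parse(s):
    """Parse an s-expression string back into nested lists of string atoms."""
    tokens = s.replace("(", " ( ").replace(")", " ) ").split()
    def read(i):
        if tokens[i] == "(":
            out, i = [], i + 1
            while tokens[i] != ")":
                node, i = read(i)
                out.append(node)
            return out, i + 1
        return tokens[i], i + 1
    node, _ = read(0)
    return node

# An invented example message; the real protocol is not specified here.
msg = ["open", "/dev/fb0", ["mode", "rw"]]
wire = linearize(msg)        # what the send() side would transmit
assert parse(wire) == msg    # what the recv() side reconstructs
```

Because the message is self-describing text, the C side needs nothing beyond these two functions plus send/recv to interoperate.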
Posts: 293
Joined: Sun Sep 25, 2011 1:25 am
by tufty » Fri Jun 15, 2012 11:30 am
rurwin wrote:And that is the problem with an operating system written in a very high level language. What about if I want to write a C program or an assembler program? What is the ABI?

It's not a problem at all.

Assembler can be (at least with what I'm working on, but most lisp->native code compilers work this way as well) easily expressed in s-expression form. Indeed, it's usually better to target a non-machine-specific pseudo-assembler, and then have the assembler apply target-specific optimisations when generating the actual bytes that will be fed to the CPU / CPUs / GPU, whatever.

So if you find yourself in the rare situation where you actually need to write an assembler program, you generate your assembler source code as a stream of sexprs, and run it through the assembler. It's actually no different to the current situation, except that the syntax looks different. Something like this (which I just rattled off the top of my head, so it's full-o-bugs, but you get the idea).

Code: Select all
((global 'cons)
 (mov r2 (address-of next-cell)) ; r2 -> the free-pointer word
 (ldrex r3 [r2])                 ; r3 = current free pointer (our new cell)
 (add r4 r3 8)                   ; r4 = bumped free pointer
 (strex r5 r4 [r2])              ; try to publish the bump
 (cbnz r5 'cons)                 ; another core won the race - retry
 (strd r0 r1 [r3])               ; fill in car (r0) and cdr (r1)
 (mov r0 r3)                     ; return the fresh cell
 (bx lr))


For C, why would you bother? It provides no benefits over a higher level language (the inverse is not the case, where a higher level language has to be built on a massive layer of veneer that hides the C-nature of the APIs it has to deal with). C is less expressive, more verbose, and fundamentally unsafe to program in. Leave it behind, and deal in concepts.

Simon
Posts: 1368
Joined: Sun Sep 11, 2011 2:32 pm
by jamesh » Fri Jun 15, 2012 12:02 pm
Bakul Shah wrote:
rurwin wrote:
Bakul Shah wrote: Ideally I simply want a set of Scheme processes, each running on a very thin hypervisor layer and communicating with each other using S-expressions. And similarly I want the UI layer to communicate with the application specific code in s-exprs. No callbacks!


And that is the problem with an operating system written in a very high level language. What about if I want to write a C program or an assembler program? What is the ABI?

Ah, I was afraid someone was going to ask that : )

Short answer: the basic API consists of send & recv and any s-expr argument or result is linearized. Your C program would be in its own address space and as long as it can send/receive s-exprs it can communicate. Yes, you'd probably need a library to manipulate s-exprs.

There is a much longer answer but I am not prepared to get into any details until I am satisfied with the design.


Isn't that going to be pretty slow? Lots of parameter serialisation and de-serialisation? Not including all the communication delays.
Raspberry Pi Engineer & Forum Moderator
Posts: 11782
Joined: Sat Jul 30, 2011 7:41 pm
by rurwin » Fri Jun 15, 2012 12:21 pm
tufty wrote:
rurwin wrote:And that is the problem with an operating system written in a very high level language. What about if I want to write a C program or an assembler program? What is the ABI?

It's not a problem at all.

Assembler can be (at least with what I'm working on, but most lisp->native code compilers work this way as well) easily expressed in s-expression form. Indeed, it's usually better to target a non-machine-specific pseudo-assembler, and then have the assembler apply target-specific optimisations when generating the actual bytes that will be fed to the CPU / CPUs / GPU, whatever.

I know that. I've even programmed assembler from Forth, which is even more interesting. I was talking about the ABI. In order to write efficient assembler or C code (and if you are writing assembler or C when much nicer languages are available, one assumes you are doing it for efficiency), you need to be able to call on operating system functions quickly and cleanly. Creating s-expressions, especially flat ones, is neither clean nor easy, and it is certainly not as fast as stacking half a dozen numbers.

For C, why would you bother? It provides no benefits over a higher level language (the inverse is not the case, where a higher level language has to be built on a massive layer of veneer that hides the C-nature of the APIs it has to deal with). C is less expressive, more verbose, and fundamentally unsafe to program in. Leave it behind, and deal in concepts.

For my own perverse reasons, OK? Do you want me to use your OS or not? I mean there's this guy down the road that's got an OS too and his runs C and assembler like doodah off a whatsit.

There are many, many reasons I might want to run assembler or C. Maybe I've got the source code for a codec that I want to run from my music/video player. It's 100k of tight C; do I really have to rewrite it in Lisp? Maybe I've got this great program, but I have to code a central algorithm in assembler because the RaspPi is only just powerful enough to run it.

Maybe I like C. Maybe I have a class exercise to do. Maybe I feel like slumming it. Maybe I feel like finding out how many languages I can write Hello World in.
Forum Moderator
Posts: 2913
Joined: Mon Jan 09, 2012 3:16 pm
by tufty » Fri Jun 15, 2012 3:44 pm
rurwin wrote:I was talking about the ABI. In order to write efficient assembler or C code (and if you are writing assembler or C when much nicer languages are available, one assumes you are doing it for efficiency), you need to be able to call on operating system functions quickly and cleanly.

I'm personally using the ARM EABI (hard float calling conventions), with a minor twist that won't affect C functions anyway. The API is liable to be much more of a sticking point, because you won't get any of that POSIX stuff. No streams. No malloc() and free(). No FILE*. None of it. In fact, the system API doesn't go much further than 'cons.
rurwin wrote:Do you want me to use your OS or not?

Quite simply put, I don't care much either way.

Simon
Posts: 1368
Joined: Sun Sep 11, 2011 2:32 pm
by AndrewS » Fri Jun 15, 2012 3:50 pm
rurwin wrote:Maybe I've got the source code for a codec that I want to run from my music/video player. It's 100k of tight C; do I really have to rewrite it in Lisp?

Purely out of curiosity - are there any video players written in Lisp? ;) Tried a quick google but could only find video tutorials about Lisp.
Posts: 3626
Joined: Sun Apr 22, 2012 4:50 pm
Location: Cambridge, UK
by Bakul Shah » Fri Jun 15, 2012 4:37 pm
jamesh wrote:
Bakul Shah wrote:Short answer: the basic API consists of send & recv and any s-expr argument or result is linearized. Your C program would be in its own address space and as long as it can send/receive s-exprs it can communicate. Yes, you'd probably need a library to manipulate s-exprs.


Isn't that going to be pretty slow? Lots of parameter serialisation and de-serialisation? Not including all the communication delays.

It depends! Note that syscalls are pretty expensive in any case (if you discount benchmarking optimizations such as for getpid()). Try read/write on Unix! Second, for any complex structured args/results (in Unix) you either have to printf/parse (loose coupling between processes) or use a shared lib (tight coupling, which has its own set of problems). S-exprs seem to be at the right level of abstraction, as they are self-identifying and can be used in place of sending text lines across (a Unix pipeline) or shared libs.

For a message transfer from one address space to another you have to copy in any case. If you want to transfer a lot, just send a page across; no copying involved (as long as the two processes are on the same node -- if they are not, you can still play RDMA kinds of games to reduce copying). Several other optimizations are possible. But early micro-optimizations would complicate the API considerably, so they are best left for later, driven by actual need. And cooperating processes can evolve their own protocols to improve efficiency (like that old joke about a club whose members liked to hear the same jokes -- someone calls out a number, everyone remembers what joke that is and laughs. Except for one dyslexic guy who laughs twice!).

Finally, this is just an experiment and I have no idea how it will turn out (or if it will go anywhere).
Posts: 293
Joined: Sun Sep 25, 2011 1:25 am
by obarthelemy » Fri Jun 15, 2012 5:00 pm
DavidS wrote:WE ARE THE HACKERS OF TODAY; LET US REMIND THE WORLD HOW FUN AND FAST COMPUTING CAN BE.


I don't think The World cares much, it just wants its Angry Birds, Facebook, and Twitter apps :-p

You're welcome to do it for your own fun and education, and maybe to impress your peers though :-)
Posts: 1399
Joined: Tue Aug 09, 2011 10:53 pm
by johnbeetem » Fri Jun 15, 2012 6:22 pm
Bakul Shah wrote:
tufty wrote:So, there you go. Higher levels of abstraction. Much higher levels of abstraction. Operating systems, with the exception of the little bits that *need* to be coded in assembler, written from the ground up in Scheme, or Haskell, or Erlang[5]. Or, more to the point, in a domain-specific language layered over Scheme, or Haskell, or Erlang, designed exactly for implementing operating systems at a high level of abstraction.

Note that there have been hardly any decent OSes in HLL even though a lot of people believe what you say. I wonder why.

I would venture to guess that it's because the kind of people who write good operating systems prefer to work at the C or ASM level. They like to know where their data is at all times, and how and when their data structures are allocated and freed. They like to know where to find each field in a data structure. These are nice attributes to have when writing compilers, communication software, and embedded applications as well. Higher-level abstractions like object orientation and run-time data types get in the way.

I guess it just boils down to how far you want to stay away from your computer hardware, i.e., how many layers of condoms do you want?
Posts: 942
Joined: Mon Oct 17, 2011 11:18 pm
Location: The Coast
by DavidS » Sat Jun 16, 2012 6:16 pm
phrasz wrote:
DavidS wrote:WE ARE THE HACKERS OF TODAY; LET US REMIND THE WORLD HOW FUN AND FAST COMPUTING CAN BE.


1) Computer/Electrical Engineers/Scientists/etc. are not hackers... You can have all sorts of hackers that are not "educated" ... i.e. that 14-year-old kid that's owning us all right now b/c he doesn't have a job and will out-code us in our sleep.

OK, at least it used to be that any good 'Coder/Programmer/Software Engineer' was considered a 'Hacker'.
phrasz wrote:2) I'd like to amend your statement to read: FUN OR FAST. Fast == microprocessors and flashing of ram, Fun == OSes (media/games/interactivity). Sure you can always use links and play Zork, but OSes with a GUI are what made computing what it is today.

OS + Multitasking + Memory Protection can be both fast and fun.
OK, there is a very small sacrifice in the time it takes to map in more memory, or use the correct module to copy a pixel-mapped image to the screen, or parse the clipping-rect list when updating a window, though with a well-thought-out OS this can be less than 0.2% of the time.
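The clipping-rect bookkeeping mentioned above really is cheap, since it amounts to little more than rectangle intersection. A hypothetical sketch in Python (not any particular GUI's actual API; all names are invented):

```python
# Hypothetical sketch of clip-rect handling when updating a window:
# only the parts of the damaged area that intersect the window's visible
# rectangles get redrawn. A rect is (x, y, width, height).

def intersect(a, b):
    """Intersection of two rects, or None if they don't overlap."""
    x = max(a[0], b[0])
    y = max(a[1], b[1])
    r = min(a[0] + a[2], b[0] + b[2])
    t = min(a[1] + a[3], b[1] + b[3])
    if r <= x or t <= y:
        return None
    return (x, y, r - x, t - y)

def regions_to_redraw(damage, clip_list):
    """Walk the window's clip list, keeping only visible damaged areas."""
    return [r for r in (intersect(damage, c) for c in clip_list) if r]

# A window partly covered by another: two visible rectangles remain.
clip_list = [(0, 0, 100, 50), (0, 50, 40, 50)]
print(regions_to_redraw((20, 30, 60, 60), clip_list))
# -> [(20, 30, 60, 20), (20, 50, 20, 40)]
```

Everything hidden behind other windows falls outside the clip list and is simply never drawn, which is why the per-update cost stays tiny.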
phrasz wrote:I personally think some of the previous comments are unrepresentative of the true issue at hand: Why n*x? Because it's free, runs on near everything, and HAS AN OS with A GUI.

This is true, and the very reason for raising the issue in the first place. It never had to be the case.
phrasz wrote:I'm a mentor to ~8 local high school kids, and command line/terminal activities are like pulling teeth b/c they've (myself included, with Windows 3.1) never NOT had a GUI.

Well then give them a GUI. How much time does a GUI take away from the system? Not much.
phrasz wrote:Show me an awesome, OOB, OS that can support a GUI, that's FREE, and has the generic activities like "the internet" and "a music/video player", and I'll point back to n*x. Oh, I forgot: it also needs free licensing, a huge support base, and to run on ARM.

This is currently true. We cannot change the situation if we just settle for it because it is there.
phrasz wrote:So having the previous target market, with the current platform, and the absolute requirement of FREE, you have n*x. I encourage your efforts, but it's kinda like stating you want to reinvent the internet to work on the IPX protocol...

No. It is more like the development of MiNT (which is a GOOD n*x). Also, I should note that the problem cited at the start of this thread is not n*x in and of itself, but rather the bloated variants that we currently have. Unfortunately, creating another n*x is just an invitation to port all the bloat to another n*x.
phrasz wrote:A note on security:
It's NOT the kernel's fault. Security is a constant battle between itself, speed, and accessibility (ease of use). Thus, with this tripod, security will ALWAYS lose out. Another part is that security is ALWAYS an afterthought: "we'll patch/fix/plug a finger in the dam later". Ease of use demands the lack of security to be universal, because if you release to everyone what your super-secret-password-code-thing is, anyone will be able to bypass it and the security has now failed.

The security in question here refers to memory protection, semaphores, etc. These things have to be built in from the bottom up (just patching them on top creates bloatware and does not work correctly).
phrasz wrote:Security needs to be a part of the design, yes, BUT if the other two parts win out, security will be dropped at a moment's notice (see RasPi's Debian image with no netfilter/iptables). However, that doesn't mean the sky is falling; rather, you now need to determine how to secure that black box.

As stated, that is the wrong meaning of security.
ARM Assembly Language: For those that want: Simple, Powerful, Easy to learn, and Easy to debug.
Posts: 1251
Joined: Thu Dec 15, 2011 6:39 am
Location: USA
by DavidS » Sat Jun 16, 2012 7:18 pm
jamesh wrote:
cheery wrote:It's really simple, really. Linux is there now and it's really useful right now. It catches improvements regularly. If we were doing something practical, it's the kernel we would use at the moment anyway.

But if nobody sets about playing with the hardware, who will improve Linux tomorrow? Shouldn't there eventually be people who push things further, too?

Also, you don't need to discourage people from getting into hardware programming. I'm sure the protocols and all that mess will take care of that.


There's nothing to stop people who want to work on OSes working on Linux right now - you don't need to wait until tomorrow! In fact, I would recommend it. There's going to be decent demand for people with low-level Linux experience, I think. There's always new HW out there that needs drivers. You have all the source for the Raspi kernel drivers. Good place to start.


Yes, we could take the 20 man-years to clean up the Linux kernel and get it working in a reasonable manner, and once we get this done, take another 80 or so man-years to clean up the user-space components and get things working correctly and efficiently. Or, better, we could port a better n*x and pray that the bloat does not follow.

For a better n*x that could be ported, and would take much less time to clean up completely, I vote for FreeMiNT (for the ST) + DR-VDI + XaAES + TeraDesk.

If you create your own, make sure to support a decent subset of the POSIX API plus a GUI whose API is similar enough to an existing one to simplify porting apps (GEM is a simple API, and has proven capable with preemptive multitasking, multithreaded OS kernels [FlexOS and MiNT], so a GEM-like API is sensible).
Posts: 1251
Joined: Thu Dec 15, 2011 6:39 am
Location: USA
by DavidS » Sat Jun 16, 2012 7:32 pm
jamesh wrote:
tufty wrote:That's because "modern" software, and particularly the ways we build modern software, are more rubble than rococo.


Well, if you think making software easy to use for the average human is rubble, then maybe. But modern software is easy to use (usually!) because there is a lot to it. Now, I'm not saying there isn't bloat - there is. Why these packages need to be so many megabytes is beyond me. Bad programming is my guess. Point being, an easy-to-use program for any relatively complex task does require a lot of software.

Now, on this point I would disagree. Take a good word processor: looking at AbiWord, there is so much redundant code, and so much room for memory leaks, that it is ridiculous. Take on the other hand something like First Word Plus, which has less than 40KB of executable code and all of the features of AbiWord, except for supporting a ton of third-party document formats (this could be remedied with a set of extremely small translators). Most who have used both would agree that First Word Plus (a GEM app) is a lot easier to use than AbiWord.

The same goes for Operating Systems.
Being realistic, do you really think that KDE or GNOME is easier to use than GEM?
Posts: 1251
Joined: Thu Dec 15, 2011 6:39 am
Location: USA
by DavidS » Sat Jun 16, 2012 7:56 pm
High-level languages are good even for OS development. Just be careful: do not become complacent and expect the language's garbage collection and allocation schemes to work as you assume (too many memory leaks originate from trusting an HLL).

The main paradigm shift that I think would be helpful is set out in a set of rules, for coding the first version of any project, that I have followed since I was 13 years old. These rules are:
Code: Select all
Rules for Developing Good Software (up to v1.0.0):

0: Do not release anything until you reach v1.0.0.

1: KISS (Keep It Simple, Silly): This is the single most important rule.

2: Define the problem in excruciating detail before defining the project.

3: Define the complete desired functionality, user experience, and purpose of
   the project before designing the project.

4: Design the project from the ground up, in a top-down manner, before writing
   a single line of code.

5: Agree on the expected function of every procedure, its interface to the rest
   of the code, and its effect(s) before writing it, and then do not deviate.

6: Test every procedure to the extreme. As soon as you write a procedure,
   test it as much as you can; write your own test code. Where possible, test
   all possible parameters against the desired effect (not always possible).
   If that is not possible, test the procedure as far as possible.

7: Module test. Test complete modules as completely as you do procedures.
   Then test the components built from these to the same extreme.
   No bug is acceptable.

8: After the complete product is working, label it version 0.1.0 (you're not
   done yet), and rewrite everything from scratch (repeating steps 5 through 7
   above). Compare the results; use the best code from both passes as version
   0.2.0.

9: Debug. Some bugs will still be there. Take the time to figure out how to
   find any possible bug, and once found, fix them without breaking anything
   else. This will lead to version 0.3.0.

10: Optimize. Do whatever optimizations can be done without risking breaking
    anything. This gives you version 0.4.0.

11: Repeat steps 9 and 10 to get versions 0.5.0, 0.6.0, 0.7.0, 0.8.0, and 0.9.0.

12: To go from 0.9.0 to 1.0.0, take the debugging to even greater levels.

Please forgive the way these are written; I first typed them in when I was 13.

I also have another set of rules that I follow when creating a revision.
Posts: 1251
Joined: Thu Dec 15, 2011 6:39 am
Location: USA
by Narishma » Sat Jun 16, 2012 9:29 pm
I disagree with most of your rules. More often than not, you can't know in advance whether something will work before implementing it and testing it, so you can't design the whole system up front and then expect not to deviate from the design.
Posts: 151
Joined: Wed Nov 23, 2011 1:29 pm
by johnbeetem » Sat Jun 16, 2012 10:05 pm
Narishma wrote:I disagree with most of your rules. More often than not, you can't know in advance if something will work or not before implementing it and testing it, so you can't design the whole system and then not expect to deviate from the design.

I agree with Narishma's dissenting opinion. Unless it's a pretty simple program, deciding up front exactly what it's going to do is extremely limiting. I think it's better to come up with a Plan A, but be ready to switch to Plans B, C, D, etc. when you learn something during detailed design or implementation or testing or walking in the woods that lets you come up with a much better way to think about the problem or implement it.

So here are some different rules I've found effective:

1. Plan on throwing the first one away. It is a learning exercise. It will expose most of the errors in your thinking and salvaging it is likely to be a waste of time. There's a reason a great artist does numerous pencil sketches before committing to oils or marble.

2. Write documentation first. It's a lot easier to fix a problem at the word processor level than once it's coded. Writing exposes holes in your understanding of the problem.

3. Release as soon as you have something that's not going to embarrass you. An early user may discover something you never thought of and being able to incorporate it earlier is better.
Posts: 942
Joined: Mon Oct 17, 2011 11:18 pm
Location: The Coast
by DavidS » Sat Jun 16, 2012 10:22 pm
I find it interesting: you say that you disagree, though the reasons that you give are in perfect agreement.

Of course you release an early version; that is what I tentatively label v1.0.0 (the actual version number may be lower: the guide is the method, and you may divide by 10). And of course you will be doing revisions (as I said, I have another set of rules for revisions). And of course the first time you finish the first version of a particular project it is crap. As to documenting first: yes, this is part of the design phase. You also write the entire thing in pseudo-code in comments before you finalize the design, and the procedures do not get finalized until each particular procedure is implemented. With this in mind, reread the rules I mentioned above. They do fit.
Posts: 1251
Joined: Thu Dec 15, 2011 6:39 am
Location: USA
by jamesh » Sun Jun 17, 2012 8:53 am
I prefer the Scrum (Agile) approach. Worth looking at, but it boils down to an ordered list of user requirements (which can be added to during development), monthly working releases, and documenting as you go. You do multiple iterations, but you still end up with a documented and working end product faster. IMO.

Up-front cast-iron specs don't work. Look at any government project for evidence of that.
Raspberry Pi Engineer & Forum Moderator
Posts: 11782
Joined: Sat Jul 30, 2011 7:41 pm