Testing doesn't find everything (but you still need to do it), so by ensuring you get as much as possible 'tested' at compile time, you will improve the final time to market. Fewer bugs to fix later on, when bugs take longer to fix anyway.

Heater wrote:
jamesh, I have always favoured a strongly typed language for exactly that reason, and it has always been "common wisdom". Pascal was always a good example, and Ada perhaps king of the compile-time checkers. C was always a bit lax, especially the early compilers that seemed able to make code out of any random source text. C++ is a lot better. Getting rid of problems early, with type checking like this, saves a lot of time testing, debugging, and retesting later on.
But recently I have started to think that makes little sense. Is that common wisdom actually true?
Firstly, as I said, if you are serious about your program's correctness you are going to have reviews of everything and multiple levels of testing: unit tests, integration tests, and so on. Ergo, all that compile-time checking isn't saving you from a huge pile of verification work anyway.
Thirdly, all that type checking and such does not, of course, save you from all the other logical errors you can make in your code. Which brings us back to testing...
I recently spent some time porting some Videocore code to run under Linux. Even simply moving to a different compiler (gcc) showed up quite a few programming issues (the latest GCC really does dig out some interesting faults), and then running under valgrind showed up various memory leaks and threading issues! So even without writing any test code, just using standard tools, I found a load of issues, and testing on top of that should show up a load more. Now, automating the valgrind run on each check-in means we keep that part of the codebase (relatively) clean.