I can't turn up the link now, but a couple of weeks ago I saw a blog post stating flatly that C++ was unusable because the compile times were too slow. It had to bounce around in the back of my head for a few days before it occurred to me that this was a weird mirror image of the classic argument for using some language other than C++. I can't tell you how many times I've heard someone say it didn't matter that C++ generated faster code than all the trendy languages, because they were all fast enough. Am I to conclude from combining these two statements that the only time speed is important today is when you are compiling?
For what it's worth, I'd love it if C++ compiled faster. And I'd have a whole lot of happy customers if I could double the speed of my C++ code.
Saturday, May 30, 2009
Thursday, May 28, 2009
Factor
Reading Hacker News lead me to the Factor language. For some reason the language's pages seem to feel the need to use a lot of fancy language, but as I understand it the gist of it is simple and beautiful: it's Forth crossed with a modern functional programming language.
Back when I was a budding programmer with a Commodore 64, Forth was a godsend. A vastly better language than old Microsoft Basic, fast, terse, and powerful. On a par with C, but with a design simple enough you could probably code a Forth system from scratch in a day or two. And it was extensible in a way more reminiscent of C++ than C.
On the other hand, it had too many shortcomings to be really viable as a general purpose language. Which is where Factor comes in. It's a stack-based language with an obvious Forth ancestry, but instead of the stack being limited to integers, you can put sequences, strings, and even anonymous functions on it. It makes the language considerably more elegant, and a bit more weird.
For instance, here's a factorial function in Forth. (From long-rusted skills, and I don't have a Forth interpreter handy to test it, so it may not be quite right. Also, I believe not all versions of Forth allowed recursive calls without additional magic.)
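Something along these lines (a reconstruction from that same rusty memory, so the details may vary by dialect; RECURSE is the usual standard-Forth workaround for a word calling itself mid-definition):

```forth
: FACTORIAL ( n -- n! )
  DUP 1 > IF
    DUP 1- RECURSE *
  ELSE
    DROP 1
  THEN ;
```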
Note that strange if - else - then construct, a control structure that took a bit of magic to implement: it takes a boolean off the top of the stack, then jumps to either the bit after the if or the bit after the else, with both paths merging at the then.

By contrast, here's the version I just implemented in Factor.
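Something like this (again a reconstruction rather than the exact code; the stack-effect comment and the bracketed anonymous functions are explained below):

```factor
: factorial ( n -- n2 )
    dup 1 > [ dup 1 - factorial * ] [ drop 1 ] if ;
```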
The "( n -- n2 )" bit is a mandatory comment describing the function's inputs and outputs. (Maybe it has additional meaning I don't understand yet?) Code surrounded by brackets is an anonymous function (I don't know if that's the terminology they use). So in Factor, instead of the tricky implementation of Forth's if - else - then construct, the if statement is dead simple to implement. (Seriously, it's one short line of code -- the help system actually has the code in it!) Basically it takes three things off the stack: a boolean and two anonymous functions. If the boolean is true, it executes the first anonymous function; if false, the second.

So Factor seems to embrace anonymous functions in a big way, and creating them is completely natural in it. It's a neat idea, and I look forward to playing around with it some more.
And hey, it even has a unit testing framework built in!
Saturday, May 23, 2009
How Can You Do TDD When You Don't Know What The Answer Is?
I'm afraid my issues with TDD are going to be a recurring theme on this blog.
Let me state up front that tests are awesome. If you can set up unit tests, they are a fantastic development aid. I can't begin to say how nice it was to be able to pound on my unit tests when I did my recent major rewrite of my code. I love those tests, and wish I'd been more thorough writing them for the last decade. And my overall test suite is an essential development tool. I'd be lost without it.
But... the entire "test first" TDD development strategy depends on knowing what your code is supposed to be doing at a detailed level AND being able to write a test for it. (At least as I understand it.) For instance, it's easy to imagine how to check your amortization program to see if it is giving you the correct answers. TDD examples are full of stuff like this. But how do you test your MP3 encoding program? Your regular test suite cannot incorporate a double-blind panel of listeners with a variety of sound reproduction equipment to ensure the music sounds good to human ears, or better than some standard. You can do all sorts of tests to make sure low-level code is working the way you expect it to. But that testing will always miss the essential issues. You can automatically test that the component algorithms work, but you cannot automatically test that you are using them correctly.
Needless to say, my work looks more like the messy second case than the tidy first case. A lot of my development work is essentially experimental reverse engineering -- what interpretations of this data are needed to make coherent geometry from them? I have the advantage over the MP3 developers that the basic coherency tests can be automated (though they take hours to run in parallel on a pretty fast quad core machine). But they are just rough and ready tests that the data we are generating from the file mostly seems sensible, and I've never been able to get them close to having no failures. It's always a statistical test -- is the success rate greater after the change -- rather than an absolute right or wrong test.
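As a sketch of what that kind of statistical test can look like in practice (hypothetical names and stand-in data, not my actual test suite): instead of asserting that every file imports perfectly, assert that the overall success rate hasn't regressed below the best rate achieved so far.

```python
def coherency_ok(model):
    """Stand-in for the real check that imported geometry is self-consistent."""
    return model.get("edges_closed", False) and model.get("faces_valid", False)

def success_rate(models):
    """Fraction of the corpus that passes the coherency check."""
    passed = sum(1 for m in models if coherency_ok(m))
    return passed / len(models)

# A corpus of parsed test files (fake stand-in data for illustration).
corpus = [
    {"edges_closed": True, "faces_valid": True},
    {"edges_closed": True, "faces_valid": False},   # known-bad file
    {"edges_closed": True, "faces_valid": True},
    {"edges_closed": True, "faces_valid": True},
]

BASELINE = 0.70  # best rate achieved before the change under test
rate = success_rate(corpus)
assert rate >= BASELINE, f"regression: success rate fell to {rate:.0%}"
print(f"success rate {rate:.0%} >= baseline {BASELINE:.0%}")
```

The point of the baseline constant is that "is the success rate greater after the change" becomes a mechanical pass/fail gate, even though no individual file is guaranteed to pass.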
Thursday, May 21, 2009
Should You Test Code You Use From Other People's Libraries?
So, I just lost hours (possibly days) because of a phenomenally stupid bug in a library my software uses. (I'm not sure how long because I don't yet know whether this bug has been causing the slowdown I've been tracking down for a week, or has just been making the real bug harder to find.)
If the makers of this library had even the crudest unit test in place for this function, they would have detected the bug instantly. But obviously they didn't have such a test. So the question is: should I have had such a test in place?
My tests didn't pick up on the bug in an obvious fashion because my code is designed to be very tolerant to faults in the incoming data. (That's an absolute must in my line of work.) In particular, the bug only showed up this time because I was using the results of the broken function in a new and different way. In that case, the difference was startling -- one test run took 3 seconds before that change, and appeared to run forever afterward. (I think I let one test run go for over a week.)
How paranoid should your unit tests be? Should you test all the functions you use in a third-party library? What about the system libraries? What about your compiler?
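One middle ground between testing nothing and testing everything is a thin layer of "characterization" tests that pin down only the third-party behavior you actually rely on, so an upgrade that breaks it fails loudly in your suite instead of showing up as a mysterious slowdown. A minimal sketch, using Python's standard bisect module as a stand-in for the third-party library:

```python
import bisect

def test_third_party_search_contract():
    # Pin down the one property we depend on: bisect_left returns the
    # leftmost valid insertion point in a sorted sequence.
    data = [1, 3, 5, 7]
    assert bisect.bisect_left(data, 5) == 2   # existing element: its index
    assert bisect.bisect_left(data, 0) == 0   # below the range: front
    assert bisect.bisect_left(data, 8) == 4   # above the range: back

test_third_party_search_contract()
print("third-party contract holds")
```

The test is cheap to write precisely because it only encodes your assumptions, not the library's full specification.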
Wednesday, May 13, 2009
Tuesday, May 12, 2009
Unexpected!
I've been a full-time professional programmer for over 15 years now, but I've never been to a programming conference of any sort. But I just signed up for Stack Overflow DevDays in Toronto. There's a decent chance I'll learn something useful there, a chance to network, it's dirt cheap, and best of all, it's in Toronto on a Friday! Which means in the worst case, I have a deductible trip that will let me hit the Thursday night Irish traditional music session at Dora Keogh.
Wednesday, May 6, 2009
Pop another level
I'm trying to reverse engineer a surface definition from (essentially) its data structure. To do so, I've defined a skeleton surface type to hold that data structure (a few hours' work), with the goal of being able to visually inspect that data structure along with the edges trimming the surface.
Only problem is, now that I've got it this far, I see that many of those edges are specified using a curve type we also don't support. So I need to reverse engineer the curve's definition successfully before I can work on the surface's definition. Maybe.
This brings up a question I've been pondering. How do you unit test something when you don't know what it is supposed to do? Certainly I can come up with useful tests for a (say) offset surface curve. But how can I tell that my reverse engineering of the file's data structure to that curve type is the correct interpretation of the data structure, other than throwing a lot of real world data at it and seeing if the result looks plausible?
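For the offset-curve case, one kind of "useful test" is an internal-consistency check: sample the interpreted curve and verify it stays at the stated distance from its base curve. A hypothetical sketch, with a toy circle standing in for real file data:

```python
import math

def base_curve(t):
    """Stand-in base curve: a unit circle."""
    return (math.cos(t), math.sin(t))

def offset_curve(t, d):
    """Our interpretation of the offset: push outward along the normal by d.
    (For a circle the normal points radially.)"""
    x, y = base_curve(t)
    n = math.hypot(x, y)
    return (x + d * x / n, y + d * y / n)

def test_offset_distance(d=0.25, samples=100, tol=1e-9):
    # Every sampled offset point must sit exactly d away from the base point.
    for i in range(samples):
        t = 2 * math.pi * i / samples
        bx, by = base_curve(t)
        ox, oy = offset_curve(t, d)
        assert abs(math.hypot(ox - bx, oy - by) - d) < tol

test_offset_distance()
print("offset stays at constant distance")
```

A check like this can't prove the interpretation of the file is right, but it can catch interpretations that aren't even self-consistent, which is most of them.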
I tried to use my fancy Perl 6 smart search-and-replace from TextMate in the wee hours this morning, only to have my selected code replaced by the following block of text:
prefix:<=> has been superseeded by $handle.lines and $handle.get
current instr.: 'die' pc 17498 (src/builtins/control.pir:225)
called from Sub 'prefix:=' pc 7076 (src/classes/IO.pir:58)
called from Sub '_block14' pc 157 (EVAL_18:77)
called from Sub '!UNIT_START' pc 19469 (src/builtins/guts.pir:380)
called from Sub 'parrot;PCT;HLLCompiler;eval' pc 950 (src/PCT/HLLCompiler.pir:529)
called from Sub 'parrot;PCT;HLLCompiler;evalfiles' pc 1275 (src/PCT/HLLCompiler.pir:690)
called from Sub 'parrot;PCT;HLLCompiler;command_line' pc 1470 (src/PCT/HLLCompiler.pir:791)
called from Sub 'parrot;Perl6;Compiler;main' pc 24292 (perl6.pir:164)

Whoops! Guess this is the first time I ran the script since updating to the latest Perl 6.
I hadn't really thought of what would happen if I got a compiler error running a script, but really, this is a pretty happy way for it to work -- complete error message where it cannot be missed, and all I had to do was hit command-Z to get my code back. Also, this is a great error message, because it told me exactly what I needed to fix my code and get it running again.