Measuring a programmer’s productivity is a difficult chore, mainly because analyzing the final output of a programmer’s activity (usually software) can be very challenging. Furthermore, what are our guidelines for identifying good software? Speed of execution (or of development), memory footprint, conformance to requirements, readability, battery consumption…? All of these and more?

During my college years I took a course on Assembly language. Our teacher used a simple scoring method: the best program (and thus the one receiving the maximum grade) would be the one with the smallest count of lines of code, provided, of course, that it also produced correct output. The lines-of-code scheme is a simple, direct measuring system, but it’s rarely effective. I was reminded of that college course by this text I read today at computerhistory.org:
When the Lisa team was pushing to finalize their software in 1982, project managers started requiring programmers to submit weekly forms reporting on the number of lines of code they had written. Bill Atkinson thought that was silly. For the week in which he had rewritten QuickDraw’s region calculation routines to be six times faster and 2000 lines shorter, he put “-2000” on the form. After a few more weeks the managers stopped asking him to fill out the form, and he gladly complied.
Needless to say, another proof of Bill Atkinson’s genius.