Since I'm a software engineer, you'd expect me to be alert to developments and trends in that field. You'd be wrong.
When it comes to the extremely faddish field of computer science, I'm a "late adopter." I have work to do, inadequate time to do it, and more demands on my attention than you might imagine. Keeping current with glamorous, strongly touted new offerings -- I call them "brights-and-shinies" -- is typically too time-consuming to be compatible with my obligations.
Alongside that, I'm economically minded. My time is a scarce resource. I spend it carefully, and only when I can get something for it that's worth the expenditure. The myriad fads and fascinations of others in this field, some of whom apparently have more free time than is good for them, usually trail off into the weeds without offering a positive return on investment.
Which is why I read this Wired article with a dismissive chuckle:
At Princeton and the University of California at Berkeley, two researchers are trying to shed some light on why some programming languages hit the big time but most others don't. In what they call a "side project," Leo Meyerovich and Ari Rabkin have polled tens of thousands of programmers, and they're combing through over 300,000 computing projects at the popular code repository SourceForge — all in an effort to determine why old languages still reign supreme.

"Why have we not been reliably able to improve on C?" Rabkin asks. In the thirty-five years since C was popularized, there have been enormous leaps in the design of software and operating systems, he says. But although C has been beefed up and other new languages have been very successful during that time, C is still a mainstay.
Yes, and why do cars still incorporate steering wheels, accelerator pedals, and brake pedals? The article does make some interesting observations, but it hits the grandest of all clinkers with this:
Most programmers learn three to four languages, the researchers say, but then stop. "Over time, you'd expect that as developers get older, they'd get more wisdom; they'd learn more languages," Meyerovich says. "We've found that's not true. They plateau."

Part of the problem is that by the time they hit 35 to 40 years old, they're often moving from hands-on coding to managing other programmers. At that point, there's little motivation to learn and implement new languages.
Those "researchers" should study humility. Perhaps they might stand before a mirror for fifteen minutes each day, and practice saying, "It's not you, it's me," until they can at least make it sound sincere.
Though I do have supervisory responsibilities, I'm still hands-on; I still participate in my group's undertakings as a senior technologist. If these "researchers" were to ask me why I haven't troubled to learn C#, or Ruby, or Python, or whatever the hot language fad is just now, I could tell them in a single sentence: They're irrelevant to my problem domain.
My group's problem domain is real-time simulation. For that domain, nothing has come along that bests C++ and the available supports for it. There are other domains, some of which are more efficiently addressed with other languages and support systems. (I wouldn't dream of addressing a database problem or an artificial-intelligence project with C++.) A good engineer strives always to use the right tool for the job before him.
No doubt there are clever fellows working on a spiffy new language that does address the real-time domain. Perhaps I'll hear about it soon. But I still won't rush to embrace it. A good engineer doesn't adopt an unproven technology for dollars-and-cents work; he waits until those too-much-time-on-their-hands types can show him how their bright-and-shiny really supersedes his older tools. To do that, they must address four principally nontechnological considerations:
- The weight of legacy maintenance;
- The writing-off of prior (intellectual) investment;
- The front-loaded expenditure of money and time required by the new tool;
- Network effects.
These are all economic considerations. Even network effects are about economics rather than technology: they arise from the mass adoption of a technology, such that the size of the user community itself becomes an asset to its individual users. The Department of Defense discovered the importance of network effects with its abortive, highly expensive attempt to impose a new programming language, Ada, upon the defense industry. Today, DoD strongly recommends the use of C++ in new embedded systems.
When a tool is developed with a well-defined problem domain in mind, it can be successful -- if it truly offers its prospective users a way of solving their problems that's superior, not just from a technological standpoint, but from the broader economic perspective. But IBM's PL/I disaster -- its attempt to obsolete all other programming languages with a single universal one -- should make it clear that there's no such thing as a tool that applies to every kind of problem. Even an attempt to address two disconnected domains with one tool usually produces an unsatisfactory compromise, which is why I don't reach for my Leatherman Multi-Tool when my regular toolbox is within range.
Researchers of the sort quoted in the Wired article seem to miss those points far more often than not -- and then prattle about their superior "wisdom" while utterly unable to explain the contrary decisions of others. Well, perhaps that's inherent in the mindset one must adopt to deem oneself a "researcher." Now if you'll excuse me, a forty-foot oak tree fell across my fence last night, and it's time I went out back and disposed of it. I'm all ready for the challenge; I've got my Swiss Army knife right here in my hand.
What a great pity politicians, social "scientists" and bureaucrats don't share your approach to the adoption of bright and shiny new ideas, Francis.
You mean, use the right tool for the job? Perish the thought!
If they knew anything about being a working developer, they'd be working developers, rather than researchers.
ReplyDeleteI study new tools when those new tools appear to offer greater leverage over the problems in my life than the tools I already know offer. Which is an event that occurs...well...more often for me than it does for Our Host, it seems. Knowing what I know about our respective professional problem domains, this is not especially surprising. Also, it helps that nobody is going to literally _die_ if an app I'm working on has a problem in the upstream toolchain that isn't my fault but will still be blamed on me.
The simpler explanation for the phenomenon these researchers explore is that the tools of the present are adequate for solving all the problems we've already thought up, and to the extent that they're sub-optimal for any given task, most would-be improvers are more likely to focus their efforts on patching or expanding existing tools than developing completely new ones. Improving an existing system usually offers better ROI. Which means it's going to be the preferred route for those people who have real work to get done...that is, anyone whose employer's name doesn't contain the word "University".
We shouldn't forget the Government's Ada disaster. It cost the taxpayers a whole lot more than the PL/I disaster.
You're right. The bright and shiny should be eschewed in favor of the Tools That Work. That said, I find Python *quite* useful, along with PyQt. They won't replace C in my toolbox; they're more like a socket set in addition to the combination wrenches C provides.
PS --- please, the cloud background on mobile devices makes your posts completely unreadable...any interest in fixing it? Thanks!
Most programmers have a bias, simple as that. They like the language they use and they look for reasons to dislike another or newer language. Most of the current popular languages are "pushed" in college, so that is what the latest crop of programmers use, and they are biased against anything else. This is not the same thing as saying the popular languages are the best. If all of the colleges had made a concerted effort to push some now-obscure language, then IT would be the popular one, and its supporters would justify it by claiming it was more efficient and superior.

Every program, every piece of code, and every stretch of machine language generated from higher-level code is a compromise. Over the last 20 years or so the cost of memory, CPU speed, and storage has dramatically decreased. Most of the compromises have taken advantage of that, producing longer, larger, and more complex machine language, and efficiency has suffered as a result. Many years ago a programmer might create a tight 30-instruction machine-language routine (because he had to), but today a programmer might do the same task with 200-300 machine-language instructions and not even know or care how long the code is or what it does. The question might be asked: does it matter? After all, today's computers are faster and have more memory and storage, so who cares? But the point remains that some of the popular languages, and many programmers, long ago stopped worrying about the efficiency of the resulting machine code, and those languages would have been impossible to implement 30 years ago, when such traits were important.