Michael Riddle’s Thoughts – October 2009

Hacking and Consequences

October 1st, 2009

Once again, my site was hacked. The loss is in the comments that readers have added. If you have posted a comment, I’d appreciate it if you’d re-post.

Since BlueHost provides WordPress sites in a totally unsecured state, and denies any responsibility for security, I’ve had to waste a lot of time learning. I’ll slowly get better, but never perfect. At least now I have made full backups and tested them, so the next hack should be reduced to a short annoyance. Some short-term good has come of it.

People often fail to see the long-range results of social misbehavior. Years ago, when many people started stealing music, the industry responded with unworkable DRM schemes and unconscionable lawsuits against parents. One result is that I will no longer buy music: their response to that war was too aberrant to support.

Music thieves caused that response, and we all suffer for it. One result of not paying for music is that less well-produced music is available and concert tickets cost more. If everyone’s work were available for free, how would we earn a living?

Blu-ray players take forever to boot, and they regularly require software upgrades to support more DRM, getting ever slower. Nice going. There is always a price, and the people least able to fight back are always the ones who pay it.

Hackers (in the newer, unfavorable sense of the word) are doing the same thing. The web is currently anonymous. This is essential for freedom and communication in many parts of the world. They should realize that this anonymity may eventually be taken away from us because of their actions. Ideals like freedom and the free expression of ideas matter much less to governments than the well-being of large corporations. Hackers should not think for a minute that it can’t be done. Of all of us, hackers should understand the limits of security. When their anonymity is lost, ours may be lost along with it. So far, it has not reached a great enough pain point. One day, the web may be lost to individuals, and every web transaction will have a verified back trail. Not a good thing. Do we really want Digital Source Management?

Don’t believe me? In the U.K., they are proposing laws to cut off web access to illegal downloaders. Certain you can get around it? I agree for now. But it shows the direction in which you are driving the train.

Hack me and you hurt my dozen or so readers, and waste a bit of my time. I’m small time. The general destructive ethic, however, will lead to a much less desirable world. Long ago I was a ghost hacker – I changed nothing, just proved I could do it, and learned some skills. I no longer do it, even though anyone can google plenty of harmful scripts and tools.

I think the real challenge is to write constructive tools – things people can use to improve their lives. Destruction has always been easier than creativity, and thus entitled to less respect.


Differences between good and great programmers

October 1st, 2009

It is widely known that there can be a ten-to-one or even hundred-to-one difference in the productive output of great programmers vs. average ones. I’ve been trying to think of what some of the reasons might be, and how we can apply them to our own work.

The first thing I’ve come up with is that they are not intellectually lazy. They are willing to go the extra mile. I’m not talking about working hard in the sense of hours, or taking on difficult tasks. That goes with our territory. I’m talking about how we do our work.

This first became clear to me when I was thinking about debugging. Many programmers I’ve worked with, and many I’ve hired, fix bugs the way Tylenol helps a cold – they relieve the symptom. When they find the location in the code responsible for the visible bad effect, they insert a patch or work-around at that point, and the symptom disappears.

What very good programmers do is to use that as a wedge to pry loose a better understanding of how the program is actually working. They might invest an additional several hours, until the root conceptual cause, often the result of a structure or design decision, is understood. Then they fix the problem with that. The program moves towards better clarity: It becomes a better match for its design’s conceptual integrity.

Doesn’t that approach take more time? Certainly. In the short-term.

When viewed in hindsight, after a project has been successfully completed, it can often save a great deal of time. Debugging is expensive in both human time and psychological wear and tear – it is much more enjoyable to write code that implements new capability. By fixing the root cause of the problem, we have not plugged one symptomatic hole in the dam; we have strengthened the whole dam. The real bug would have surfaced in several other places, each requiring its own symptomatic cure, and the completed result would have been a patchy, hard-to-maintain code base.
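
To make the distinction concrete, here is a contrived C++ sketch. The names are mine, invented for illustration, not from any real code base. The symptomatic patch guards the crash site; the root-cause fix enforces the invariant where the object enters the model, so every downstream consumer benefits at once.

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

struct Point    { double x, y; };
struct Polyline { std::vector<Point> pts; };

// Symptomatic patch: guard the crash where it shows up.  The degenerate
// polyline still exists, and every other consumer needs the same band-aid.
void drawPolyline(const Polyline& p) {
    if (p.pts.size() < 2) return;                      // hides the symptom
    for (std::size_t i = 0; i + 1 < p.pts.size(); ++i)
        std::printf("line (%g,%g)-(%g,%g)\n",
                    p.pts[i].x, p.pts[i].y, p.pts[i + 1].x, p.pts[i + 1].y);
}

// Root-cause fix: enforce the invariant where the object enters the model, so
// drawing, snapping, area calculation, and the rest never see a bad polyline.
bool addPolyline(std::vector<Polyline>& model, std::vector<Point> pts) {
    if (pts.size() < 2) return false;                  // reject it at the source
    model.push_back(Polyline{std::move(pts)});
    return true;
}
```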

My personal point of pride is to attempt to fix bugs by removing lines of code. I try to find the cleanest way of expressing a function. The best bug fixes are fixes made during design time. The entire point of evolving code is that we realize that design, implementation, and testing are often best served by being overlapping concurrent tasks. (Lately this has been popularized as part of Agile Programming).

Fixing bugs during design is especially effective because what we do as programmers is much less about coding than it is about taking a poorly understood process and reducing it to a clear vision. That is why it is so often the case that when we have finished writing a program, we understand it well enough to see how it could have been better accomplished (the “2.0 syndrome”, aka “rewrite fever”).

Some of my favorite programming articles are the game postmortem articles in Game Developer magazine. The book, Postmortems from Game Developer, contains a nice set of these articles. Even if you have no interest in graphics or computer games, Game Developer is a magazine I recommend. These are the people pushing the frontier of our trade, in hardware, in parallel programming, and in complex project organization.

Other books on the subject of debugging I can recommend:

The Debugger’s Handbook by J. F. DiMarzio, which talks about and gives examples of design-time debugging along with the usual material.

Why Programs Fail – A Guide to Systematic Debugging, by Andreas Zeller. A deep dive into the sources of failure, and approaches to detecting each type of cause.

Debugging by Thinking – A Multidisciplinary Approach, by Robert Charles Metzger. This book delves deeply into approaches to debugging and different ways of thinking about problems. Not the easiest read, but probably more information per page than any other book on debugging.



A library without books

October 1st, 2009

I read this Boston Globe article yesterday, and it got me thinking. Replace a place where literally hundreds of people can survey available books, read them, bounce between them – with a $12,000 cappuccino machine, net outlets, and 18 E-Book readers. What are they thinking?

The web has been an amazing thing. It is a great resource when you know the question you want to ask. It’s not so good when you’re trying to learn what questions you should be asking – and that is where books, and libraries full of books, shine.

This non-book idea disturbs me. It may not be Fahrenheit 451 [the 1953 Ray Bradbury novel in which books are burned], but the difference between that and making books “go away” is one of degree, not substance. Someday, books may be “an outdated technology, like scrolls before books,” as James Tracy said. But not yet. There are still many issues with electronic books.
Reading well is a process that engages the mind. When I read, I’m immersed in the book.

Today, many young people don’t read. They have the word-parsing skills, but they see reading as a slow way of obtaining facts or answering questions. They prefer watching videos [they say they learn more quickly by picking up the context visually]. I find watching videos [or live lectures] restrictive. I’m locked into a fixed sequence of presentation. It usually takes much longer, and gives me less knowledge [but perhaps too many facts]. In that respect, it is much like a PowerPoint presentation – fixed sequence, low interactivity. It becomes easy to let your attention slip. Electronic ADD, if you will.

We need to reflect on the difference between facts, information, and knowledge. Facts and information are data points for thinking. Knowledge is what we gain by thinking about them. Reading (in the larger sense) helps us gain knowledge. Reading is an active skill in which we think about what we are reading. Real knowledge is acquired much like peeling an onion, a layer at a time. As we read, we descend into a deeper understanding. This iterative process is what SQ3R [Survey, Question, Read, Recite, Review] is about.

I learned to read well using the SQ3R technique. Essential to this is the first step – Survey. With a book, I study the Table of Contents and flip through the pages, stopping at interesting sections or illustrations. I build an overall picture of what is presented. Perhaps the book won’t be going where I wish, and I exchange it for another. [Remember the stacks of books on the table in front of students doing research?]

Today, I usually replace the ‘Recite’ step with Discussion. Many people have observed that their knowledge of a subject improves greatly when they try to explain it to others. I’ve found that doing so often helps me tie the pieces together, and it often leads to intuitive leaps in understanding.

When I read a novel, I expect it to move along at a pace similar to watching a movie – if it took me two weeks to read it, I’d lose interest or get distracted. When you read fast, a novel is more entertaining. On a screen, most pages are too wide to facilitate fast reading. Why don’t I re-size pages to fit a ‘newspaper column’ width? The ads and junk on the sides make this less than useful. And trying to read with animated ads for company is almost obscene.

I understand we’re at a weird point in time with technology. The E-book readers don’t have the visual clarity of paper, and they certainly don’t facilitate surveying a book. I know that in a few years the visual concerns will be gone, and perhaps E-Books will be viable. I’m a bit concerned about reading one in the bathtub. What will US Customs do when I bring my E-Book reader back into the country?

What about privacy? Who has a record of my E-book downloads?

Which brings up another point. E-Books can’t be trusted to remain available. If I buy a book, I own it. I can keep it as long as I want. With E-Books, we have DRM. Recently, Amazon ‘recalled’ copies of George Orwell’s 1984 from readers’ Kindles. [They did not have the legal right to sell it, so they ‘unsold’ it.]

DRM seems to encapsulate the idea that we never own our own copy of information; we are just licensed for one specific use of it, and that license can be rescinded if the DRM license server goes away.

I’m no Luddite. I like technology. I spend 8-10 hours a day in front of a computer screen. I program for a living. I love Google searches, Wikipedia, and online resources. The new forms of communication can be stunningly useful (as well as contributing to information overload). But I think both forms of information access have their strong and weak points.

I have often been frustrated trying to find again something I once found through a Google search.

Today, few people keep bookmarks. It’s easier to ‘re-Google’ than to scan a long linear [nested] list of text to find a link we want. But an item in last week’s top-ten might be at position 4000 this week. If the reason I’m searching for it is not to recall the exact information I first searched for, but rather to mentally ‘follow a link’ inspired by something else on the original page, I may never again find it.

A major problem people have with computers is that they lose things – we have reached the point where we need (and have) desktop search tools, because it is very easy to lose things we cannot see. We need to realize that search is excellent for answering questions, but it is lousy for gaining perspective or surveying the landscape. Sites like Wikipedia really help here, but we lack a personal Wikipedia for our desktop.

One final point. Back in 1978, there was a 10-part TV series, Connections. It described our incredible (and growing) web of dependencies on technology. If we were to suffer a catastrophic loss, paper books might be one of our few available resources for recovery. It may be true of civilizations as well as apples – what goes up does come down. Just as we have seed banks, we need to preserve low-technology access to information.


TANSTAAFL and double-edged swords

October 1st, 2009

One of my favorite science-fiction authors, Robert Heinlein, wrote a book, The Moon is a Harsh Mistress, that concerns an artificial intelligence assisting in a revolution. In the book, characters use the acronym TANSTAAFL for “There ain’t no such thing as a free lunch”. The point being made was that you always pay for what you get, regardless of how it is buried in the presentation.

I grew up with constant reminders to beware double-edged swords: Tools that help us often turn and bite us if we are not careful how we use them.

Before I get back on my messaging hobby-horse, I’d like to consider some thoughts about our tools. I’ve already mentioned that I’m not too fond of interpreters or the thinking that “better hardware will make code cleanliness or efficiency unimportant” – I’d like to be the one to decide where I spend the power newer machines bring: new applications, better dynamic interactions, or the possibilities I’ve not yet thought of, and I do not want to be protected from myself.

One of my pet peeves is how poorly people understand floating point. Machines have become fast enough that there is no need to avoid its use, and we give lip service to dealing with rounding errors and the dislike of 2.00000000016 displays, but how many of us take the time to really understand it? In the dim dark prehistory before graphics, I tried my hand at compiler writing, which was not hard, but writing the run-time libraries was. I had to write an efficient software floating-point library, and later, when the 8087 introduced hardware floating-point to my world, make the compiler seamlessly use either. I’d recommend that anyone who writes code using floating-point read What Every Computer Scientist Should Know About Floating-Point Arithmetic.
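
As a small illustration of the kind of surprise that paper explains, here is a minimal C++ snippet. The printed values are what IEEE-754 doubles typically produce, and the tolerance shown is just one reasonable choice, not a universal rule.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

int main() {
    // 0.1 has no exact binary representation, so the error accumulates.
    double sum = 0.0;
    for (int i = 0; i < 10; ++i) sum += 0.1;

    std::printf("%.17g\n", sum);        // typically 0.99999999999999989, not 1
    std::printf("%d\n", sum == 1.0);    // 0: exact comparison fails

    // Compare against a tolerance scaled to the magnitudes involved instead.
    double tol = 1e-9 * std::max(std::fabs(sum), 1.0);
    std::printf("%d\n", std::fabs(sum - 1.0) <= tol);   // 1
}
```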

Another pet peeve is making measurements with tools we do not take the time to understand. A common profiling technique is to take a statistical sample – what part of my code is executing every so many msec? This is not a surgical tool – it tells you in which direction you should be looking. Times obtained this way do not include operating system functions, so a graphical function implemented outside WIN32 may appear to take 4x as long as one using WIN32, but when you actually check, including the time spent inside the WIN32 call, it might take half the total time. Moral: be sure of what you are actually measuring.

Using a high-resolution timer is better, but you’d better average a few thousand tests, because Windows is a preemptive system. Michael Abrash’s book, Graphics Programming Black Book, gives a lot of practical examples of measurement-driven design, although the hardware he discusses is now out of date. You also might want to download his Ramblings in Realtime. He has consistently shared his insights on thinking clearly about design.
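
Here is a rough sketch of that kind of averaged measurement, written with the portable std::chrono clock (on the Windows of this era you would reach for QueryPerformanceCounter instead); the iteration count and the tiny workload are placeholders.

```cpp
#include <chrono>
#include <cstdio>

// Average many iterations; a single measurement on a preemptive OS mostly
// records whatever else the machine happened to be doing at that instant.
template <typename F>
double averageMicroseconds(F&& work, int iterations = 100000) {
    using clock = std::chrono::steady_clock;
    auto start = clock::now();
    for (int i = 0; i < iterations; ++i) work();
    auto stop = clock::now();
    std::chrono::duration<double, std::micro> total = stop - start;
    return total.count() / iterations;
}

int main() {
    volatile double x = 1.0;                            // placeholder workload
    double us = averageMicroseconds([&] { x = x * 1.0000001 + 0.5; });
    std::printf("average %.4f microseconds per call\n", us);
}
```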

The other tool that needs understanding is just what our compilers do with our code, and that means understanding assembler. I wrote our company’s FastCAD product entirely in x86 assembler. I actually started with the old Selector:offset horror, which I recommend everyone ignore as an historical bubonic plague. Once the 80386 and WIN32 supported the “flat memory model”, assembler became an order of magnitude (or more) easier.

I don’t recommend that you write programs in assembler. Modern compilers are efficient enough to please even me, and I no longer create in assembler. But the insight it gives me helps almost every day. It lets me judge the cost of various techniques used in C++. I learned that switch statements are no longer awful, since they are now implemented in optimized release code as a very efficient binary search. In fact, the more different cases, the more efficient such a test is. I learned that multiple inheritance is a real horror and can kill efficiency, and also lead to issues of functional duplication that can hide bugs.
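
For what it’s worth, this is the shape of code where that insight applies. How a given compiler lowers it varies, so check the generated code if it matters, but a dense set of cases typically becomes a jump table and a sparse set a binary search over the case values – not the linear if/else chain people often fear.

```cpp
// Dispatch over message IDs (hypothetical names).  In optimized builds this
// does not degrade as cases are added the way a hand-written if/else chain does.
enum Msg { MSG_OPEN = 1, MSG_CLOSE, MSG_DRAW, MSG_PICK, MSG_ZOOM, MSG_QUIT };

const char* dispatch(Msg m) {
    switch (m) {
        case MSG_OPEN:  return "open";
        case MSG_CLOSE: return "close";
        case MSG_DRAW:  return "draw";
        case MSG_PICK:  return "pick";
        case MSG_ZOOM:  return "zoom";
        case MSG_QUIT:  return "quit";
        default:        return "unknown";
    }
}
```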

I’d like to recommend the book, Write Great Code, Volume 2, by Randall Hyde. It gives a gentle introduction and walks you through both PPC and x86 compiler code generation. It may well surprise you and help you get better performance from your code, not because you will make minor changes for efficiency’s sake, but because being able to “think small” while “designing big” will improve the design approach you take.

Another book I like, out of the thousands on C++ programming, is C++ for Game Programmers. This is a very pragmatic book; it is not about learning C++, it’s about using it. There is a 2nd edition by a different author, Michael Dickheiser, and they are different enough that I like having both of them. I have never programmed a game, but I read most game programming books and subscribe to Game Developer magazine. These are the people who are really pushing the envelope, and there is a lot we can learn from them that will help us write any kind of interactive program.

Still another book I’d recommend is Raymond Chen’s The Old New Thing. I’ve just finished it, and found it very entertaining in the sense of “So that’s why it had to suck!”. On a more useful level, understanding the reasons for so many of the WIN32 annoyances helped me understand, in a deeper way, the problems Windows’ authors, and I, face. Occasionally he gives some good code tips, such as the quick and dirty way to use Uniscribe to get rid of the “not in this font, you don’t” ugly boxes – I had always thought there was no quick and simple way to use Uniscribe. This won’t take a lot of your time, and if even one of his points helps, it will be worth the money.


A quick tip for developers

October 1st, 2009

While I’m working on this week’s post, I thought I’d pass on a useful tip that I’ve not noticed other programmers I know using. On my development machines I have two hard drives in removable drive sleds – about $26 at Fry’s for the first one, and $13 or so for each additional drive-only sled. (Added note: I now use the Kingwin KF-2000-BK drive bays, which take a SATA drive directly without the need for individual drive sleds. Don’t forget to connect the small fan to power with the provided cable.)

Since smaller drives have gotten so cheap, I use one for each OS variant I have to test (about a dozen Windows and six or so Linux installs, as well as a DOS system – I still have customers using the DOS versions of our products). Much cheaper than having a lab full of machines, and the lab doesn’t get so hot. I’m able to use multiple machines for network development rather than tie them up on specific systems. The only exception is Mac OS X development, but they’re such nice machines I don’t really mind – although I wish I could do the same for Leopard testing.

By using Acronis True Image, I can easily clone a drive in a few minutes – the fastest and easiest backup scheme I’ve tried. It gets around Outlook PST issues, and I can easily change states or try “forked” changes without the Subversion overhead.

I have many programs that use various locking schemes, and I put these and nothing else on a “quarantine” drive, so I’m not always having to plead with a vendor to refresh my license (think AutoCAD, SoftImage, etc.). Then my development machine images can change often, and when gunk piles up, it’s a simple matter to restore from a clean image and download the project code from Subversion – a clean system much more quickly than doing things the hard way.

This scheme works best with SATA drives, which do not have master/slave jumpers to configure, so you can switch and duplicate to your heart’s content – just be sure to physically label the drives to assist in their proper selection for use.


Message-based systems

October 1st, 2009

The closest thing I have encountered to Brooks’s silver bullet is programming with asynchronous message-based design. I’m not the one to say who invented what or who should get the credit, but Alan Kay is the man who has most influenced me with his work in this area. Among many other things, he coined the term “Object-Oriented Programming” and worked with the team that developed Smalltalk, a from-the-ground-up message-based system, and a modern implementation of it, Squeak. My favorite quote of his is this:

“OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things. It can be done in Smalltalk and in LISP. There are possibly other systems in which this is possible, but I’m not aware of them.” An interesting article that explores his thinking is at Phil Windley’s Technometria.

The reason I chose to point out the above is that C++ does not really fit the bill by that definition. It is good and useful, and I use it as my basic tool. But it not only enforces early binding, it actually (due to the fragile base class problem) leads to prehistoric binding – binding to last year’s code. Note that he does not include inheritance in his definition.

What leads me to point this out is that the problems I tackle include the need to allow after-release integration of new plug-in modules. I have products that need to support many such extensions. This has become common in today’s world. The gotcha is that when I change my code and release a new version, if I have made any changes to a class from which third parties have derived their own classes, all of those plug-ins also need to be upgraded. That is a real headache for many people.
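
A minimal sketch of why that happens (the class and member names are hypothetical, not from any of our products): the plug-in binary bakes the base class’s vtable layout and object size into itself, so almost any change to the base class breaks it until it is recompiled.

```cpp
// sdk.h, version 1.0 - the header a plug-in author compiles against.
class Entity {
public:
    virtual ~Entity() {}
    virtual void draw() {}
    virtual void move(double, double) {}
    // Version 2.0 inserts a new virtual here:
    //     virtual void snap() {}
    // and adds a data member below.  Both change the vtable layout and the
    // sizeof(Entity) that already-shipped plug-in binaries have baked in.
    virtual void erase() {}
private:
    int flags;
};

// Compiled and shipped by a third party against version 1.0:
class PluginEntity : public Entity {
public:
    void draw() {}   // resolved through a vtable slot fixed at compile time
};
// Load that binary against a 2.0 host and its overrides land in the wrong
// slots and its objects have the wrong size - until the plug-in is rebuilt.
```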

Another problem with the C++ implementation of OOP is that we tend to use it to build a rigid hierarchy of objects. If we have vehicles, and then cars, and then a Lexus, and we have code to turn a heater on or off, we’re good to go. Until we decide we need to heat the garage. Then it’s back to duplicating function. (See OOP Component-structure fragility).
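
In code, the problem, and one common way out of it, look roughly like this (toy names; composition is only one of several alternatives):

```cpp
// The rigid-hierarchy version: the heater is welded into Vehicle...
class Vehicle {
public:
    void heaterOn()  { heating = true;  }
    void heaterOff() { heating = false; }
private:
    bool heating = false;
};
class Car   : public Vehicle {};
class Lexus : public Car {};

// ...so a Garage, which is not a Vehicle, cannot reuse it without copying it.
// Pulling the capability out into its own component restores reuse:
class Heater {
public:
    void on()  { heating = true;  }
    void off() { heating = false; }
private:
    bool heating = false;
};
class Garage    { public: Heater heater; };
class HeatedCar { public: Heater heater; /* ...plus the vehicle behavior */ };
```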

Consider an alternative: modules that link up and can dynamically be brought into play during the execution, and changed around or replaced during execution. Delegation, callbacks, agents: all of those complications completely disappear. The goal is the elimination of glue code. In our unobtainable nirvana, we would be writing 100% application-specific code.

I’ve been doing object-oriented programming in assembler for 20 years now. I don’t need a specific language to be able to use a specific design approach. I do need discipline. I liked assembler because it was definitely an “enough rope” approach. I stopped using assembler because compilers finally got good enough to suit me. It was never a religion. The important point is this: It is more how we think about our design than about the choice of language we use to implement it.

What I find in using a message-based design is that I can have near-perfect data encapsulation and implementation hiding, fluid structure, and very late binding (linking). I also find that it greatly simplifies multi-threading and network coding. I implemented my message-based approach in C++ because I don’t like interpretive environments – I want my CPU cycles, thank you very much.

The other part of my lead-in was the word asynchronous. It’s what frees us of user-interface modal locks, like blocking dialog boxes. Here is an example: We have a file-name picking dialog box. We start it with a message, and we tell the dialog object the object with which it should communicate. When a file name is picked, or the dialog is canceled, a message is sent to that notification target object. Nothing really needs to be suspended, because the resulting message is what triggers the next step. If we use it to load images into an image cache object, implementing multiple selection does not require extra callbacks or other weirdness – the file name dialog simply sends several file name picked messages.
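
Here is a stripped-down sketch of that flow. The Message type, the queue, and the class names are all invented for illustration, not the actual framework, but the shape is the same: nothing blocks, and results arrive as messages at the notification target.

```cpp
#include <cstdio>
#include <queue>
#include <string>
#include <utility>

struct Message { std::string type, payload; };

class Object {                                // anything that can receive messages
public:
    virtual ~Object() {}
    virtual void receive(const Message& m) = 0;
};

class MessageQueue {                          // the dispatcher: nothing blocks
public:
    void post(Object* target, Message m) { q.push({target, std::move(m)}); }
    void pump() {
        while (!q.empty()) {
            std::pair<Object*, Message> item = q.front();
            q.pop();
            item.first->receive(item.second);
        }
    }
private:
    std::queue<std::pair<Object*, Message>> q;
};

class ImageCache : public Object {            // the notification target
public:
    void receive(const Message& m) {
        if (m.type == "file-picked")
            std::printf("loading %s\n", m.payload.c_str());
    }
};

class FileDialog : public Object {            // never blocks its caller
public:
    FileDialog(MessageQueue& mq, Object* notify) : mq(mq), notify(notify) {}
    void receive(const Message& m) {
        if (m.type == "show")                 // pretend the user picked two files
            for (const char* name : {"a.png", "b.png"})
                mq.post(notify, Message{"file-picked", name});
    }
private:
    MessageQueue& mq;
    Object* notify;
};

int main() {
    MessageQueue mq;
    ImageCache cache;
    FileDialog dialog(mq, &cache);
    mq.post(&dialog, Message{"show", ""});    // start the dialog with a message
    mq.pump();                                // each delivery triggers the next step
}
```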

This seems simple, and it is, but it has enabling results. If we have a worker thread process, we use a thread control object, which queues work for the thread, starting it when there is a message in the queue, and stopping it when there are none. We write one of these classes. If we want to make some function run in a threaded mode, we only need to communicate with it via this threading queue object, and ensure data locking (only one thread can be changing data at a time). Not easy, but much better organized. If we don’t create such a queue object, but communicate directly, then we are working synchronously. Dynamic run time choice.
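
A minimal version of such a thread control object might look like the sketch below. It uses the modern std::thread primitives (which postdate this post) rather than raw WIN32 threads, and the names are mine; the essential points are that work arrives only through the queue, and the worker sleeps whenever the queue is empty.

```cpp
#include <condition_variable>
#include <cstdio>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

class WorkQueue {
public:
    WorkQueue() : worker([this] { run(); }) {}
    ~WorkQueue() {                                   // drain remaining jobs, then stop
        { std::lock_guard<std::mutex> lk(m); done = true; }
        cv.notify_one();
        worker.join();
    }
    void post(std::function<void()> job) {           // the only way in: a message
        { std::lock_guard<std::mutex> lk(m); q.push(std::move(job)); }
        cv.notify_one();
    }
private:
    void run() {
        for (;;) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lk(m);
                cv.wait(lk, [this] { return done || !q.empty(); });  // idle when empty
                if (done && q.empty()) return;
                job = std::move(q.front()); q.pop();
            }
            job();                                   // run outside the lock
        }
    }
    std::mutex m;
    std::condition_variable cv;
    std::queue<std::function<void()>> q;
    bool done = false;
    std::thread worker;                              // started last, after the rest
};

int main() {
    WorkQueue queue;
    for (int i = 0; i < 3; ++i)
        queue.post([i] { std::printf("job %d on the worker thread\n", i); });
}   // the destructor lets the queued jobs finish before joining
```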

We want to distribute some of a heavy work load – send the message over a network. Our code already knows how to wait on the results due to the asynchronous messaging design. We don’t need to rework our code to network-enable it. A network is just a slower message pipe.

The kicker is that these techniques have been around for something like 30 years or more. The only reason I can think of to explain why they are not current accepted best practice is that you lose most of the benefit if you graft the approach on to an existing system. It needs to be designed in at the core foundation level.

A good way to understand messaging is to play with Squeak. You will likely not use it for your “real” applications, but it will change how you think. Also, Apple’s preferred language for OS X is Objective-C, a messaging-based system. The GCC suite supports Objective-C and is available for all platforms.


Tool transparency

October 1st, 2009

I consider a pencil and paper to be the strongest competitor I have. Why? Because it has incredible tool transparency. When I use it, I do not have to think about the tool. My mind remains on my thoughts. If I’m taking notes, I’m not thinking about dialog boxes, the “right” order to do things, icons, etc. I can write without looking. When I need to represent relationships, I can draw them. Free-form. It is this ability to keep my thoughts on what I’m doing, rather than on what I am using to do it, that I call tool transparency.

A pencil disappears. Of course, we tend to forget the steep learning curve: It took us a few years to go from crayons that would not stay inside the outlines to (hopefully) readable penmanship and clear sketching.

Apple is famous for its systems being usable. Still, there are things that I find really mysterious. I almost took my first G5 back to the store because, as a Windows geek, I didn’t realize that you opened the CD-ROM drive with a keyboard key that suggests “up” to me. Clarity, like beauty, is in the eye of the beholder. What is obvious to one person is mud to another.

As an example, my wife uses a Mac for email. She is a sharp lady, and the other agents in her office look to her for help with computers. But when there is a problem sending an email, it automatically moves her work to another folder – i.e., one of the lines of text on the left side. She does not see these as folders, and since she did not put it there, she worries that it has disappeared.

Now we can say the tri-pane presentation of most email clients is “obvious”, but it’s like that pencil – we forget the time we put in before such things became obvious. We can no longer remember what our technical world looks like to someone who does not love it for its own sake, but just wants to get a job done. Putting our clients first means making things work the way they think, not the way we’d like them to think.

One of the joys of working on software, as opposed to hardware, is that we can accommodate different viewpoints. But too often, we pick the easy way out, and accommodate viewpoints that closely match our own. In doing so, we take off the table the possibility of making good tools for a very large number of people.

Several years ago, Microsoft came under strong criticism and some ridicule for a program called “Bob”. This was like the Office paper clip on steroids, and it did look pretty annoying. But at least they were trying to approach things from a different perspective, and make something that non-computer types might find usable. It might have been a success if they’d gotten some of the better game designers and graphic artists involved.

The point I’d like you to consider is: what can we do to make our products transparent to our clients? Let’s not try to impress them with a 200-command, 100-icon, dialog-based monster that will take them weeks to master. Let’s make it look so simple that it seems there is nothing there – “Where’s the beef?”


When theory and reality collide

October 1st, 2009

Reality wins. Every time. Theory is our name for how we hope it works. I used to keep a sign over my desk that read “When theory and reality collide, reality wins”. I used it to keep my focus on the reality that we create tools for other people to use.

We all like to think we’re open-minded. The problem is, we each see a lack of it in others, but seldom in ourselves. I think there are many reasons besides ignorance for this. One observation I have about myself, and many other experienced programmers, is that we consider ourselves more likely to be correct about technical issues than most everyone else we talk to. We don’t consider it ego – it’s our actual experience. The problem is: it’s only true for a very limited problem domain.

What we need is a way to suppress our conceptual immune system – our automatic reaction to defend any point of view to which we have “bought in”. This is especially likely to occur when we deal with others who have their defenses set at DefCon1. If we are talking to a non-technical manager who feels the need to maintain his authority, our objectivity becomes toast.

One reality we have all dealt with is that at the end of the day, we are the ones who have to make it work. In this sense, we are engineers. Reality matters. We’ve each learned the hard way that when presented with a bad design, we will be the ones to take the fall. So if, as Alan Cooper says, the “Inmates are Running the Asylum”, it’s because someone has to.

Now this book is probably the least pleasant book I have ever read, because Mr. Cooper’s persona for a programmer is a caricature of an antisocial nerd. But I read it, and I recommend it. Sometimes we need a dose of unpleasant medicine. That’s not to say that I agree with all (or even most) of what he says, but I think he has many points worth thinking about. Just ignore his return to the “waterfall” model, and imagine what happens if you put interaction design people inside the product design feedback loop.

The most essential part of making a negative-feedback system (in the engineering sense of the term) work is the ability to accurately measure the difference between the present state and the desired state. If our programs are going to evolve in an organic sense, we must temper our certainty, built on years of experience, with a willingness to listen to those who will use our programs on a daily basis.

Notice I mention the users – not the managers, sales people, marketing people, or others. They all have valuable contributions, if we will listen without judgment. But nothing can make a program succeed if the people using it don’t actually like it. These are the people who most often get left out of the design process.

I have two suggestions. First: get some of your toughest clients and some of the least experienced ones, and discuss the design issues with them. I use a BBS for this, and I’ve found that customers willingly give their time if they feel you are actually listening. Then get these people in an alpha-test loop (don’t wait for beta).

The second is to attend conferences that are not “new-product” or “new toys” in nature. Now I’ve been to a lot of these things, and eaten an awful lot of awful meals. But the one I keep returning to, year after year, is COFES – The Congress On The Future of Engineering Software. The joy of COFES is that there are so many really accomplished individuals, great thinkers, and people who honestly spend their mental lives “out of the box”, that there’s no more chance of not listening than there is of a child refusing to go to Disneyland.


Why does good software take so long to build?

October 1st, 2009

I’ve been growing a new program. It could be a CAD program, it could be a project management program, it might be an idea facilitator. Actually, it’s all of this and more. It’s hard to put a label on it, because we’ve spent seven years thinking about how people work with CAD designs, and why CAD is so useless for that initial Eureka! moment and its “napkin-space” expression.

Categorizing a program is easy if it’s just another “me-too”. We all know and agree on what it does. But if we go off into the wilderness and explore what kind of tools would be really useful, the result may not fit into existing categories. New thinking may yield new categories, and it usually takes more than one of something to see what about it makes it a category.

Joel Spolsky has written an excellent article on the time good projects take, Good Software Takes Ten Years. Get Used To It. He talks in his blog about the Chandler project, and how it has taken seven years so far. The observation that good software takes a long time has many examples – and the best ones always seem to involve innovative programs that really focus on user interaction and how users think about their work.

One of the most influential books I’ve ever read is Frederick P. Brooks’s “The Mythical Man-Month”. While this book discusses the design and implementation of OS/360 back in the days of punch cards, it remains relevant today because it is much more about the nature and costs of communication within a design team. For over 25 years, I’ve asked every new employee of my company to read this book. The 20th-anniversary edition has four additional chapters, including the article No Silver Bullet, which has inspired a lot of commentary. The book also includes his own re-evaluation of that article, discussing the prospects he sees for improvements in productivity.

After first reading the book, I decided to change how I went about making programs. I adopted an evolutionary approach of growing a program around a central skeleton. With this approach we always had a working program. We then mixed that with an engineer’s negative feedback loop: send copies to real-life clients and have them try to use it. Listen with an open mind, suppressing one’s conceptual immune system, and adapt the program. Repeat in as tight a cycle as possible.

Originally, sending floppy disks around the world resulted in a two or three week feedback loop. Things really took off with the internet, when we began almost-daily decision loops. To this day, this ability is what I most value about the internet – I can actively involve our clients in the design of the tools we are making.

Chapter 17, “‘No Silver Bullet’ Refired”, touches on this methodology. I’ve been using it for almost 30 years now, and it has enabled a very small team to make several generations of CAD programs that have satisfied our customers’ needs. One result of designing a program in this manner is that it requires radically less support – a major cost of a software product – since by the nature of the process it was designed to work the way our clients think about their work.


Let me introduce myself

October 1st, 2009

I’ve been thinking about software systems architecture for over 35 years now. I wrote one of the first microcomputer CAD programs, Interact, which was the prototype for the first version of AutoCAD. Since then, my company, Evolution Computing, has published both EasyCAD and FastCAD.

I intend to discuss issues in software architecture, program ease of use, what we can do to improve our productivity as programmers, and the effect we have on the world we live in. I’ve also spent several years working with the Open Design Alliance to further CAD interoperability, and have some thoughts on the causes and the political nature of interoperability problems.
