
The topics discussed here grow out of the bread-and-butter issues that confront my consulting and software clients on a daily basis. We'll talk about prosaic stuff like Membership Management, Meetings and Events Management and Fundraising, broader ideas like security and software project management, and the social, cultural, and organizational issues that impact IT decision-making.




Wednesday, June 20, 2007

Keys to success in technology projects

This week Jeff Atwood has a great post in his Coding Horror blog, excerpting from Steve McConnell's 1996 book Rapid Development. He quotes a list of "36 Classic Development Mistakes" that can doom your project. I've had to extricate teams (and, I admit it, myself) from many of these over the years. Jeff and Steve are thinking about software development efforts, but any technology project is prone to these mistakes. Avoiding them is the key to success.


Three biggies I've struggled with are:

Heroics. This is probably the most common software mistake of all. A project looks like it's going to need four more weeks of work. There are two weeks left to the delivery date. Solution? The team works 16-hour days for 15 days. The result? The deadline needs to be slipped after all, a lot of midnight code needs to be reworked, everyone is raw and ragged, and developer-user relations are frayed. Programmers are by temperament very prone to heroics, but users can encourage it by reluctance to modify deadlines when necessary. Key to Success: when schedules start to slip, accept reality and make adjustments.

Insufficient Risk Management. As the authors of "Waltzing with Bears" note, every worthwhile software project carries significant risk -- or it would already be done. Developers often gloss over risks due to what my wife Doria calls "the hot dog factor": the assumption that nothing will go wrong because they are real hot dogs. And users hide from risks because they do not want to confront them, plan for them, or call them to their superiors' attention. When a problem emerges, it blindsides everyone. Key to Success: you really do need to plan for those unpleasant possibilities.

Feature Creep. This one has been written about so often that it is easy to forget about - then you see it happening again. Users and developers put in a lot of hard work together to specify the desired behavior of some new application. Then as the release date looms, there are more and more features users can't live without. It may seem like it's just correcting design shortcomings or adding flexibility. But release dates are missed, new bugs are introduced, and unintended consequences of these late changes are discovered at the worst possible times. And when features do not seem to cost the users anything, all bets are off - there is no way to consider the return on investment of the development effort.

It's easy to blame feature creep entirely on the users. But developers can encourage creep if they do not have a clear-cut process in place for tracking and accepting user requests. And they're setting themselves up for last-minute change requests if they don't check in frequently with users as they work, to make sure they are on the same page.
The moral: you need a well-defined process, and supporting tools, for tracking and managing your users' requests.
Image: http://www.flickr.com/photos/nicohogg/344155950/


Monday, June 11, 2007

Estimating Programmer Time

I've had a few questions in my mailbox recently about this subject -- and as leader of a company that provides software development services, it's one that is dear to my heart. Our usual way of working with our clients is to estimate the time required for any requested programming task, and to guarantee we will come in within 20% of that estimate. So arriving at an inaccurate projection can really hurt.

The good news is we've gotten better at it over time; the bad news is that it is not easy, and there will inevitably be occasions when a task takes much longer than expected to complete. Here's what we've learned over the years.

1. The starting point - look at how often you underestimate and compare it to how often you overestimate. You aren't really surprised, are you? All developers have a tendency to underestimate how long a task will take. Programming provides one of the most clear-cut examples of that oft-stated law: "Everything takes longer than you think it will, even when you take this law into account." Understanding the factors that lead to this is the heart of becoming a better estimator.

2. The most important tool you can have handy when trying to estimate is a database of how long things have actually taken you in the past. If you do not track your time - your entire group's time - against specific development tasks, you should start doing that now.

3. Estimate in pairs. Your accuracy will go way up.

4. Here's the key: remember that the bulk of a programmer's time is not spent actually writing the new code, but in (1) figuring out how the existing program works, (2) determining where to make the change, (3) verifying that the change actually works, and (4) debugging any problems found. These are the times that are hardest to control, and are the most often overlooked: novice programmers habitually underestimate the likelihood that problems will emerge in testing.

5. To manage this, include estimates of "design and analysis" time, and "testing" time, and assume some time will be needed for debugging -- even in what seem to be the simplest modifications or enhancements. Remember to include time for defects found not just by the programming team, but by customers and end users after the item goes into production.

6. Be willing to revise the estimate after the initial few hours of the work. Perhaps the estimate was based on the idea that some new components could be dropped quickly onto a form - but it turns out the form is so crowded with widgets and gizmos that it needs to be completely restructured. If good estimates are needed to track costs or delivery dates, programmers must report any discovery that affects the development timeline immediately, so other stakeholders can revise their expectations.
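To make points 1 and 2 concrete, here's a minimal sketch, in JavaScript, of the kind of estimate-versus-actual log I'm describing. The task names and hours are made up for illustration:

```javascript
// A tiny estimate-vs-actual log. The task names and hours are made up.
const tasks = [
  { name: "add export button", estimated: 4, actual: 7 },
  { name: "fix date filter", estimated: 2, actual: 2 },
  { name: "new report page", estimated: 16, actual: 30 },
];

// How often did we underestimate?
const underestimated = tasks.filter(t => t.actual > t.estimated).length;

// Average ratio of actual to estimated time -- a rough correction
// factor to apply to future estimates.
const ratios = tasks.map(t => t.actual / t.estimated);
const avg = ratios.reduce((sum, r) => sum + r, 0) / ratios.length;

console.log("Underestimated " + underestimated + " of " + tasks.length + " tasks");
console.log("On average, tasks took " + avg.toFixed(2) + "x their estimate");
```

Even a log this crude, kept honestly over a few months, tells you the correction factor your gut estimates need.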
Image: http://www.flickr.com/photos/aarongeller/360135019/


Thursday, April 19, 2007

The Browserless Internet

It's not really a new idea - here's a Network World article from more than six years ago talking about it. But I think we are going to hear lots more about it real soon now. This week, Adobe Labs released the first public version of its much-heralded Apollo development environment, a cross-platform tool to build internet applications that live on the desktop, with all the additional solidity and security that can provide. Here's a video of a wee demo: a desktop application that manages eBay auctions.

We've been thinking about such applications for over a year here: last May I wrote in this blog
The idea of the Internet and the browser have been welded together in our minds, but really the Browser is just one way to display the content we pull across the network. Developers are just starting to realize that what used to be thought of as desktop apps can access data and content from anywhere on the net just as a browser can.
I'm convinced that this technology - not Apollo in particular, but internet-delivered desktop apps using the public net rather than your office LAN as their infrastructure - is going to bring a lot of power and security to non-profit applications in the very near future. The major development platforms have been working with components that handle internet data and document access for years now; browserless internet apps can be built in virtually any of the existing desktop development environments popular today.

So stay tuned.


Wednesday, March 14, 2007

Navigating complexity

Maybe I'm just thinking out loud here. In the last post I mused about the complexity of striving for simplicity in design. For every effort to simplify, you discover you have made another user's life more complicated.

Usability means several things. Tasks should be easy to learn, quick to complete, and hard to screw up. Data displays should be complete, uncluttered, and easy to comprehend. But even these straightforward goals can work against each other. Do we show less information on a page to make it easier to read, or do we show more, so users do not need to click or scroll? The jury is always out on this one - we once received an email asking us to use a larger font, allow more white space, and add several columns of information to a particular display.

At the heart of the problem are the complex business requirements that organizations create. I've written before about the importance of weeding out needless complexity. But non-profit staff can only go so far in eliminating requirements from government, insurers, the board, and ultimately the nature of their work itself. So the software designer's task is to mask or conceal this complexity.

Here are some of the trade-offs I've run into as we work with our users to make software more usable.

Design Approach: Context sensitivity. Only show inputs and options when they apply.
It's simple because: The user never finds herself clicking buttons only to get a message like "You cannot place a hat on this zebra." In situations where zebras do not wear hats, the button is hidden. For sessions where transportation is unavailable, the transportation link is hidden.
It's not simple because: Users are never sure when a menu choice or option will appear. "There used to be a button for putting a hat on the zebra," they report.

Design Approach: The WIZARD approach. Have users perform tasks through a series of simple forms that step them linearly through the process.
It's simple because: Little or no training is required. The stripped-down dialogs on each page come with instructions and are not threatening to the user.
It's not simple because: Getting through the process requires many clicks. Worse, correcting a mistake requires the user to click BACK repeatedly to find the page where the correction can be made. So work takes twice as long.

Design Approach: NOVICE and EXPERT forms. Have stripped-down pages for users performing simpler tasks, and more complex pages for the advanced users.
It's simple because: Everyone wins. The simplified pages require no training, and allow less skilled users to perform a majority of possible tasks. The expert pages provide access to all capabilities.
It's not simple because: Everyone prefers the simplified pages, but users are constantly frustrated that the one other capability they need is not there.

Design Approach: User configuration. Let users configure the capabilities and elements available on their view of complex forms.
It's simple because: This is even better. Each user gets exactly what they need and no more.
It's not simple because: This is even worse. Users need to learn how to configure the pages. Users remove items from their pages that they actually need. Support staff find their lives complicated because each user is looking at a different page.

Design Approach: Strict enforcement. Make it virtually impossible for the user to make a mistake by building in rules to enforce all data entry and setup requirements. If each customer MUST have a date of birth, for example, require it to be filled in for the form to be saved.
It's simple because: Users do not need to know all of management's decisions about data requirements. The system enforces the rules and pops up messages to tell you what you need. It's what computers are for!
It's not simple because: No one can get their work done. The system is constantly complaining that they cannot use this member type, or that they must enter an employer. People are entering 01/01/2000 for everyone's birthday. The data is in terrible shape because people are forced to work around the system.
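To make the strict-enforcement trade-off concrete, here's a minimal sketch in JavaScript. The field names and the record are hypothetical:

```javascript
// Hypothetical strict-enforcement rule: a record cannot be saved
// until every required field is filled in.
var requiredFields = ["name", "dateOfBirth", "memberType"];

function missingFields(record) {
  return requiredFields.filter(function (field) {
    return !record[field];
  });
}

// A user who simply doesn't know the birthday is stopped cold...
var record = { name: "Pat Smith", memberType: "regular" };
var missing = missingFields(record);
if (missing.length > 0) {
  console.log("Cannot save: missing " + missing.join(", "));
}
// ...which is exactly when 01/01/2000 starts showing up in the data.
```

The check itself is trivial to write; the hard part is deciding which fields truly must block a save.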

It seems no design idea is a panacea. Each must be employed judiciously as users and developers navigate the seas of complexity.


Friday, February 02, 2007

Testing more than the features

One issue you can never discuss too much is testing. How do you test a software application before turning users loose on it?

Dr. Dobb's, one of my favorite software magazines for almost two decades, had a nice piece by Scott Ambler on testing in the December edition. Scott has emerged as one of the principal evangelists of the agile development movement, and in this issue he discusses testing from an agile perspective. Like much current writing on testing in the Extreme Programming and Agile universe, he talks a lot about unit testing and the test-first strategy. This approach stresses writing automated tests for every bite-sized chunk of program logic. Automated unit testing can more than double the amount of code that needs to be written, but it provides a mechanism for detecting when new code breaks old features. And don't you hate when that happens!
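To illustrate the test-first idea, here's a minimal sketch in JavaScript. The dues function and its rules are hypothetical, and the hand-rolled assertion stands in for what a real test framework would provide:

```javascript
// The function under test: a hypothetical membership-dues calculation.
function memberDues(memberType) {
  if (memberType === "student") return 25;
  if (memberType === "regular") return 100;
  throw new Error("unknown member type: " + memberType);
}

// A hand-rolled assertion; a real project would use a test framework.
function assertEquals(expected, actual, label) {
  if (expected !== actual) {
    throw new Error(label + ": expected " + expected + " but got " + actual);
  }
}

// Bite-sized automated tests, written alongside (or before) the code.
// Re-running them after every change flags new code that breaks old features.
assertEquals(25, memberDues("student"), "student dues");
assertEquals(100, memberDues("regular"), "regular dues");
console.log("all dues tests passed");
```

The payoff comes months later, when a change to the dues logic trips one of these tests before any user ever sees the bug.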

Ambler also stresses a new concept: what he terms investigative testing. Most of today's testing literature emphasizes testing the specified functionality. Investigative testing takes a different tack:
The investigative test team's goal should be to ask, "What could go wrong," and to explore potential scenarios that neither the development team nor business stakeholders may have considered. They're attempting to address the question, "Is this system any good?" and not, "Does this system fulfill the written specification?"
A blogger I've been quoting a good bit lately, Jeff Atwood, similarly advises us this week not to limit testing to confirmation of the specifications. His post on Lo-Fi Usability Testing takes off from a book he's recommending to all software designers, Don't Make Me Think, by Steve Krug. His point: the users' agreement that you've met their requirements has very little to do with their ultimate satisfaction with the application. The users, like the programmers, are too close to the project to be the best testers. The best bet is to rope in a few people who know next to nothing about your project.
Usability testing doesn't have to be complicated. If you really want to know if what you're building works, ask someone to use it while you watch. If nothing else, grab Joe from accounting, Sue from marketing, grab anyone nearby who isn't directly involved with the project, and have them try it. Don't tell them what to do. Give them a task, and remind them to think out loud while they do it. Then quietly sit back and watch what happens. I can tell you from personal experience that the results are often eye-opening.


Friday, October 20, 2006

How does a programmer spend his time?

I just read an interesting post about programmer productivity on Peter Hallam's blog. Peter is a developer for Microsoft; I found his blog via Jeff Atwood's Coding Horror blog. Peter writes about how to make programmers more productive, and suggests that all the emphasis on helping programmers write new code faster is misplaced, because programmers don't really spend much of their time writing new code. His estimate is that the typical developer spends about 5% of his time writing new code, 25% modifying old code, and 70% understanding the code he needs to modify. Atwood has posted a nice graphic of this division of labor. Once you realize where the time is being spent, you realize that tools that speed up the writing of new code have very little impact on overall productivity - while anything that makes old code more readable and understandable leads to big improvements. Peter uses this argument to suggest his employer is focusing on the wrong features in Visual Studio, Microsoft's flagship development environment.

The 5-25-70 task breakdown also explains why programmers so often make utterly unrealistic estimates of how long a task will take them. They estimate as if they were writing a tiny application from scratch, but in actuality they are modifying or enhancing an application they first need to understand. You've seen those developer tool demos where the sales guy writes an entire self-contained application from scratch in 30 minutes. Peter writes:
This does not even remotely resemble real world professional coding. The last time I had a coding project like that I was in college. Early in college. A much more representative task would be to send a coder an existing piece of code that they'd never seen, that was undocumented, badly written, badly architected and had several bugs. Then tell them to add a new feature while maintaining the existing behavior as much as possible.
I think anyone who has worked professionally on large applications will recognize this scenario. We just don't usually recognize its full implications.

Monday, October 16, 2006

Mike Wyatt's Cone of Uncertainty

Anyone providing any sort of IT assistance to organizations encounters this problem: you've spent an hour or so discussing some emerging need with your users, when they ask, "So what exactly will you do to solve this problem? When will it be done? What's it going to cost us?" And you have no idea yet; you've barely scratched the surface. How do you answer?

Over the weekend I ran into the weblog of Mike Wyatt, who blogs about identity management solutions at Sun Microsystems. Last Monday Mike posted a piece about what he calls the Cone of Uncertainty model, and provides a tool that shows users how the level of uncertainty - uncertainty about requirements, technology, timeframes, and budgets - is steadily reduced as a project lifecycle unfolds.

[Image: cone diagram, showing uncertainty decreasing with each step in implementation]


Mike points out that failure to recognize the level of uncertainty by vendors, consultants, and users leads to unkept promises, missed deadlines, and cost overruns.
Even with good change control processes and governance procedures, what both the vendor and the customer think the project will be in terms of cost, time, and functionality at the beginning of the project and what it actually turns out to be at the end of the project will at times differ by a wide margin.
The Cone concept and accompanying graphic strike me as effective tools for educating user communities about the advantage of postponing firm ideas of budget and schedule until a suitable stage in the process. This in turn will lower the pressure on implementors to make promises they very likely will not be able to keep. Replace the events on the horizontal axis of the Cone graphic with the steps in your particular implementation methodology, and trundle it into your first project meeting!

Friday, August 11, 2006

Introducing Aptana

Here's another very promising open-source web development tool I've come across in my month of feverish web development: the Aptana IDE. Aptana is an integrated development environment for HTML/CSS/JavaScript projects, built on the Eclipse platform. It's a major step toward bringing to web development the same kinds of tools we take for granted in Java, Delphi, or C++.

Aptana provides
  • Code assist for HTML, CSS, and JavaScript, including your own JavaScript functions and libraries.
  • Syntax checking in all three languages.
  • Support for user-developed macros ("actions", they call them) written in JavaScript to manipulate your code.
  • An outliner that shows you the structure of your HTML, CSS, or JavaScript file, and lets you jump right to an element or function.
Take a look at their little demo video. The product is still in beta, and indeed the developers consider it at version 0.2. What's glaringly missing -- though planned for a later release -- is a JavaScript debugger. This is a must-have; its absence means that in the meantime you'll want to use Aptana in conjunction with the SplineTech JavaScript Debugger, which I reviewed here a few weeks ago.

With an integrated debugger, Aptana will leave pure script debugging tools like SplineTech in the dust. And even without the debugger, this is a great editor to use for your next web project. I found three problems in my CSS simply by opening the style sheet in Aptana. Aptana is available for Windows, Mac, and Linux. Download it and take a look.


Thursday, August 03, 2006

J is for JavaScript....

Not a programmer? You might want to skip this posting. But as I've mentioned, I've been doing a good bit of coding these last couple weeks, updating an old web app we did for one of our clients. So it's what's on my mind.

If you do some web programming but you haven't tried the new Ajax techniques in your Web projects yet, you can find some great examples that demystify the whole thing in Ajax Hacks, by Bruce Perry. You'll find it's not rocket science, and after a brief learning curve you'll be creating web apps that are far more responsive than your old sites. But pretty soon you'll find yourself lying in bed at night wondering about the best format to use to feed data to the web page.

Since XML puts the X in Ajax, and the J is for JavaScript, the most obvious approach is getting the data as a textbook XML data file and using JavaScript to display the data elements wherever you wish. But before long you'll get fed up with the tediousness of parsing individual data elements out of the raw XML feed.

XPath promises a way out. Instead of using procedural code to travel down the XML tree, XPath lets you address an element or list of elements with a string that looks like a directory path. For example, if you want to build a list of all authors of all books with a publication date of 2000, you could reference them with a string like
/book[@date="2000"]/author

But here you run into browser problems - the Mozilla family and IE implement XPath in entirely different ways. Mozilla has the more powerful implementation, but it is pretty complicated to use; IE has a much more straightforward take on it. Charles Toepfer has posted a nice cross-browser library that implements this straightforward approach to XPath for the Mozilla browsers.

Still seems pretty complicated? You might want to try JSON (JavaScript Object Notation). This little trick lets you transfer data in the same way you might define it within your script - for example:

{
  "title": "War and Peace",
  "author": "Leo Tolstoy",
  "language": "Russian"
}

Besides requiring fewer characters to transfer (since there are no closing "tags"), it's trivial to parse in JavaScript - just use the JavaScript eval function to turn the string into a JavaScript object! And the file is very easy for a human to read.
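Here's a minimal sketch of the eval trick; the book data is hypothetical. One caution: eval will happily execute any script the server sends, so only use it with a data source you trust.

```javascript
// A JSON string as it might arrive from the server.
var feed = '{ "title": "War and Peace", "author": "Leo Tolstoy", "language": "Russian" }';

// Wrapping the string in parentheses makes eval treat the braces as an
// object literal rather than a block statement.
var book = eval("(" + feed + ")");

console.log(book.author); // prints "Leo Tolstoy"
```

Once evaluated, the data is an ordinary JavaScript object, with each field available through plain property access.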

But JSON hasn't helped us with the formatting issues. An approach that does is one Christine Herron blogged about not long ago: using microformat-style XML as a data transfer mechanism. This is a format that uses HTML div and span tags to identify the data elements. Like so:

<div class="book">
<span class="title">War And Peace</span>
<span class="author">Leon Tolstoy</span>
</div>

The semantic information, as you see, is indicated by the element's class. You can just assign the entire XML string to a div's innerHTML, and let your stylesheet handle the formatting. Microformats were originally developed as a way to make web pages that could be easily scraped by other applications, but turning the technique into a data transfer mechanism seems sensible in some cases. The Rico effects library uses this as one of its standard approaches to Ajax data exchange.

Monday, July 24, 2006

Programming is different

In my work, I alternate between periods where I am very involved in the coding of our programming projects, and periods where I am taken up with managing projects, consulting with our users, or talking to prospective clients. Each time I become immersed in the coding for a few weeks, I rediscover that programming is different from most other forms of work. A number of writers have commented on how programmers are different from other professionals - take a look at Bryan Dollery's Understanding the Psychology of Programming, for example. But I suspect it is the work, not the people, that creates the stylistic difference.

A case in point. Dollery says:
...programmers usually do have a longer attention span and a greater ability to concentrate than the majority of the population...
But I think it's possible that the programming itself compels this form of attention. Programming has an addictive quality about it. For example, when I'm responding to an RFP, I'm very amenable to breaking for lunch when my co-workers seem to be doing the same. But when I'm programming, I just wave them on... it seems I'd much rather figure out why the Next button isn't "greying out" on the last record anymore. And even when I am not at the keyboard, I find it very difficult to get my mind off the issues pending in the programming project.

It's not just that I'm willing to devote very long and intense periods to the programming - once I am truly involved in the code, I find it next to impossible to break my thoughts away from it. So during such periods I'll find myself thinking about my code in the middle of a dinner table discussion about an issue I'm normally quite passionate about.

In addition to the attention/concentration issue, other writers have commented that techies are motivated by different factors than other staff members, and have a tendency not to care about the overriding organizational motivation for the work they are asked to do. Watching my personality realign itself as I drift in and out of intensive technical work has helped me grasp the problems some managers have in leading IT efforts. If you find yourself supervising techies at your organization and are not sure what makes us tick, you might pick up a copy of Leading Geeks: How to Manage and Lead the People Who Deliver Technology by Paul Glen, David H. Maister, and Warren G. Bennis. Perhaps overstating the case just a little, these authors say that
Simply having geeks is not enough. They must be effectively integrated into the organization and focused on appropriate tasks... the future of your organization depends upon your ability to lead geeks effectively.
