Software Quality Engineering
Why is Software Quality Still the Poor Stepchild of Software Development?
In the recent past, our organization reviewed the software quality planning for a very large, complex software implementation at a major organization. The implementation was being performed by a well-known consultancy, one that had successfully completed multiple such implementations in the past. The initial overarching project planning document was approximately 300 pages long; no more than 5 pages were devoted to testing and software quality.
Now, in all fairness, the document did refer to future planning tasks for testing cycles that were scheduled to be written. And some of them were, eventually.
The main point here, though, is one of emphasis. If, at the current stage in the industry's maturity, software quality attracts so little attention at the start of a large project, even one run by a respected and successful organization, then it is clear that the case for investing in software quality has not yet been made to the industry.
Now, you might look at the recent history of the field and conclude that all the companies that have built their own software and deployed it with what, on paper, looks like insufficient testing have “gotten away with it.” The economy has not faltered (at least, not because of software defects), and there haven’t been an overwhelming number of reports in the press about companies stumbling due to major glitches in their software. NIST, the National Institute of Standards and Technology, recently pegged the annual cost of inadequate software testing at roughly $59 billion. Not chump change, but, you might argue, an acceptable cost in a multi-trillion-dollar economy.
This is looking at it the wrong way. Software quality investments should be looked at the same way we look at insurance policies, because that is essentially what they are – mechanisms for reducing risk to the enterprise. Just as no individual can afford to drive without insurance because the personal risk is too high (even though society may be able to absorb the costs of the uninsured), no organization can afford to skimp on software testing – because the risk is too high. Anyone who views this from the perspective of organizational risk will “get” it – and the argument that the status quo has worked in the past falls flat from that perspective, because statistically, the longer one goes without a major failure, the likelier a failure becomes, not the opposite.
Posted by Jeff Bocarsly on Thursday, May 22, 2008 2:41 PM EDT
Code Coverage? Why do it by Hand?
Code coverage is one of those things that is better done by computer, yet people still insist on doing it by hand.
When it comes to code coverage – measuring how much of your code is exercised by your testing regime – there are a number of top-notch tools. There are commercial tools for most technologies, and open-source or bundled tools as well (just Google “open source code coverage”). You might think that, with all the effort and expense put toward software testing, development organizations would use these tools to measure accurately what their testing efforts actually buy them.
Relatively few organizations do so, even when the tool is open source and the only cost of ownership is installation, training and administration. The reasons are many: “Oh, our BAs know the application so well, they can fully cover it,” or “The developers are handling that with their unit tests…,” or “We’re not there yet; we’re waiting until we shore up our core process.”
Well, maybe. Some of these reasons are better than others. Few organizations even invest in checking whether an appropriate coverage tool exists for the technologies they are implementing with, or, if one does, what it might cost. Some do invest in code coverage tools, and then use them only to measure the coverage of unit testing, but not of Integration or System testing (or UAT, for that matter). It is another area where the engineering and the technology are mature and available, yet most organizations don’t take advantage of them. It is another example of how technology is leaps and bounds ahead of process.
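To make the idea concrete, here is a minimal sketch of what measuring coverage looks like in practice. It uses the open-source coverage.py package for Python purely as an illustration; the function under test and the single "test" assertion are invented for the example.

# A minimal sketch, assuming the open-source coverage.py package is installed
# (pip install coverage). All names below are invented for illustration.
import coverage

cov = coverage.Coverage()
cov.start()

def classify(n):
    # Only one of these two branches is exercised by the lone check below.
    if n < 0:
        return "negative"
    return "non-negative"

assert classify(5) == "non-negative"

cov.stop()
cov.report()  # prints a per-file summary of executed vs. missed statements

Even a toy report like this makes the gap visible: the assertion above never drives the negative branch, and the report says so in black and white.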
Posted by Jeff Bocarsly on Thursday, February 14, 2008 4:43 PM EST
The Utility-Complexity Curve
As applications are designed and developed, they typically start out with a minimal feature set, just enough to garner the market share needed to make the product viable. Then the feature set is built out, in response to the original vision, to competition, or to both.
Initially, as features are added, the application becomes more usable in exchange for a relatively modest increase in complexity for the user. At this point, the Utility-Complexity curve is on a steep incline – with each release, the app becomes significantly more usable for a modest cost in complexity. This game doesn’t last forever, though: as more features are added, the app continues to become more complex, but its utility grows less and less – the features being added speak to too tiny a segment of the user population, or are too trivial. At that point, the Utility-Complexity curve reaches its peak – and so does the intuitive character of the app.
Eventually, as more releases are designed and delivered, the complexity continues to increase but the utility of the application actually declines, as “feature-clutter” sets in. Subsequent releases produce a tool that is incrementally less useful to the user. The Utility-Complexity curve turns over and starts to descend.
Why does this happen to good applications with well-considered designs (at least, initially)? It happens for perfectly good capitalistic reasons – the need to drive revenue by continually releasing new versions with new features. How can you convince buyers to pay annual maintenance or upgrade to a new version if you’re releasing only bug fixes and no new features?
So, every new product is on a path from an initial good, intuitive design to a maturity in which the intuitive nature of the original design is violated, and the app becomes less usable with each additional feature. Vendors continue to add features well past the point of optimal design for their products, and the products become incrementally less useful to the user. I wonder whether anyone has tried to calculate the loss in worker productivity due to this sort of feature creep.
Think about some of the most common applications that you have seen released with significant re-design or enhancement, especially where the changes require a great deal of re-learning by users. Are users 100% more productive with the new application? 50%? 10%? Most likely not. They are probably no more productive than they were with the earlier version, minus the re-learning, which puts productivity in the negative. Why did the vendor do it? Just to have a reason for people and organizations to buy again, and buy more.
This applies across the board – some tools on the market today are hitting or crossing the Utility-Complexity peak. They started out with nice, minimal designs that made them very intuitive and useful products. The initial phase of releases filled out the functionality in areas that were a bit thin at the outset, and with the addition of those features, utility and ease of use continued to rise. Somewhere in the more recent releases, though, these tools have peaked. The new stuff is just getting in the way – which means competitors will come onto the market to start the cycle all over again…
What do you think? Do you agree with my observations and prediction? Or do you have a different opinion?
Posted by Jeff Bocarsly on Friday, February 01, 2008 2:18 PM EST
Educating the Business: It Ain’t Widgets
The biggest communications failure in modern business is not between marketers and customers, nor between management and workers, nor between businesses and their partners or investors, nor between regulators and businesses.
The greatest communications gap is between business resources inside an organization and the internal software teams that build custom software to support them. I say that this is the largest communication gap in the modern history of business because:
· it is pervasive, crossing multiple verticals
· it encompasses nearly everything that modern business does
· it affects business decisions virtually every minute of every day
This disconnect between software development teams and the businesses they support is the 800-pound gorilla sitting in the corner of every meeting and every discussion between technical software resources and the business resources that depend on them. Business types are from Mars and technical types are from Venus (or the other way around, if you like). They are, too often, ships passing in the night.
The essence of the miscommunication is that business resources are inclined to think of a software development group as a widget factory, and the technical side in almost every organization has utterly failed to educate their business sponsors about the true nature of the beast.
The Widget Factory
Business folks are trained to look at the world as a set of chunkable, repetitive tasks – and to write project plans and budgets around numbers of tasks and subtasks, each with an estimated time attached. Once they have all the tasks listed and the time estimated for each, they magically have a cost estimate on which to budget a development project. This is not a bad way to think – it works for much of economic life. The whole manufacturing sector uses it, and it applies to any area of the services economy where tasks are completely well-defined and repetitive. Every widget factory, I imagine, has this sort of predictability associated with it, so the projects for Widgets, Inc. always come out on time and on budget. This could work for software development, in principle, except for one small problem: it ain’t widgets.
The True Nature of the Beast
Alas for everyone involved in software development (the development groups, the support staff, the managers, and the business users), software development is largely a creative activity that just doesn’t work with the widgets model. And this is the key point that software teams have utterly failed to convey to the business side and to educate their business colleagues about.
However, businesses must run on budgets, and so everyone on the business side expects that the software side can work in a regime of predictability just like they do. And because business is the sponsor, the software side doesn’t like to say “no”, or “let’s strategize on this”. Instead, everyone plays along, produces a project plan, a set of dates, a set of effort estimates, and off they go again. As dates begin to slip, and hours worked rise, and deadlines loom, the same tempers that frayed last time fray this time.
Is There A Better Way?
In the most concrete, practical terms – probably not. Businesses need to run on budgets and project plans. Software development will only require more creativity as technology progresses, not less. Neither of these will change.
The thing that can change – the education part – can make a difference. All parties, software and business, will benefit if both sides are more aware of the nature of each other’s domains. It won’t change the stress associated with delivery deadlines, or the vagaries of trying to build a project estimate. But it can create an environment of greater understanding between the two sides that will smooth the path when the going gets rough. And the burden of educating the business side about how software is built falls squarely on the software side.
Posted by Jeff Bocarsly on Monday, December 24, 2007 10:51 AM EST
O Ye Economic Buyers of Tools ... Caveat Emptor
It is remarkable to me how many people (read: managers) involved in software quality efforts don't really consider the proposition of investing in an automated test tool for their organization with any great care. The most discerning (and usually the most technically aware) do pursue the purchase with due diligence, of course, but those are not the folks I'm thinking about. Often, we encounter the manager who just "bought the industry leader" without inquiring whether it really is the best tool for his or her problem, and without even seeing a demo of the tool – a pure marketing-buzz buy. The next step up is the buyer who 'gets it' enough to request a demo, but then decides to buy based on that demo (performed on the vendor's home-built target application, designed by the vendor's marketers to show just what the marketing squad wants the potential customer to see, and nothing else). This buyer is wowed by the demo, and never asks for the obvious: a proof of concept in their own environment, on their own target application. Why sales like these happen so often is a bit of a mystery to me. Why don't buyers ask for the gold standard before buying (does it work on MY application in MY environment?)?
Well, risky as it is to try to explain behavior I truly don't fathom, I'll throw caution to the winds. I think it is a combination of two things, both thoroughly American. The first is the idea that you can get something for nothing. A more American idea there never was. Everyone is brought up to look for a 'deal', a 'sale' and a 'great buy'. It's what makes people give mortgages without verifying income, and what makes other people take mortgages without reading the fine print. It's what makes our economy go 'round. The second is the visceral American reaction to technology, better known as the gee-whiz factor. The cooler technology can be made to look (regardless of what is under the hood), the greater its market cachet. That is why iPods fly off the shelves, even though my music aficionado friends tell me the iPod is inferior to some competing products. Put both of these together, and you have a marketing knockout punch, at least for lots of folks involved in software quality.
For my two cents, I'd rather buyers think this one through a bit more. That may sound counterintuitive coming from someone who makes his living implementing with these tools – the poorer the fit between tool and target application, the more work (for us) to make it work, you might think. And you'd probably be right about that. But the better the fit, the better the overall quality of the job. We can produce better, more robust, more effective automation projects with better tools and better fits between tool and project. So... automation tool buyers... caveat emptor!
Posted by Jeff Bocarsly on Thursday, August 16, 2007 1:46 PM EDT
Why isn't it a standard that all components have interfaces for automated testing?
Why isn't it a standard that all computing environments and components have interfaces for automated testing?
Where is the engineering sense?
Anyone who has worked at building automated test harnesses knows that while many of the common target environments and UI components work with test automation tools, many still don't. And, if you go back just a few years, even fewer worked.
Building software is supposed to be an engineering practice and process. Software quality should be, too. Why isn't an interface that is optimized for automated testing a standard part of every product design? And why isn't the need for a testing interface part of every computing curriculum in the country, so that engineers are trained to include one as a matter of course?
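As a thought experiment, here is a minimal sketch (in Python, with invented names) of what designing a component for automated testing might look like: the component carries a stable automation identifier and can report its displayed state as plain data, so a test harness never has to scrape the screen to verify it.

# Hypothetical sketch of a component with a built-in testing interface.
# The class, ids and fields are invented for illustration.
class TemperatureGauge:
    def __init__(self, automation_id: str):
        self.automation_id = automation_id  # stable hook a test tool can locate
        self._celsius = 0.0

    def set_temperature(self, celsius: float) -> None:
        self._celsius = celsius

    # The testing interface: expose the displayed state as plain data.
    def test_snapshot(self) -> dict:
        return {"automation_id": self.automation_id, "celsius": self._celsius}

gauge = TemperatureGauge("main.gauge.temperature")
gauge.set_temperature(21.5)
assert gauge.test_snapshot()["celsius"] == 21.5  # verification without screen-scraping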
Well, you might say, it's getting better. And, in fact, it is getting better. With Java, .NET and Web computing ruling the day, all of the major automated software testing vendors have provided solutions for dealing with the standard UI objects in these environments (and most of the third-party objects too). (Never mind that some of them charge you extra for access to these standard environments.) For web applications, the best-supported browser is IE, which makes sense because it is the de facto standard in the business world. There is already support for other up-and-coming browsers (IBM Rational's tool, Rational Functional Tester, supports Firefox as of its most recent release, as does Hewlett-Packard's QTP), and if the browser market splits in a serious way, more automation tool vendors will support more browsers, without a doubt. So things, you might say, don't look so bad.
Furthermore, for the middle tier and back ends, access has been around for a long time - you can hit your databases directly from your automated testing tool, and your SOA web services as well, and message queue products likewise have useful interfaces for most test tools. Automated data verification for all of these components has been attainable for a good long while.
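For example, back-end verification can be as simple as a query-and-assert step in the test harness. The sketch below is hypothetical – it assumes a SQLite database file and an invented orders table – but the pattern is the same against any database your test tool can reach.

# Hypothetical sketch: automated data verification against a back-end database.
# The database file, table and columns are invented for illustration.
import sqlite3

def verify_order_total(db_path: str, order_id: int, expected_total: float) -> None:
    conn = sqlite3.connect(db_path)
    try:
        row = conn.execute(
            "SELECT total FROM orders WHERE id = ?", (order_id,)
        ).fetchone()
        assert row is not None, f"order {order_id} not found"
        assert abs(row[0] - expected_total) < 0.005, (
            f"order {order_id}: expected {expected_total}, found {row[0]}"
        )
    finally:
        conn.close()

# Typical use in a test step: verify_order_total("orders.db", 1001, 149.95)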
So what's my problem? Well, I kind of feel like all these nice things that have been happening for test automation and software quality in the last few years are accidental. Testability never seems to be a deliberate part of the design strategy, which means that next week, or next year, the next new-new thing might be a step backwards for test automation and software quality engineering.
Even now, there are still many popular components out there where ease of automated testing is clearly not part of the design, not part of the object model, and not a strong interest of the vendor. So even if you can hook the object (and you often can), if the model doesn't expose the object's data, you're still in a tight spot for implementing test verification.
So, where is the engineering sense in all of this? Seems like a no-brainer that incorporating an automated testing interface should be part of how software is designed, and part of the decision when software tools are purchased. It should be a good selling/marketing point for component vendors (some do play up this side, but way too few...), and it shouldn't be too hard to implement. My guess: it won't happen until a virtuous cycle develops in the market - component vendors will have to come to view quality as a selling/marketing/competitive point, and purchasers will have to expect vendors to deliver features that enable and support quality engineering. Don't hold your breath.
Posted by Jeff Bocarsly on Thursday, June 07, 2007 2:39 PM EDT