Recall that last week we identified a broad range of areas to consider when describing overall software quality. Correctness (or functionality) is just one of perhaps a dozen attributes that contribute to quality, even though in most projects it is the only one considered. Finally, we noted that it is important to start from a broad list of attributes (there are several similar lists to choose from) to avoid missing an entire perspective on quality.
As noted last week, the landscape of quality attributes I tend to start with comes from Karl Wiegers (drawn from Software Requirements, 2nd Edition, and expanded from his experience), and consists of the following: reliability, usability, integrity, efficiency, interoperability, robustness, safety, availability, flexibility, maintainability, testability, portability, reusability, and installability. The first nine of these are of primary interest to the user, the last five to the developer. Safety and installability are the two additions based on experience.
For any specific project, what is important across this landscape of quality will vary tremendously. Embedded systems may have critical efficiency requirements given limitations of space and power, but usability considerations may be completely irrelevant. Conversely, while handheld applications may see similar efficiency requirements to those of traditional embedded systems, usability will be an important consideration for adoption and differentiation from the competition.
An effective way of prioritizing the range of quality attributes is to tackle the problem in two stages. This makes the process efficient, but still protects you from missing any important elements (as may occur if you start out with an abbreviated list).
For these prioritization steps, it is critical that all stakeholder communities are involved in the discussion. A common mistake in requirements analysis is to decide on requirements by acting for a stakeholder that you don’t adequately represent, and it is very rare for an analyst to be able to authoritatively make decisions for the broad range of categories we are considering here.
The first stage is to consider each attribute in turn, and to determine whether any current knowledge places it clearly in or clearly out of scope. Not all of the attributes can be optimized simultaneously, and indeed, some areas of quality are at odds with one another (for example, a system that is built for high efficiency will inherently be less maintainable). Generally, you will find that at least half of the attributes are quite clearly in or out, but the rest will require some discussion.
It is these discussions, with appropriate stakeholder representation, that are important. While it may be appealing to skip this step by using a shorter list (based on the products you generally build), doing so yields a very small time savings while increasing the risk of overlooking critical requirements.
As a training example of potentially surprising requirements, we often discuss a Cafeteria Ordering System: an online system that allows employees to order food and have it delivered, to save time. While the criterion of safety may seem irrelevant to most who participate in the discussion, allergies can be a cause for concern, driving requirements such as the need to post ingredients for all meals. Similarly, interoperability would appear to be easily handled through straightforward deployment on existing servers, but the growth of PDAs could generate requirements to handle a wide range of different devices. No attribute should be considered clearly out before the discussion takes place.
Once we have weeded out the attributes that are clearly not a concern, we can take the prioritization a step further. An approach that I have found effective is to build a matrix of the remaining attributes and to compare each pair in turn. For each pair, the group selects one as more important than the other. Tallying the wins produces an ordered ranking, and the process is most easily managed with a spreadsheet that keeps the totals as you go. Note that this approach is effective any time you wish to rank a collection into an ordered list – I’ve used it to help select which movie we would see on a Friday evening (which may explain my wife’s panic when I open Excel…)
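The pairwise tally described above can be sketched in a few lines of code. This is only an illustrative sketch: the attribute names are a hypothetical culled subset, and the fixed preference order stands in for the judgments a real stakeholder group would make pair by pair.

```python
# Sketch of the pairwise-comparison ranking described above.
# The attributes and the winner of each pair are illustrative assumptions;
# in practice, each choice comes out of stakeholder discussion.
from itertools import combinations

attributes = ["usability", "efficiency", "reliability", "safety"]

# Simulate the group's pair-by-pair judgments with a fixed preference order
# (higher value = judged more important).
preference = {"safety": 3, "reliability": 2, "usability": 1, "efficiency": 0}

tally = {a: 0 for a in attributes}
for a, b in combinations(attributes, 2):
    winner = a if preference[a] > preference[b] else b
    tally[winner] += 1

# Sort by number of pairwise "wins" to produce the ordered ranking.
ranking = sorted(attributes, key=lambda a: tally[a], reverse=True)
print(ranking)  # safety ranks first, efficiency last
```

A spreadsheet does the same job: one cell per pair, with a column summing each attribute's wins as the decisions are made.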
Had we not performed the first culling of the list, this would become quite a tedious exercise. By chopping the list in half (which is typical), we reduce the number of comparisons by roughly a factor of four. As with the first stage, you will find here that many comparisons are quite straightforward, and some will generate significant discussion. Again, we cannot know which in advance, or without reasonable representation.
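The factor-of-four saving follows directly from the comparison count: n attributes require n(n-1)/2 pairwise comparisons. A quick check with our numbers (the full list of 14 culled to 7 is used here as the typical case the text describes):

```python
# Number of pairwise comparisons needed to rank n attributes: n choose 2.
def comparisons(n: int) -> int:
    return n * (n - 1) // 2

full = comparisons(14)   # the full attribute list: 91 comparisons
culled = comparisons(7)  # after culling half: 21 comparisons
print(full, culled)      # halving the list cuts the work by a factor of ~4.3
```

So a session that would have needed 91 group decisions needs only 21 after the first stage.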
When this is done, we have narrowed the large list we started with, using the two stages, and are left with a subset of quality attributes ranked in overall importance for our specific project. We have done nothing to actually produce testable requirements at this point, but we are now ready to do so, and our narrowed focus will ensure that we build the appropriate requirements. This culling and prioritization takes little time in a focused session; as with many problems that appear difficult, we are breaking it down into a series of steps, each of which is straightforward on its own.
Next week we will take the critical step in the overall process of determining what quality means for our system: translating these attributes (designed to cleanly cover the overall space of quality) into a set of quality criteria that map to the attributes important to us (and are easily expressed in quantified form). – JB