Generally, the suggestion that you should never argue with the data is a good one to follow, but there are clearly some caveats. There may be challenges with the measurer or the subject being measured, or both.
In scientific experiments, the instruments used for measurement are periodically inspected and calibrated. This ensures that the number being displayed or captured is indeed representative of what is being measured. In addition, in well-controlled experiments, the subject is carefully managed to minimize environmental noise, maximizing the signal and the overall value of the data.
In comparison, assessors, evaluators, and any other human ‘instrument’ may have received an initial inspection resulting in some attained qualifications, and may even undergo periodic calibration in the form of ongoing maintenance of their credentials. They all have biases, however, that no certification program can completely remove. Some of these may be internal; others may stem from the certifying organization itself. Who audits the auditor?
On the subject’s side, there are all manner of issues to consider. If the measurement can be taken without any conscious action by the subject (such as having one’s temperature taken), it is relatively safe to assume a reasonable level of objectivity. If, however, the data we are gathering is part of a survey or involves responding to questions of some sort, we have to consider what is going on behind the curtain, in the person’s mind.
As respondents answer the questions, their responses may be shaped by any number of drivers. Are they:
- Responding to demonstrate that they know the correct answer, or to ‘win’ the perceived competition?
- Racing through the question set to get their name into a draw, or merely to get back to work?
- Skewing their responses in a particular direction, or playing it safe and responding out of fear of reprisal?
Or are they trying to respond with considered, thoughtful responses in order to objectively learn from the experience?
Any collection of data that has been gathered in an unmanaged, open interface should be subject to intense skepticism, for we have failed to address any of the potential biases of the subject. We just don’t know how they are looking at the questions. Open queries over the internet, for example, may generate a great deal of data, but how do we discern the signal from the noise?
We can easily gather a great deal of data to use, but we have to understand whether that data is useful. It needs to be relevant to the questions we wish to answer, and collected in an objective, carefully managed form.
In his book Measuring and Managing Performance in Organizations, Robert D. Austin argues quite convincingly that the measurement process needs to be very carefully managed. Even the most defensible data-gathering methodology can degrade over time as the respondents come to understand how the data is being gathered and what it is being used for. They then adjust their responses, consciously or not, to steer the results towards their individual agendas.
Quantification can sometimes be a poor argument for validity, or even reasonableness. Without a disciplined, active, and evolving approach to protecting the integrity of the data being gathered, it can become just another bucket of numbers. – JB