This Friday I commented on a Knowledge@Wharton article that presented the study “Different knowledge, different benefits: toward a productivity perspective on knowledge sharing in organizations” by Martine R. Haas and Morten T. Hansen. It’s evident that I neither shared the study’s conclusions nor appreciated their presentation, but after such a rant I had to look at the underlying paper to either confirm the spontaneous criticism or withdraw it. I couldn’t believe such people had come to those conclusions.
After processing the study itself, I must admit that I’m impressed by its design, drive and workmanship. It is a very useful step toward analysing the relative impact of different types of knowledge sharing on an organization’s productive tasks. I also confirm my opinion that it’s flawed.
The roots of the issue
As a management consultant for a large ICT multinational for the last ten years (i.e. directly and seriously competent in the type of activities the study examines), and occasional knowledge manager, I can’t help rising to the bait. Since I doubt I’ll have the chance to discuss the issue with the authors, I’ll at least sketch the reasons in case someone wishes to take up their call and work on improving the methodology.
IMHO, the flaws are essentially three:
- A debatable (and variable) definition of “document quality”, which impairs the validity of conclusions.
- A debatable (and incomplete) catalogue of knowledge sources, which skews the conclusions.
- A narrow focus on a very specific type of task, which is not representative of anything outside the sales department, and is especially removed from the needs of Operations.
Other weak points include the “process” concept, a failure to account for the cost of “producing” the knowledge reused, and a lack of control for the key issue: management quality and the consistency of knowledge sharing practices.
1. What is quality?
The study uses two types of “quality”, which it defines differently: work quality, as a measure of how well the resulting proposal satisfies expectations; and content quality, which it refuses to define in depth (“the rigor, soundness and insight of the knowledge conveyed (…) irrespective of the task at hand”).
Information quality is usually defined as “timeliness, relevance, and accuracy”. Alternatively, one can use a coherent subjective definition and ask whether the content satisfies the expectations of the sales team that accesses it. An immanent definition, as the study uses, is not coherent and will cause inaccuracy.
Also, the study makes an omission. If the quality of a proposal is the satisfaction of expectations, it should be exactly the same as its success rate… if you measure the expectations of the relevant party: the prospective customer. There may be unwinnable cases and political issues, but outside those distortions there can only be one relevant quality measure of a sales pitch: did it win, or not? If customer opinions were available as a source of data this could be different, but in their absence there is no way to judge how well the proposal satisfied their expectations.
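To make that outcome-based measure concrete, here is a minimal sketch (my framing, not the study’s; the helper and data are entirely hypothetical): with only win/loss data available, the external quality metric collapses into a success rate over the non-distorted cases.

```python
def pitch_quality(outcomes, exclude=()):
    """Outcome-based quality of sales pitches: the win rate after
    filtering out distorted cases (unwinnable bids, political awards).

    `outcomes` is a list of (pitch_id, won) pairs; `exclude` lists the
    pitch ids to drop as distortions. Hypothetical helper, for illustration.
    """
    valid = [won for pid, won in outcomes if pid not in exclude]
    if not valid:
        return None  # no judgeable pitches, no quality measure
    return sum(valid) / len(valid)

# Three pitches, one unwinnable case excluded as a distortion:
# quality is simply the win rate over the remaining two.
score = pitch_quality([("a", True), ("b", False), ("c", False)], exclude={"c"})
print(score)
```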
If we also want to consider internal quality (satisfaction of internal expectations), we must perforce measure concordance with the internal procedures, rules and templates defined in the firm’s quality management documentation (ISO certification produces a lot of these). If we do that, we will find that the reuse of materials increases this measure of work quality linearly (or almost). The study completely ignores this.
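That internal, conformance-based measure can be sketched as a toy model (my assumption, not part of the study): if work quality includes concordance with approved templates, the score grows roughly linearly with the fraction of approved material reused.

```python
def internal_quality(fraction_reused: float, base_conformance: float = 0.25) -> float:
    """Toy conformance-based quality score in [0, 1].

    Hypothetical model: reusing approved templates raises concordance
    with internal procedures linearly, on top of a baseline achieved
    when writing entirely from scratch.
    """
    assert 0.0 <= fraction_reused <= 1.0
    return base_conformance + (1.0 - base_conformance) * fraction_reused

# By construction, any reuse improves the score, and a proposal built
# entirely from approved templates reaches full conformance.
print(internal_quality(0.0), internal_quality(0.5), internal_quality(1.0))
```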
The study’s data-gathering methodology is essentially the classic method for evaluating KM initiatives: a unilateral survey (customers are excluded) of a single firm. This makes it doubly important to screen for subjectivity.
2. Types of knowledge flow
The study only recognises two types of flow: direct personal counsel (face to face, or through mail or phone) and documentation. This is seriously incomplete, even if we exclude anything outside the firm. A wider vision can be seen here:
Direct personal participation of experts in a sales pitch can be divided into two types (not mutually exclusive, but functionally independent). The expert can be a “door opener” (a partner at the firm who gives a presentation and a warm handshake, providing a serious advantage to the proposal by improving the perception of competence) or a domain expert who really participates in the proposal, either with counsel or with actual direct input. Both of these figures carry serious cost, but not at all the same; and they do not affect the project in similar ways relative to the three variables the study chooses to measure: time, quality and perception of competence.
Furthermore, electronic documentation can be divided into at least three groups: reusable references and templates (elements designated for reuse, or the closest available thing), reusable procedures and methodology (the corporate practice, as expressed in formal documents or captured in prior proposals), and background and context documentation (market analyses, domain studies, and other highly formal, case-independent materials). Each of the three has very different “processing costs” in a proposal and very different impacts on the perceived “quality” of the resulting work, not to mention seriously different building costs. The study quotes other elements (“pieces of code”, even) that hardly figure in a sales proposal.
Finally, as I ranted last Friday, the study completely ignores less formal flows that have a serious impact on most projects (although usually less so on sales pitches): computer-mediated conversations and less formalised documentation. In other words, conversations assisted by forum tools (or an equivalent methodology) that allow low-cost help to flow from domain experts to sales teams and lower the “processing cost” of documents by providing insight and context; and informal documents such as wikis and document and project blogs, where domain or technology content can be captured at very low cost to the corporation. Both of these types of tools lower the teams’ processing cost of information, but they also lower the cost of making that information available. They can rarely substitute for the other types, but they make impressive value catalysts.
The following figure represents the different (main) types of source, and their relative cost of production and processing, according to my observations.
And the following one attempts to differentiate the effects (potential for improvement, provided good management) of each tool on a sales process, separated in the three variables that the study tracks. As can be seen, they’re not the same:
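In lieu of those figures, the taxonomy can also be sketched as a small data structure. The cost scores below are illustrative (1 = low, 5 = high), chosen only to convey the ordering argued above; they are my observations, not values measured by the study.

```python
# Main knowledge-source types discussed above, with rough relative
# cost scores (1 = low, 5 = high). Hypothetical, illustrative values.
KNOWLEDGE_SOURCES = {
    "door opener (partner presence)":    {"production": 5, "processing": 1},
    "domain expert (direct input)":      {"production": 4, "processing": 2},
    "reusable references and templates": {"production": 4, "processing": 2},
    "procedures and methodology":        {"production": 5, "processing": 3},
    "background and context documents":  {"production": 3, "processing": 4},
    "forum-mediated conversations":      {"production": 1, "processing": 1},
    "wikis and project blogs":           {"production": 1, "processing": 2},
}

# Sort by total cost, cheapest first: the informal, computer-mediated
# flows the study ignores come out cheapest under these assumptions.
by_total_cost = sorted(
    KNOWLEDGE_SOURCES.items(),
    key=lambda kv: kv[1]["production"] + kv[1]["processing"],
)
for name, costs in by_total_cost:
    print(f"{name}: production={costs['production']}, processing={costs['processing']}")
```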
3. What is a typical project?
The study centered on projects (a very specific part of a company’s activity, characterised by the variability of key parts of the task) as opposed to processes (the repetitive execution of highly similar tasks). This skews the study, because the value of references and methodologies is a direct function of their reusability, which is necessarily lower in this type of activity. The study itself mentions the issue, but not its consequences.
In other words, the study underestimates the cost advantage of using organizational knowledge (and especially the more formalised types).
But even within projects, sales proposals are peculiar: they are short-lived and require just a description of the methodology to be implemented during the (eventual) execution of the project. In other words, the cost advantage of having those detailed methodologies available is much lower than the advantage of background information, such as the analyses and studies that help adapt the proposal to customer needs. That is not the case during the execution of a consulting or software development project.
The impact of different types of knowledge differs across types of project (or process). Notably, the less in-person (and less costly to generate) types, those not reflected in the study, increase in importance in implementation projects. In other words, the relevance of the study is hampered because it does not account for this variety.
In a nutshell
This being a blog, I don’t think it’s worth going into much more detail. My copy of the study is covered in nuances and commentary, but it would not be relevant to go over it all. The nub, anyhow, is that each of the study’s hypotheses has weaknesses, the model is not complex enough, and the use of immanent (subjective) quality impairs comparisons.
There are other comments that could be relevant. A very significant part of the study shows that the level of commitment from outside experts, as well as the quality of the documentation “processed” for incorporation, affects the quality and cost of the proposals. Of course they do. But it is important to stress that selecting the appropriate materials to build reusable documentation, guaranteeing their accuracy and applicability, and building the right incentives and internal pricing mechanisms to favour the right level of collaboration… is simply called “good management”, or even solid common sense. It has nothing to do with the type of knowledge used, but with the consistency and good sense of the internal knowledge management policies and strategy.
I would be willing to bet that studying the impact of that variable would suggest it is the single largest factor in the usefulness of knowledge sharing.
The study is, therefore (and in my opinion), a valiant but seriously flawed start in a very useful direction. As the authors recognise, the results themselves should be taken with a pinch of salt.